AI Risk Register Template
A comprehensive template for documenting and managing AI-specific risks throughout the AI lifecycle. This template supports ISO 42001 Clause 6 (Planning) requirements.
Risk Register Overview
Purpose: Centralized documentation of all identified AI risks, their assessments, treatments, and monitoring.
Use: Living document, updated throughout the AI system lifecycle.
Ownership: AI Risk Officer or designated risk management lead.
Review Frequency:
- Continuous: Add risks as identified
- Monthly: Review high/critical risks
- Quarterly: Full risk register review
- Triggered: After incidents, system changes, or regulatory updates
Risk Register Template
Section 1: AI System Information
System Name: ________________________
System ID: ________________________
System Description:
Purpose and Use Case:
Risk Classification (per EU AI Act):
- Unacceptable Risk (Prohibited)
- High Risk
- Limited Risk
- Minimal Risk
Deployment Status:
- Development
- Testing
- Staging
- Production
- Decommissioned
Risk Assessment Date: ____________________
Next Review Date: ____________________
Risk Owner: ________________________
Section 2: Stakeholder Impact Analysis
Affected Stakeholders:
- Direct Users
- End Users
- Data Subjects
- Communities
- Employees
- Customers
- Regulators
- Society at Large
- Other: ____________________
Stakeholder Vulnerability Assessment:
- Protected groups affected (race, gender, age, disability, etc.)
- Vulnerable populations (children, elderly, low-income)
- Power imbalances
- High-stakes decisions
- Limited recourse options
Section 3: Risk Identification Matrix
| Risk ID | Risk Category | Risk Description | Potential Causes | Affected Stakeholders | Lifecycle Stage |
|---|---|---|---|---|---|
| R001 | Bias/Fairness | | | | |
| R002 | Transparency | | | | |
| R003 | Data Quality | | | | |
| R004 | Safety | | | | |
| R005 | Security | | | | |
| R006 | Privacy | | | | |
| R007 | Compliance | | | | |
| R008 | Ethical | | | | |
Risk Categories:
- Bias/Fairness: Discrimination, disparate impact, unfair treatment
- Transparency: Explainability, opacity, accountability gaps
- Data Quality: Accuracy, completeness, representativeness, drift
- Safety: Physical harm, psychological harm, economic harm
- Security: Adversarial attacks, data poisoning, model extraction
- Privacy: Data breaches, re-identification, unauthorized use
- Compliance: Regulatory violations, legal liability
- Ethical: Values misalignment, societal harm, rights violations
- Technical: Performance, reliability, scalability
- Operational: Integration, maintenance, support
Lifecycle Stages:
- Planning & Design
- Data Collection
- Data Preparation
- Model Development
- Testing & Validation
- Deployment
- Operations & Monitoring
- Decommissioning
Section 4: Risk Analysis
For each identified risk, complete detailed analysis:
Risk ID: R____
Risk Title: __________________________________
Category: ________________________
Description:
Likelihood Assessment:
Rating:
- Very Unlikely (<5%)
- Unlikely (5-20%)
- Possible (20-50%)
- Likely (50-80%)
- Very Likely (>80%)
Likelihood Justification:
Factors Increasing Likelihood:
- Insufficient training data
- Biased historical data
- Complex, opaque model
- Lack of testing
- Inadequate controls
- User misuse potential
- External threat actors
- Rapid deployment pressure
- Other: ____________________
Impact Assessment:
Individual Impact:
- Negligible
- Minor
- Moderate
- Major
- Severe
Impact Areas:
- Rights and freedoms
- Safety and health
- Economic wellbeing
- Reputation and dignity
- Autonomy and choice
Organizational Impact:
- Negligible
- Minor
- Moderate
- Major
- Severe
Impact Areas:
- Financial loss
- Reputational damage
- Operational disruption
- Strategic setback
- Competitive disadvantage
Legal/Regulatory Impact:
- Negligible
- Minor
- Moderate
- Major
- Severe
Potential Consequences:
- Regulatory investigation
- Fines and penalties
- Legal liability
- Compliance violations
- License revocation
Societal Impact:
- Negligible
- Minor
- Moderate
- Major
- Severe
Areas of Concern:
- Social inequality
- Democratic processes
- Environmental harm
- Cultural impact
- Trust erosion
Overall Impact Rating: _______________
Impact Justification:
Inherent Risk Level (before controls):
| Likelihood ↓ / Impact → | Negligible | Minor | Moderate | Major | Severe |
|---|---|---|---|---|---|
| Very Likely | Medium | High | High | Critical | Critical |
| Likely | Medium | Medium | High | High | Critical |
| Possible | Low | Medium | Medium | High | High |
| Unlikely | Low | Low | Medium | Medium | High |
| Very Unlikely | Low | Low | Low | Medium | Medium |
Inherent Risk Level: __________________
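For registers kept in code or spreadsheets, the matrix encodes directly as a lookup table; a minimal Python sketch mirroring the table above:

```python
# Minimal sketch: the likelihood x impact matrix above as a lookup table.
RISK_MATRIX = {
    "Very Likely":   {"Negligible": "Medium", "Minor": "High",   "Moderate": "High",   "Major": "Critical", "Severe": "Critical"},
    "Likely":        {"Negligible": "Medium", "Minor": "Medium", "Moderate": "High",   "Major": "High",     "Severe": "Critical"},
    "Possible":      {"Negligible": "Low",    "Minor": "Medium", "Moderate": "Medium", "Major": "High",     "Severe": "High"},
    "Unlikely":      {"Negligible": "Low",    "Minor": "Low",    "Moderate": "Medium", "Major": "Medium",   "Severe": "High"},
    "Very Unlikely": {"Negligible": "Low",    "Minor": "Low",    "Moderate": "Low",    "Major": "Medium",   "Severe": "Medium"},
}

def inherent_risk_level(likelihood: str, impact: str) -> str:
    """Look up the inherent risk level for a likelihood/impact pair."""
    return RISK_MATRIX[likelihood][impact]

assert inherent_risk_level("Likely", "Major") == "High"  # matches example R001 below
```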
Section 5: Risk Evaluation
Risk Priority:
- P1 - Critical (Immediate action required)
- P2 - High (Action before deployment or within 30 days)
- P3 - Medium (Action within 90 days)
- P4 - Low (Standard monitoring and review)
Priority Justification:
Regulatory Requirements:
- GDPR
- EU AI Act
- Sector-specific regulations
- Contractual obligations
- Organizational policies
- None identified
Ethical Considerations:
- Human rights concerns
- Vulnerable population impacts
- Societal implications
- Environmental considerations
- Value alignment issues
Risk Acceptance Criteria:
- Risk is unacceptable, must avoid or eliminate
- Risk requires treatment to acceptable level
- Risk is acceptable with monitoring
- Risk is acceptable as-is
Section 6: Risk Treatment
Treatment Strategy:
- Avoid (Eliminate risk by not pursuing activity)
- Reduce (Implement controls to lower risk)
- Transfer (Share or shift risk to another party)
- Accept (Formally accept the risk)
Treatment Strategy Justification:
Planned Controls and Mitigations:
| Control ID | Control Type | Control Description | Responsible Party | Implementation Date | Status |
|---|---|---|---|---|---|
| C001 | | | | | |
| C002 | | | | | |
| C003 | | | | | |
Control Types:
- Preventive: Stop risk from occurring
- Detective: Identify when risk occurs
- Corrective: Fix issues when detected
- Compensating: Alternative when primary control infeasible
Specific Control Measures:
For Bias/Fairness Risks:
- Diverse, representative training data
- Fairness testing across demographics
- Bias detection algorithms
- Regular fairness audits
- Stakeholder involvement
- Human review of decisions
- Recourse mechanisms
For Transparency Risks:
- Explainability techniques (LIME, SHAP; see the SHAP sketch after this list)
- Model cards and documentation
- Decision logging
- User explanations
- Audit trails
- Clear communication
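As one way to operationalize the explainability control, a minimal SHAP sketch, assuming a tree-based scikit-learn classifier; the model and data here are illustrative stand-ins:

```python
# Sketch: per-decision explanations with SHAP, assuming a tree-based
# scikit-learn classifier. Model and data are illustrative.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # efficient for tree ensembles
shap_values = explainer.shap_values(X[:10])  # feature attributions for 10 decisions
# Persist attributions alongside each decision to feed audit trails and
# user-facing explanations.
```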
For Data Quality Risks:
- Data validation pipelines
- Quality monitoring
- Drift detection (see the sketch after this list)
- Regular data audits
- Data governance policies
- Automated quality checks
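A minimal drift-detection sketch, comparing a live feature distribution against its training reference with a two-sample Kolmogorov-Smirnov test; the test choice and alpha are assumptions, not a prescribed method:

```python
# Sketch: univariate input-drift check comparing live data against a
# training reference with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the live feature distribution differs significantly."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 5_000)
live_feature = rng.normal(0.4, 1.0, 1_000)   # simulated shift
print(drifted(train_feature, live_feature))  # True: raise a data-quality alert
```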
For Safety Risks:
- Comprehensive testing
- Human oversight
- Fail-safe mechanisms
- Emergency stop functionality
- Redundancy and fallbacks (see the fallback sketch after this list)
- Incident response procedures
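A minimal fallback sketch combining human oversight with a fail-safe default, assuming a hypothetical confidence threshold:

```python
# Sketch: route low-confidence predictions to human review instead of
# acting on them automatically. Threshold and labels are illustrative.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    route: str  # "automated" or "human_review"

def decide(label: str, confidence: float, threshold: float = 0.85) -> Decision:
    """Automate only high-confidence decisions; everything else falls back
    to a human reviewer (human oversight + fail-safe controls above)."""
    route = "automated" if confidence >= threshold else "human_review"
    return Decision(label, confidence, route)

print(decide("approve", 0.92))  # automated
print(decide("reject", 0.61))   # human_review
```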
For Security Risks:
- Adversarial training
- Input validation
- Access controls
- Encryption
- Security monitoring
- Penetration testing
- Red team exercises
For Privacy Risks:
- Data minimization
- Differential privacy (see the sketch after this list)
- Encryption
- Access controls
- Anonymization/pseudonymization
- Consent management
- Privacy impact assessment
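As one concrete instance of the differential privacy control, a minimal Laplace-mechanism sketch for releasing a count; the epsilon value is a policy choice, not a recommendation:

```python
# Sketch: Laplace mechanism for releasing a count under epsilon-differential
# privacy. A count query has L1 sensitivity 1; epsilon is a policy choice.
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Add Laplace noise with scale = sensitivity / epsilon (sensitivity = 1)."""
    rng = np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

print(dp_count(1_234, epsilon=0.5))  # noisy count; smaller epsilon = more noise
```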
Implementation Plan:
| Milestone | Description | Owner | Target Date | Status | Notes |
|---|---|---|---|---|---|
| 1 | | | | | |
| 2 | | | | | |
| 3 | | | | | |
Resource Requirements:
- Personnel: ________________________
- Budget: ________________________
- Technology: ________________________
- Timeline: ________________________
Success Criteria:
Residual Risk Level (after controls):
Likelihood: __________________ Impact: __________________ Level: __________________
Residual Risk Acceptance:
- Residual risk acceptable, proceed with controls
- Residual risk still too high, additional controls needed
- Risk cannot be reduced adequately, recommend avoid
Risk Acceptance Approval (if accepting risk):
Approved by: ________________________ Title: ________________________ Date: ________________________ Signature: ________________________
Section 7: Monitoring and Review
Monitoring Approach:
- Automated monitoring
- Manual review
- Periodic audits
- Continuous testing
- User feedback
- Incident tracking
Key Monitoring Metrics:
| Metric | Target | Frequency | Alert Threshold | Owner |
|---|---|---|---|---|
Review Schedule:
- Continuous: ________________________
- Weekly: ________________________
- Monthly: ________________________
- Quarterly: ________________________
- Annually: ________________________
Review Triggers:
- Significant incidents
- System changes or updates
- New regulations or requirements
- Stakeholder concerns
- Performance anomalies
- Control failures
- Emerging threats
Review Responsibilities:
- Primary Reviewer: ________________________
- Additional Reviewers: ________________________
- Approval Authority: ________________________
Section 8: Incident History
Related Incidents:
| Date | Incident ID | Description | Impact | Response | Lessons Learned |
|---|---|---|---|---|---|
Near Misses:
| Date | Description | Potential Impact | Preventive Action Taken |
|---|---|---|---|
Section 9: Change Log
| Date | Changed By | Change Description | Reason for Change | Approved By |
|---|---|---|---|---|
Section 10: Supporting Documentation
References:
- Risk assessment methodology: ________________________
- Impact assessment report: ________________________
- Fairness testing results: ________________________
- Security assessment: ________________________
- Ethics review: ________________________
- Stakeholder consultation: ________________________
- Other: ________________________
Attachments:
- Technical documentation
- Model cards
- Datasheets
- Testing reports
- Audit results
- Stakeholder feedback
- Compliance evidence
Example: Completed Risk Entry
Risk ID: R001
Risk Title: Gender Bias in Resume Screening
Category: Bias/Fairness
Description: AI resume screening model may discriminate against female candidates due to historical hiring patterns in training data. Model trained on past hiring decisions which favored male candidates in technical roles.
Likelihood Assessment: Likely (60%)
Likelihood Justification: Historical data shows 80% male hires in technical positions, a pattern the model will learn. Similar industry incidents (e.g., Amazon's scrapped resume-screening AI) demonstrate this failure mode.
Factors Increasing Likelihood:
- ✓ Biased historical data
- ✓ Insufficient testing for fairness
- ✓ Complex, opaque model (neural network)
Impact Assessment:
Individual Impact: Major
- Denies employment opportunities to qualified women
- Economic harm (lost income)
- Perpetuates discrimination
- Violates rights to equal opportunity
Organizational Impact: Major
- Legal liability (discrimination lawsuits)
- Reputational damage
- Regulatory penalties (EEOC)
- Loss of diverse talent
Legal/Regulatory Impact: Major
- EEOC violations
- Potential class-action lawsuit
- Fines and penalties
- Consent decree possible
Societal Impact: Moderate
- Perpetuates gender inequality in tech
- Discourages women from applying
- Industry-wide problem amplified
Inherent Risk Level: High (Likely + Major)
Risk Priority: P2 - High (Action before deployment)
Regulatory Requirements:
- ✓ EEOC equal employment opportunity laws
- ✓ EU AI Act (high-risk AI in employment)
- ✓ Company DEI commitments
Treatment Strategy: Reduce
Treatment Strategy Justification: Risk unacceptable as-is. Can be reduced to acceptable level through multiple controls. Complete avoidance would eliminate business value.
Planned Controls:
| Control ID | Control Type | Control Description | Responsible Party | Implementation Date | Status |
|---|---|---|---|---|---|
| C001 | Preventive | Rebalance training data to ensure gender parity | Data Team | 2024-12-15 | In Progress |
| C002 | Preventive | Implement fairness constraints in model training | ML Team | 2024-12-22 | Planned |
| C003 | Detective | Automated fairness testing in CI/CD pipeline | QA Team | 2024-12-20 | In Progress |
| C004 | Preventive | Human review of all recommended candidates | HR Team | 2025-01-01 | Planned |
| C005 | Detective | Monthly disparate impact analysis | Compliance | Ongoing | Planned |
Implementation Plan:
| Milestone | Description | Owner | Target Date | Status | Notes |
|---|---|---|---|---|---|
| 1 | Curate gender-balanced dataset | Data Team | 2024-12-15 | 75% Complete | Additional data needed |
| 2 | Retrain model with fairness constraints | ML Team | 2024-12-22 | Not Started | Depends on M1 |
| 3 | Deploy fairness testing suite | QA Team | 2024-12-20 | 50% Complete | Framework selected |
| 4 | Train HR team on review process | HR Manager | 2024-12-30 | Not Started | Materials prepared |
| 5 | Launch with full controls | Product Owner | 2025-01-05 | Not Started | Go/no-go decision |
Resource Requirements:
- Personnel: Data team (2 weeks), ML team (3 weeks), QA team (2 weeks), HR team (1 week training)
- Budget: $15,000 (additional data acquisition, computational resources)
- Technology: Fairness testing library (Fairlearn; see the sketch after this list), monitoring dashboard
- Timeline: 6 weeks from start to production-ready
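Since Fairlearn is named above, a minimal sketch of what control C002 (fairness constraints in training) could look like with its reductions API; the dataset and sensitive feature are synthetic stand-ins:

```python
# Sketch of control C002: fairness-constrained training with Fairlearn's
# reductions API. Dataset and sensitive feature are synthetic stand-ins.
import numpy as np
from fairlearn.reductions import DemographicParity, ExponentiatedGradient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
gender = np.random.default_rng(0).choice(["F", "M"], size=len(y))  # hypothetical

mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1_000),
    constraints=DemographicParity(),  # equalize selection rates across groups
)
mitigator.fit(X, y, sensitive_features=gender)
y_pred = mitigator.predict(X)  # predictions approximately satisfying the constraint
```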
Success Criteria:
- Gender distribution of recommended candidates reflects applicant pool (±5%)
- Disparate impact ratio ≥ 0.80 (EEOC four-fifths rule of thumb; checked in the sketch after this list)
- No statistically significant difference in false negative rates by gender (two-proportion test at α = 0.05)
- HR team satisfaction with human review process (>80% approval)
- Zero discrimination complaints in first 6 months
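A minimal check for the disparate impact criterion, assuming screening logs in a pandas DataFrame with hypothetical `gender` and `recommended` columns; the false negative rate criterion can be checked analogously with a two-proportion test:

```python
# Sketch: disparate impact ratio from screening logs. The DataFrame and its
# gender/recommended columns are hypothetical.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame) -> float:
    """Minimum selection rate divided by maximum selection rate across groups."""
    rates = df.groupby("gender")["recommended"].mean()
    return rates.min() / rates.max()

df = pd.DataFrame({
    "gender":      ["F", "F", "F", "M", "M", "M", "M", "M"],
    "recommended": [1,   0,   1,   1,   1,   0,   1,   1],
})
print(f"disparate impact ratio = {disparate_impact_ratio(df):.2f}")  # 0.83
```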
Residual Risk Level: Medium (Possible + Moderate)
Residual Risk Acceptance: Residual risk acceptable, proceed with controls
Risk Acceptance Approval: Approved by Jane Smith, Chief People Officer, on 2024-12-01
Monitoring Approach:
- ✓ Automated monitoring (daily fairness metrics)
- ✓ Manual review (HR reviews all recommendations)
- ✓ Periodic audits (quarterly fairness audit)
- ✓ User feedback (candidate experience surveys)
- ✓ Incident tracking (complaints logged)
Key Monitoring Metrics:
| Metric | Target | Frequency | Alert Threshold | Owner |
|---|---|---|---|---|
| Disparate impact ratio | ≥ 0.80 | Daily | < 0.80 | Compliance |
| Female recommendation rate | 35-45% | Daily | < 30% or > 50% | ML Team |
| False negative rate disparity | < 5% | Weekly | ≥ 5% | QA Team |
| HR override rate | 5-15% | Weekly | > 20% | Product Team |
| Candidate complaints | 0 | Daily | ≥ 1 | HR Manager |
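A minimal alerting sketch wiring the thresholds above to computed metric values; the metric keys and the wiring itself are illustrative assumptions:

```python
# Sketch: check computed metric values against the alert thresholds above.
# Metric keys and wiring are illustrative assumptions.
THRESHOLD_BREACHED = {
    "disparate_impact_ratio": lambda v: v < 0.80,
    "female_recommendation_rate": lambda v: v < 0.30 or v > 0.50,
    "candidate_complaints": lambda v: v >= 1,
}

def alerts(metrics: dict) -> list[str]:
    """Return the names of metrics breaching their alert thresholds."""
    return [name for name, breached in THRESHOLD_BREACHED.items()
            if name in metrics and breached(metrics[name])]

print(alerts({"disparate_impact_ratio": 0.76, "female_recommendation_rate": 0.41}))
# ['disparate_impact_ratio'] -> escalate per the review triggers in Section 7
```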
Review Schedule:
- Daily: Automated fairness metrics review
- Weekly: ML team performance review
- Monthly: Cross-functional risk review meeting
- Quarterly: Full fairness audit by external consultant
- Annually: Comprehensive risk reassessment
Risk Register Summary Dashboard
Total Risks Identified: ______
Risk Distribution by Category:
- Bias/Fairness: ______
- Transparency: ______
- Data Quality: ______
- Safety: ______
- Security: ______
- Privacy: ______
- Compliance: ______
- Ethical: ______
- Other: ______
Risk Distribution by Level:
- Critical: ______
- High: ______
- Medium: ______
- Low: ______
Risk Distribution by Status:
- Open: ______
- In Treatment: ______
- Monitoring: ______
- Accepted: ______
- Closed: ______
Top 5 Risks (by priority):
Overdue Controls: ______
Upcoming Reviews: ______
Recent Incidents: ______
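If the register is maintained as a CSV export, the dashboard counts can be derived mechanically; a minimal sketch with hypothetical file and column names:

```python
# Sketch: derive the dashboard counts from a risk-register export. The file
# name and columns (category, level, status, priority) are assumptions.
import pandas as pd

register = pd.read_csv("risk_register.csv")

print("Total risks identified:", len(register))
print(register["category"].value_counts())       # distribution by category
print(register["level"].value_counts())          # distribution by level
print(register["status"].value_counts())         # distribution by status
print(register.sort_values("priority").head(5))  # top 5 risks (P1 sorts first)
```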
Usage Instructions
1. Initial Assessment:
- Complete Sections 1-2 with system information
- Conduct risk identification workshops
- Document all identified risks in Section 3
2. Detailed Analysis:
- Complete Section 4 for each risk
- Assess likelihood and impact
- Calculate inherent risk level
- Involve stakeholders and experts
3. Evaluation and Prioritization:
- Complete Section 5 for each risk
- Determine priorities
- Consider regulatory and ethical factors
- Establish acceptance criteria
4. Treatment Planning:
- Complete Section 6 for each risk requiring treatment
- Select appropriate strategy
- Design controls and mitigations
- Create implementation plans
- Obtain necessary approvals
5. Ongoing Management:
- Implement monitoring per Section 7
- Track incidents in Section 8
- Maintain change log in Section 9
- Update regularly based on reviews
6. Documentation:
- Maintain all supporting documentation (Section 10)
- Version control risk register
- Archive superseded versions
- Ensure accessibility for audits
7. Reporting:
- Generate regular summary reports
- Escalate critical/high risks to management
- Provide dashboards for stakeholders
- Support compliance and audit activities
Integration with AIMS
This risk register supports multiple ISO 42001 requirements:
- Clause 6.1: Actions to address risks and opportunities
- Clause 7.5: Documented information
- Clause 9.1: Monitoring and measurement
- Clause 9.2: Internal audit
- Clause 9.3: Management review
- Clause 10: Continual improvement
Next Steps
After completing risk register:
- Review with stakeholders for validation
- Present to management for approval
- Begin implementing prioritized controls
- Establish monitoring and reporting
- Schedule regular reviews
- Integrate into project management
- Use for compliance evidence
Next Lesson: Risk Assessment Workshop - Apply these concepts in practical exercises.