Foundation Assessment
This assessment tests your understanding of AI governance foundations covered in Module 1. Take your time and think through each question carefully.
Assessment Instructions
- 30 questions covering all topics from Module 1 (25 knowledge questions plus 5 scenario-based questions)
- Multiple choice and scenario-based questions
- Minimum passing score: 80% (24/30 correct)
- You can retake the assessment if needed
- Review lesson materials before taking the assessment
Section 1: ISO 42001 Fundamentals
Question 1: What is ISO 42001?
a) A standard for AI system development
b) An international standard for AI Management Systems (AIMS)
c) A certification for AI developers
d) A guide for machine learning algorithms

Question 2: Which structure does ISO 42001 follow?
a) Custom AI-specific structure
b) NIST framework structure
c) Annex SL high-level structure
d) Agile methodology structure

Question 3: Which of the following is NOT a key component of ISO 42001?
a) Clauses 4-10 (Management system requirements)
b) Annex A (AI-specific controls)
c) Annex SL (Integration requirements)
d) Annex C (AI system lifecycle considerations)

Question 4: What is an AI Management System (AIMS)?
a) Software for managing AI models
b) A systematic approach to governing AI development and use
c) A database for storing AI training data
d) An automated AI monitoring tool

Question 5: Which of the following organizations does NOT need ISO 42001?
a) Organizations developing AI systems
b) Organizations deploying AI systems in high-risk areas
c) Organizations that use no technology
d) AI service providers
Section 2: AIMS Framework and Lifecycle
Question 6: Clause 4 of ISO 42001 (Context of Organization) requires identifying:
a) Only internal factors
b) Only external factors
c) Internal factors, external factors, interested parties, and AIMS scope
d) Technical specifications only

Question 7: What is the primary focus of Clause 6 (Planning)?
a) AI development tools
b) AI risk assessment and treatment planning
c) Employee training
d) Infrastructure setup

Question 8: Which phase is NOT part of the AI lifecycle in ISO 42001?
a) Planning and design
b) Data management
c) Market research
d) Deployment and monitoring

Question 9: What role does Clause 9 (Performance Evaluation) serve?
a) Hiring AI talent
b) Monitoring AI system performance and AIMS effectiveness
c) Setting AI budgets
d) Marketing AI products

Question 10: Human oversight in AI systems should include:
a) Ability to stop or disable the AI system
b) Understanding of AI capabilities and limitations
c) Awareness of automation bias
d) All of the above
Section 3: AI Risk Categories
Question 11: Which of the following is a source of bias in AI systems?
a) Historical bias in training data
b) Underrepresentation of certain groups
c) Measurement bias from proxy features
d) All of the above

Question 12: What is the "black box" problem in AI?
a) AI systems stored in black boxes
b) Difficulty in understanding how AI makes decisions
c) Security vulnerabilities in AI
d) Hardware limitations

Question 13: Which is NOT a type of AI security threat?
a) Data poisoning
b) Adversarial examples
c) Model extraction
d) Data visualization

Question 14: Privacy-enhancing technologies include:
a) Differential privacy
b) Federated learning
c) Synthetic data generation
d) All of the above

Question 15: What is model drift?
a) AI model performance degrading over time as data changes
b) Moving AI models between servers
c) Training data becoming outdated
d) Physical movement of hardware
Section 4: EU AI Act
Question 16: What approach does the EU AI Act use?
a) Blanket prohibition on all AI
b) Risk-based regulatory framework
c) Voluntary guidelines only
d) Self-regulation by companies

Question 17: Which AI systems are PROHIBITED under the EU AI Act?
a) Spam filters
b) Government-run social credit scoring systems
c) Video game AI
d) Weather prediction systems

Question 18: High-risk AI systems under the EU AI Act include:
a) AI in recruitment and hiring
b) AI in critical infrastructure
c) AI for creditworthiness assessment
d) All of the above

Question 19: What must deployers of chatbots do under the EU AI Act?
a) Nothing if the chatbot is low-risk
b) Inform users they are interacting with AI
c) Get government approval
d) Pay licensing fees

Question 20: Maximum penalties for using prohibited AI under the EU AI Act can reach:
a) €1M or 1% of turnover
b) €7.5M or 1.5% of turnover
c) €15M or 3% of turnover
d) €35M or 7% of turnover
Section 5: AI Ethics
Question 21: Which is a core ethical principle for AI?
a) Profit maximization
b) Human dignity and rights
c) Technical complexity
d) Market dominance

Question 22: What is "fairness" in AI ethics?
a) AI must be free
b) AI should treat all people equitably and avoid unjust discrimination
c) AI should maximize accuracy
d) AI should be simple

Question 23: Transparency in AI requires:
a) Publishing all source code publicly
b) Making models as complex as possible
c) Providing appropriate explanations about how AI makes decisions
d) Hiding technical details from users

Question 24: Accountability in AI means:
a) AI systems make all decisions
b) Clear lines of responsibility when AI causes harm
c) Blaming AI for all mistakes
d) Eliminating human oversight

Question 25: In the Amazon hiring algorithm case study, what was the problem?
a) The AI was too slow
b) The AI discriminated against women due to biased training data
c) The AI was too expensive
d) The AI rejected all candidates
Scenario-Based Questions
Scenario A: Your company is developing an AI system to screen loan applications. The system will automatically approve or reject applications based on applicant data.
Question 26: Under the EU AI Act, this system is likely:
a) Unacceptable risk (prohibited)
b) High-risk (strict requirements apply)
c) Limited risk (transparency obligations only)
d) Minimal risk (no specific requirements)

Question 27: What should be a priority consideration for this system?
a) Processing speed
b) Fairness and non-discrimination across demographic groups
c) Minimizing human involvement
d) Reducing explanation complexity

Question 28: According to ISO 42001, what should you do before deploying this system?
a) Launch immediately to gain competitive advantage
b) Conduct risk assessment and impact analysis
c) Wait for competitors to deploy first
d) Only test with internal data
Scenario B: You discover that an AI system in production is showing bias against a particular demographic group, resulting in less favorable outcomes.
Question 29: What is the most appropriate immediate action?
a) Ignore it if overall accuracy is high
b) Increase human oversight and consider suspending the system while investigating
c) Hide the findings from management
d) Wait to see if it corrects itself

Question 30: This situation best illustrates the need for:
a) Faster processing
b) Continuous monitoring and improvement (Clause 10)
c) Cheaper cloud infrastructure
d) More training data only
Assessment Scoring Guide
26-30 Correct: Excellent (87-100%)
Outstanding understanding of AI governance foundations. You're well-prepared for Module 2.
Strengths: Comprehensive grasp of ISO 42001, AI risks, regulations, and ethics.
Next Steps: Proceed confidently to Module 2 on AI Risk Management.
21-25 Correct: Good (70-83%)
Solid understanding with minor gaps. Review missed topics before Module 2.
Recommendations:
- Review sections where you missed questions
- Re-read relevant lesson materials
- Focus on practical applications of concepts
Next Steps: Brief review, then proceed to Module 2.
16-20 Correct: Fair (53-67%)
Basic understanding but significant gaps remain. Additional review recommended.
Recommendations:
- Thoroughly review all Module 1 lessons
- Pay special attention to risk categories and EU AI Act
- Complete additional practice questions
- Consider discussing challenging concepts with peers or instructors
Next Steps: Comprehensive review before retaking assessment.
Below 16 Correct: Needs Improvement (<53%)
Fundamental concepts need reinforcement. Substantial review required.
Recommendations:
- Carefully re-study all Module 1 lessons
- Take notes on key concepts
- Create flashcards for important terms
- Seek additional resources or support
- Don't rush - understanding foundations is critical
Next Steps: Complete review and retake assessment before proceeding.
Answer Key and Explanations
Section 1 Answers
Q1: B - ISO 42001 is the international standard for AI Management Systems (AIMS), providing a framework for responsible AI development, deployment, and use.
Q2: C - ISO 42001 follows the Annex SL high-level structure used by other ISO management standards like ISO 27001 and ISO 9001, enabling easier integration.
Q3: C - Annex SL is the high-level structure, not a component of ISO 42001. The standard includes Clauses 4-10 and Annexes A, B, C, and D.
Q4: B - An AIMS is a systematic approach to governing AI development and use, managing AI-related risks, and ensuring responsible AI practices.
Q5: C - Organizations using no technology have no need for AI governance. All others listed (developers, deployers, service providers) benefit from ISO 42001.
Section 2 Answers
Q6: C - Clause 4 requires comprehensive understanding of internal factors, external factors, interested parties, and clearly defined AIMS scope.
Q7: B - Clause 6 focuses on planning, specifically AI risk assessment and treatment planning, along with setting AI objectives.
Q8: C - Market research is not a phase in the AI lifecycle. The lifecycle includes planning/design, data management, development/validation, deployment/monitoring, and decommissioning.
Q9: B - Clause 9 addresses performance evaluation, including monitoring AI system performance, AIMS effectiveness, internal audits, and management review.
Q10: D - Effective human oversight requires all these elements: ability to intervene, understanding of capabilities/limitations, and awareness of automation bias.
Section 3 Answers
Q11: D - All listed items are sources of bias: historical bias, underrepresentation, and measurement bias from proxies.
Q12: B - The "black box" problem refers to difficulty understanding how AI systems (especially deep learning) make decisions due to complexity and opacity.
Q13: D - Data visualization is a technique for displaying information, not a security threat. Data poisoning, adversarial examples, and model extraction are all real threats.
Q14: D - All are privacy-enhancing technologies: differential privacy adds noise for protection, federated learning keeps data distributed, and synthetic data creates artificial datasets.
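To make the first of these techniques concrete, here is a minimal sketch of differential privacy using the classic Laplace mechanism for a counting query. This is an illustration only, not a production mechanism; the function names are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1 (adding or removing one record
    # changes the count by at most 1), so noise scale is 1/epsilon.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a differentially private count of applicants aged 40 or over
random.seed(7)
ages = [22, 35, 41, 58, 63, 29, 47, 51]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)  # true count is 5
```

A smaller epsilon adds more noise, giving stronger privacy at the cost of accuracy; this trade-off is the core design decision in any differential-privacy deployment.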
Q15: A - Model drift occurs when AI performance degrades over time as the real-world data distribution changes from the training data.
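In practice, drift is caught by comparing the live input distribution against the training baseline. One common statistic is the Population Stability Index (PSI); the sketch below is a simplified equal-width-bin version (function names are illustrative), using the widespread rule of thumb that PSI above roughly 0.25 signals significant drift.

```python
import math

def psi(expected, actual, n_bins: int = 10) -> float:
    """Population Stability Index between a training baseline
    and a sample of live inputs (equal-width bins)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / n_bins or 1.0  # guard against a constant feature

    def bin_fractions(sample):
        counts = [0] * n_bins
        for x in sample:
            counts[min(int((x - lo) / width), n_bins - 1)] += 1
        # tiny floor keeps log() defined for empty bins
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift
```

Monitoring a statistic like this per feature, on a schedule, is one way to implement the continuous monitoring that Clause 9 and Clause 10 call for.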
Section 4 Answers
Q16: B - The EU AI Act uses a risk-based regulatory framework, categorizing AI systems by risk level with corresponding requirements.
Q17: B - Government-run social credit scoring is prohibited as unacceptable risk. The other systems are either low-risk or not specifically regulated.
Q18: D - All are high-risk AI systems under the EU AI Act: recruitment, critical infrastructure, and credit assessment all affect fundamental rights or safety.
Q19: B - Deployers must inform users they're interacting with AI unless it's obvious from context. This is a transparency requirement for limited-risk AI.
Q20: D - Maximum penalties for prohibited AI systems reach €35M or 7% of global annual turnover, whichever is higher.
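The "whichever is higher" rule in this answer is simple arithmetic; a one-line helper (hypothetical name, sketch only) makes it explicit:

```python
def max_penalty_prohibited_ai(global_turnover_eur: float) -> float:
    # Whichever is higher: the fixed EUR 35M cap or 7% of
    # worldwide annual turnover (prohibited-AI tier of the EU AI Act)
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A firm with EUR 100M turnover: 7% = EUR 7M, so the EUR 35M floor applies.
# A firm with EUR 1B turnover: 7% = EUR 70M, which exceeds the floor.
```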
Section 5 Answers
Q21: B - Respect for human dignity and rights is a core ethical principle. Profit maximization, technical complexity, and market dominance are business or technical concerns, not ethical principles.
Q22: B - Fairness means treating people equitably and avoiding unjust discrimination, not just maximizing accuracy or making AI free.
Q23: C - Transparency requires appropriate explanations about AI decision-making, tailored to the audience. It doesn't mean publishing all code or making systems complex.
Q24: B - Accountability means clear lines of responsibility when AI causes harm, not eliminating oversight or blaming AI for everything.
Q25: B - Amazon's hiring AI discriminated against women because it was trained on historical data that reflected past hiring biases toward men in tech.
Scenario Answers
Q26: B - Loan application screening is high-risk AI under the EU AI Act as it involves creditworthiness assessment affecting access to essential services.
Q27: B - Fairness and non-discrimination must be a priority to prevent discriminatory lending practices and comply with both regulations and ethics.
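One common way to operationalize this fairness check is to compare approval rates across demographic groups and compute the disparate-impact ratio (lowest rate divided by highest), often screened against the "four-fifths" (0.8) rule of thumb borrowed from US employment practice. A minimal sketch with hypothetical names:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs from the system's log."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions) -> float:
    # Ratio of the lowest group approval rate to the highest;
    # values below ~0.8 (the "four-fifths rule") warrant investigation.
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())
```

Passing such a screen does not prove fairness (there are many competing fairness metrics), but failing it is a strong signal that human review and remediation are needed.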
Q28: B - ISO 42001 requires risk assessment and impact analysis before deploying AI systems, especially high-risk ones.
Q29: B - When bias is discovered, increase human oversight and consider suspending the system while investigating. This protects affected people and enables proper remediation.
Q30: B - This illustrates the need for continuous monitoring and improvement (Clause 10), detecting issues post-deployment and taking corrective action.
Key Concepts to Remember
ISO 42001 Core Elements
- International standard for AI Management Systems
- Follows Annex SL structure (Clauses 4-10)
- Covers entire AI lifecycle
- Emphasizes risk-based approach
- Integrates with other management systems
AI Risk Categories
- Bias and fairness
- Transparency and explainability
- Safety and reliability
- Privacy and data protection
- Security and adversarial risks
- Accountability and governance
- Environmental sustainability
EU AI Act Framework
- Risk-based approach (unacceptable, high, limited, minimal)
- High-risk systems have strict requirements
- Transparency obligations for chatbots and deepfakes
- Significant penalties for non-compliance
- ISO 42001 supports compliance
Ethical Principles
- Human dignity and rights
- Fairness and non-discrimination
- Transparency and explainability
- Accountability and responsibility
- Privacy and data protection
- Safety and security
- Beneficial and purposeful AI
AIMS Lifecycle
- Planning and design
- Data management
- Development and validation
- Deployment and monitoring
- Decommissioning
Preparation for Module 2
Now that you've completed Module 1, you should understand:
Foundational Knowledge:
- Why AI governance matters
- Structure and components of ISO 42001
- Major AI risk categories
- Regulatory landscape (especially EU AI Act)
- Core ethical principles
Next in Module 2: You'll dive deeper into practical AI risk management:
- Detailed risk assessment methodologies
- Identifying specific risks in your AI systems
- Evaluating likelihood and impact
- Creating risk registers and treatment plans
- Implementing risk controls
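As a preview of risk registers and scoring, many organizations reduce each entry to a likelihood x impact grid. The sketch below uses a 5x5 scale, which is a common convention rather than an ISO 42001 requirement; names and thresholds are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    treatment: str = "TBD"

    @property
    def score(self) -> int:
        # Simple multiplicative scoring; thresholds are organization-specific
        return self.likelihood * self.impact

def prioritise(register):
    """Return the register ordered from highest to lowest risk score."""
    return sorted(register, key=lambda r: r.score, reverse=True)

register = [
    Risk("Bias in training data", likelihood=4, impact=5),
    Risk("Model drift post-deployment", likelihood=3, impact=3),
    Risk("Adversarial input", likelihood=2, impact=4),
]
```

Module 2 covers richer methodologies, but even this simple structure forces the two questions every risk entry must answer: how likely, and how bad.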
Study Tips for Module 2:
- Keep Module 1 materials handy for reference
- Focus on applying concepts to real scenarios
- Practice identifying risks in AI systems
- Think about your own organization's AI use cases
- Engage with case studies actively
Reflection Questions
Before moving to Module 2, reflect on:
- Your Organization: What AI systems does your organization currently use or plan to develop?
- Risk Assessment: What risks might be most relevant to your AI systems?
- Regulatory Position: Does your organization operate in the EU or serve EU customers? What are your compliance obligations?
- Ethical Stance: What ethical principles are most important to your organization's AI use?
- Gaps and Needs: Where are the biggest gaps in your current AI governance?
- Action Items: What immediate steps could you take to improve AI governance?
Additional Resources
For Deeper Understanding
- ISO/IEC 42001:2023 standard text
- EU AI Act full regulation
- NIST AI Risk Management Framework
- OECD AI Principles
Practical Tools
- AI risk assessment templates
- Ethical AI checklists
- Impact assessment frameworks
- Model cards and datasheets
Communities and Learning
- ISO/IEC JTC 1/SC 42 (AI standardization)
- Partnership on AI
- AI Now Institute
- Industry-specific AI governance groups
Congratulations!
You've completed Module 1: AI Governance Foundations. You now have a solid understanding of:
- ISO 42001 and the AIMS framework
- AI risk categories and management
- EU AI Act and regulatory landscape
- AI ethics principles and practices
Achievement Unlocked: AI Apprentice Badge 🎖️
XP Earned: Complete all lessons and this assessment to earn full Module 1 XP plus 500 bonus XP!
Ready for Module 2: Proceed when ready to master AI Risk Management with practical methodologies and tools.
Next Module: Module 2 - AI Risk Management