ISO/IEC 38507:2022 (Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations) is an international standard that provides guidance to an organization's governing body (such as a board of directors or top executive team) on the effective governance of the organization's use of artificial intelligence (AI). It outlines the principles, tasks, and high-level responsibilities needed to ensure that AI is deployed in a responsible, ethical, and trustworthy manner, addressing key considerations such as accountability, transparency, fairness, risk management, and the protection of fundamental rights. The standard helps governing bodies establish policies and frameworks that maximize the benefits of AI while mitigating the associated risks, so that AI use remains aligned with the organization's overall strategy and legal obligations. It is intended to be applied alongside other IT governance standards, in particular ISO/IEC 38500 (Governance of IT for the organization).
Use Case
An established global financial services provider decides to implement a new AI-driven system to automate and accelerate its loan application approval process. The Governing Body uses ISO/IEC 38507:2022 as the framework for establishing its AI governance structure.
1. Directing: The Governing Body first uses the standard's guidance to Direct the organization's approach. They issue a clear mandate that the AI system must uphold the principles of Fairness and non-discrimination, ensuring that bias present in the training data does not lead to decisions that disadvantage legally protected groups. They also require high standards of Transparency in the decision-making process, so that applicants can receive understandable explanations for loan denials (a reason-code sketch follows this list).
2. Evaluating: Next, the Governing Body Evaluates the AI system's performance and associated risks. They establish a formal Risk Management process that identifies and continuously monitors risks related to data privacy, algorithmic bias, and legal non-compliance. Independent audits are commissioned to verify the model's accuracy, robustness, and adherence to the mandated ethical principles, paying close attention to data provenance and model drift (a drift-monitoring sketch follows this list).
3. Monitoring: Finally, the Governing Body ensures continuous Monitoring. They require regular reports on key performance indicators (KPIs) that track not only business impact (e.g., speed of approval) but also Ethical Compliance Metrics (e.g., disparity analysis across demographic groups; a sketch follows this list). This ongoing oversight, rooted in the principles of Accountability and Due Care outlined in the standard, ensures that the AI system remains trustworthy, aligned with corporate values, and compliant with evolving regulatory requirements throughout its lifecycle. This structured governance approach safeguards the organization's reputation and financial stability.
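To make step 1's transparency mandate concrete, one common approach is to derive applicant-facing "reason codes" from the model's feature contributions. The sketch below is a minimal, hypothetical illustration, assuming a fitted linear scoring model (anything exposing a scikit-learn-style `coef_`) and illustrative feature names such as `debt_to_income`; none of these names or choices come from ISO/IEC 38507.

```python
import numpy as np

# Hypothetical feature names and applicant-facing reason texts (illustrative only).
FEATURES = ["credit_history_years", "debt_to_income", "num_late_payments", "annual_income_k"]
REASONS = {
    "credit_history_years": "Length of credit history is short",
    "debt_to_income": "Debt-to-income ratio is high",
    "num_late_payments": "Recent late payments on existing credit",
    "annual_income_k": "Income is low relative to the requested amount",
}

def denial_reasons(model, x, baseline, top_n=2):
    """Return the main adverse factors for one application.

    For a linear scoring model, coefficient * (value - baseline) approximates how
    far each feature pushed this applicant's score below that of a typical
    applicant; the most negative contributions are reported as reason codes.
    """
    diffs = np.asarray(x, dtype=float) - np.asarray(baseline, dtype=float)
    contributions = model.coef_[0] * diffs
    most_adverse_first = np.argsort(contributions)  # most negative contributions first
    return [REASONS[FEATURES[i]] for i in most_adverse_first[:top_n] if contributions[i] < 0]

# Usage sketch: `baseline` is typically the mean feature vector of the training data.
# reasons = denial_reasons(fitted_model, applicant_features, X_train.mean(axis=0))
```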
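For step 2, one way auditors commonly quantify "model drift" is the Population Stability Index (PSI), which compares the distribution of model scores (or of an input feature) at audit time against the distribution observed at development time. Below is a minimal sketch using only NumPy; the decile binning and the 0.2 alert threshold mentioned in the comment are conventional rules of thumb, not values taken from the standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample (development time) and a recent sample.

    Bin edges are taken from quantiles of the baseline ("expected") distribution,
    so each baseline bin holds roughly the same share of observations.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch values outside the baseline range
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)           # avoid division by zero / log(0)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Illustrative audit check: escalate for review if drift exceeds a chosen threshold.
# psi = population_stability_index(dev_scores, current_scores)
# if psi > 0.2:   # 0.2 is a common rule of thumb, not a figure mandated by the standard
#     ...escalate to the risk committee...
```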
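Finally, the "disparity analysis across demographic groups" in step 3 can be reported, for example, as the approval rate per group and its ratio to the best-served group (sometimes called the adverse impact ratio). The pandas sketch below assumes illustrative column names, and the four-fifths (0.8) review threshold is a widely used convention rather than a figure from ISO/IEC 38507.

```python
import pandas as pd

def approval_disparity(decisions: pd.DataFrame, group_col: str = "group",
                       approved_col: str = "approved") -> pd.DataFrame:
    """Approval rate per demographic group and its ratio to the best-served group.

    `decisions` is expected to hold one row per application, with a boolean
    approval outcome and a demographic group label (column names are illustrative).
    """
    rates = decisions.groupby(group_col)[approved_col].mean().rename("approval_rate")
    report = rates.to_frame()
    report["ratio_to_max"] = report["approval_rate"] / report["approval_rate"].max()
    report["flag_for_review"] = report["ratio_to_max"] < 0.8   # common four-fifths rule of thumb
    return report.sort_values("approval_rate")

# Example KPI report fed to the Governing Body alongside business metrics:
# print(approval_disparity(monthly_decisions))
```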