ISO/IEC 23894:2023 | Artificial intelligence — Guidance on risk management

ISO/IEC 23894:2023 is an international standard providing guidance on the management of risk related to the use of artificial intelligence (AI) systems and AI-enabled products. It outlines an AI-specific risk management process built upon the principles and framework of ISO 31000:2018 (Risk management — Guidelines), and is designed to integrate into an organization's overall governance and risk management system. The standard helps organizations of all types and sizes, regardless of industry, to systematically identify, analyze, evaluate, treat, and monitor risks inherent in the design, development, deployment, and use of AI technologies. Key focus areas include risks to safety, fairness, transparency, privacy, and accountability. By implementing this guidance, organizations can enhance the trustworthiness, reliability, and ethical development of their AI initiatives, supporting compliance and responsible innovation.




Use Case

A global technology firm is developing a novel predictive maintenance system that uses machine learning (ML) models to analyze sensor data from industrial machinery and forecast equipment failures. This AI-enabled system is intended to maximize operational efficiency and prevent catastrophic breakdowns.

The firm utilizes ISO/IEC 23894:2023 to establish a robust risk management process tailored for this AI application.


1. Risk Identification:


The team identifies potential risks across the AI lifecycle. For example, data poisoning or model drift could lead to inaccurate predictions, causing unnecessary shutdowns or, conversely, missing imminent failures. Other risks include algorithmic bias if the training data disproportionately represents certain operational conditions, leading to suboptimal performance in different settings, and security vulnerabilities in the deployment environment.
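The identified risks can be captured in a structured risk register so each entry can later be scored and tracked. The sketch below is purely illustrative (the standard does not prescribe a data model); the field names and lifecycle stages are assumptions chosen to match the risks listed above.

```python
# Illustrative AI risk register for the predictive maintenance use case.
# Field names and stage labels are assumptions, not part of ISO/IEC 23894.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    lifecycle_stage: str  # e.g. "data", "model", "deployment"
    description: str

risk_register = [
    AIRisk("data_poisoning", "data",
           "Malicious or corrupted sensor data degrades predictions"),
    AIRisk("model_drift", "model",
           "Operating conditions shift away from the training distribution"),
    AIRisk("algorithmic_bias", "data",
           "Training data over-represents certain operational conditions"),
    AIRisk("deployment_vulnerability", "deployment",
           "Security weaknesses in the serving environment"),
]

def risks_for_stage(register, stage):
    """Filter the register by lifecycle stage."""
    return [r for r in register if r.lifecycle_stage == stage]
```

Keeping the register as structured data rather than prose makes the later analysis, treatment, and monitoring steps straightforward to automate.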


2. Risk Analysis and Evaluation:


Each identified risk is analyzed for its likelihood (e.g., how frequently is the training data updated, and how stable are the operating conditions?) and consequence (e.g., a missed prediction could result in millions of dollars in downtime and potential injury). The risk of a major false negative (missed failure prediction) is evaluated as high-consequence and medium-likelihood, making it a priority risk requiring immediate treatment.
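A simple way to operationalize this analysis is a likelihood-by-consequence scoring matrix. The 3-point scales and the priority threshold below are illustrative assumptions, not values prescribed by ISO/IEC 23894 or ISO 31000.

```python
# Hedged sketch of a likelihood x consequence risk matrix.
# Scales and threshold are illustrative assumptions only.
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
CONSEQUENCE = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, consequence: str) -> int:
    """Combine the two ratings into a single ordinal score."""
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

def is_priority(likelihood: str, consequence: str, threshold: int = 6) -> bool:
    """Flag risks whose score meets or exceeds the treatment threshold."""
    return risk_score(likelihood, consequence) >= threshold

# The missed-failure (false negative) risk from the text:
# medium likelihood (2) x high consequence (3) = 6, so it is a priority risk.
```

Under these assumed scales, the false-negative risk described above scores 6 of a possible 9 and crosses the priority threshold, matching the evaluation in the text.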


3. Risk Treatment:


To treat the priority risk of inaccurate predictions, the firm implements treatment strategies. This involves robust MLOps practices for continuous monitoring of model performance and data quality (detecting drift), establishing a human-in-the-loop mechanism to validate high-stakes predictions, and instituting a retraining and validation protocol using diverse, real-world operational datasets to mitigate bias. They also develop a contingency plan (fallback to traditional maintenance schedules) for system outages.
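One building block of the continuous-monitoring treatment is a drift detector on incoming sensor data. The check below is a minimal sketch under stated assumptions (a simple mean-shift test against the training baseline, with a z-score threshold); a production MLOps stack would typically use richer distributional tests.

```python
# Illustrative drift check: flag drift when the mean of recent readings moves
# more than `threshold` baseline standard deviations away from the training
# baseline. The threshold of 3.0 is an assumption for illustration.
import statistics

def detect_mean_drift(baseline, recent, threshold=3.0):
    """Return True if `recent` readings have drifted from `baseline`."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return False  # degenerate baseline; cannot normalize the shift
    z = abs(statistics.fmean(recent) - mu) / sigma
    return z > threshold

# Example: stable vibration readings vs. a shifted batch.
baseline = [10.0, 10.2, 9.8, 10.1, 9.9]
print(detect_mean_drift(baseline, [15.0, 15.2, 14.8]))  # drifted batch
print(detect_mean_drift(baseline, [10.0, 10.1, 9.9]))   # stable batch
```

A detector like this would feed the human-in-the-loop mechanism: a drift alarm routes the affected predictions to an operator instead of acting on them automatically.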


4. Monitoring and Review:


The firm establishes key risk indicators (KRIs), such as the prediction accuracy rate and the frequency of data anomalies, which are continuously tracked. A dedicated AI Risk Review Board meets regularly to review the effectiveness of the implemented controls and to re-evaluate the risk landscape as the system is updated or deployed in new operational contexts. This ongoing review ensures the AI system remains safe, reliable, and trustworthy throughout its operational life.
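The KRI tracking described above can be sketched as a simple threshold check that the review board (or an automated alerting pipeline) runs against each reporting period. The indicator names and threshold values are illustrative assumptions for this use case.

```python
# Sketch of KRI threshold monitoring; names and bounds are assumptions.
def evaluate_kris(observed: dict, thresholds: dict) -> list:
    """Return the names of KRIs that are missing or breach their bounds."""
    breaches = []
    for name, (minimum, maximum) in thresholds.items():
        value = observed.get(name)
        if value is None or not (minimum <= value <= maximum):
            breaches.append(name)
    return breaches

thresholds = {
    "prediction_accuracy": (0.95, 1.0),  # accuracy must stay at or above 95%
    "data_anomaly_rate": (0.0, 0.02),    # at most 2% anomalous readings
}
```

Any non-empty breach list would trigger escalation to the review board, closing the loop between monitoring and the risk treatment decisions made earlier in the process.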
