AI Compliance

AI Standards

The International Organization for Standardization (ISO), in collaboration with the International Electrotechnical Commission (IEC), has developed several standards and guidelines to support the responsible development and deployment of Artificial Intelligence (AI) systems. These standards are primarily developed under the joint technical committee ISO/IEC JTC 1, Subcommittee SC 42, which focuses on Artificial Intelligence.

Here is a list of key ISO/IEC standards and guidelines related to AI:

1. ISO/IEC 42001:2023 – Artificial Intelligence Management System (AIMS): Specifies requirements and provides guidance for establishing, implementing, maintaining, and continually improving an AI management system within an organisation.  

2. ISO/IEC 22989:2022 – Artificial Intelligence – Concepts and Terminology: Defines fundamental concepts and terminology for AI to ensure a common understanding across stakeholders.

3. ISO/IEC 23053:2022 – Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML): Provides a framework for AI systems that utilize machine learning, outlining processes and lifecycle stages.

4. ISO/IEC 23894:2023 – Artificial Intelligence – Guidance on Risk Management: Offers guidelines for identifying and managing risks associated with AI systems to ensure their reliability and trustworthiness.

5. ISO/IEC TR 24027:2021 – Bias in AI Systems and AI-Aided Decision Making: Addresses issues related to bias in AI systems and provides recommendations to identify, assess, and mitigate bias.

6. ISO/IEC TR 24028:2020 – Overview of Trustworthiness in Artificial Intelligence: Discusses aspects of trustworthiness in AI, including reliability, robustness, and transparency.

7. ISO/IEC TR 24368:2022 – Overview of Ethical and Societal Concerns: Explores ethical and societal implications of AI, offering guidance on addressing these concerns in AI development and deployment.

8. ISO/IEC 25059:2023 – Systems and Software Quality Requirements and Evaluation (SQuaRE) – Quality Model for AI Systems: Establishes a quality model specific to AI systems, aiding in their evaluation and assurance.

9. ISO/IEC 5338:2023 – Artificial Intelligence System Life Cycle Processes: Provides a comprehensive framework for managing AI system life cycles, ensuring consistency and quality from conception to decommissioning.

10. ISO/IEC TS 5469:2024 – Functional Safety and AI Systems: Addresses the integration of AI systems into environments where functional safety is critical, offering guidelines to ensure safety requirements are met.

FDA Guidelines

The U.S. Food and Drug Administration (FDA) has developed several guidelines to ensure the safe and effective integration of Artificial Intelligence (AI) in medical products. These guidelines provide a framework for developers and manufacturers, addressing various aspects of AI implementation. Key FDA guidelines on AI include:

1. Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations (Draft Guidance, January 2025): This draft guidance offers recommendations for marketing submissions of AI-enabled medical devices, focusing on lifecycle management and necessary documentation to demonstrate safety and effectiveness.  

2. Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products (Draft Guidance, January 2025): This document provides a risk-based framework for assessing the credibility of AI models intended to support regulatory decisions regarding the safety, effectiveness, or quality of drugs and biological products.  

3. Artificial Intelligence and Medical Products: Collaborative Efforts Across FDA Centers (March 2024): This paper outlines the FDA’s coordinated approach to AI integration across various centers, including the Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER), Center for Devices and Radiological Health (CDRH), and the Office of Combination Products (OCP).  

4. Good Machine Learning Practice for Medical Device Development: Guiding Principles (October 2021): Developed in collaboration with Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency, this document outlines guiding principles for the development of AI and machine learning in medical devices.

5. Artificial Intelligence in Drug Manufacturing (March 2023): This guidance discusses the application of AI technologies in drug manufacturing processes, focusing on ensuring product quality and consistency.

6. Distributed Manufacturing and Point-of-Care Manufacturing of Drugs (October 2022): This document explores the use of AI in distributed and point-of-care drug manufacturing, providing recommendations for maintaining compliance with regulatory standards.

These guidelines collectively aim to foster innovation while ensuring that AI applications in healthcare are safe, effective, and trustworthy.

European Guidelines

The European Union (EU) has established a comprehensive framework to regulate Artificial Intelligence (AI), aiming to ensure ethical development and deployment of AI technologies. The cornerstone of this framework is the Artificial Intelligence Act (AI Act), adopted in March 2024, which classifies AI applications based on risk levels and imposes corresponding obligations.  
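To make the risk-based structure of the AI Act concrete, the tiered scheme can be sketched as a simple classification data structure. This is an illustrative simplification only, not a legal mapping: the tier names follow the commonly cited four-level model, and the obligations listed are hypothetical summaries, not the Act's legal text.

```python
# Illustrative sketch of the AI Act's four risk tiers. The obligation
# strings below are simplified placeholders, not legal requirements.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. manipulation)
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations


# Hypothetical mapping of tiers to headline obligations, for illustration.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "risk management system",
        "human oversight",
        "technical documentation",
    ],
    RiskTier.LIMITED: ["transparency notices to users"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The key design point the Act encodes is that obligations scale with risk: an application in a higher tier inherits heavier duties, and applications in the lowest tier face no additional requirements.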

To provide clarity on the AI Act’s implementation, the European Commission has issued several guidelines:

1. Guidelines on Prohibited AI Practices: Released in February 2025, these guidelines delineate AI practices that are banned within the EU, such as AI systems that manipulate human behavior or exploit vulnerabilities.  

2. Ethics Guidelines for Trustworthy AI: Developed by the High-Level Expert Group on Artificial Intelligence in 2019, this document outlines key principles for AI systems, including human agency, privacy, transparency, and accountability.

These guidelines, alongside the AI Act, form the EU’s robust approach to fostering trustworthy and human-centric AI.

IMDRF Guidelines

The International Medical Device Regulators Forum (IMDRF) has developed several key guidelines to harmonize the regulation of Artificial Intelligence (AI) and Machine Learning (ML) in medical devices. These guidelines aim to ensure the safety, effectiveness, and quality of AI/ML-enabled medical devices across member jurisdictions. Notable IMDRF guidelines include:

1. Good Machine Learning Practice (GMLP) for Medical Device Development: Guiding Principles (IMDRF/AIML WG/N88 FINAL:2025)

Published in January 2025, this document outlines ten guiding principles for the development of AI/ML-enabled medical devices. The principles emphasize aspects such as well-defined intended use, robust software engineering practices, representative data sets, and continuous performance monitoring. The goal is to promote the development of safe, effective, and high-quality AI-enabled medical devices.  

2. Machine Learning-enabled Medical Devices: Key Terms and Definitions (IMDRF/AIMD WG/N67 FINAL:2022)

Released in May 2022, this document provides standardized terminology for machine learning-enabled medical devices. It aims to facilitate a common understanding among regulators, manufacturers, and stakeholders, thereby supporting harmonized regulatory approaches.  

These guidelines reflect IMDRF’s commitment to fostering international collaboration and consistency in the regulation of AI/ML technologies in healthcare. By adhering to these principles, stakeholders can contribute to the development of trustworthy and effective AI-enabled medical devices.
