5.7 Governance and Compliance Regulations for AI Systems

Emerging AI Compliance Standards
  • ISO/IEC 42001 and ISO/IEC 23894 (both published in 2023) focus on assessing and managing AI risks: ISO/IEC 42001 specifies an AI management system, while ISO/IEC 23894 provides guidance on AI risk management.
  • Together, these standards provide a framework for addressing AI-related risks and promoting responsible AI practices.
EU AI Act
  • The first comprehensive AI regulation, introduced by the EU.
  • AI applications are categorized by risk level:
      • Unacceptable risk: banned applications (e.g., social scoring, emotion recognition in workplaces and schools).
      • High risk: subject to strict legal requirements (e.g., CV-scanning tools used in hiring).
      • Low risk: largely unregulated.
  • High-risk AI systems must have a risk management system, data governance, and documented compliance.
Global Influence of EU Regulations
  • EU regulations often become global standards (e.g., GDPR).
  • Organizations should aim to meet EU regulations even if not serving EU citizens.
AI Risk Management Framework (RMF) by NIST
  • Provides guidance for managing AI risks and promoting trustworthy AI development.
  • Four key functions: Govern, Map, Measure, Manage.
  • A voluntary framework for organizations designing, developing, or deploying AI systems.
Risk Estimation in AI
  • AI risk is estimated by multiplying the likelihood of an event by its severity (impact), as shown in the sketch after this list.
  • Inherent risk: the risk present before any security controls are applied; it is reduced by adding controls.
  • Residual risk: the risk that remains after mitigations are in place; the highest residual risk determines the overall risk level.
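
A minimal sketch of the likelihood × severity calculation; the 1–5 scales, the example threats, and the control-effectiveness values are illustrative assumptions rather than values taken from any standard:

    # Minimal sketch of likelihood x severity risk scoring.
    # The 1-5 scales, example threats, and control-effectiveness values
    # are illustrative assumptions, not taken from any standard.

    def risk_score(likelihood: int, severity: int) -> int:
        """Risk = likelihood (1-5) x severity (1-5), giving a score of 1-25."""
        return likelihood * severity

    # Inherent risk is assessed before any controls are applied.
    threats = {
        "training-data poisoning": {"likelihood": 3, "severity": 5},
        "biased model outputs":    {"likelihood": 4, "severity": 4},
    }

    # Assumed reduction in likelihood once security controls are in place.
    control_effect = {
        "training-data poisoning": 2,
        "biased model outputs":    2,
    }

    residual_scores = []
    for name, threat in threats.items():
        inherent = risk_score(threat["likelihood"], threat["severity"])
        mitigated_likelihood = max(1, threat["likelihood"] - control_effect[name])
        residual = risk_score(mitigated_likelihood, threat["severity"])
        residual_scores.append(residual)
        print(f"{name}: inherent={inherent}, residual={residual}")

    # The highest residual risk determines the overall risk level.
    print("overall risk level:", max(residual_scores))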
Algorithmic Accountability Act (U.S.)
  • Proposed U.S. legislation that aims to assess AI system impacts, create transparency, and protect consumers.
  • Seeks to ensure consumers understand how AI affects decisions made about them (e.g., loan rejections).
  • Calls for transparency in AI models, including explainability of outputs.
Explainability in AI
  • It’s important to understand how an AI system reaches its conclusions (known as explainability).
  • Model-agnostic approach: explain a model by examining its input/output behavior rather than its internals.
  • Use interpretable algorithms (e.g., decision trees) whose decision logic can be inspected directly, as in the sketch below.
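
As a brief illustration of the interpretable-algorithm approach (assuming scikit-learn is available; the Iris dataset and the depth limit are arbitrary choices for the example), a shallow decision tree can be trained and its learned rules printed directly:

    # Sketch of an interpretable model: a shallow decision tree whose
    # learned if/else rules can be printed and read directly.
    # Assumes scikit-learn is installed; the Iris dataset is only an example.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(data.data, data.target)

    # export_text renders the decision rules, so each prediction can be
    # traced to explicit feature thresholds.
    print(export_text(tree, feature_names=list(data.feature_names)))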
Bias Removal in AI
  • AI models must be tested for biases in their outcomes and in their training data.
  • Tools like Amazon SageMaker Clarify help detect and monitor for biases in AI systems (see the sketch below).
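
A rough, tool-agnostic sketch of what a pre-training bias check looks like; the tiny dataset is made up, and the two metrics shown (class imbalance and difference in proportions of labels) are examples of the kind of measures that tools such as SageMaker Clarify report:

    # Illustrative-only sketch of two common pre-training bias metrics
    # (class imbalance and difference in proportions of positive labels)
    # computed on a made-up dataset. Real workloads would typically use a
    # dedicated tool such as Amazon SageMaker Clarify.

    # Each record: (sensitive_group, label), where label 1 = favorable outcome.
    records = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0),
    ]

    n_a = sum(1 for group, _ in records if group == "group_a")
    n_b = sum(1 for group, _ in records if group == "group_b")
    pos_a = sum(label for group, label in records if group == "group_a")
    pos_b = sum(label for group, label in records if group == "group_b")

    # Class imbalance: how unevenly the two groups are represented (-1 to 1).
    class_imbalance = (n_a - n_b) / (n_a + n_b)

    # Difference in proportions of labels: gap in favorable-outcome rates.
    dpl = pos_a / n_a - pos_b / n_b

    print(f"class imbalance: {class_imbalance:.2f}")
    print(f"difference in proportions of labels: {dpl:.2f}")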
