Understanding the differences among Artificial Intelligence (AI) guidelines and regulations.

The National Institute of Standards and Technology (NIST) recently published Assessing Risks and Impacts of AI (ARIA), following the Executive Order on trustworthy AI, which calls for testing AI systems and assessing their impact on society. NIST's AI Risk Management Framework (AI RMF) offers guidance on AI risk management, governance and ethics, and ARIA expands on the AI RMF to help organisations operationalise it. In contrast, the European Union's AI Act is a mandatory regulatory framework: compliance is obligatory, and non-compliance carries significant penalties. NIST's AI RMF and ARIA, on the other hand, are voluntary guidelines that are not legally binding; organisations can choose whether or not to implement them.

These frameworks are often confused with one another, so we take this opportunity to distil the subtle differences and similarities amongst them. The tables below summarise what each offers and how they differ.

First, we provide a table summarising the key differences between the NIST AI RMF and the EU AI Act:

| Feature | NIST AI RMF | EU AI Act |
| --- | --- | --- |
| Purpose | Guidelines for risk management and ethical considerations in AI | Comprehensive legal framework for AI regulation |
| Focus | Risk management, ethical AI development | Risk-based regulation, compliance requirements |
| Scope | Applicable globally across various sectors and AI applications | Applies to organisations operating in or targeting the EU market |
| Legal Implications | Voluntary guidelines | Legally binding with significant penalties for non-compliance |
| Risk Management | Identifying, evaluating, and mitigating AI risks; continuous monitoring and updates | Comprehensive risk management for high-risk AI systems, covering health, safety, and fundamental rights |
| Governance | Promotes ethical principles, transparency, and accountability | Strong regulatory approach with specific provisions to prevent unacceptable risks |
| Compliance | Voluntary | Mandatory for specified AI systems |
| Flexibility | Highly adaptable to various organisational needs | Specific requirements based on risk categories |
| Geographical Relevance | Global | EU Member States |
| Primary Audience | Organisations of all sizes and industries | Companies developing or marketing AI systems in the EU |

Secondly, we provide a comparison of all three frameworks:

| Feature | NIST AI RMF | NIST ARIA | EU AI Act |
| --- | --- | --- | --- |
| Purpose | Guidelines for managing AI risks and ethical considerations | Assess societal risks and impacts of AI in realistic settings | Comprehensive legal framework for AI regulation |
| Focus | Risk management, ethical AI development | Sociotechnical testing and evaluation of AI systems | Risk-based regulation, compliance requirements |
| Scope | Applicable globally across various sectors and AI applications | Global, focuses on societal context | Applies to organisations in the EU or targeting the EU market |
| Legal Implications | Voluntary guidelines | Voluntary, supports NIST AI RMF and U.S. AI Safety Institute | Legally binding with significant penalties for non-compliance |
| Risk Management | Identifying, evaluating, and mitigating AI risks | Understanding AI impact in real-world scenarios | Comprehensive risk management for high-risk AI systems |
| Governance | Promotes ethical principles, transparency, and accountability | Promotes responsible and ethical AI use | Strong regulatory approach, specific provisions to prevent unacceptable risks |
| Compliance | Voluntary | Voluntary | Mandatory for specified AI systems |
| Flexibility | Highly adaptable to various organisational needs | Adaptable to various organisational needs | Specific requirements based on risk categories |
| Geographical Relevance | Global | Global | EU Member States |
| Primary Audience | Organisations of all sizes and industries | Organisations using AI systems | Companies developing or marketing AI systems in the EU |


Article Author: Dr Cyril Onwubiko, CISO, Research Series Limited, London, UK

Date: 1st June 2024