The Defensible AI Platform


EU AI Act Article 6: High-Risk Classification

Which AI systems are classified as high-risk under the EU AI Act? What obligations apply? A practical guide to Article 6, Annex III categories, and the compliance requirements that take effect in 2026.

Reading time: 13 minutes | Category: EU AI Act

Note: This article covers Regulation (EU) 2024/1689 as published. The European Commission may issue delegated acts and guidance that refine these classifications. This is educational content, not legal advice.

The EU AI Act Risk Pyramid

The EU AI Act classifies AI systems into four risk levels, each with different regulatory requirements:

Risk Level | Regulatory Treatment | Examples
PROHIBITED | Banned entirely (Article 5) | Social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), manipulative AI that exploits vulnerabilities
HIGH-RISK | Permitted with strict obligations (Articles 6–15) | AI in healthcare, employment, credit scoring, law enforcement, education, critical infrastructure
LIMITED RISK | Transparency obligations only (Article 50) | Chatbots (must disclose they're AI), deepfake generators, emotion recognition systems
MINIMAL RISK | No specific obligations | Spam filters, AI in video games, inventory management

Article 6: The High-Risk Classification Rules

Article 6 establishes two pathways for an AI system to be classified as high-risk:

Pathway 1: Product Safety Legislation (Article 6(1))

An AI system is high-risk if it is a safety component of a product (or is itself a product) covered by the EU harmonisation legislation listed in Annex I, and that product must undergo third-party conformity assessment. This covers:

  • Medical devices and in-vitro diagnostic devices
  • Machinery and lifts
  • Toys
  • Marine equipment
  • Civil aviation security
  • Motor vehicles and their trailers
  • Radio equipment

Pathway 2: Annex III Stand-Alone Systems (Article 6(2))

An AI system is high-risk if it falls into one of the categories listed in Annex III. These are the categories most relevant to enterprise decision intelligence:

Annex III: The Eight High-Risk Categories

1. Biometric Identification and Categorisation

"Real-time" and "post" (retrospective) remote biometric identification systems, along with biometric categorisation based on sensitive attributes and emotion recognition systems. This includes facial recognition, fingerprint matching, and other biometric systems used to identify individuals in public or semi-public spaces.

2. Critical Infrastructure Management

AI used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity. If your AI controls or monitors critical systems where failure could endanger public safety, it's high-risk.

3. Education and Vocational Training

AI systems that determine access to education, evaluate learning outcomes, assess appropriate levels of education, or monitor cheating during examinations. Automated grading systems and student performance prediction tools fall here.

4. Employment, Workers Management, and Access to Self-Employment

AI used for recruitment (resume screening, interview analysis), promotion decisions, task allocation, performance monitoring, or termination decisions. This is one of the broadest categories and catches many HR-tech AI tools.

5. Essential Private and Public Services

AI that evaluates creditworthiness, sets risk premiums for life and health insurance, evaluates eligibility for public benefits, dispatches emergency services, or determines credit scores. Financial services AI is heavily impacted.

6. Law Enforcement

AI used for risk assessment (recidivism), polygraph and deception detection, evidence reliability assessment, crime prediction (predictive policing), and profiling.

7. Migration, Asylum, and Border Control

AI for polygraph use in immigration, document authenticity assessment, application evaluation, and migration monitoring and forecasting.

8. Administration of Justice and Democratic Processes

AI used to assist judicial authorities in researching and interpreting facts and law, and in applying the law to facts. AI used in electoral processes also falls here.

The Article 6(3) Exception

Important: Article 6(3) provides an exception. An Annex III AI system is NOT high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. Specifically, it is excluded if the AI:

  • performs a narrow procedural task,
  • improves the result of a previously completed human activity,
  • detects decision-making patterns or deviations from them without replacing or influencing the human assessment, or
  • performs a preparatory task for an assessment listed in Annex III.

However, an AI system that profiles natural persons is always considered high-risk, regardless of this exception. Providers who rely on the exception must document their assessment before placing the system on the market (Article 6(4)).
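Taken together, the two pathways and the exception form a small decision procedure. Here is a minimal sketch of that logic in Python; the category and flag names are our own shorthand, and a real determination requires legal analysis, not a boolean function:

```python
# Simplified model of the Article 6 decision logic described above.
# Educational sketch only; identifiers are our shorthand, not the Act's.

ANNEX_III = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border",
    "justice_democracy",
}

EXCEPTION_GROUNDS = {  # the four Article 6(3) carve-outs
    "narrow_procedural_task",
    "improves_prior_human_activity",
    "detects_patterns_without_replacing_assessment",
    "preparatory_task",
}

def is_high_risk(
    annex_i_safety_component: bool,        # Pathway 1: Article 6(1)
    third_party_assessment_required: bool,
    annex_iii_category: str | None,        # Pathway 2: Article 6(2)
    profiles_natural_persons: bool,
    claimed_exceptions: set[str],
) -> bool:
    # Pathway 1: safety component of (or itself) an Annex I product
    # that must undergo third-party conformity assessment.
    if annex_i_safety_component and third_party_assessment_required:
        return True
    # Pathway 2: listed in Annex III...
    if annex_iii_category in ANNEX_III:
        if profiles_natural_persons:       # profiling is always high-risk
            return True
        if claimed_exceptions & EXCEPTION_GROUNDS:
            return False                   # Article 6(3) exception applies
        return True
    return False
```

A resume-screening tool, for example, would come back high-risk via the `employment` category unless one of the Article 6(3) grounds genuinely applies.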

High-Risk AI Obligations (Articles 8–15)

If your AI system is classified as high-risk, you must comply with these requirements:

Article | Requirement | What It Means in Practice
Art. 9 | Risk Management System | Continuous risk identification, analysis, mitigation, and testing throughout the AI lifecycle
Art. 10 | Data Governance | Training, validation, and testing data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Bias examination required.
Art. 11 | Technical Documentation | Detailed documentation of the AI system, drawn up before it is placed on the market
Art. 12 | Record-Keeping | Automatic logging of events during operation, with traceability throughout the lifecycle
Art. 13 | Transparency | Instructions for use that enable deployers to interpret outputs and use the system appropriately
Art. 14 | Human Oversight | Designed to allow effective human oversight, including the ability to override or stop the AI
Art. 15 | Accuracy, Robustness, Cybersecurity | Appropriate levels of accuracy, resilience against errors and adversarial attacks, and cybersecurity measures
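Compliance teams often operationalize these requirements as a machine-readable control map that gap analysis can run against. A minimal sketch, with control identifiers entirely our own invention rather than anything from the Regulation:

```python
# Hypothetical mapping from EU AI Act articles to internal controls.
CONTROL_MAP: dict[str, list[str]] = {
    "Art. 9":  ["risk_register", "lifecycle_testing"],
    "Art. 10": ["dataset_lineage", "bias_audit"],
    "Art. 11": ["tech_docs_pack"],
    "Art. 12": ["event_logging", "trace_retention"],
    "Art. 13": ["deployer_instructions"],
    "Art. 14": ["human_approval_gate", "kill_switch"],
    "Art. 15": ["accuracy_slo", "red_team", "security_review"],
}

def missing_controls(implemented: set[str]) -> dict[str, list[str]]:
    """Return, per article, the controls not yet in place."""
    return {
        article: [c for c in controls if c not in implemented]
        for article, controls in CONTROL_MAP.items()
        if any(c not in implemented for c in controls)
    }
```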

Penalties for Non-Compliance

  • Prohibited AI practices: Up to 35 million EUR or 7% of total worldwide annual turnover, whichever is higher
  • High-risk obligations: Up to 15 million EUR or 3% of total worldwide annual turnover, whichever is higher
  • Incorrect information to authorities: Up to 7.5 million EUR or 1% of total worldwide annual turnover, whichever is higher
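Because the cap is the higher of the two figures, the percentage dominates for large firms. A quick illustration (the function is ours, not from the Act):

```python
def penalty_cap(fixed_eur: float, pct: float, worldwide_turnover_eur: float) -> float:
    """The applicable cap is the higher of a fixed amount or a share of turnover."""
    return max(fixed_eur, pct * worldwide_turnover_eur)

# A firm with EUR 2 billion turnover engaging in a prohibited practice:
penalty_cap(35e6, 0.07, 2e9)  # -> 140000000.0, i.e. EUR 140 million
```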

Timeline: When Does This Apply?

  • February 2, 2025: Prohibited AI practices provisions apply
  • August 2, 2025: Obligations for general-purpose AI models apply
  • August 2, 2026: Most high-risk AI obligations apply (Annex III systems under Article 6(2))
  • August 2, 2027: Remaining high-risk obligations apply (Annex I product safety systems under Article 6(1))

How Decision Intelligence Platforms Address High-Risk Requirements

Enterprise decision intelligence platforms that deploy AI for governance, compliance, or advisory purposes may fall under Annex III categories — particularly Category 5 (essential services) and Category 8 (administration of justice). A well-designed platform addresses the Article 8–15 obligations natively:

  • Record-keeping (Art. 12): Immutable audit trails with cryptographic integrity; every decision logged with a full evidence chain (see the sketch after this list)
  • Transparency (Art. 13): Multi-agent deliberation with per-agent reasoning — not a black box, but a documented debate
  • Human oversight (Art. 14): Recommendations only, never autonomous action — humans approve, override, or reject every decision
  • Robustness (Art. 15): Adversarial red-team testing, bias monitoring, and continuous compliance scanning
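One common way to get tamper-evident, append-only logs of this kind is a hash chain, where each entry commits to its predecessor. A minimal sketch of the generic pattern (not Datacendia's actual implementation):

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained decision log: one way to approach Art. 12."""

    GENESIS = "0" * 64

    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = self.GENESIS

    def record(self, event: dict) -> str:
        """Append an event; its hash covers the previous entry's hash."""
        entry = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._entries.append(entry)
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute every link; any tampering breaks the chain."""
        prev = self.GENESIS
        for e in self._entries:
            body = {k: e[k] for k in ("timestamp", "event", "prev_hash")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```

Because each entry's hash covers its predecessor, altering or deleting any past record invalidates every later link, which is what makes the trail usable as audit evidence.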

Frequently Asked Questions

Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act applies to providers who place AI systems on the EU market or put them into service in the EU, regardless of where the provider is established. It also applies to deployers of AI systems located within the EU, and to providers/deployers outside the EU whose AI system output is used in the EU. Like GDPR, it has extraterritorial reach.
Is a decision support tool high-risk?
It depends on what decisions it supports. A tool that helps with marketing copy is minimal risk. A tool that assists with credit scoring, employment decisions, or healthcare diagnoses likely falls under Annex III. The Article 6(3) exception may apply if the AI only performs preparatory or narrow procedural tasks — but this must be carefully assessed.
What's the difference between a provider and a deployer?
A provider develops or has an AI system developed and places it on the market or puts it into service under its own name. A deployer uses an AI system under its own authority. Both have obligations, but providers bear the primary compliance burden for high-risk systems (conformity assessment, technical documentation, CE marking). Deployers must ensure proper use, human oversight, and input data quality.

EU AI Act Compliance Built In

Datacendia maps to EU AI Act Articles 9–15 with immutable audit trails, human oversight controls, adversarial testing, and automated compliance monitoring.

See Trust Center →