The Defensible AI Platform
Which AI systems are classified as high-risk under the EU AI Act? What obligations apply? A practical guide to Article 6, the Annex III categories, and the compliance requirements that take effect in August 2026.
Note: This article covers Regulation (EU) 2024/1689 as published. The European Commission may issue delegated acts and guidance that refine these classifications. This is educational content, not legal advice.
The EU AI Act classifies AI systems into four risk levels, each with different regulatory requirements:
| Risk Level | Regulatory Treatment | Examples |
|---|---|---|
| PROHIBITED | Banned entirely (Article 5) | Social scoring, real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions), manipulative AI that exploits vulnerabilities |
| HIGH-RISK | Permitted with strict obligations (Articles 6–15) | AI in healthcare, employment, credit scoring, law enforcement, education, critical infrastructure |
| LIMITED RISK | Transparency obligations only (Article 50) | Chatbots (must disclose they're AI), generators of deepfakes and other synthetic content (outputs must be labeled) |
| MINIMAL RISK | No specific obligations | Spam filters, AI in video games, inventory management |
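To make the four tiers concrete in code, here is a minimal Python sketch of how an internal screening tool might represent them. The RiskLevel enum and the example mapping are our own illustration, not terminology from the regulation:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four EU AI Act risk tiers (Regulation (EU) 2024/1689)."""
    PROHIBITED = "prohibited"      # Article 5: banned entirely
    HIGH_RISK = "high_risk"        # Articles 6-15: strict obligations
    LIMITED_RISK = "limited_risk"  # Article 50: transparency duties
    MINIMAL_RISK = "minimal_risk"  # no specific obligations

# Illustrative examples only; placing a real system in a tier
# requires the Article 6 analysis described below.
EXAMPLE_TIERS = {
    "government_social_scoring": RiskLevel.PROHIBITED,
    "resume_screening": RiskLevel.HIGH_RISK,
    "customer_service_chatbot": RiskLevel.LIMITED_RISK,
    "spam_filter": RiskLevel.MINIMAL_RISK,
}
```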
Article 6 establishes two pathways for an AI system to be classified as high-risk:
Pathway 1 (Article 6(1)): safety components of regulated products. An AI system is high-risk if it is a safety component of a product covered by the EU harmonized legislation listed in Annex I, and that product requires third-party conformity assessment. This covers products such as machinery, medical devices, in vitro diagnostics, toys, lifts, and motor vehicles.
Pathway 2 (Article 6(2)): standalone systems listed in Annex III. An AI system is high-risk if it falls into one of the eight categories listed in Annex III. These are the categories most relevant to enterprise decision intelligence (a screening sketch in code follows the list):
1. Biometrics. Real-time and "post" (retrospective) remote biometric identification systems. This includes facial recognition, fingerprint matching, and other biometric systems used to identify individuals in public or semi-public spaces; biometric categorization and emotion recognition systems also sit in this category.
2. Critical infrastructure. AI used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity. If your AI controls or monitors critical systems where failure could endanger public safety, it's high-risk.
3. Education and vocational training. AI systems that determine access to education, evaluate learning outcomes, assess the appropriate level of education a person will receive, or monitor cheating during examinations. Automated grading systems and student performance prediction tools fall here.
4. Employment and workers management. AI used for recruitment (resume screening, interview analysis), promotion and termination decisions, task allocation, and performance monitoring. This is one of the broadest categories and catches many HR-tech AI tools.
5. Access to essential private and public services. AI that evaluates creditworthiness or establishes credit scores, prices risk in life and health insurance, evaluates eligibility for public benefits, or dispatches and triages emergency services. Financial services AI is heavily affected.
6. Law enforcement. AI used to assess the risk of offending or re-offending (recidivism), for polygraphs and deception detection, evidence reliability assessment, predictive policing, and profiling.
7. Migration, asylum, and border control. AI used for polygraphs in immigration contexts, document authenticity assessment, examination of applications for asylum, visas, and residence permits, and migration monitoring and forecasting.
8. Administration of justice and democratic processes. AI used to assist judicial authorities in researching and interpreting facts and law, and in applying the law to facts. AI intended to influence the outcome of elections or voting behavior also falls here.
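For teams triaging a portfolio of AI systems, the eight categories can be encoded as a first-pass screening helper, as shown in the sketch referenced above. This is a minimal sketch under our own assumptions: the tag vocabulary and the TAG_TO_CATEGORY mapping are hypothetical, and keyword matching is no substitute for legal review:

```python
from enum import IntEnum

class AnnexIIICategory(IntEnum):
    """The eight Annex III categories, numbered as in the list above."""
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_BORDER_CONTROL = 7
    JUSTICE_AND_DEMOCRACY = 8

# Hypothetical tag vocabulary; a real screening tool would work from a
# reviewed questionnaire, not free-form keyword tags.
TAG_TO_CATEGORY = {
    "facial_recognition": AnnexIIICategory.BIOMETRICS,
    "power_grid_control": AnnexIIICategory.CRITICAL_INFRASTRUCTURE,
    "exam_proctoring": AnnexIIICategory.EDUCATION,
    "resume_screening": AnnexIIICategory.EMPLOYMENT,
    "credit_scoring": AnnexIIICategory.ESSENTIAL_SERVICES,
    "recidivism_scoring": AnnexIIICategory.LAW_ENFORCEMENT,
    "visa_application_triage": AnnexIIICategory.MIGRATION_BORDER_CONTROL,
    "judicial_research_assistant": AnnexIIICategory.JUSTICE_AND_DEMOCRACY,
}

def annex_iii_categories(use_case_tags: set[str]) -> set[AnnexIIICategory]:
    """Return every Annex III category a tagged use case touches."""
    return {TAG_TO_CATEGORY[t] for t in use_case_tags if t in TAG_TO_CATEGORY}
```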
Important: Article 6(3) provides an exception. An Annex III AI system is NOT high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. Specifically, it's excluded if the AI performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs a preparatory task for an assessment listed in Annex III. However, AI systems that profile natural persons are always considered high-risk regardless of this exception.
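Because the Article 6(3) test is rule-shaped, it lends itself to an explicit checklist. The sketch below encodes the four exception grounds and the profiling override; the class, field, and function names are our own shorthand for the statutory criteria, and a provider relying on the exception must still document the assessment and register the system (Articles 6(4) and 49(2)):

```python
from dataclasses import dataclass

@dataclass
class Article63Assessment:
    """Inputs to the Article 6(3) derogation check (field names are ours)."""
    narrow_procedural_task: bool                      # Art. 6(3)(a)
    improves_completed_human_activity: bool           # Art. 6(3)(b)
    detects_patterns_without_replacing_review: bool   # Art. 6(3)(c)
    preparatory_task_only: bool                       # Art. 6(3)(d)
    profiles_natural_persons: bool                    # overrides every ground

def annex_iii_system_is_high_risk(a: Article63Assessment) -> bool:
    """Apply the Article 6(3) exception to a system that matched Annex III."""
    if a.profiles_natural_persons:
        return True  # profiling of natural persons is always high-risk
    exception_applies = (
        a.narrow_procedural_task
        or a.improves_completed_human_activity
        or a.detects_patterns_without_replacing_review
        or a.preparatory_task_only
    )
    return not exception_applies

def is_high_risk(
    annex_i_safety_component: bool,                # pathway 1 (Art. 6(1))
    annex_iii_match: bool,                         # pathway 2 (Art. 6(2))
    assessment: Article63Assessment | None = None,
) -> bool:
    """Combine both Article 6 pathways; a simplified sketch, not legal logic."""
    if annex_i_safety_component:
        return True
    if annex_iii_match and assessment is not None:
        return annex_iii_system_is_high_risk(assessment)
    return annex_iii_match
```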
If your AI system is classified as high-risk, you must comply with these requirements:
| Article | Requirement | What It Means in Practice |
|---|---|---|
| Art. 9 | Risk Management System | Continuous risk identification, analysis, mitigation, and testing throughout the AI lifecycle |
| Art. 10 | Data Governance | Training, validation, and testing data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Bias examination required. |
| Art. 11 | Technical Documentation | Detailed documentation of the AI system before it's placed on the market |
| Art. 12 | Record-Keeping | Automatic logging of events during operation, with traceability throughout the lifecycle |
| Art. 13 | Transparency | Instructions for use that enable deployers to interpret outputs and use the system appropriately |
| Art. 14 | Human Oversight | Designed to allow effective human oversight, including the ability to override or stop the AI |
| Art. 15 | Accuracy, Robustness, Cybersecurity | Appropriate levels of accuracy, resilience against errors and adversarial attacks, cybersecurity measures |
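Of these, Articles 12 and 14 translate most directly into engineering work. As one illustration (a sketch, not a compliance recipe), a hash-chained, append-only event log makes post-hoc tampering with operational records detectable; the AuditLog class and its fields are our own construction:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained event log.

    A minimal sketch of the tamper-evident record-keeping Article 12
    points toward; a production system would also need durable storage,
    retention policies, and access controls.
    """

    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis hash

    def record(self, event: str, detail: dict) -> dict:
        """Append an event; its hash covers the previous entry's hash."""
        entry = {
            "timestamp": time.time(),
            "event": event,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks verification."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A human override under Article 14 can be recorded through the same log, so oversight actions remain traceable alongside the decisions they affect.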
Enterprise decision intelligence platforms that deploy AI for governance, compliance, or advisory purposes may fall under Annex III categories, particularly Category 5 (essential services) and Category 8 (administration of justice). A well-designed platform addresses the Articles 9–15 obligations natively:
Datacendia maps to EU AI Act Articles 9–15 with immutable audit trails, human oversight controls, adversarial testing, and automated compliance monitoring.
See Trust Center →