EU AI Act Compliance Guide
The EU AI Act is the world's first comprehensive AI regulation. This guide covers risk classifications, compliance requirements, enforcement timelines, and penalties — everything enterprises need to prepare for the new regulatory landscape.
EU AI Act: Regulation (EU) 2024/1689 establishing harmonized rules on artificial intelligence. It classifies AI systems into risk categories (unacceptable, high, limited, minimal) and imposes corresponding obligations. Penalties reach €35 million or 7% of global turnover. Full enforcement begins August 2027.
What is the EU AI Act?
The EU AI Act is the world's first comprehensive legal framework specifically regulating artificial intelligence. Adopted by the European Parliament in March 2024, it entered into force on August 1, 2024 and establishes a risk-based approach to AI governance.
Key principles:
- Risk-based regulation — Requirements scale with potential harm
- Technology-neutral — Applies to all AI techniques, not specific technologies
- Extraterritorial scope — Applies to AI placed on the EU market or whose output is used in the EU, wherever the provider is based
- Human oversight — High-risk AI must enable human intervention
- Transparency — Users must know when they're interacting with AI
Note: The EU AI Act is now in force. Prohibited practices became enforceable February 2, 2025. High-risk AI requirements become enforceable August 2, 2026. Companies operating in the EU must act now.
EU AI Act Implementation Timeline
| Date | Milestone |
|---|---|
| August 1, 2024 | Act enters into force |
| February 2, 2025 | Bans on unacceptable-risk (prohibited) AI practices apply |
| August 2, 2025 | GPAI (foundation model) obligations apply |
| August 2, 2026 | High-risk AI system requirements apply |
| August 2027 | Full enforcement |
How Does the EU AI Act Classify Risk?
The Act uses a four-tier risk classification:
| Risk Level | Examples | Obligations |
|---|---|---|
| Unacceptable Risk (Prohibited) | Social scoring, real-time biometric ID in public (with exceptions), subliminal manipulation, exploitation of vulnerabilities | Banned outright |
| High Risk | Credit scoring, employment decisions, critical infrastructure, education access, law enforcement, migration | Full compliance: risk management, data governance, transparency, human oversight, accuracy, robustness, cybersecurity |
| Limited Risk | Chatbots, emotion recognition, deepfake generators | Transparency: users must be informed they're interacting with AI |
| Minimal Risk | Spam filters, AI in video games, inventory management | No specific obligations (voluntary codes of conduct encouraged) |
What Are High-Risk AI Systems?
Annex III of the EU AI Act specifies high-risk use cases. If your AI system falls into these categories, you must comply with extensive requirements:
| Domain | High-Risk Applications |
|---|---|
| Biometrics | Remote biometric identification, biometric categorization, emotion recognition (note: emotion recognition in workplace and education settings is prohibited outright under Article 5) |
| Critical Infrastructure | Safety components for road traffic, water/gas/electricity supply, digital infrastructure |
| Education | Admissions decisions, learning assessment, proctoring, adaptive learning that affects outcomes |
| Employment | CV screening, hiring decisions, task allocation, performance monitoring, promotion/termination |
| Essential Services | Credit scoring, insurance pricing, emergency services dispatch, benefit eligibility |
| Law Enforcement | Risk assessment, polygraphs, evidence evaluation, profiling, crime prediction |
| Migration & Border | Visa/asylum application assessment, security risk assessment, document verification |
| Justice & Democracy | AI assisting courts in researching and interpreting facts and law, sentencing support, AI intended to influence elections or voting behavior |
What Are the Requirements for High-Risk AI?
High-risk AI systems must satisfy these requirements before being placed on the EU market:
1. Risk Management System (Article 9)
- Establish and maintain a risk management system throughout AI lifecycle
- Identify and analyze known and foreseeable risks
- Estimate and evaluate risks from intended use and misuse
- Adopt risk mitigation measures
- Test to ensure residual risk is acceptable
2. Data Governance (Article 10)
- Training data must be relevant, representative, and, to the best extent possible, free of errors
- Examine data for biases that could affect health, safety, or fundamental rights
- Document data sources, preparation methods, and assumptions
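Article 10 does not prescribe tooling, but in practice the bias examination often starts by profiling the training set. Below is a minimal sketch assuming a pandas DataFrame; the column names ("gender", "approved"), the metric, and any threshold you act on are illustrative assumptions, not requirements from the Act.
```python
import pandas as pd

def representativeness_report(df: pd.DataFrame, protected: str, label: str) -> pd.DataFrame:
    """Group sizes and positive-outcome rates per protected group (illustrative)."""
    report = df.groupby(protected)[label].agg(count="count", positive_rate="mean")
    report["share_of_data"] = report["count"] / len(df)
    return report

# Hypothetical credit-decision training data.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [0,   1,   0,   1,   1,   0,   1,   1],
})
print(representativeness_report(df, protected="gender", label="approved"))
# Large gaps in positive_rate or share_of_data are the kind of bias
# Article 10 says you must examine and document.
```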
3. Technical Documentation (Article 11)
- General description of the AI system
- Detailed description of development process
- Information about monitoring, functioning, and control
- Risk management documentation
- Standards applied
4. Record-Keeping / Logging (Article 12)
- Automatic logging of events while system operates
- Logs must enable tracing of system operation
- Retention period appropriate to purpose
- Logs must facilitate post-market monitoring
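Article 12 leaves the log format to the provider. The sketch below shows one common approach using Python's standard logging module and JSON lines; the field names and file destination are illustrative assumptions.
```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_audit.log"))  # append-only/WORM storage in production

def log_inference(model_id: str, model_version: str, inputs: dict, output: str) -> str:
    """Write one traceable event per model decision; returns the event ID."""
    event_id = str(uuid.uuid4())
    logger.info(json.dumps({
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,   # or a hash/reference, to limit stored personal data
        "output": output,
    }))
    return event_id

event_id = log_inference("credit-scorer", "2.3.1", {"applicant_id": "A-1042"}, "declined")
```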
5. Transparency (Article 13)
- Clear instructions for deployers
- Information about capabilities and limitations
- Expected accuracy levels and known failure conditions
- Human oversight measures needed
6. Human Oversight (Article 14)
- Enable human oversight during operation
- Allow humans to understand system capabilities and limitations
- Enable humans to correctly interpret outputs
- Allow humans to override or interrupt the system
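What "enable human oversight" looks like in code varies by system; one recurring pattern is a review gate that routes low-confidence or high-impact outputs to a person who can accept, override, or halt. A minimal sketch, with an assumed confidence threshold and reviewer callback:
```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelDecision:
    output: str
    confidence: float
    rationale: str  # Article 14: helps the reviewer interpret the output

REVIEW_THRESHOLD = 0.85  # assumed deployment policy, not an Act requirement

def decide(decision: ModelDecision, review: Callable[[ModelDecision], str]) -> str:
    """Route uncertain outputs to a human who can confirm or override."""
    if decision.confidence < REVIEW_THRESHOLD:
        return review(decision)  # human may return a different outcome or halt
    return decision.output

def human_review(decision: ModelDecision) -> str:
    # In a real system this enqueues the case for a trained reviewer.
    print(f"Review: model proposed {decision.output!r} because {decision.rationale}")
    return "approve"  # reviewer overrides the borderline rejection

print(decide(ModelDecision("reject", 0.61, "income below cutoff"), human_review))
```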
7. Accuracy, Robustness, Cybersecurity (Article 15)
- Achieve appropriate levels of accuracy for intended purpose
- Be resilient to errors, faults, and inconsistencies
- Be robust against attempts to alter outputs or behavior
- Implement cybersecurity measures appropriate to risks
What About General-Purpose AI (GPAI)?
The EU AI Act includes special rules for General-Purpose AI models (like GPT-4, Claude, Gemini):
| Requirement | Standard GPAI | GPAI with Systemic Risk |
|---|---|---|
| Technical documentation | Required | Required |
| Copyright compliance info | Required | Required |
| Summary of training data | Required | Required |
| Model evaluation | Basic | Adversarial testing, red teaming |
| Systemic risk assessment | Not required | Required |
| Incident reporting | Not required | Required |
| Cybersecurity measures | Basic | Enhanced |
Systemic risk threshold: GPAI trained with more than 10^25 FLOPs of compute is presumed to pose systemic risk, a threshold that frontier models such as GPT-4, Claude 3, and Gemini Ultra are generally estimated to exceed.
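To see where a model lands against that presumption, a common back-of-envelope estimate is roughly 6 x parameters x training tokens FLOPs for dense transformer training. The heuristic and the example figures below are assumptions, not part of the Act:
```python
SYSTEMIC_RISK_FLOPS = 1e25  # the Act's presumption threshold

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# Hypothetical 70B-parameter model trained on 15T tokens.
compute = training_flops(n_params=70e9, n_tokens=15e12)
print(f"{compute:.1e} FLOPs; systemic risk presumed: {compute > SYSTEMIC_RISK_FLOPS}")
# 6.3e+24 FLOPs; systemic risk presumed: False
```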
What Are the Penalties?
The EU AI Act has significant penalties, similar in scale to GDPR:
| Violation | Maximum Penalty |
|---|---|
| Prohibited AI practices | €35 million or 7% of global turnover (whichever is higher) |
| High-risk AI requirements | €15 million or 3% of global turnover |
| Incorrect information to authorities | €7.5 million or 1.5% of global turnover |
For SMEs and startups, the cap is the lower of the two amounts (fixed sum or percentage of turnover), rather than the higher.
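The cap logic is simple enough to express directly: take the higher of the fixed amount and the turnover percentage, except for SMEs, where the lower of the two applies. The figures below are illustrative:
```python
def max_fine(turnover_eur: float, fixed_eur: float, pct: float, sme: bool = False) -> float:
    """Penalty cap: higher of the two amounts, or lower of the two for SMEs."""
    pct_based = turnover_eur * pct
    return min(fixed_eur, pct_based) if sme else max(fixed_eur, pct_based)

# Prohibited-practice violation, EUR 2bn turnover: 7% (EUR 140m) exceeds EUR 35m.
print(max_fine(2e9, 35e6, 0.07))             # 140000000.0
# Same violation by an SME with EUR 50m turnover: 7% (EUR 3.5m) is the lower cap.
print(max_fine(50e6, 35e6, 0.07, sme=True))  # 3500000.0
```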
How Do You Prepare for EU AI Act Compliance?
Step 1: Inventory Your AI Systems
Identify all AI systems in use or development. Include third-party AI components. Document purpose, users, and data.
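One lightweight way to make the inventory concrete is a structured record per system. The fields below are illustrative assumptions beyond the purpose/users/data minimum:
```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    users: list[str]
    data_sources: list[str]
    third_party_components: list[str] = field(default_factory=list)
    risk_tier: str = "unclassified"  # set in Step 2

inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="Rank job applicants for recruiter review",
        users=["HR recruiters"],
        data_sources=["applicant CVs", "historical hiring outcomes"],
        third_party_components=["hosted LLM API"],
    ),
]
```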
Step 2: Classify by Risk
Map each system to the Act's risk categories. High-risk systems need full compliance programs. Document your classification rationale.
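A keyword screen like the sketch below can triage an inventory, but it is only a first pass; the terms are a rough stand-in drawn from the Annex III table above, and the actual classification call belongs with legal review:
```python
HIGH_RISK_TERMS = ("hiring", "employment", "credit", "insurance", "biometric",
                   "education", "law enforcement", "migration", "justice")
LIMITED_RISK_TERMS = ("chatbot", "deepfake", "emotion")

def triage_risk(purpose: str) -> str:
    """First-pass tier suggestion from a free-text purpose description."""
    text = purpose.lower()
    if any(term in text for term in HIGH_RISK_TERMS):
        return "high"       # Annex III match: full compliance program
    if any(term in text for term in LIMITED_RISK_TERMS):
        return "limited"    # transparency obligations
    return "minimal"

print(triage_risk("Rank job applicants for hiring decisions"))  # high
print(triage_risk("Customer-support chatbot"))                  # limited
```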
Step 3: Gap Analysis
For high-risk systems, assess current state against each Article 9-15 requirement. Identify gaps in documentation, logging, oversight, and testing.
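A gap analysis can start as nothing more elaborate than a status map over the seven Article 9-15 requirement areas. The statuses here are illustrative:
```python
GAP_CHECKLIST = {
    "Art. 9  Risk management system":       "partial",
    "Art. 10 Data governance":              "missing",
    "Art. 11 Technical documentation":      "partial",
    "Art. 12 Record-keeping / logging":     "done",
    "Art. 13 Transparency to deployers":    "missing",
    "Art. 14 Human oversight":              "partial",
    "Art. 15 Accuracy/robustness/security": "partial",
}

open_gaps = [req for req, status in GAP_CHECKLIST.items() if status != "done"]
print(f"{len(open_gaps)} requirements need remediation before August 2, 2026:")
print("\n".join(open_gaps))
```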
Step 4: Remediation Plan
Prioritize gaps by enforcement timeline and risk. Establish accountability (who owns each requirement). Budget for compliance activities.
Step 5: Implement Controls
- Build or enhance risk management processes
- Implement logging and audit trail capabilities
- Create technical documentation
- Establish human oversight mechanisms
- Test for accuracy, robustness, and bias
Step 6: Ongoing Governance
Establish post-market monitoring. Plan for periodic reassessment. Train staff on AI governance requirements.
Does the EU AI Act Apply Outside Europe?
Yes. The Act has extraterritorial scope similar to GDPR:
- Providers placing AI on the EU market — regardless of where they're based
- Deployers using AI systems within the EU
- Providers and deployers outside EU whose AI output is used in the EU
- Importers and distributors placing AI systems on EU market
If you have EU customers or your AI outputs affect EU residents, you're likely in scope.
Frequently Asked Questions
What is the EU AI Act?
The EU AI Act is the world's first comprehensive legal framework regulating artificial intelligence. It classifies AI systems by risk level and imposes requirements ranging from transparency obligations to outright bans, with penalties up to €35 million or 7% of global turnover.
When does the EU AI Act take effect?
The EU AI Act entered into force August 1, 2024. Prohibited AI practices apply from February 2025. GPAI (foundation model) rules apply from August 2025. High-risk AI system requirements apply from August 2026. Full enforcement begins August 2027.
What are high-risk AI systems under the EU AI Act?
High-risk AI systems include those used for: biometric identification, critical infrastructure management, education and vocational training access, employment and worker management, access to essential services (credit, insurance), law enforcement, migration and border control, and administration of justice.
What are the penalties for EU AI Act violations?
Penalties scale by violation severity: up to €35 million or 7% of global turnover for prohibited practices, up to €15 million or 3% for high-risk violations, and up to €7.5 million or 1.5% for providing incorrect information. For SMEs, lower caps apply.
Does the EU AI Act apply to companies outside Europe?
Yes. The EU AI Act applies to any company that places AI systems on the EU market or whose AI outputs are used within the EU, regardless of where the company is headquartered. This extraterritorial scope is similar to GDPR.
Build EU AI Act-Ready AI Systems
Datacendia provides built-in compliance for high-risk AI: immutable audit trails, human oversight workflows, risk documentation, and transparency reporting.
Request a Compliance Briefing