EU AI Act Risk Classification
The EU AI Act uses a risk-based approach to regulate AI systems. Every organisation deploying or providing AI in the EU must classify each system into one of four risk tiers — and the classification determines your compliance obligations.
Why classification matters
Misclassifying an AI system is one of the costliest mistakes under the AI Act. Classify too low and you risk fines of up to €15 million or 3% of global annual turnover (rising to €35 million or 7% if the system turns out to involve a prohibited practice under Article 5). Classify too high and you waste resources on unnecessary conformity assessment procedures.
The classification also determines whether you need Annex IV technical documentation, a Fundamental Rights Impact Assessment, and incident reporting procedures.
The four risk tiers
Unacceptable Risk
AI practices banned outright under Article 5: subliminal manipulation, social scoring, untargeted facial scraping, emotion recognition in workplaces/education, and more.
Prohibited — must not be deployed.
High Risk
AI systems in Annex III areas (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) or safety components of products requiring third-party conformity assessment (Annex I).
Full compliance required: conformity assessment, CE marking, technical documentation, post-market monitoring.
Limited Risk
AI systems with transparency obligations: chatbots, deepfake generators, emotion recognition systems, and AI-generated content. Users must be informed they are interacting with AI.
Transparency obligations under Article 50.
Minimal Risk
All other AI systems — spam filters, AI-powered video games, inventory management, etc. No specific obligations, but voluntary codes of conduct are encouraged.
No mandatory requirements. Voluntary best practices recommended.
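The four tiers and their headline obligations can be captured in a small lookup, which is a useful starting point for an internal compliance tracker. This is a hedged sketch: the tier names and obligation strings below are illustrative summaries, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Headline obligations per tier (summary only; not a substitute for the Act's text)
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: must not be deployed (Article 5)"],
    RiskTier.HIGH: [
        "conformity assessment",
        "CE marking",
        "technical documentation (Annex IV)",
        "post-market monitoring",
    ],
    RiskTier.LIMITED: ["transparency obligations (Article 50)"],
    RiskTier.MINIMAL: ["no mandatory requirements; voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]
```

For example, `obligations_for(RiskTier.HIGH)` returns the four headline high-risk obligations listed above.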
Annex III: high-risk use cases
If your AI system's intended purpose falls within any of these 8 areas, it is classified as high-risk under Article 6(2):
1. Biometric identification and categorisation
2. Management and operation of critical infrastructure
3. Education and vocational training
4. Employment, workers' management, and access to self-employment
5. Access to essential private and public services
6. Law enforcement
7. Migration, asylum, and border control management
8. Administration of justice and democratic processes
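A first-pass Annex III screen can be as simple as matching a system's declared intended-purpose area against the eight categories. The sketch below paraphrases the Annex III headings as plain strings; any real screening decision still needs legal review.

```python
# Paraphrased Annex III area labels (illustrative, not the Act's exact wording)
ANNEX_III_AREAS = {
    "biometric identification and categorisation",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers' management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def in_annex_iii(intended_purpose_area: str) -> bool:
    """True if the declared area matches one of the eight Annex III categories."""
    return intended_purpose_area.strip().lower() in ANNEX_III_AREAS
```

So `in_annex_iii("Law enforcement")` is `True`, while a purpose like spam filtering falls outside the list.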
Who must classify?
Providers (who develop or place AI on the market) bear primary responsibility for classification. But deployers (who use AI under their authority) must also verify the classification, especially if they change the intended purpose of a system.
Importers and distributors must verify that the provider has completed classification and compliance procedures before making the system available in the EU. All personnel involved in classification should have completed AI literacy training under Article 4.
How to classify: step by step
1. Inventory your AI systems
List every system that uses machine learning, deep learning, or inference-based logic. Include third-party AI components integrated into your products.
2. Screen for prohibited practices
Check each system against Article 5 prohibitions. If any system falls in this category, it must be decommissioned immediately.
3. Check Annex I (product safety)
Determine if your AI system is a safety component of a product covered by EU harmonisation legislation that requires third-party conformity assessment.
4. Check Annex III (use-case based)
Map your system's intended purpose against the 8 Annex III categories. A system is high-risk if its intended use falls within any of these areas.
5. Apply the Article 6(3) exception
Even if listed in Annex III, a system is NOT high-risk if it does not pose a significant risk of harm, for example because it performs only a narrow procedural task, improves the result of a previously completed human activity, or performs a purely preparatory task. Note that the exception never applies to systems that perform profiling of natural persons; those remain high-risk.
6. Document your reasoning
Record the classification decision, the criteria applied, and the evidence supporting it. This audit trail protects you regardless of the outcome.
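Steps 2 through 5 form a decision tree. A minimal sketch of that logic is below; the boolean inputs are assumptions you would establish through the legal analysis in each step, and the function is illustrative, not legal advice.

```python
def classify(
    prohibited_practice: bool,        # step 2: Article 5 screen
    annex_i_safety_component: bool,   # step 3: product-safety route
    annex_iii_use_case: bool,         # step 4: use-case route
    art_6_3_exception: bool,          # step 5: narrow procedural/preparatory task, no profiling
    transparency_duty: bool = False,  # chatbot, deepfake, emotion recognition, AI content
) -> str:
    """Map the screening answers to one of the four risk tiers."""
    if prohibited_practice:
        return "unacceptable"
    if annex_i_safety_component:
        return "high"
    if annex_iii_use_case and not art_6_3_exception:
        return "high"
    if transparency_duty:
        return "limited"
    return "minimal"
```

For instance, an Annex III system that qualifies for the Article 6(3) exception and has no transparency duty comes out as `"minimal"`: `classify(False, False, True, True)` returns `"minimal"`.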
Common classification mistakes
- Assuming rule-based systems are automatically out of scope — hybrid systems with ML components are in scope.
- Classifying based on the technology instead of the intended purpose and deployment context.
- Overlooking AI components embedded in third-party SaaS tools your organisation uses.
- Ignoring the Article 6(3) exception and over-classifying systems as high-risk unnecessarily.
- Treating classification as a one-time exercise — reclassification is required after significant modifications.
How ActLoom automates classification
- AI-powered classifier — register your system and get an instant risk classification with documented reasoning chain.
- Article 6(3) exception check — automatically evaluates whether your system qualifies for the exception, avoiding unnecessary compliance burden.
- Audit-ready documentation — classification decisions are logged with timestamps, criteria, and evidence for regulator inspection.
- Change monitoring — get alerts when system modifications may require reclassification.
Frequently asked questions
- What are the 4 risk levels in the EU AI Act?
- Unacceptable (banned under Article 5), High (full compliance + CE marking), Limited (transparency obligations), and Minimal (no mandatory requirements).
- How do I know if my AI system is high-risk?
- If it is a safety component of a product requiring third-party conformity assessment (Annex I), or its intended purpose falls in one of the 8 Annex III categories. The Article 6(3) exception may apply for narrow procedural tasks.
- What is the Article 6(3) exception?
- Even if listed in Annex III, a system is NOT high-risk if it poses no significant risk of harm and is used for narrow procedural tasks, to improve a human decision, or for preparatory tasks only. The exception never applies to systems that perform profiling of natural persons, and its use must be documented.
- What are the consequences of misclassification?
- Classifying too low risks fines of up to €15 million or 3% of global turnover (up to €35 million or 7% if the system involves an Article 5 prohibited practice). Classifying too high wastes resources. Classification must be reviewed after significant system modifications.
- Who is responsible for classification?
- Providers bear primary responsibility. Deployers must verify, especially when changing intended purpose. Importers and distributors must verify before distributing in the EU.
Classify your AI systems in minutes
ActLoom's AI-powered classifier determines your risk tier, documents the reasoning, and generates your compliance roadmap automatically.
Start free assessment