The Two-Gate Classification System
The EU AI Act classifies AI systems as high-risk through two distinct pathways defined in Article 6.
Gate 1 — Article 6(1): AI systems that serve as a safety component of a product covered by the Union harmonisation legislation listed in Annex I (machinery, medical devices, vehicles, aviation, etc.), or that are themselves such a product, where the product must undergo third-party conformity assessment.
Gate 2 — Article 6(2): AI systems listed in Annex III, across 8 specific domains. This is where most enterprise AI systems fall.
The 8 Annex III Domains
If your AI system falls into any of these domains, it is presumed high-risk:
Biometrics
Remote biometric identification, emotion recognition, biometric categorisation based on sensitive attributes.
Critical Infrastructure
AI used as a safety component in the management and operation of critical infrastructure: the supply of water, gas, heating, and electricity; road traffic; and critical digital infrastructure.
Education
AI that determines access to educational institutions, evaluates learning outcomes, assesses students.
Employment & HR
AI for recruitment, CV screening, promotion decisions, task allocation, performance monitoring, termination.
Essential Services
AI that evaluates creditworthiness, performs risk assessment and pricing for life and health insurance, or determines access to essential public benefits and services.
Law Enforcement
AI used by police for profiling, crime prediction, evidence evaluation, or lie detection.
Migration & Asylum
AI that assesses migration risk, verifies travel documents, or determines asylum eligibility.
Administration of Justice
AI that assists judicial authorities in researching and interpreting facts and the law, or in applying the law to a concrete set of facts.
The Article 6(3) Exception
Even if your system falls under Annex III, it may be exempt from high-risk classification if it does not pose a significant risk of harm to health, safety, or fundamental rights, and meets one of these four conditions:
- It performs a narrow procedural task
- It is intended to improve the result of a previously completed human activity
- It detects decision-making patterns without replacing human assessment
- It performs a preparatory task to an assessment relevant to the use cases listed in Annex III
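The structure of the exception, as described above, can be sketched as a simple check: the system must pose no significant risk and must satisfy at least one of the four conditions. Again, this is an illustrative sketch, not legal advice, and every parameter name is an assumption.

```python
def article_6_3_exempt(poses_significant_risk: bool,
                       narrow_procedural_task: bool,
                       improves_prior_human_activity: bool,
                       detects_patterns_without_replacing_human: bool,
                       preparatory_task_only: bool) -> bool:
    """Rough sketch of the Article 6(3) carve-out for Annex III systems."""
    if poses_significant_risk:
        # Significant risk to health, safety, or fundamental
        # rights blocks the exemption outright.
        return False
    # Otherwise, any one of the four conditions suffices.
    return any([narrow_procedural_task,
                improves_prior_human_activity,
                detects_patterns_without_replacing_human,
                preparatory_task_only])
```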
Warning: Providers who claim this exemption must document their assessment before placing the system on the market and must register the system in the EU database. The exemption cannot simply be asserted; the documentation must be available to national authorities on request.
Obligations for High-Risk Providers
If classified high-risk, Articles 9 through 15 impose mandatory obligations:
- Risk management system (Art. 9)
- Data and data governance (Art. 10)
- Technical documentation (Art. 11)
- Record-keeping and automatic logging (Art. 12)
- Transparency and provision of information to deployers (Art. 13)
- Human oversight (Art. 14)
- Accuracy, robustness, and cybersecurity (Art. 15)
Penalties for Non-Compliance
For high-risk AI systems, penalties can reach 15 million euros or 3% of global annual turnover (Art. 99(4)), whichever is higher. For prohibited practices (Article 5), penalties rise to 35 million euros or 7%.
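The "whichever is higher" rule means the percentage cap dominates for large companies. A minimal arithmetic sketch, using the figures cited above (the Act sets these as maximums, so actual fines may be lower):

```python
def max_fine_eur(turnover_eur: float, prohibited_practice: bool = False) -> float:
    """Upper bound of the fine: the higher of a fixed cap and a
    percentage of worldwide annual turnover (Art. 99)."""
    if prohibited_practice:
        fixed_cap, pct = 35_000_000, 7   # Article 5 violations
    else:
        fixed_cap, pct = 15_000_000, 3   # high-risk obligations
    return max(fixed_cap, turnover_eur * pct / 100)

# A provider with EUR 2 billion turnover: 3% is EUR 60 million,
# which exceeds the EUR 15 million fixed cap.
```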
Full enforcement for Annex III high-risk systems applies from August 2, 2026 (systems under Annex I product legislation have until August 2, 2027). The conformity assessment process for high-risk systems can take several months.
Not sure if your AI system is high-risk? The free Sprinkling Act diagnostic classifies your system in minutes — article by article.
Sources
- [1] EUR-Lex (July 12, 2024) — Regulation (EU) 2024/1689 — Artificial Intelligence Act (full text) eur-lex.europa.eu/eli
- [2] EU AI Act — Article 6 — Classification Rules for High-Risk AI Systems artificialintelligenceact.eu/article
- [3] EU AI Act — Annex III — High-Risk AI Systems Referred to in Article 6(2) artificialintelligenceact.eu/annex
- [4] EU AI Act — Articles 10–15 — Requirements for High-Risk AI Systems artificialintelligenceact.eu/article
Art. 5 prohibitions and GPAI rules already apply. Transparency obligations follow in 105 days. The question is not when, it's whether you've documented your position.