
What Makes an AI System High-Risk Under the EU AI Act?

By Lamar B. Shucrani — February 28, 2026 · 8 min read

High-risk classification triggers the most demanding obligations under the EU AI Act — risk management systems, data governance, technical documentation, human oversight, and more. Here is exactly how classification works.


The Two-Gate Classification System

The EU AI Act classifies AI systems as high-risk through two distinct pathways defined in Article 6.

Gate 1 — Article 6(1): AI systems that are a safety component of a product covered by the Union harmonisation legislation listed in Annex I (machinery, medical devices, vehicles, aviation, etc.), or that are themselves such a product, where that product must undergo third-party conformity assessment before being placed on the market.

Gate 2 — Article 6(2): AI systems listed in Annex III, across 8 specific domains. This is where most enterprise AI systems fall.

The 8 Annex III Domains

If your AI system falls into any of these domains, it is presumed high-risk:

1. Biometrics: remote biometric identification, emotion recognition, biometric categorisation based on sensitive attributes.

2. Critical Infrastructure: AI used in the management of critical infrastructure, such as water, gas, electricity, road traffic, and digital infrastructure.

3. Education: AI that determines access to educational institutions, evaluates learning outcomes, or assesses students.

4. Employment & HR: AI for recruitment, CV screening, promotion decisions, task allocation, performance monitoring, and termination.

5. Essential Services: AI that evaluates creditworthiness, sets insurance premiums, or determines access to public benefits.

6. Law Enforcement: AI used by police for profiling, crime prediction, evidence evaluation, or lie detection.

7. Migration & Asylum: AI that assesses migration risk, verifies travel documents, or determines asylum eligibility.

8. Administration of Justice: AI that assists courts in researching facts and law, or influences legal proceedings.

The Article 6(3) Exception

Even if your system falls under Annex III, it may be exempt from high-risk classification if it does not pose a significant risk of harm to health, safety, or fundamental rights, and meets one of these four conditions:

  • It performs a narrow procedural task
  • It is intended to improve the result of a previously completed human activity
  • It detects decision-making patterns without replacing human assessment
  • It performs a preparatory task to an assessment relevant to the use cases listed in Annex III
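The exception is one overall risk test plus a four-way disjunction: no significant risk of harm, and at least one of the four conditions. A hypothetical sketch, with condition names of our own choosing:

```python
# Hypothetical sketch of the Article 6(3) derogation test.
# Flag names are illustrative, not taken from the regulation.

def annex_iii_exemption_applies(significant_risk_of_harm: bool,
                                narrow_procedural_task: bool,
                                improves_prior_human_activity: bool,
                                detects_patterns_only: bool,
                                preparatory_task_only: bool) -> bool:
    # The system must NOT pose a significant risk of harm to health,
    # safety, or fundamental rights...
    if significant_risk_of_harm:
        return False
    # ...and must satisfy at least one of the four conditions.
    return (narrow_procedural_task
            or improves_prior_human_activity
            or detects_patterns_only
            or preparatory_task_only)
```

Note the asymmetry: the risk test can defeat the exemption on its own, while the four conditions only help in combination with it.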

Warning: Providers who claim this exemption must document their assessment before placing the system on the market and provide that documentation to national competent authorities on request. The exemption cannot simply be self-declared.

Obligations for High-Risk Providers

If classified high-risk, Articles 9 through 15 impose mandatory obligations:

→ Art. 9 — Risk management system (continuous, documented, iterative)
→ Art. 10 — Data governance (training data requirements, bias mitigation)
→ Art. 11 — Technical documentation (before market placement)
→ Art. 12 — Record-keeping and logging (automatic, tamper-proof)
→ Art. 13 — Transparency to deployers (instructions for use)
→ Art. 14 — Human oversight (meaningful control mechanisms)
→ Art. 15 — Accuracy, robustness, and cybersecurity standards

Penalties for Non-Compliance

For high-risk AI systems, penalties can reach 15 million euros or 3% of global annual turnover (Art. 99(4)), whichever is higher. For prohibited practices (Article 5), penalties rise to 35 million euros or 7%.
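The "whichever is higher" mechanic is easy to check with a concrete number. A sketch assuming a hypothetical company with €2 billion in global annual turnover:

```python
# Hypothetical illustration of the Art. 99(4) penalty ceiling:
# the higher of a fixed amount and a share of global annual turnover.

def max_penalty(turnover_eur: float,
                fixed_eur: float = 15_000_000,
                turnover_share: float = 0.03) -> float:
    return max(fixed_eur, turnover_share * turnover_eur)

# At €2 bn turnover, 3% is €60 m, which exceeds the €15 m floor.
print(max_penalty(2_000_000_000))  # 60000000.0
```

Below €500 million in turnover, the €15 million floor dominates; above it, the 3% share takes over.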

Obligations for Annex III high-risk systems become enforceable on August 2, 2026; Annex I product-based systems follow in 2027. The conformity assessment process for high-risk systems can take several months, so classification should not wait.

Not sure if your AI system is high-risk? The free Sprinkling Act diagnostic classifies your system in minutes — article by article.


Sources

1. EUR-Lex (July 12, 2024) — Regulation (EU) 2024/1689 — Artificial Intelligence Act (full text), eur-lex.europa.eu/eli
2. EU AI Act — Article 6 — Classification Rules for High-Risk AI Systems, artificialintelligenceact.eu/article
3. EU AI Act — Annex III — High-Risk AI Systems Referred to in Article 6(2), artificialintelligenceact.eu/annex
4. EU AI Act — Article 9 — Risk Management System, artificialintelligenceact.eu/article
5. EU AI Act — Article 99 — Penalties, artificialintelligenceact.eu/article
6. EU AI Act — Articles 10–15 — Requirements for High-Risk AI Systems, artificialintelligenceact.eu/article
7. EU AI Act — Implementation Timeline, artificialintelligenceact.eu/implementation-timeline