
EU AI Act Enforcement Timeline: Every Key Date from 2024 to 2027

By Lamar B. Shucrani — March 5, 2026 · 12 min read

The EU AI Act does not apply all at once. It rolls out in phases — each triggering a distinct set of obligations for a distinct set of actors. Missing a phase is not a technicality. It is a compliance failure with fines attached. This is the complete, accurate timeline.


The Architecture of the Rollout

The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. From that date, a 24-month transition clock started for most obligations, with carve-outs creating earlier and later deadlines for specific provisions. The result is five distinct compliance phases between 2024 and 2027.

Understanding which phase applies to you requires knowing two things: what role you play (provider, deployer, importer, distributor), and which category your AI system falls into (prohibited, high-risk, GPAI, limited risk, minimal risk).
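The two-variable lookup described above can be sketched as a small table keyed by risk category. The dates come from the phases in this article; the category names and the helper function are illustrative, not terms from the Regulation.

```python
from datetime import date

# Hypothetical sketch: the first application date for each AI Act risk
# category, taken from the phases described in this article.
PHASE_BY_CATEGORY = {
    "prohibited":          date(2025, 2, 2),  # Art. 5 bans
    "gpai":                date(2025, 8, 2),  # Arts. 51-55
    "high_risk_annex_iii": date(2026, 8, 2),  # Annex III regime
    "limited_risk":        date(2026, 8, 2),  # Art. 50 transparency
    "high_risk_embedded":  date(2027, 8, 2),  # Art. 6(1) / Annex I products
    "minimal_risk":        None,              # no binding deadline
}

def first_deadline(category):
    """Return the date on which obligations for this category first apply."""
    return PHASE_BY_CATEGORY[category]

print(first_deadline("high_risk_annex_iii"))  # 2026-08-02
```

Your role (provider, deployer, importer, distributor) then determines which obligations within that phase land on you; the category alone fixes the date.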

Phase 1 — August 1, 2024: Entry into Force

The Regulation entered into force. No operational obligations applied yet — but the clock started. Organizations had 6 months until the first hard deadline.

What this meant in practice

Legal and compliance teams should have begun mapping AI systems against the Act's classification framework at this point: risk inventories, initial gap analyses, and internal governance structures. Organizations that started here had adequate runway. Those that waited did not.

Phase 2 — February 2, 2025: Prohibited Practices and AI Literacy

The first binding obligations came into effect. Two articles became enforceable simultaneously.

Article 5 — Prohibited Practices

Eight categories of AI systems became illegal to place on the market, put into service, or use within the EU. These are not gray areas. They are hard prohibitions with the highest penalty tier in the Act: up to €35 million or 7% of global annual turnover, whichever is higher.

1. Subliminal manipulation: AI systems that deploy subliminal techniques below the threshold of consciousness to distort behavior in ways that cause harm.

2. Exploitation of vulnerabilities: AI that exploits vulnerabilities due to age, disability, or social or economic situation to distort behavior harmfully.

3. Social scoring: AI that evaluates or classifies individuals based on social behavior or personal characteristics over time, leading to detrimental or unfavourable treatment. The final text of the Act covers both public and private actors.

4. Real-time biometric identification (law enforcement): remote biometric ID in publicly accessible spaces for law enforcement, with three narrow exceptions covering missing persons, imminent terrorist threats, and suspects of serious crimes.

5. Biometric categorisation into sensitive groups: systems that categorise individuals based on biometric data to infer sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation.

6. Emotion recognition (workplace and education): AI that infers the emotional states of individuals in workplace or educational settings, except for medical or safety purposes.

7. Crime prediction based on profiling: AI that assesses the risk of a person committing a crime based solely on profiling or personality traits.

8. Facial recognition databases via scraping: building or expanding facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.

Article 4 — AI Literacy

Providers and deployers became obligated to ensure their staff have sufficient AI literacy — appropriate to their role and the AI systems they work with. This is not a one-time training requirement. It is an ongoing organizational obligation to maintain competence. No specific format or certification is prescribed; what matters is demonstrable adequacy.

Phase 3 — August 2, 2025: GPAI Obligations and Governance Infrastructure

The second major phase activated obligations for a specific and consequential category: General-Purpose AI (GPAI) models.

What is a GPAI model?

A model trained on large amounts of data, capable of performing a wide range of tasks, and made available for integration into downstream AI systems. GPT-4, Claude, Gemini, Mistral — these are GPAI models. So is any model a company builds for multi-purpose internal use.

Technical documentation (Art. 53)

Providers must maintain detailed documentation of training methodologies, data sources, capabilities, and limitations.

Copyright compliance policy (Art. 53)

A policy for complying with EU copyright law must be in place, including respect for rights reservations under Article 4(3) of Directive 2019/790.

Training data summary (Art. 53)

A sufficiently detailed summary of the training data used, made publicly available.

Code of Practice adherence (Art. 56)

Providers may demonstrate compliance by adhering to the EU AI Office's Code of Practice. Those who don't must show alternative adequate means of compliance.

Systemic Risk — The Higher Tier

GPAI models exceeding 10²⁵ FLOPs of training compute are classified as models with systemic risk (Article 51). They face additional obligations: adversarial testing (red-teaming), incident reporting to the EU AI Office, cybersecurity protections, and energy efficiency reporting.
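As a rough illustration of where the Article 51 threshold sits, training compute is often estimated with the common 6 × parameters × tokens rule of thumb. This is an approximation used in the ML community, not a method prescribed by the Act; the model sizes below are hypothetical.

```python
# Rough illustration only: the widely used 6 * N * D rule of thumb for
# training compute (N = parameters, D = training tokens), compared
# against the Article 51 systemic-risk threshold of 1e25 FLOPs.
SYSTEMIC_RISK_THRESHOLD = 1e25  # floating-point operations

def training_flops(params, tokens):
    return 6.0 * params * tokens

def is_systemic_risk(params, tokens):
    return training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15T tokens lands at
# about 6.3e24 FLOPs, just below the threshold:
print(training_flops(70e9, 15e12), is_systemic_risk(70e9, 15e12))
```

The Commission can also designate models as systemic-risk on other grounds, so clearing the compute threshold is necessary but not sufficient to rule the tier out.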

Governance Infrastructure

August 2025 also activated the EU AI Act's institutional layer. The EU AI Office became operational as the central supervisory authority for GPAI models. National competent authorities in each Member State became formally designated. Finland was the first to activate fully operational national supervision powers, in January 2026. Other Member States are following throughout 2026.

Phase 4 — August 2, 2026: The Critical Deadline

This is the date that defines the Act for most organizations. The majority of the Regulation becomes fully applicable. Missing this deadline is not a transition issue — it is non-compliance.

High-Risk AI Systems — Annex III (Article 6(2))

AI systems listed in Annex III's 8 domains become subject to the full high-risk compliance regime. The obligations are extensive and non-delegable:

Risk management system (Art. 9)

A continuous, iterative process throughout the system's lifecycle. Not a one-time assessment. Must identify known and foreseeable risks, estimate and evaluate them, and implement mitigation.

Data governance (Art. 10)

Training, validation, and testing datasets must be relevant, sufficiently representative, and free from errors to the extent possible. Bias monitoring is required.

Technical documentation (Art. 11)

Comprehensive documentation demonstrating compliance — covering architecture, development process, performance metrics, risk management measures. Must be kept up to date throughout the lifecycle.

Automatic logging (Art. 12)

High-risk systems must automatically generate logs enabling traceability of their operation. Logs must be kept for a minimum period determined by the use case.
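A minimal sketch of what Art. 12-style automatic logging can look like in practice. The record fields here are illustrative, since the Act requires traceability rather than a specific schema.

```python
import json
import time
import uuid

# Minimal sketch of Art. 12-style automatic logging: each inference
# event becomes an append-only JSON line with a timestamp and a stable
# event ID, so the system's operation can be reconstructed later.
def log_inference(log_file, model_id, inputs, output):
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
    }
    log_file.write(json.dumps(record) + "\n")  # one event per line
    return record["event_id"]
```

In production the sink would be tamper-evident storage with a retention period matched to the use case, since the logs must be kept, not merely emitted.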

Transparency to deployers (Art. 13)

Instructions for use must be provided to deployers — covering capabilities and limitations, performance indicators, and circumstances requiring human oversight.

Human oversight (Art. 14)

Systems must be designed to allow effective oversight by natural persons. This includes the ability to monitor, interpret, override, and stop the system.

Accuracy, robustness, cybersecurity (Art. 15)

Systems must achieve an appropriate level of accuracy for their intended purpose and remain resilient against errors, faults, and attempts by third parties to exploit vulnerabilities.

Conformity assessment (Art. 43)

Before placing a system on the market, the provider must complete a conformity assessment. For most Annex III systems, this is a self-assessment. For biometric and law-enforcement systems, third-party assessment is required.

EU database registration (Art. 49)

High-risk AI systems must be registered in the EU database before deployment. The database is publicly accessible. Registration is the provider's responsibility.

Quality management system (Art. 17)

Providers must have a documented quality management system covering the entire lifecycle — design, development, testing, deployment, monitoring, and disposal.

Transparency Obligations — Article 50

Also effective August 2, 2026: transparency obligations for AI systems that interact with humans, perform emotion recognition or biometric categorisation, or generate synthetic content.

Chatbots and conversational AI

Must inform users they are interacting with an AI system — clearly, at the beginning of the interaction. Exception: where it is obvious from context.

Emotion recognition and biometric categorisation

Must inform individuals when they are subject to such systems.

Deep fakes and AI-generated content

Synthetic audio, video, and images must be labelled as artificially generated or manipulated — in a machine-readable format, detectable and identifiable.

AI-generated text on public interest matters

Text generated by AI and published to inform the public on matters of public interest must be disclosed as artificially generated, unless the content has undergone human review and a natural or legal person holds editorial responsibility for its publication.
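To illustrate the machine-readable labelling idea, here is a deliberately simplified, hypothetical sidecar record marking an asset as AI-generated. Real deployments would use an established provenance standard such as C2PA manifests or IPTC digital-source metadata, not this ad-hoc schema.

```python
import json

# Hypothetical sidecar label marking an asset as AI-generated. The
# schema name and all field names are invented for illustration.
def synthetic_content_label(asset_path, generator):
    label = {
        "asset": asset_path,
        "ai_generated": True,           # the machine-readable disclosure
        "generator": generator,
        "schema": "example-label/1.0",  # hypothetical schema identifier
    }
    return json.dumps(label, sort_keys=True)

print(synthetic_content_label("clip.mp4", "example-video-model"))
```

The point of Article 50's "machine-readable" requirement is that downstream platforms and tools can detect the label automatically; a structured, parseable record is the minimum viable shape of that.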

Phase 5 — August 2, 2027: Embedded Safety Components

One category of high-risk AI systems receives an extended transition: systems that are safety components embedded in products already regulated by EU harmonisation legislation listed in Annex I.

Examples: AI embedded in medical devices, machinery, aviation equipment, motor vehicles, toys. These systems fall under Article 6(1) rather than Annex III. Their extended deadline reflects the complexity of conformity assessment under pre-existing product safety regimes.

Important: This extension applies only to AI systems placed on the market before August 2, 2026 under existing product safety legislation. New products launched after that date must comply immediately.

The Digital Omnibus — What the Proposed Delay Actually Means

In November 2025, the European Commission introduced the Digital Omnibus package — a proposal to simplify EU digital regulation across the AI Act, GDPR, NIS2, DORA, and the Data Act.

For the AI Act, the Commission proposed extending the application of high-risk rules by up to 16 additional months. If adopted as proposed, the August 2026 deadline for Annex III systems could shift to as late as December 2027.

As of March 5, 2026, this proposal is under the ordinary legislative procedure — still being examined by the European Parliament and the Council. It is not law. It is not confirmed. It may be amended, delayed, or rejected.

The risk of waiting

Organizations that pause compliance work based on the Digital Omnibus proposal are making a structural bet on an outcome that has not materialized. The prudent position — consistent with advice from every major legal and compliance firm active in this space — is to treat August 2, 2026 as the binding deadline and treat any extension as a bonus runway if it arrives.

What the Digital Omnibus does change — now

The package introduces a single incident reporting point across EU digital regulations and aligns breach notification thresholds. It also clarifies the use of personal data in AI, including for creditworthiness assessments. These clarifications are relevant regardless of the timeline extension outcome.

What You Should Be Doing Right Now — By Role

CEO / Founder

  • Confirm your AI inventory is complete — every system your company develops, deploys, or procures.
  • Know the classification of each system. One high-risk system in your stack changes your compliance posture entirely.
  • Assign a named owner for AI Act compliance. This is not a task that can sit in legal without executive visibility.
  • Understand the penalty structure: up to 3% of global turnover for most high-risk violations. This is a board-level issue.

CTO / VP Engineering

  • Audit your AI systems against the Annex III domains. Document each system's classification and the reasoning behind it.
  • Implement automatic logging for any system you suspect may be high-risk. Logging is a technical requirement, not an afterthought.
  • Review your GPAI dependencies. If you build a product on top of a foundation model, you may qualify as a downstream provider; if you merely use AI systems, you are a deployer with Article 26 obligations.
  • Technical documentation must be up to date before August 2. Start writing it now — retroactive documentation is harder and less credible.

DPO / Compliance Lead

  • Map your AI systems against both the AI Act classification framework and your existing GDPR obligations. Overlaps are significant — particularly around automated decision-making (Art. 22 GDPR vs. high-risk oversight requirements).
  • Prepare for the Fundamental Rights Impact Assessment (FRIA) requirement for certain high-risk deployers (Art. 27).
  • Check your chatbots and conversational AI tools against Article 50. Transparency obligations take effect August 2, 2026 — this is often overlooked because the focus is on high-risk systems.
  • Register applicable high-risk systems in the EU database before deployment.

Investor / VC

  • Add EU AI Act classification to your standard due diligence framework. Ask portfolio companies: what is their AI Act position score, which systems are high-risk, and what is the status of conformity assessment?
  • Understand that high-risk non-compliance is a financial liability — fines plus mandatory market withdrawal plus reputational damage.
  • The August 2026 deadline creates a near-term catalyst: companies that are compliant become demonstrably more investable in regulated verticals.

Penalty Structure — At a Glance

Non-compliance penalties apply to both EU-based and non-EU companies offering AI systems to users in the EU.

  • Prohibited practices (Art. 5): up to €35M or 7% of global annual turnover
  • High-risk violations (Arts. 9–15, 17, 43, 49): up to €15M or 3% of global annual turnover
  • GPAI model violations (Art. 101): up to €15M or 3% of global annual turnover
  • Supplying incorrect, incomplete, or misleading information to authorities (Art. 99(5)): up to €7.5M or 1% of global annual turnover

The higher of the two figures applies in each case. For multinational organizations, turnover-based fines will almost always exceed the fixed ceiling.
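The "whichever is higher" rule is simple arithmetic: the applicable maximum is the larger of the fixed ceiling and the turnover percentage. A quick sketch, using hypothetical turnover figures:

```python
# The "whichever is higher" rule from the penalty tiers: the applicable
# maximum is the larger of the fixed ceiling and the turnover share.
def max_fine(fixed_ceiling_eur, pct, global_turnover_eur):
    return max(fixed_ceiling_eur, pct * global_turnover_eur)

# Prohibited-practice tier (EUR 35M or 7%) for a company with EUR 2B
# global annual turnover: the turnover figure (~EUR 140M) dominates.
print(max_fine(35e6, 0.07, 2e9))

# For a EUR 100M-turnover company, 7% is only EUR 7M, so the EUR 35M
# fixed ceiling applies instead.
print(max_fine(35e6, 0.07, 1e8))
```

The crossover sits at €500M turnover for the top tier; above that, the percentage always governs.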

Summary Timeline

  • Aug 1, 2024: Entry into force. The Regulation takes effect and the transition clock starts.
  • Feb 2, 2025: Prohibited practices + AI literacy. Art. 5 prohibitions and Art. 4 AI literacy obligations become enforceable.
  • Aug 2, 2025: GPAI obligations + governance. Arts. 51–55 apply to GPAI providers; the EU AI Office and national authorities become operational.
  • Aug 2, 2026: Most high-risk + transparency. Annex III systems, Art. 50 transparency, quality management, conformity assessments, EU database registration.
  • Aug 2, 2027: Embedded safety components. Article 6(1) systems embedded in Annex I products; extended transition.
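The phase dates above can be turned into a simple countdown, here pinned to this article's publication date so the numbers are reproducible:

```python
from datetime import date

# Days remaining until each phase, relative to a fixed reference date
# (this article's publication date). Negative means already in force.
DEADLINES = {
    "prohibited practices + AI literacy": date(2025, 2, 2),
    "GPAI obligations + governance": date(2025, 8, 2),
    "most high-risk + transparency": date(2026, 8, 2),
    "embedded safety components": date(2027, 8, 2),
}

def days_until(deadline, today):
    return (deadline - today).days

today = date(2026, 3, 5)
for name, deadline in DEADLINES.items():
    print(f"{name}: {days_until(deadline, today):+d} days")
```

Swapping in `date.today()` gives a live countdown; the fixed date above shows that on publication day the August 2026 deadline was 150 days out.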

Establish Your Position Now — Obligations Are Already Active

The Sprinkling Act free assessment runs your AI system through the same 6-gate framework that maps directly to these enforcement phases — Article 5, Article 6, Annex III, Articles 50–53. You get a star rating, a score, and a traceable audit trail. 9 questions, instant result.


