SprinklingAct+

Opinion

“EU AI Act-Ready” — Claim or Proof?

By Lamar B. Shucrani — March 20, 2026 · 7 min read

A growing number of organizations claim to be "AI Act-ready" without documented proof. Here's why the difference between a claim and a verifiable artifact matters — and what regulators will actually look for.


The pattern is already visible

Product comparisons now include “EU AI Act-ready” as a feature line. AI vendors and model providers use it as a quality signal. But there is no public standard that defines what “AI Act-ready” means. No certification. No registry. No verifiable artifact behind the claim.

The phrase appears in pitch decks, landing pages, and procurement checklists — always as a statement, never with a reference to a specific article, annex, or obligation. It functions as marketing language, not as a compliance position.

The GDPR precedent

After May 2018, every company claimed “GDPR-compliant.” Most were not. Regulators started asking for proof — processing records, DPIAs, DPAs with subprocessors. The claims without documentation behind them became liabilities, not assets.

The AI Act is following the exact same cycle, with an 8-year delay. Organizations that invested in documented compliance positions after GDPR were protected. Those that relied on self-declared claims faced enforcement actions, fines, and reputational damage.

The lesson is simple: a claim without a dated, verifiable artifact behind it is worse than no claim at all — because it creates the appearance of awareness without the substance of compliance.

What a claim looks like vs what proof looks like

A claim:

  • “We are AI Act-ready.”
  • Undated. Unverifiable.
  • No article mapping. No audit trail.
  • Self-assessed.

A proof:

  • Gate-by-gate classification.
  • Dated and versioned.
  • Article-mapped to EU AI Act 2024/1689.
  • Independently verifiable.
  • Audit trail included.

The difference matters because regulators do not assess intentions. They assess documentation. A claim without an artifact behind it is indistinguishable from no compliance effort at all.
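The properties listed above can be made concrete. The sketch below is a hypothetical illustration, not Sprinkling Act's actual schema: a minimal artifact record that is dated, versioned, article-mapped, and carries a content hash any third party can recompute to check that the document they were shown has not been altered. All field names are assumptions for illustration.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass(frozen=True)
class ComplianceArtifact:
    """Hypothetical sketch of a dated, versioned, article-mapped artifact."""
    system_name: str
    assessment_date: str     # ISO 8601 date of the assessment
    version: str             # bumped on every reassessment
    risk_tier: str           # e.g. "minimal", "limited", "high"
    article_mapping: tuple   # AI Act articles the classification rests on
    rationale: str           # classification rationale, in plain language

    def fingerprint(self) -> str:
        """Content hash a third party can recompute to verify integrity."""
        payload = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()

artifact = ComplianceArtifact(
    system_name="support-chatbot",
    assessment_date="2026-03-20",
    version="1.2",
    risk_tier="limited",
    article_mapping=("Art. 50",),
    rationale="Conversational assistant; users must be informed that "
              "they are interacting with an AI system.",
)
# A short checksum like this is what makes the record independently
# verifiable rather than self-declared.
print(artifact.fingerprint()[:12])
```

The point of the hash is the contrast with a marketing claim: change one word of the rationale or backdate the assessment, and the fingerprint no longer matches the one on record.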

What regulators will look for

Under the AI Act, national market surveillance authorities will assess whether organizations have fulfilled their obligations — not whether they claimed to.

  • Art. 9 — Risk management system (continuous, documented, iterative)
  • Art. 11 — Technical documentation (before market placement)
  • Art. 13 — Transparency measures (clear information to deployers and users)

None of these are satisfied by a marketing claim. Each requires a documented, traceable process with outputs that can be reviewed by a third party.

Sources

  1. EUR-Lex (July 12, 2024) — Regulation (EU) 2024/1689 — Artificial Intelligence Act (full text). eur-lex.europa.eu/eli
  2. EU AI Act — Article 5 — Prohibited AI Practices. artificialintelligenceact.eu/article
  3. EU AI Act — Article 6 — Classification Rules for High-Risk AI Systems. artificialintelligenceact.eu/article
  4. EU AI Act — Article 50 — Transparency Obligations. artificialintelligenceact.eu/article
  5. EU AI Act — Article 53 — Obligations for Providers of General-Purpose AI Models. artificialintelligenceact.eu/article
  6. EU AI Act — Article 9 — Risk Management System. artificialintelligenceact.eu/article
  7. EU AI Act — Article 11 — Technical Documentation. artificialintelligenceact.eu/article

Art. 5 prohibitions and GPAI rules apply today. Transparency follows in 105 days. The question is not when — it’s whether you’ve documented your position.


The classification question nobody asks

“AI Act-ready” without specifying the use case is meaningless. The same model can be minimal risk for creative writing, limited risk as a chatbot (Art. 50), or high-risk under Annex III if deployed in HR or credit scoring.

Classification depends on intended purpose, not technical capability. A large language model is not inherently high-risk or low-risk. Its classification is determined by how it is deployed, in which domain, and for what decision.

Any organization claiming “AI Act-ready” without specifying which use case, which risk tier, and which articles apply has not completed a classification — they have made a marketing statement.
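The purpose-dependence argument can be sketched as a toy function. This is an illustration of the reasoning only, under loudly stated assumptions: the domain list and tier strings below are invented for the example, and a real classification must follow Art. 6 and Annex III, not a lookup table.

```python
# Illustrative only: a real Art. 6 / Annex III assessment is a documented
# legal analysis, not a dictionary lookup. Domain names are hypothetical.
HIGH_RISK_DOMAINS = {"hr_screening", "credit_scoring"}

def provisional_tier(intended_purpose: str, user_facing_chatbot: bool) -> str:
    """Return a provisional risk tier for one deployment of a model."""
    if intended_purpose in HIGH_RISK_DOMAINS:
        return "high (Annex III; full Art. 6 assessment required)"
    if user_facing_chatbot:
        return "limited (Art. 50 transparency obligations)"
    return "minimal (no specific obligations triggered)"

# The same underlying model, three different classifications:
print(provisional_tier("creative_writing", False))  # minimal tier
print(provisional_tier("customer_support", True))   # limited tier
print(provisional_tier("hr_screening", True))       # high-risk tier
```

Note what varies between the three calls: only the intended purpose and deployment context. The model itself never appears as an input, which is exactly why "AI Act-ready" without a use case is an empty statement.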

From claim to proof: what a documented position looks like

A Sprinkling Act assessment evaluates your system through 6 regulatory gates mapped to Art. 5, 6, 50, 51, 53. The output is a dated, versioned artifact — not a checkbox.

It can be verified by any third party, attached to a data room, or presented to a regulator. It includes the classification rationale, the article mapping, and the obligations triggered by your specific use case.

The goal is not to declare readiness. It is to document a defensible position — one that holds up when a market surveillance authority asks the only question that matters: “Show me your documentation.”

Turn your claim into a documented compliance position. The free Sprinkling Act diagnostic classifies your system in minutes — article by article.



Copyright © 2026 Sprinkling Act. All rights reserved.