
SPRINKLING ACT — SCORING METHODOLOGY

How We Score AI Act Position

Sprinkling Act operates two distinct assessment levels. The free diagnostic is a self-assessment: you answer 9 questions based on your own perception of your AI system, and the 6 regulatory gates produce an indicative score. The Full Report is a selective, human-reviewed analysis with an extended intake questionnaire — access requires qualification, and applications may be declined.

Self-assess your system — Free · See full report

Two Levels, Two Questions

Free Diagnostic

“Based on what you know about your AI system, what is its most likely regulatory position?”

Self-declared answers → 6-gate scoring → indicative signal. Instant. No account required. This is your perception, not our verdict. The free score is not designed to simplify a 144-page regulation — it is designed to guide your self-assessment.

Full Report (€690)

“Given verified information about your system, what is its defensible regulatory classification?”

Qualification required → you complete a structured intake questionnaire covering 14 sections (20–30 minutes of your time) → we confirm within 1–3 business days → human review + article-by-article analysis → report delivered within 5–7 business days (~1–2 weeks total).

The report runs 15–22 pages, adapted to your specific classification — a HIGH-risk system gets more pages of obligation analysis than a LIMITED-risk system. Every report includes a detachable one-pager executive summary, an SVG risk gauge, a compliance timeline, an AI Positive governance radar, a GDPR Art. 22 cross-analysis, an algorithmic bias assessment, a residual-risks analysis, and an integrated FAQ.

The intake questionnaire asks precise questions designed to surface the information that matters — even when you didn't know it was relevant. Your Full Report score may differ from your free diagnostic score.

Core Principles

01

Article Mapping

Every question maps to a specific article, paragraph, or annex. No interpretation beyond the text of the regulation.

02

Gate Logic

Gates are irreversible. A Prohibited Practice at Gate 01 ends the assessment immediately — no score can override a legal violation.

03

Audit Trail

Every classification comes with a full traceable path — gate by gate, article by article. Exportable and independently verifiable.

04

Temporal Stability

Each report includes a stability indicator: STABLE, MODERATE, or UNSTABLE — reflecting how likely the classification is to change as guidelines evolve.

05

No Interpretation

Sprinkling Act does not interpret ambiguous cases in favor of the client. When classification is uncertain, this is flagged explicitly with a recommendation to seek legal counsel.

06

Dual Versioning

Every report carries two version markers: the Sprinkling Act methodology version and the regulatory freeze date (March 2026). This means a report produced in March 2026 reflects the AI Act as understood at that date. Future delegated acts, AI Board guidelines, or jurisprudence do not invalidate existing reports — they may trigger a re-assessment recommendation.
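The dual version markers described in principle 06 can be sketched as a small record — a minimal sketch, assuming illustrative field names (`methodology_version`, `regulatory_freeze`) that are not the actual report schema:

```python
# Minimal sketch of the two version markers a report carries.
# Field names are illustrative assumptions, not the real schema.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: version markers never mutate after issue
class ReportVersion:
    methodology_version: str  # e.g. "v1.1" — the Sprinkling Act methodology
    regulatory_freeze: str    # e.g. "2026-03" — AI Act as understood at that date

v = ReportVersion(methodology_version="v1.1", regulatory_freeze="2026-03")
```

Freezing the record mirrors the stated rule: later guidance does not invalidate an issued report, it can only trigger a re-assessment recommendation.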

The 6 Regulatory Gates

Gates are evaluated in sequence. The first gate triggered determines the final classification; once a gate is triggered, the remaining gates are not evaluated.

01
Art. 5PROHIBITED

Prohibited Practices

If any practice falls under Article 5 — subliminal manipulation, social scoring, real-time remote biometric identification in public spaces, exploitation of vulnerabilities, untargeted scraping of facial images, emotion recognition in workplace/education, biometric categorisation of sensitive attributes — the assessment stops immediately. No score is assigned. The system cannot be deployed legally.

Emotion recognition in workplaces without explicit consent
Real-time facial recognition in public spaces outside narrow exceptions
Untargeted scraping of facial images from the internet or CCTV (Art. 5§1(e))
AI that manipulates users through subliminal techniques
02
Art. 6(1)HIGH RISK

Safety Component of Regulated Product

AI systems that are safety components of products covered by Union harmonisation legislation (Annex I) — machinery, medical devices, civil aviation, motor vehicles, marine equipment — and which must undergo third-party conformity assessment. Medical AI fast-track: if your AI system is a medical device under MDR Class IIa or above, it is automatically classified as high-risk under AI Act Art. 6(1). No further classification analysis required — your MDR class determines your AI Act exposure.

AI in medical diagnostic devices (MDR/IVDR)
AI in automotive safety systems
AI in civil aviation control systems
03
Art. 6(2) + Annex IIIHIGH RISK

High-Risk Sector

AI systems listed in Annex III across 8 domains. This is where most enterprise AI systems are classified. The Article 6(3) exception (narrow procedural task, no significant harm) may apply but requires documented justification.

CV screening and recruitment tools
Credit scoring and insurance pricing
Student assessment and academic evaluation
04
Art. 51 + 55HIGH RISK

GPAI with Systemic Risk

General-purpose AI models trained with more than 10²⁵ FLOPs of cumulative compute are presumed to present systemic risk. Additional obligations apply: adversarial testing, incident reporting without undue delay (Art. 55(1)(c)), cybersecurity measures, energy consumption reporting.

GPT-4, Claude 3 Opus, Gemini Ultra
Llama 3 405B and equivalent-scale models
Custom-trained foundation models exceeding the compute threshold
05
Art. 50LIMITED RISK

Transparency Obligations

AI systems that interact with humans or generate content must disclose their AI nature. Chatbots must identify themselves at the start of each interaction. AI-generated synthetic media must be labelled.

Customer service chatbots
AI writing assistants
Synthetic media and deepfake generation
06
Art. 53LIMITED RISK

GPAI Standard Obligations

GPAI model providers (not systemic risk) must maintain technical documentation, publish training data summaries, comply with EU copyright law, and provide downstream providers with necessary compliance information.

Open-source foundation models below 10²⁵ FLOPs
Specialized language models for specific domains
Multimodal models used in downstream products
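The sequential gate logic above — first gate triggered wins, later gates never evaluated — can be sketched in a few lines. This is a minimal sketch: the boolean intake flags (`art5_practice`, `annex_iii_use_case`, etc.) are hypothetical stand-ins for the real questionnaire, not the actual Sprinkling Act implementation.

```python
# Hedged sketch of the 6-gate sequence. Gate order and labels follow
# this page; the input flags are illustrative assumptions.

def classify(system: dict) -> str:
    """Evaluate the 6 gates in order; the first gate triggered wins."""
    gates = [
        # Gate 01 — Art. 5 prohibited practices
        ("PROHIBITED", lambda s: s.get("art5_practice", False)),
        # Gate 02 — Art. 6(1) safety component of an Annex I product
        ("HIGH", lambda s: s.get("annex_i_safety_component", False)),
        # Gate 03 — Art. 6(2) + Annex III high-risk sector
        ("HIGH", lambda s: s.get("annex_iii_use_case", False)),
        # Gate 04 — Art. 51 + 55 GPAI above the 10^25 FLOPs threshold
        ("HIGH", lambda s: s.get("gpai", False) and s.get("training_flops", 0) > 1e25),
        # Gate 05 — Art. 50 transparency obligations
        ("LIMITED", lambda s: s.get("interacts_with_humans", False)
                              or s.get("generates_content", False)),
        # Gate 06 — Art. 53 GPAI standard obligations
        ("LIMITED", lambda s: s.get("gpai", False)),
    ]
    for label, triggered in gates:
        if triggered(system):
            return label  # remaining gates are not evaluated
    return "MINIMAL"
```

Note how a GPAI model below the compute threshold falls through Gate 04 and is caught by Gate 06 instead — the same ordering the page describes.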

The Scoring Scale

100/100

Prohibited / Unacceptable Risk

System cannot be legally deployed. Immediate remediation required.

85/100

High Risk (Safety Component)

Full Art. 9-15 obligations. Third-party conformity assessment required.

80/100

High Risk (Annex III)

Full Art. 9-15 obligations. Registration in EU database required.

35/100

Limited Risk (GPAI / Transparency)

Art. 50 or Art. 53 obligations. Disclosure requirements apply.

15/100

Minimal Risk

No mandatory obligations. Voluntary code of conduct recommended.
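The scale above is a fixed lookup from classification to score — a minimal sketch, assuming the label keys below (they are illustrative, not the production identifiers):

```python
# Illustrative mapping of the published scoring scale.
# Higher score = heavier regulatory exposure, as on this page.
SCORE_SCALE = {
    "PROHIBITED": 100,            # cannot be legally deployed
    "HIGH_SAFETY_COMPONENT": 85,  # Art. 6(1) — Annex I safety component
    "HIGH_ANNEX_III": 80,         # Art. 6(2) — Annex III sector
    "LIMITED": 35,                # Art. 50 / Art. 53 obligations
    "MINIMAL": 15,                # voluntary codes only
}

def score(classification: str) -> int:
    """Look up the indicative score for a gate classification."""
    return SCORE_SCALE[classification]
```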

Assessed

Completed full Sprinkling Act process: diagnostic + report + review.

Conformity assessment: self-certification or third-party?

A widely held misconception: "if my system is classified high-risk, I need to pay a Notified Body €50–150K." True for some cases — not all. Article 43 of the AI Act defines two distinct conformity assessment procedures. The vast majority of Annex III systems (HR, credit, education) can take the internal route.

INTERNAL ROUTE

Annex VI — Self-assessment

Applies to most Annex III systems: employment and HR, credit and insurance, education, essential public services, asylum and migration, law enforcement (under conditions), administration of justice. The provider self-assesses against Art. 9–15, produces technical documentation (Art. 11) and signs the EU declaration of conformity (Art. 47). No Notified Body involved.

Indicative cost: €10–50K depending on external support level.

THIRD-PARTY ROUTE

Annex VII — Notified Body

Mandatory for: (1) AI as a safety component of a product already subject to third-party assessment under harmonisation legislation (MDR, machinery, toys, vehicles) — the AI Act procedure integrates into the product procedure; (2) Annex III §1(a) on remote biometrics, where Annex VII is explicitly required. A Notified Body audits the QMS and technical documentation before the system is placed on the market.

Indicative cost: €50–150K+ per system, on top of the QMS.

Why this distinction matters: the gap between self-certification and third-party audit can represent a 5–10× factor on the compliance bill. Sprinkling Act identifies which route applies to your system in the full report — before you commit a budget.
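The Art. 43 routing rule described above reduces to a short decision — a sketch under stated assumptions: the two input flags are hypothetical stand-ins for the full intake questionnaire, and the rule covers only the two triggers named on this page.

```python
# Hedged sketch of the internal vs third-party routing for a
# high-risk system, per the two triggers described above.

def conformity_route(annex_i_product: bool, remote_biometrics: bool) -> str:
    """Return the applicable conformity assessment route."""
    if annex_i_product or remote_biometrics:
        # Annex VII — Notified Body audit: Annex I products under
        # harmonisation legislation, or Annex III §1(a) remote biometrics
        return "ANNEX_VII_THIRD_PARTY"
    # Annex VI — internal control: most Annex III systems
    # (HR, credit, education, essential services, ...)
    return "ANNEX_VI_SELF_ASSESSMENT"
```

The default branch carries the point of the section: absent those two triggers, an Annex III system self-assesses.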

Sources: Art. 43 AI Act · Annex VI · Annex VII · Art. 47 (EU Declaration of Conformity)

Limitations & Disclaimers

The EU AI Act requires contextual interpretation. Art. 5 prohibited practices are clear for some cases, ambiguous for others. Art. 6 high-risk classification depends on context of use, not just technology. Art. 51 GPAI systemic risk thresholds are still being operationalized. We do not resolve ambiguity — we flag it explicitly and recommend legal counsel for edge cases.

Sprinkling Act is an operational tool, not a legal opinion. It produces the structured, article-mapped artefact that your lawyer, your regulator, or your investor needs as a starting point. The same pattern applied to GDPR: 80% of compliance is documentation and operations; the remaining 20% is legal judgment. We handle the 80%.

Classification is based on information provided by the client. The structured intake questionnaire (20–30 minutes) is designed to reduce information asymmetry, but incomplete or inaccurate inputs will produce incomplete or inaccurate classifications.

The EU AI Act is subject to ongoing interpretation through guidelines, delegated acts, and decisions by the European AI Office. Classifications may change as authoritative guidance evolves. Reports include a temporal stability indicator reflecting this risk.

Sprinkling Act does not cover national implementing legislation, sector-specific regulatory overlaps (e.g. GDPR + AI Act interaction), or AI systems deployed outside the EU.

International Standards Alignment

The Sprinkling Act methodology is built exclusively on the EU AI Act (Regulation 2024/1689). It is not derived from, certified by, or dependent on any external standard. However, the gate logic and risk assessment structure are consistent with the principles of established international frameworks:

NIST AI RMF 1.0

NIST AI 100-1 (2023)

The four NIST functions — Govern, Map, Measure, Manage — mirror the lifecycle approach embedded in our gates. Gate evaluation (Map), scoring with obligations (Measure), and ongoing regulatory monitoring (Manage) follow the same iterative logic. The Sprinkling Act methodology addresses the Map and Measure functions; operational Govern and Manage remain the responsibility of the assessed organisation.

ISO/IEC 42001:2023

AI Management Systems

ISO 42001 requires organisations to establish risk assessment processes (Clause 6), operational controls (Clause 8), and performance evaluation (Clause 9). Our 6-gate assessment produces the risk classification and obligation mapping that feeds into an ISO 42001-compliant AI Management System. The assessment does not replace an AIMS — it provides the regulatory input that an AIMS requires.

Sprinkling Act does not claim ISO 42001 certification or NIST compliance. These references indicate structural coherence, not formal alignment or endorsement.

Methodology Changelog

v1.1

March 2026

Art. 5§1(d) added — criminal risk profiling. Art. 5§1(h) reference corrected — real-time biometric. Art. 6(1) vs 6(2) reference corrected. Art. 50§2 content marking activated. All 8 prohibited practice checks (a-h) now covered. HIGH RISK obligations enriched with Art. 9-15 details. International standards alignment section added (NIST AI RMF, ISO 42001).

v1.0

March 2026

Methodology versioned and published. GPAI systemic risk gate (Art. 55). Art. 6(3) exemption logic. AI Office guidance on Annex III. Regulatory freeze date: March 2026.

v0.1

January 2026

Initial release. 6 regulatory gates based on Art. 5, 6, 50, 51, 53. Article mapping for all Annex III domains.

Regulatory Intelligence

This methodology is kept current by an internal signal-detection system that monitors global AI regulatory developments daily, across sources spanning EU institutions, national authorities, and industry publications in 15 languages.

When a signal reaches CRITICAL or MAJOR tier, the Temporal Stability Indicator on affected reports is updated automatically. Every scoring axis is documented, every threshold is versioned.
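The automatic update rule can be sketched as a one-way downgrade of the Temporal Stability Indicator — a minimal sketch: the exact downgrade semantics (one step for MAJOR, straight to UNSTABLE for CRITICAL) are an assumption, not the documented behaviour.

```python
# Hedged sketch: how a regulatory signal might move a report's
# stability indicator. Tier-to-downgrade mapping is an assumption.

STABILITY_ORDER = ["STABLE", "MODERATE", "UNSTABLE"]

def downgrade(current: str, signal_tier: str) -> str:
    """Apply a signal to a report's stability indicator."""
    if signal_tier == "CRITICAL":
        return "UNSTABLE"  # worst case immediately
    if signal_tier == "MAJOR":
        i = STABILITY_ORDER.index(current)
        return STABILITY_ORDER[min(i + 1, len(STABILITY_ORDER) - 1)]
    return current  # lower-tier signals leave the indicator unchanged
```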

Active signals affecting this methodology:

GPAI Code of Practice — MAJOR — Gate 04 (Art. 51 + 55) — March 2026
High-Risk Guidelines delay — MAJOR — Gate 03 (Art. 6(2) + Annex III) — March 2026

COMPANION FRAMEWORK

The 4 ACTS — a builder-facing lens on the same 6 gates.

The 6 Gates above are how we score AI Act position. The 4 ACTS are how we explain that position to AI agent builders. Each ACT (Accountability, Consent, Traceability, Skill) maps to specific gates above, but uses the language and patterns of how agents are actually built in 2026 — autonomous action, ambient capture, multi-agent chains, vibe-coded shipping. The framework is open MIT on GitHub. The full report applies both lenses simultaneously.

Read the 4 ACTS framework →

See it in action

The free diagnostic applies all 6 gates to your system. 9 questions, instant result.

Free diagnostic — instant


Copyright © 2026 Sprinkling Act. All rights reserved.
