SPRINKLING ACT — SCORING METHODOLOGY
Sprinkling Act offers two distinct assessment levels. The free diagnostic is a self-assessment: you answer 9 questions based on your own perception of your AI system, and the 6 regulatory gates produce an indicative score. The Full Report is a selective, human-reviewed analysis with an extended intake questionnaire — access requires qualification, and applications may be declined.
Free Diagnostic
“Based on what you know about your AI system, what is its most likely regulatory position?”
Self-declared answers → 6-gate scoring → indicative signal. Instant. No account required. This is your perception, not our verdict. The free score is not designed to simplify a 144-page regulation — it is designed to guide your self-assessment.
Full Report (€690)
“Given verified information about your system, what is its defensible regulatory classification?”
Qualification required → you complete a structured intake questionnaire covering 14 sections (20–30 minutes of your time) → we confirm within 1–3 business days → human review + article-by-article analysis → report delivered within 5–7 business days (~1–2 weeks total). The report runs 15–22 pages, adapted to your specific classification — a HIGH system gets more pages of obligation analysis than a LIMITED system. Every report includes a detachable one-page executive summary, an SVG risk gauge, a compliance timeline, an AI Positive governance radar, a GDPR Art. 22 cross-analysis, an algorithmic bias assessment, a residual-risks analysis, and an integrated FAQ. The intake questionnaire asks precise questions designed to surface the information that matters — even when you didn’t know it was relevant. Your Full Report score may differ from your free diagnostic score.
01
Every question maps to a specific article, paragraph, or annex. No interpretation beyond the text of the regulation.
02
Gates are irreversible. A Prohibited Practice at Gate 01 ends the assessment immediately — no score can override a legal violation.
03
Every classification comes with a full traceable path — gate by gate, article by article. Exportable and independently verifiable.
04
Each report includes a stability indicator: STABLE, MODERATE, or UNSTABLE — reflecting how likely the classification is to change as guidelines evolve.
05
Sprinkling Act does not interpret ambiguous cases in favor of the client. When classification is uncertain, this is flagged explicitly with a recommendation to seek legal counsel.
06
Every report carries two version markers: the Sprinkling Act methodology version and the regulatory freeze date (March 2026). This means a report produced in March 2026 reflects the AI Act as understood at that date. Future delegated acts, AI Board guidelines, or jurisprudence do not invalidate existing reports — they may trigger a re-assessment recommendation.
Gates are evaluated in sequence. The first gate triggered determines the final classification. Lower gates are not evaluated once a higher gate is triggered.
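The short-circuit gate logic described above can be sketched in a few lines of Python. This is a minimal illustration only: the gate labels and the predicate fields on `system` are hypothetical stand-ins, not the production scoring engine.

```python
# Minimal sketch of sequential gate evaluation: the first gate whose
# predicate fires determines the classification; later gates are skipped.
# Gate labels and predicate fields are illustrative assumptions, not the
# production Sprinkling Act engine.

GATES = [
    ("PROHIBITED",            lambda s: s.get("article_5_practice", False)),
    ("HIGH_SAFETY_COMPONENT", lambda s: s.get("annex_i_safety_component", False)),
    ("HIGH_ANNEX_III",        lambda s: s.get("annex_iii_domain", False)),
    ("GPAI_SYSTEMIC",         lambda s: s.get("training_flops", 0) > 1e25),
    ("LIMITED_TRANSPARENCY",  lambda s: s.get("interacts_with_humans", False)),
    ("GPAI_PROVIDER",         lambda s: s.get("is_gpai_provider", False)),
]

def classify(system: dict) -> str:
    """Return the first triggered gate's label, or MINIMAL if none fire."""
    for label, triggered in GATES:
        if triggered(system):
            return label  # irreversible: lower gates are never evaluated
    return "MINIMAL"

# Example: an Annex III HR screening chatbot — the Annex III gate fires
# first, so the transparency gate below it is never reached.
print(classify({"annex_iii_domain": True, "interacts_with_humans": True}))
# → HIGH_ANNEX_III
```

The ordering of the list is the whole point: a system that both falls under Annex III and interacts with humans is classified HIGH, not LIMITED, because the higher gate wins.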
If any practice falls under Article 5 — subliminal manipulation, social scoring, real-time remote biometric identification in public spaces, exploitation of vulnerabilities, untargeted scraping of facial images, emotion recognition in workplace/education, biometric categorisation of sensitive attributes — the assessment stops immediately. No score is assigned. The system cannot be deployed legally.
AI systems that are safety components of products covered by Union harmonisation legislation (Annex I) — machinery, medical devices, civil aviation, motor vehicles, marine equipment — and which must undergo third-party conformity assessment. Medical AI fast-track: if your AI system is a medical device under MDR Class IIa or above, it is automatically classified as high-risk under AI Act Art. 6(1). No further classification analysis required — your MDR class determines your AI Act exposure.
AI systems listed in Annex III across 8 domains. This is where most enterprise AI systems are classified. The Article 6(3) exception (narrow procedural task, no significant harm) may apply but requires documented justification.
General-purpose AI models trained with more than 10²⁵ FLOPs of cumulative compute are presumed to present systemic risk (Art. 51(2)). Additional obligations apply: adversarial testing, incident reporting without undue delay (Art. 55(1)(c)), cybersecurity measures, and energy consumption reporting.
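For a back-of-the-envelope check against the 10²⁵ FLOPs threshold, the widely used approximation for dense-transformer training compute is 6 × N × D (N = parameter count, D = training tokens). The sketch below is illustrative only — actual cumulative compute for Art. 51 purposes depends on the full training run, not this rule of thumb.

```python
# Rough check against the Art. 51 systemic-risk compute threshold, using
# the common 6 * N * D approximation for dense-transformer training FLOPs
# (N = parameters, D = training tokens). Illustrative estimate only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)   # 6 * 7e10 * 1.5e13 = 6.3e24
print(presumed_systemic_risk(70e9, 15e12))      # below 1e25 → False
```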
AI systems that interact with humans or generate content must disclose their AI nature. Chatbots must identify themselves as AI unless this is obvious from the context. AI-generated synthetic media must be labelled.
GPAI model providers (not systemic risk) must maintain technical documentation, publish training data summaries, comply with EU copyright law, and provide downstream providers with necessary compliance information.
Prohibited / Unacceptable Risk
System cannot be legally deployed. Immediate remediation required.
High Risk (Safety Component)
Full Art. 9-15 obligations. Third-party conformity assessment required.
High Risk (Annex III)
Full Art. 9-15 obligations. Registration in EU database required.
Limited Risk (GPAI / Transparency)
Art. 50 or Art. 53 obligations. Disclosure requirements apply.
Minimal Risk
No mandatory obligations. Voluntary code of conduct recommended.
Assessed
Completed full Sprinkling Act process: diagnostic + report + review.
A widely-held misconception: “if my system is classified high-risk, I need to pay a Notified Body €50–150K.” True for some cases — not all. Article 43 of the AI Act defines two distinct conformity assessment procedures. The vast majority of Annex III systems (HR, credit, education) can take the internal route.
INTERNAL ROUTE
Annex VI — Self-assessment
Applies to most Annex III systems: employment and HR, credit and insurance, education, essential public services, asylum and migration, law enforcement (under conditions), administration of justice. The provider self-assesses against Art. 9–15, produces technical documentation (Art. 11) and signs the EU declaration of conformity (Art. 47). No Notified Body involved.
Indicative cost: €10–50K depending on external support level.
THIRD-PARTY ROUTE
Annex VII — Notified Body
Mandatory for: (1) AI as a safety component of a product already subject to third-party assessment under Union harmonisation legislation (MDR, machinery, toys, vehicles) — the AI Act procedure integrates into the product procedure; (2) Annex III §1(a) remote biometrics, where Annex VII applies unless harmonised standards have been fully applied. A Notified Body audits the quality management system (QMS) and technical documentation before the system is placed on the market.
Indicative cost: €50–150K+ per system, on top of the QMS.
Why this distinction matters: the gap between self-certification and third-party audit can represent a 5–10× factor on the compliance bill. Sprinkling Act identifies which route applies to your system in the full report — before you commit a budget.
Sources: Art. 43 AI Act · Annex VI · Annex VII · Art. 47 (EU Declaration of Conformity)
The EU AI Act requires contextual interpretation. Art. 5 prohibited practices are clear for some cases, ambiguous for others. Art. 6 high-risk classification depends on context of use, not just technology. Art. 51 GPAI systemic risk thresholds are still being operationalised. We do not resolve ambiguity — we flag it explicitly and recommend legal counsel for edge cases.
Sprinkling Act is an operational tool, not a legal opinion. It produces the structured, article-mapped artefact that your lawyer, your regulator, or your investor needs as a starting point. The same pattern applied to GDPR: 80% of compliance is documentation and operations; the remaining 20% is legal judgment. We handle the 80%.
Classification is based on information provided by the client. The structured intake questionnaire (20–30 minutes) is designed to reduce information asymmetry, but incomplete or inaccurate inputs will produce incomplete or inaccurate classifications.
The EU AI Act is subject to ongoing interpretation through guidelines, delegated acts, and decisions by the European AI Office. Classifications may change as authoritative guidance evolves. Reports include a temporal stability indicator reflecting this risk.
Sprinkling Act does not cover national implementing legislation, sector-specific regulatory overlaps (e.g. GDPR + AI Act interaction), or AI systems deployed outside the EU.
The Sprinkling Act methodology is built exclusively on the EU AI Act (Regulation 2024/1689). It is not derived from, certified by, or dependent on any external standard. However, the gate logic and risk assessment structure are consistent with the principles of established international frameworks:
NIST AI RMF 1.0
NIST AI 100-1 (2023)
The four NIST functions — Govern, Map, Measure, Manage — mirror the lifecycle approach embedded in our gates. Gate evaluation (Map), scoring with obligations (Measure), and ongoing regulatory monitoring (Manage) follow the same iterative logic. The Sprinkling Act methodology addresses the Map and Measure functions; operational Govern and Manage remain the responsibility of the assessed organisation.
ISO/IEC 42001:2023
AI Management Systems
ISO 42001 requires organisations to establish risk assessment processes (Clause 6), operational controls (Clause 8), and performance evaluation (Clause 9). Our 6-gate assessment produces the risk classification and obligation mapping that feeds into an ISO 42001-compliant AI Management System. The assessment does not replace an AIMS — it provides the regulatory input that an AIMS requires.
Sprinkling Act does not claim ISO 42001 certification or NIST compliance. These references indicate structural coherence, not formal alignment or endorsement.
v1.1
March 2026
Art. 5§1(d) added — criminal risk profiling. Art. 5§1(h) reference corrected — real-time biometric. Art. 6(1) vs 6(2) reference corrected. Art. 50§2 content marking activated. All 8 prohibited practice checks (a-h) now covered. HIGH RISK obligations enriched with Art. 9-15 details. International standards alignment section added (NIST AI RMF, ISO 42001).
v1.0
March 2026
Methodology versioned and published. GPAI systemic risk gate (Art. 55). Art. 6(3) exemption logic. AI Office guidance on Annex III. Regulatory freeze date: March 2026.
v0.1
January 2026
Initial release. 6 regulatory gates based on Art. 5, 6, 50, 51, 53. Article mapping for all Annex III domains.
This methodology is kept current by an internal signal detection system that monitors global AI regulatory developments daily, across sources spanning EU institutions, national authorities, and industry publications in 15 languages.
When a signal reaches CRITICAL or MAJOR tier, the Temporal Stability Indicator on affected reports is updated automatically. Every scoring axis is documented, every threshold is versioned.
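The update rule just described can be sketched as follows. The CRITICAL/MAJOR tier names come from the text above; the specific one-step downgrade mapping is an assumption for illustration, not the documented internal rule.

```python
# Sketch of the described update rule: a CRITICAL or MAJOR signal
# downgrades the Temporal Stability Indicator on affected reports.
# Tier names are from the methodology text; the one-step downgrade
# mapping itself is an illustrative assumption.

DOWNGRADE = {"STABLE": "MODERATE", "MODERATE": "UNSTABLE", "UNSTABLE": "UNSTABLE"}

def apply_signal(indicator: str, signal_tier: str) -> str:
    if signal_tier in ("CRITICAL", "MAJOR"):
        return DOWNGRADE[indicator]
    return indicator  # lower-tier signals leave the indicator unchanged

print(apply_signal("STABLE", "MAJOR"))  # → MODERATE
print(apply_signal("STABLE", "MINOR"))  # → STABLE
```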
COMPANION FRAMEWORK
The 6 Gates above are how we score AI Act position. The 4 ACTS are how we explain that position to AI agent builders. Each ACT (Accountability, Consent, Traceability, Skill) maps to specific gates above, but uses the language and patterns of how agents are actually built in 2026 — autonomous action, ambient capture, multi-agent chains, vibe-coded shipping. The framework is open MIT on GitHub. The full report applies both lenses simultaneously.
Read the 4 ACTS framework →
The free diagnostic applies all 6 gates to your system. 9 questions, instant result.
Free diagnostic — instant
SEE ALSO
Free Diagnostic
Run the 6-gate assessment on your AI system. 9 questions, instant result.
Full Report
See what a complete report contains — 15–22 pages.
AI Agents
The 4 ACTS — companion framework for AI agent builders, mapped to the 6 gates.
High-Risk Systems
Which AI systems fall under Annex III? Full breakdown.
GPAI Obligations
Art. 53-55 — what GPAI providers and deployers must do.