
EU AI Act for SaaS Companies: Classification, Obligations, and What to Do Now

By Lamar B. Shucrani — March 5, 2026 · 14 min read

Most SaaS companies building AI features assume they are in the clear — minimal risk, no obligations, nothing to worry about. Some of them are right. Many are not. The EU AI Act does not regulate AI as a technology. It regulates specific uses of AI in specific contexts. Whether your SaaS falls under the Act depends entirely on what your AI does, for whom, and in what domain. This article works through the four SaaS profiles that most commonly trigger obligations.


Provider, Deployer, or Both?

Before classification, one question determines your obligations: what is your role under the Act?

Provider

You develop an AI system and make it available — either as a product or as a service embedded in your SaaS platform. You are responsible for the system's design, training, technical documentation, and conformity.

A SaaS company that builds a CV-screening engine and sells it to HR departments is a provider of a high-risk AI system.

Deployer

You integrate an AI system built by someone else into your product or workflow. You are responsible for its use — ensuring it is used as intended, that users are informed, and that human oversight is in place.

A SaaS company that integrates a third-party credit scoring API into its platform is a deployer of a high-risk AI system.

Both

If you build AI features on top of a foundation model (a GPAI model) and expose them to end users, you are simultaneously a deployer of the GPAI model and a provider of the downstream AI system.

A SaaS company that builds a contract analysis feature using an LLM API and sells it to law firms is both a deployer (of the LLM) and a provider (of the contract analysis system).

This distinction matters because providers and deployers have different obligations — and in most SaaS architectures, you are both simultaneously for different parts of your stack.
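In practice this maps naturally onto an inventory of AI components, each carrying its role (or roles) explicitly. A minimal TypeScript sketch, where the type and field names are illustrative assumptions rather than terms defined by the Act:

```typescript
// Illustrative sketch: inventory each AI component with its role(s) under the Act.
// Names and fields are assumptions for illustration, not terms from the Act itself.

type ActorRole = "provider" | "deployer";

interface AiComponent {
  name: string;              // e.g. "contract-analysis"
  rolesHeld: ActorRole[];    // you can hold both roles for one feature
  upstreamModel?: string;    // GPAI model consumed via API, if any
  exposedToEndUsers: boolean;
}

// A SaaS contract-analysis feature built on a third-party LLM:
// deployer of the GPAI model, provider of the downstream system.
const contractAnalysis: AiComponent = {
  name: "contract-analysis",
  rolesHeld: ["deployer", "provider"],
  upstreamModel: "third-party LLM API",
  exposedToEndUsers: true,
};
```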

Profile 1 — HR SaaS: Recruitment, Performance, and Workforce Management

Classification: Presumed High-Risk — Annex III, Domain 4

Domain 4 of Annex III covers AI systems used in employment, workers management, and access to self-employment. It is deliberately broad. If your SaaS platform includes any of the following AI features, the presumption of high-risk applies:

  • CV screening or candidate ranking
  • Automated shortlisting or rejection of applicants
  • Interview assessment tools (including video analysis)
  • Performance evaluation or scoring of employees
  • Task allocation, work monitoring, or productivity scoring
  • Promotion or dismissal recommendation systems

The presumption can be rebutted under Article 6(3) — but only if the system performs a narrow procedural task, does not materially influence outcomes, and does not profile individuals. In practice, most HR AI features do not meet this threshold. A system that ranks candidates influences the hiring outcome. A system that scores performance influences promotion or termination decisions.

What many HR SaaS companies overlook: the obligation applies even if a human makes the final decision. The Act does not require the AI to decide autonomously. It requires that the AI system materially influence the decision-making process — and most HR AI features are designed to do exactly that.

Obligations if high-risk (provider)

Art. 9 — Risk management system: Document known and foreseeable risks of the system — including bias risks across protected characteristics. This is not a one-time assessment.

Art. 10 — Data governance: Training data must be sufficiently representative of the populations the system will evaluate. Bias testing across age, gender, and ethnicity is required.

Art. 13 — Transparency to deployers: Your HR SaaS customers (the employers) are deployers. You must provide instructions for use covering limitations, intended purpose, and the conditions requiring human oversight.

Art. 14 — Human oversight design: The system must be designed so that deployers can effectively monitor, override, and correct its outputs. A 'review all AI decisions' button is not sufficient — the design must make oversight genuinely feasible.

Art. 26 — Deployer obligations (for your customers): Your SaaS customers, as deployers, must conduct a Fundamental Rights Impact Assessment before use, monitor the system in operation, and log its use. You should inform them of these obligations contractually.
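Of these obligations, Article 14 lands most directly on product design. The Act does not prescribe an interface, but one plausible reading of 'genuinely feasible' oversight is per-output review with a mandatory rationale. A hypothetical sketch, assuming that design direction:

```typescript
// Hypothetical oversight hooks for a high-risk HR feature, in the direction
// Art. 14 points: per-output review, override, and an auditable rationale.
// A design sketch under stated assumptions, not a compliance checklist.

interface AiOutput {
  id: string;
  subjectId: string;        // the candidate or employee being scored
  score: number;
  factors: string[];        // model-surfaced factors shown to the reviewer
}

interface OversightAction {
  outputId: string;
  reviewer: string;
  decision: "accept" | "override" | "escalate";
  justification: string;    // required free text: why the human agreed or not
  timestamp: Date;
}

const auditLog: OversightAction[] = [];

// Each output is reviewed and logged individually; no bulk "approve all" path.
function recordReview(output: AiOutput, action: OversightAction): void {
  if (!action.justification.trim()) {
    throw new Error("A justification is required for every review decision");
  }
  auditLog.push({ ...action, outputId: output.id });
}
```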

Profile 2 — Finance and Insurance SaaS: Credit, Risk, and Eligibility

Classification: Presumed High-Risk — Annex III, Domain 5

Domain 5 covers AI systems used to evaluate creditworthiness, set insurance premiums, or determine access to essential services — including public benefits, emergency services, and social assistance.

High-risk triggers in finance SaaS:

  • Credit scoring or creditworthiness assessment
  • Loan approval or rejection recommendations
  • Insurance premium calculation based on individual risk profiles
  • Fraud detection that affects individual account access
  • Know-Your-Customer (KYC) systems that make eligibility decisions
  • Investment suitability assessments

The domain is defined by the consequence to the individual, not the technical sophistication of the system. A rule-based scoring model that outputs a credit decision is just as subject to Domain 5 as a neural network doing the same thing — if the output materially affects access to financial services.

The B2B nuance

Many finance SaaS companies sell to banks and financial institutions, not to consumers directly. This does not remove the obligation — it distributes it. As a provider, you still owe technical documentation, risk management, conformity assessment, and oversight design. Your customers, as deployers, owe fundamental rights impact assessments, in-use monitoring, and logging. The contractual relationship between you and your customers must reflect this division of responsibility explicitly — the Act does not allow it to remain implicit.

One clarification introduced by the Digital Omnibus proposal (not yet law as of March 2026): the use of personal data in AI for creditworthiness assessments is explicitly addressed, aligning AI Act obligations with GDPR Article 22 automated decision-making protections. DPOs in finance SaaS should monitor this closely.

Obligations if high-risk (provider)

Art. 9 — Risk management: Includes documenting the risk of discriminatory outcomes — a credit model that systematically disadvantages a protected group is a foreseeable risk that must be addressed, not just noted.

Art. 11 — Technical documentation: Must cover architecture, training methodology, performance metrics across demographic subgroups, known limitations, and intended scope of use. Must be kept current.

Art. 43 — Conformity assessment: Self-assessment for most Domain 5 systems. Must be completed before placing the system on the market. Document the assessment — it must be available to market surveillance authorities on request.

Art. 49 — EU database registration: High-risk systems must be registered before deployment. Registration is public. It requires the system name, intended purpose, provider details, and a summary of the conformity assessment.
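Since the Article 49 fields are known in advance, providers can capture them as part of a release checklist. A sketch of such a record, where the field names are assumptions (the actual EU database defines its own schema):

```typescript
// Illustrative shape of the Art. 49 registration data a provider prepares.
// Field names are assumptions; the real EU database defines its own schema.
interface EuDatabaseRegistration {
  systemName: string;
  intendedPurpose: string;
  providerName: string;
  providerContact: string;
  conformitySummary: string;   // summary of the Art. 43 self-assessment
  registeredBeforePlacingOnMarket: boolean;
}
```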

Profile 3 — B2B SaaS with Scoring and Recommendation Engines

Classification: Context-Dependent — Minimal to High Risk

This is the most common and the most misunderstood profile. SaaS companies building scoring and recommendation engines often assume they are not regulated because their output is 'just a suggestion.' The AI Act does not accept this framing.

Classification depends on three variables: the domain in which the system operates, who is being scored or ranked, and whether the output materially influences a consequential decision affecting a natural person.

Classification matrix for scoring engines:

| Use case | Domain | Classification |
| --- | --- | --- |
| Product recommendation (e-commerce) | No regulated domain | Minimal risk — no obligations |
| Content ranking (media platform) | No regulated domain | Minimal risk — no obligations |
| Lead scoring (B2B sales tool) | No regulated domain | Minimal risk — no obligations |
| Candidate scoring (HR tool) | Annex III, Domain 4 | Presumed high-risk |
| Customer creditworthiness scoring | Annex III, Domain 5 | Presumed high-risk |
| Risk scoring for insurance | Annex III, Domain 5 | Presumed high-risk |
| Student performance scoring | Annex III, Domain 3 | Presumed high-risk |
| Supplier risk scoring (procurement) | No regulated domain | Minimal risk — no obligations |

The critical insight: the same technical architecture — a scoring model that ranks entities — produces entirely different regulatory outcomes depending on the domain. A supplier scoring model and a candidate scoring model may be technically identical. Regulatorily, they are not.
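Because the matrix is a lookup from domain to classification, it can be encoded and run against a feature inventory. A simplified sketch that only mirrors the table above (real classification still requires legal analysis):

```typescript
// Simplified encoding of the classification matrix above.
// This only mirrors the table; it is not a substitute for legal analysis.

type Classification = "minimal" | "presumed-high-risk";

const annexIiiDomains: Record<string, Classification> = {
  "Annex III, Domain 3 (education)": "presumed-high-risk",
  "Annex III, Domain 4 (employment)": "presumed-high-risk",
  "Annex III, Domain 5 (credit / essential services)": "presumed-high-risk",
};

function classifyScoringEngine(domain: string | null): Classification {
  // Same architecture, different domain, different regulatory outcome.
  if (domain !== null && domain in annexIiiDomains) {
    return annexIiiDomains[domain];
  }
  // e.g. product recommendation, lead scoring, supplier risk scoring
  return "minimal";
}

// Technically identical engines, regulatorily distinct:
classifyScoringEngine(null);                                // "minimal"
classifyScoringEngine("Annex III, Domain 4 (employment)");  // "presumed-high-risk"
```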

The profiling trap

Article 6(3) provides an escape route from high-risk classification for Annex III systems that do not 'profile individuals' — defined as automated processing of personal data to assess aspects of a person's life such as performance, economic situation, health, preferences, or behavior. If your scoring engine processes personal data to rank or evaluate individuals in a regulated domain, it almost certainly profiles them within the meaning of the Act. The exception does not apply.
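The test reduces to three cumulative conditions plus the profiling bar. A sketch of that decision logic, as a simplification of the legal test rather than a substitute for it:

```typescript
// Sketch of the Art. 6(3) rebuttal logic for an Annex III system.
// A simplification for illustration; the legal test is richer than this.

interface Article63Facts {
  narrowProceduralTask: boolean;
  materiallyInfluencesOutcome: boolean;
  profilesIndividuals: boolean; // automated processing of personal data to assess
                                // performance, economic situation, health, etc.
}

// Conditions are cumulative; profiling forecloses the exception outright.
function highRiskPresumptionRebutted(f: Article63Facts): boolean {
  if (f.profilesIndividuals) return false;
  return f.narrowProceduralTask && !f.materiallyInfluencesOutcome;
}

// A candidate-ranking engine: influences the outcome and profiles people.
highRiskPresumptionRebutted({
  narrowProceduralTask: false,
  materiallyInfluencesOutcome: true,
  profilesIndividuals: true,
}); // false: the high-risk presumption stands
```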

Profile 4 — SaaS with Chatbots and Generative AI

Classification: Limited Risk to GPAI Obligations — Article 50 + Articles 51–55

SaaS companies integrating generative AI — whether as a customer-facing chatbot, a document generation tool, or an AI assistant — face two distinct regulatory layers that operate independently of the high-risk classification framework.

Layer 1 — Article 50: Transparency (effective August 2, 2026)

Chatbots must disclose their AI nature

Any AI system designed to interact with natural persons must inform those persons that they are interacting with AI — clearly, at the beginning of the interaction. The only exception is where it is obvious from context. A customer support chatbot on a SaaS platform is not 'obvious from context.' Disclosure is required.
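In product terms, the disclosure belongs at session creation, before the first model response. A hypothetical sketch for a support chatbot, where the wording and function names are illustrative:

```typescript
// Hypothetical session-start disclosure for a support chatbot (Art. 50 direction).
// The Act requires clear disclosure; the wording and placement here are assumptions.

interface ChatMessage {
  role: "system-notice" | "user" | "assistant";
  text: string;
}

// The disclosure is the first message of every session, before any model output.
function openChatSession(): ChatMessage[] {
  return [{
    role: "system-notice",
    text: "You are chatting with an AI assistant. A human agent is available on request.",
  }];
}
```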

AI-generated content must be labelled

If your SaaS generates synthetic audio, video, or images, these must be labelled as AI-generated in a machine-readable format. This applies to document generation, voice synthesis, and image creation features. Text generation for informational purposes on matters of public interest also requires labelling.
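The Act requires machine-readability but does not prescribe a single format; one plausible approach is to attach a provenance record to each generated asset. A sketch, where the metadata shape is an assumption rather than a mandated standard:

```typescript
// Illustrative machine-readable provenance record for generated content.
// The Act mandates machine-readability; this particular shape is an assumption.

interface SyntheticContentLabel {
  aiGenerated: true;
  generator: string;        // the product feature that produced the asset
  generatedAt: string;      // ISO 8601 timestamp
}

function labelGeneratedAsset(feature: string): SyntheticContentLabel {
  return {
    aiGenerated: true,
    generator: feature,
    generatedAt: new Date().toISOString(),
  };
}

// Attach the label to the asset's metadata and expose it alongside the file.
const label = labelGeneratedAsset("document-generation");
```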

Emotion recognition features require notification

If your SaaS includes any feature that infers emotional states from user inputs — tone analysis, sentiment scoring applied to individuals — users must be notified when such a system is active.

Layer 2 — GPAI obligations (effective August 2, 2025)

If you integrate a GPAI model (an LLM API such as GPT-4, Claude, Gemini, or an open-source equivalent) into your SaaS product, you are a deployer of that GPAI model. Your obligations as deployer are defined in Article 26 and depend on the risk classification of the downstream system you build.

Your downstream system may inherit high-risk classification

If you build a GPAI-powered feature that operates in an Annex III domain — for example, an AI assistant that helps HR managers evaluate candidates — the downstream system may be classified as high-risk, regardless of the underlying model. The model being general-purpose does not make the application minimal risk.

You cannot contract away provider obligations

Some SaaS companies assume that because they use an API, the model provider (OpenAI, Anthropic, Google) bears all obligations. This is incorrect. As a provider of the downstream application, you bear the obligations associated with that application's classification. The GPAI model provider bears obligations for the model. Both sets of obligations exist simultaneously.

Terms of service from GPAI providers are not compliance

Accepting the usage policies of an LLM API is not equivalent to conducting a conformity assessment under the EU AI Act. If the system you build is high-risk, you owe a conformity assessment on that system — not on the underlying model.

The Question Every SaaS Company Needs to Answer

Classification under the EU AI Act is not self-evident. It requires answering a specific sequence of questions — the same sequence that Sprinkling Act's 6-gate framework formalizes:

Gate 1 — Does any feature of your product use an AI system that performs prohibited practices under Article 5? (Subliminal manipulation, social scoring, real-time biometric ID, etc.)

Gate 2 — Is any AI system in your product a safety component of a regulated product under Annex I? (Medical devices, machinery, vehicles, aviation.)

Gate 3 — Does any AI system in your product fall within the 8 domains of Annex III? (Biometrics, infrastructure, education, employment, essential services, law enforcement, migration, justice.)

Gate 4 — Does any AI model you develop or deploy qualify as a GPAI model with systemic risk under Article 51? (10²⁵ FLOPs training compute threshold.)

Gate 5 — Does any customer-facing AI feature require transparency disclosure under Article 50? (Chatbots, synthetic content, emotion recognition.)

Gate 6 — Do you provide a GPAI model, triggering the Article 53 provider obligations?

A 'no' at Gate 1 through Gate 4 with a 'yes' at Gate 5 means no high-risk obligations but active transparency obligations. A 'yes' at Gate 3 means you have high-risk obligations regardless of your answers to Gates 5 and 6. Gates are not alternatives — they are cumulative. Multiple gates can fire simultaneously.
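Because the gates are cumulative, an assessment has to collect every gate that fires rather than stopping at the first match. A minimal sketch of that evaluation, with predicates standing in for real legal analysis:

```typescript
// Cumulative gate evaluation: collect every gate that fires, never short-circuit.
// The boolean answers are placeholders for real legal analysis of each gate.

interface GateAnswers {
  prohibitedPractice: boolean;      // Gate 1 (Art. 5)
  annexISafetyComponent: boolean;   // Gate 2 (Annex I)
  annexIiiDomain: boolean;          // Gate 3 (Annex III)
  gpaiSystemicRisk: boolean;        // Gate 4 (Art. 51, 10^25 FLOPs threshold)
  transparencyFeature: boolean;     // Gate 5 (Art. 50)
  gpaiProvider: boolean;            // Gate 6 (Art. 53)
}

function firedGates(a: GateAnswers): string[] {
  const gates: [string, boolean][] = [
    ["Gate 1: prohibited practice", a.prohibitedPractice],
    ["Gate 2: Annex I safety component", a.annexISafetyComponent],
    ["Gate 3: Annex III high-risk domain", a.annexIiiDomain],
    ["Gate 4: GPAI systemic risk", a.gpaiSystemicRisk],
    ["Gate 5: Art. 50 transparency", a.transparencyFeature],
    ["Gate 6: GPAI provider obligations", a.gpaiProvider],
  ];
  return gates.filter(([, fired]) => fired).map(([name]) => name);
}
```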

What 'Minimal Risk' Actually Means

Most SaaS AI features genuinely fall into the minimal or no-risk category — AI-enabled search, product recommendation, spam filtering, basic automation. For these, the EU AI Act imposes no mandatory obligations. Voluntary codes of conduct exist but are not required.

However, 'minimal risk' is a conclusion that must follow from classification, not an assumption made before it. The most common compliance failure in SaaS is not building high-risk systems without safeguards — it is building high-risk systems while assuming minimal risk because the feature 'just makes suggestions.'

The Act evaluates consequence, not intent.

Run the 6-Gate Assessment on Your SaaS Stack

The Sprinkling Act free assessment walks through all six regulatory gates — Article 5, Article 6, Annex III, Articles 50–53 — and produces a star rating, a score, and a traceable audit trail. 9 questions, instant result. The result is not legal advice. It is a defensible, article-mapped classification you can bring to your legal team, your board, or your investors.


Sources

  1. EUR-Lex (July 12, 2024) — Regulation (EU) 2024/1689 — Artificial Intelligence Act (full text). eur-lex.europa.eu/eli
  2. EU AI Act — Article 3 — Definitions (Provider, Deployer, AI System). artificialintelligenceact.eu/article
  3. EU AI Act — Article 6 — Classification Rules for High-Risk AI Systems. artificialintelligenceact.eu/article
  4. EU AI Act — Article 25 — Obligations of Product Manufacturers. artificialintelligenceact.eu/article
  5. EU AI Act — Article 26 — Obligations of Deployers of High-Risk AI Systems. artificialintelligenceact.eu/article
  6. EU AI Act — Article 50 — Transparency Obligations. artificialintelligenceact.eu/article
  7. EU AI Act — Article 53 — Obligations for Providers of General-Purpose AI Models. artificialintelligenceact.eu/article
  8. EU AI Act — Annex III — High-Risk AI Systems Referred to in Article 6(2). artificialintelligenceact.eu/annex