
GPAI fine-tuning: when do you become a provider under Article 25?

By Lamar B. Shucrani — April 10, 2026 · 13 min read

You build on GPT-4, Claude, or Llama. You can prompt-engineer, add RAG, or fine-tune the model. The question no wrapper builder wants to ask: at what point do you become a provider of a modified GPAI, with all the Art. 53 obligations that come with it?

TL;DR

  • Article 25 of the AI Act defines three situations where a downstream actor becomes a provider of an AI system. The one that matters for fine-tuning is substantial modification (Art. 25(1)(b)).
  • Article 3(23) defines “substantial modification” as a change that (a) was not anticipated in the initial conformity assessment and (b) affects compliance OR (c) modifies the system's intended purpose.
  • Prompt engineering and RAG are almost certainly out of scope. Heavy fine-tuning that changes capabilities or behavior is almost certainly in scope. Light fine-tuning is the grey zone where documentation matters.
  • Precise quantitative thresholds (e.g. a percentage of retraining FLOPs) are not in the regulation text. They will come from AI Office guidelines, which have not yet been published as of this writing. Until then, caution and documentation are your best defense.

1. Why this question changes everything

In our previous article, we established that your GPT/Claude wrapper is NOT a GPAI. The GPAI is the underlying model (GPT-4, Claude, Gemini) — and Art. 53 obligations (technical documentation, copyright policy, downstream transparency) are borne by OpenAI, Anthropic or Google, not you. You are a deployer of the model.

Except that the moment you start MODIFYING that model (fine-tuning, continual pretraining, training-time integration of proprietary data), you enter a different regulatory terrain. At some point the modification becomes substantial enough that, in the eyes of the AI Act, you become a provider of the modified GPAI. And that point has massive consequences:

  • Art. 53 obligations: technical documentation of the modified model, training data summary, copyright policy, downstream user information, EU database registration.
  • Art. 55 obligations if systemic risk: if your modified model crosses the systemic compute threshold (≥ 10²⁵ FLOPs of cumulative training), you also owe adversarial testing, incident reporting, cybersecurity measures, and energy consumption reporting.
  • Liability: you become the accountable party for the model's behavior toward downstream deployers who use it through your API. The original provider (OpenAI, Anthropic) is no longer on the front line.

In other words: if you budgeted €20K in compliance costs as a deployer, and a fine-tune propels you into GPAI provider territory, you are potentially looking at €200K–500K, plus ongoing obligations. The question “am I a provider of the modified model?” deserves a real answer, not a shrug.

2. What Article 25 actually says

Article 25 is titled “Responsibilities along the AI value chain.” Its paragraph 1 lists the three situations in which a distributor, importer, deployer, or other third party becomes a “provider” within the meaning of the AI Act, inheriting the full set of provider obligations:

Art. 25(1)(a)

They put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating otherwise.

Art. 25(1)(b): the fine-tuning case

They make a substantial modification to a high-risk AI system that has already been placed on the market or put into service in such a way that it remains a high-risk AI system pursuant to Article 6.

Art. 25(1)(c)

They modify the intended purpose of an AI system, including a GPAI system which has not been classified as high-risk and has already been placed on the market or put into service, in such a way that the AI system concerned becomes a high-risk AI system in accordance with Article 6.

Note: the formulations above are faithful reformulations based on the official consolidated version of Regulation 2024/1689 published in the Official Journal. For the exact full text, see the EUR-Lex source at the bottom of the article.

3. What counts as a “substantial modification”?

Article 3(23) of the regulation defines the term. A substantial modification is a change made to an AI system after it has been placed on the market or put into service that meets TWO cumulative conditions:

  1. It was not foreseen or planned in the initial conformity assessment. In other words: the original provider did not anticipate that type of change in its technical documentation and conformity tests.
  2. It affects the system's compliance OR modifies its intended purpose. Cosmetic or purely ergonomic changes don't count. The change must affect either how the system meets regulatory requirements, or what the system is supposed to do.

Both conditions must apply together. A change that was foreseen by the original provider — say, a light fine-tuning performed through OpenAI's official fine-tuning API — is probably out of scope, even if it affects behavior, because it falls within what the provider “planned for.” Conversely, heavy continual pretraining done on an open-source model (Llama for example) is potentially substantial: it was not foreseen by Meta and it affects both capabilities and potentially compliance (bias, safety, robustness — all the things Art. 15 requires).
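The cumulative structure of the test is easy to get wrong, so here is a minimal sketch of it as a boolean check. The function and parameter names are ours, a simplification for illustration, not legal terms of art and not a substitute for the legal analysis:

```python
# Illustrative sketch of the Art. 3(23) test: a modification is "substantial"
# only if it was NOT foreseen in the original conformity assessment AND it
# either affects compliance or changes the intended purpose.

def is_substantial_modification(foreseen_by_original_provider: bool,
                                affects_compliance: bool,
                                changes_intended_purpose: bool) -> bool:
    return (not foreseen_by_original_provider) and (
        affects_compliance or changes_intended_purpose
    )

# Light fine-tuning through the provider's official API: foreseen,
# so out of scope even if behavior changes.
print(is_substantial_modification(True, True, False))    # False
# Heavy continual pretraining on an open-source model: not foreseen
# by Meta and affects compliance.
print(is_substantial_modification(False, True, False))   # True
```

Note how the first condition acts as a gate: if the original provider planned for the change, the second condition never even comes into play.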

4. The spectrum of modifications, from lightest to heaviest

Here's how to read the spectrum of possible GPAI modifications in practice. Each line is a reasonable reading of the text — not a legal verdict.

Prompt engineering

You write system prompts, instructions, templates. No model weights are modified. You remain a deployer of the model — provider of your own AI system. Out of scope of Art. 25(1)(b).

Retrieval-augmented generation (RAG)

You inject documents into the context at inference time. The model itself is unchanged. This is a deployment technique, not a modification. Out of scope.

Light fine-tuning (a few hundred examples through the provider's official fine-tuning API)

Grey zone. Argument in your favor: the original provider explicitly planned for this pathway (it's their fine-tuning API), so the modification was “foreseen.” Argument against: if the fine-tuning significantly changes behavior or intended purpose, the second condition of Art. 3(23) may still trigger. Caution: explicitly document what the fine-tuning does and does not do.

Significant fine-tuning (thousands of examples, new capability or behavior)

High interpretation risk zone. If the fine-tuning produces a model with capabilities the original provider did not test, and which affects compliance (e.g. introduces bias, bypasses safety guardrails, creates responses in an undocumented domain), you are likely within substantial modification territory. At this stage, specialized legal advice is strongly recommended before placing the modified model on the market.

Continual pretraining or training from scratch

You are a GPAI provider. Full stop. You have all Art. 53 obligations (technical documentation, training data summary, copyright policy, downstream information, EU database registration). If your cumulative training crosses 10²⁵ FLOPs, you fall under Art. 55 (systemic risk) with the additional obligations that come with it.

5. Quantitative thresholds — what we know and don't know

The tech press and some legal blogs circulate precise numbers like “if the fine-tuning represents more than one-third of the original training FLOPs, you are a provider of the modified model.” Let's be clear: these thresholds are NOT in the text of Regulation 2024/1689.

The only explicit quantitative threshold in the regulation concerns GPAI systemic risk classification: Recital 110 and Art. 51 set 10²⁵ FLOPs of cumulative training as the presumed systemic risk threshold. For substantial modification itself, no number is given. The text remains qualitative: “change not foreseen and affects compliance or intended purpose.”
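For that one explicit threshold, the check is trivial arithmetic on cumulative training compute. A minimal sketch, with hypothetical compute figures:

```python
# Illustrative only: the 10**25 FLOP presumption for GPAI systemic risk,
# expressed as a comparison on cumulative training compute.
# Example input values below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumed systemic-risk threshold (Art. 51)

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Cumulative compute counts the original training run plus any
    additional fine-tuning or continual pretraining you performed."""
    return cumulative_training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

print(presumed_systemic_risk(2e25))   # True: a frontier-scale training run
print(presumed_systemic_risk(5e21))   # False: typical fine-tuning compute
```

The gap matters in practice: most fine-tuning runs sit orders of magnitude below 10²⁵ FLOPs, so for downstream modifiers the live question is almost always substantial modification, not systemic risk.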

More precise guidelines will come from the European AI Office. At the time of this article, those guidelines have not been published. They are expected progressively between 2026 and 2027. Some voluntary GPAI Codes of Practice, being drafted by industry working groups coordinated by the AI Office, will likely address substantial modification — but those codes are not hard law.

Practical consequence

Until the AI Office rules, specialized lawyers rely on the qualitative text of Art. 3(23) and on analogy with other EU regimes (CE marking, MDR) for interpretation. Meaning: your best strategy is not to wait for a numeric threshold, but to rigorously document (a) what the original model did, (b) what your modification changes, (c) why that change respects or doesn't respect the original conformity. This dossier is your defense in case of scrutiny.

6. Checklist: am I a provider of a modified GPAI?

If you answer yes to one or more of the following, you must seriously consider the possibility of being a provider under Art. 25(1)(b) or (c):

  1. Did you perform training (fine-tuning or continual pretraining) on the model weights, not just on prompts or inference context?
  2. Has the modified model gained a capability the original did not demonstrate? (new domain, new language, new behavior, new performance level)
  3. Has your modification potentially affected the safety, robustness, cybersecurity, or bias handling of the model in a way not documented by the original provider?
  4. Are you using the modified model in a use case matching Annex III (employment, credit, education, public services, biometrics, justice, migration), making the AI system high-risk?
  5. Are you making the modified model available to third parties (large-scale internal distribution, public API, client-facing product)?

Three or more “yes” answers = seek specialized AI Act legal advice before deployment. A single “yes” to question 3 or 4 alone can be enough to tip the analysis toward provider status.
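The checklist above can be run as a self-triage helper. A hedged sketch, all names ours; the weighting mirrors the article's rule of thumb and is a triage aid, not legal advice:

```python
# Self-triage sketch of the five-question checklist. Three "yes" answers,
# or a single "yes" to question 3 (safety) or 4 (Annex III use case),
# points toward specialized legal advice.

TIPPING_QUESTIONS = {"affects_safety", "annex_iii_use_case"}  # questions 3 and 4

def provider_triage(answers: dict) -> str:
    yes = {question for question, answer in answers.items() if answer}
    if len(yes) >= 3 or yes & TIPPING_QUESTIONS:
        return "seek specialized AI Act legal advice"
    if yes:
        return "grey zone: document your modification rigorously"
    return "likely deployer only"

answers = {  # hypothetical case: LoRA fine-tune exposed via a public API
    "trained_weights": True,
    "new_capability": True,
    "affects_safety": False,
    "annex_iii_use_case": False,
    "third_party_availability": True,
}
print(provider_triage(answers))  # seek specialized AI Act legal advice
```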

7. What to do in practice

  1. Document your baseline. What model are you using? What version? What technical documentation did the original provider make available (model card, system card, usage policy)? Keep a dated copy.
  2. Document your modification. For each fine-tune or training run: type (LoRA, full fine-tune, continual pretraining), data volume, duration, compute used, business objective, before/after testing.
  3. Document the effect. How has behavior changed? What tests did you run to verify your modification does not introduce bias or safety regression? If you did not test, that's a red flag.
  4. Take a position. Based on your documentation, write down your position: in your view, are you a provider of the modified model under Art. 25(1)(b)? Yes, no, or uncertain? This is precisely what a Sprinkling Act report produces — a dated position artefact, grounded in the text, that you can then hand to a lawyer for validation.
  5. If uncertain: legal counsel. Borderline cases deserve specialized legal advice. Our report is not legal advice — it's the structured pre-qualification that lets your lawyer work faster and cheaper.
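The five steps above amount to keeping a dated, structured record of each modification. A minimal sketch of what such a dossier could look like; every field name and example value is ours, for illustration only:

```python
# Illustrative structure for a modification dossier: baseline, modification,
# effect, and position, all dated. Field names and values are hypothetical.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModificationDossier:
    base_model: str                  # step 1: baseline model and version
    base_docs_archived: bool         # dated copy of model card / usage policy kept?
    modification_type: str           # step 2: "LoRA", "full fine-tune", "continual pretraining"
    data_volume_examples: int        # step 2: training data volume
    compute_flops: float             # step 2: compute used
    before_after_tests: list = field(default_factory=list)  # step 3: effect evidence
    position: str = "uncertain"      # step 4: "provider", "deployer", or "uncertain"
    recorded_on: date = field(default_factory=date.today)

dossier = ModificationDossier(
    base_model="llama-3-70b",
    base_docs_archived=True,
    modification_type="LoRA",
    data_volume_examples=4_000,
    compute_flops=5e21,
    before_after_tests=["bias regression suite", "safety red-team pass"],
)
print(dossier.position)  # uncertain
```

An empty `before_after_tests` list in a record like this is exactly the red flag step 3 warns about.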

Does your product fine-tune a GPAI? The free Sprinkling Act diagnostic includes a dedicated question for this situation and can tell you whether your case warrants provider or deployer treatment. 9 questions, 60 seconds, no account.


Sources

  1. EUR-Lex (July 12, 2024) — Regulation (EU) 2024/1689 — Artificial Intelligence Act (full consolidated text). eur-lex.europa.eu/eli
  2. EU AI Act — Article 3 — Definitions (including 'substantial modification', Art. 3(23)). artificialintelligenceact.eu/article
  3. EU AI Act — Article 16 — Obligations of providers of high-risk AI systems. artificialintelligenceact.eu/article
  4. EU AI Act — Article 25 — Responsibilities along the AI value chain. artificialintelligenceact.eu/article
  5. EU AI Act — Article 28 — Notifying authorities. artificialintelligenceact.eu/article
  6. EU AI Act — Article 53 — Obligations for providers of GPAI models. artificialintelligenceact.eu/article
  7. EU AI Act — Article 55 — Obligations for providers of GPAI models with systemic risk. artificialintelligenceact.eu/article
  8. EU AI Act — Recital 84 — On substantial modifications and downstream responsibility. artificialintelligenceact.eu/recital
  9. EU AI Act — Recital 97 — GPAI model definition and training compute threshold. artificialintelligenceact.eu/recital
  10. European AI Office (Commission) — AI Act Service Desk & AI Office guidance pages. ai-act-service-desk.ec.europa.eu
