TL;DR
- Your wrapper is NOT a GPAI. The GPAI is the underlying model (GPT-4, Claude, Gemini), not your product.
- You are most likely a deployer of the model AND a provider of your own AI system.
- Your obligations depend on WHAT YOUR PRODUCT DOES, not on the underlying model. If your use case falls under Annex III, you are subject to the full high-risk regime.
- If you substantially modify the model (e.g. significant fine-tuning), you may become a provider of the modified GPAI (Art. 25).
1. What is a GPAI, precisely?
Article 3(63) of the AI Act defines a GPAI model (General Purpose AI) as an AI model trained on large amounts of data, designed to serve a variety of purposes, capable of performing a wide range of distinct tasks, and which can be integrated into downstream systems or applications.
Recital 97 clarifies that the relevant models are those trained with at least 10²³ FLOPs of compute and capable of generating language (text or audio), text-to-image or text-to-video content. This explicitly captures GPT-4, GPT-3.5, Claude (all versions), Gemini (all versions), Llama, Mistral, and equivalent foundation models.
Key point: a "GPAI" is a MODEL, not a SYSTEM. Your product — which wraps that model in an interface, system prompts, guardrails, business logic — is not a GPAI. It is an "AI system" under Art. 3(1).
2. Provider or deployer? Both at once.
This is the most misunderstood point. When you build a SaaS product that calls GPT-4 via the OpenAI API, you simultaneously hold two distinct roles:
ROLE 1
Deployer of the GPAI model (OpenAI/Anthropic/Google)
You "use" the GPAI model within the meaning of Art. 3(4). The obligations of the model's provider (Art. 53: technical documentation, copyright policy, downstream transparency) are borne by OpenAI, not by you. You don't duplicate these obligations.
ROLE 2
Provider of your own AI system
You "place on the market or put into service" an AI system — yours, not OpenAI's. Art. 3(3) is clear: a "provider" is anyone who develops an AI system AND places it on the market under their own name or trademark. Your wrapper is your product. You are its provider.
Direct consequence: your obligations depend on what YOUR system does, not on what the underlying model is capable of. A wrapper that filters CVs is an Annex III high-risk system (point 4(a), recruitment and selection). A wrapper that generates cat images is not.
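The dual-role split above can be sketched as a data structure: one product, two simultaneous roles, each carrying its own obligation set. The article numbers come from the text; the class and field names are illustrative, not terms from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class RegulatoryRole:
    role: str                              # "deployer" or "provider"
    target: str                            # what the role attaches to
    obligations: list[str] = field(default_factory=list)

# One SaaS wrapper calling GPT-4 via API holds BOTH roles at once.
wrapper_roles = [
    RegulatoryRole(
        role="deployer",
        target="the underlying GPAI model (e.g. GPT-4)",
        # Art. 53 duties (tech docs, copyright policy) stay with OpenAI.
        obligations=["use the model per the provider's instructions",
                     "Art. 26 duties if the use case is high-risk"],
    ),
    RegulatoryRole(
        role="provider",
        target="your own AI system (the wrapper)",
        # Obligations follow YOUR use case, not the model's capabilities.
        obligations=["Art. 50 transparency", "Annex III high-risk check"],
    ),
]

assert {r.role for r in wrapper_roles} == {"deployer", "provider"}
```

The point of the sketch: the obligation lists attach to different targets, so discharging one role never discharges the other.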
3. When do you become a provider of the GPAI model itself?
Article 25 of the AI Act defines responsibilities along the value chain. It specifies that if you substantially modify an AI system already on the market (including a GPAI), you become its provider for AI Act purposes. Concretely:
Prompt engineering only: Not a substantial modification. You remain a deployer of the model.
RAG / retrieval augmentation: No modification to the model itself. You remain a deployer.
Light fine-tuning (a few hundred examples): Grey area. Likely not substantial, but document your reasoning.
Significant fine-tuning (change of capabilities or behavior): You may become a provider of the modified model, with Art. 53 obligations on top of those of your AI system.
Training from scratch or massive continual pretraining: You are a GPAI provider. Full regime.
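The five-step spectrum above is essentially a lookup from modification type to resulting role. The mapping below restates it as code; the category keys are mine, and the outcomes paraphrase the text (Art. 25, Art. 53), not official classifications.

```python
# Modification spectrum -> likely AI Act role (paraphrasing the text above).
MODIFICATION_OUTCOMES = {
    "prompt_engineering":      "deployer of the model",
    "rag_retrieval":           "deployer of the model",
    "light_fine_tuning":       "likely deployer (grey area: document it)",
    "significant_fine_tuning": "possibly provider of the modified model (Art. 53)",
    "training_from_scratch":   "GPAI provider (full regime)",
}

def role_after_modification(kind: str) -> str:
    """Return the likely AI Act role for a given type of modification."""
    return MODIFICATION_OUTCOMES[kind]

assert role_after_modification("rag_retrieval") == "deployer of the model"
```

Note that only the last two rows change your role with respect to the model itself; everything above them leaves you a deployer.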
4. What this actually costs you
The range depends entirely on the regime you fall under. Here are the sourced orders of magnitude from the CEPS 2021 report and market estimates for deployers:
| Profile | Initial | Annual |
|---|---|---|
| Simple wrapper, non-Annex III use case | €0–3K | <€5K |
| Annex III wrapper, self-assessment (Annex VI) | €20–50K | €10–30K |
| Annex III provider with full QMS | €144–330K | ~€71K |
| GPAI provider (massive fine-tuning) | €200–500K+ | €100K+ |
Important: if you already have GDPR maturity (DPO, records, DPIAs), your marginal AI Act cost drops 20–30%. The AI Act extends existing governance, it doesn't reinvent it.
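The GDPR-maturity reduction mentioned above is simple arithmetic over the table's ranges. This sketch applies the article's own 20–30% figure (midpoint 25% by default) to the Annex III self-assessment row; the function name is illustrative.

```python
def marginal_cost(cost_eur: float, gdpr_mature: bool, discount: float = 0.25) -> float:
    """Apply the 20-30% GDPR-maturity reduction (25% midpoint by default)."""
    return cost_eur * (1 - discount) if gdpr_mature else cost_eur

# Annex III wrapper, self-assessment: initial range from the table above.
low, high = 20_000, 50_000
print(marginal_cost(low, gdpr_mature=True), "-", marginal_cost(high, gdpr_mature=True))
# With GDPR maturity at a 25% discount: 15000.0 - 37500.0
```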
5. What to do now
1. Clarify your role. Deployer of the model AND provider of your system, most likely. Document this position in writing — for yourself, your team, your investors.
2. Identify your use case in Annex III. If your product touches employment, credit, education, public services, justice, asylum, biometrics, or health — you may be high-risk.
3. Check Art. 5 (prohibited practices). Emotion recognition in workplaces, social scoring, criminal profiling are prohibited — regardless of the underlying model.
4. If you're limited risk: apply Art. 50. Transparency obligation — tell the user they're interacting with AI. Often that's all you need.
5. Document EVERYTHING. The regulator doesn't check what you did, they check what you can prove. A dated artefact of your position is worth a thousand conversations.
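The ordering in the checklist above matters: prohibited practices trump everything, high-risk trumps limited risk. A minimal triage sketch, where the tag names are illustrative and the Article/Annex references come from the text:

```python
def triage(use_case_tags: set[str]) -> str:
    """Classify a product by AI Act regime. Order matters: Art. 5 first,
    then Annex III, then the Art. 50 transparency fallback."""
    PROHIBITED = {"workplace_emotion_recognition", "social_scoring",
                  "criminal_profiling"}                      # Art. 5
    ANNEX_III = {"employment", "credit", "education", "public_services",
                 "justice", "asylum", "biometrics", "health"}
    if use_case_tags & PROHIBITED:
        return "prohibited (Art. 5), regardless of the underlying model"
    if use_case_tags & ANNEX_III:
        return "potentially high-risk (Annex III), full regime likely"
    return "limited risk: Art. 50 transparency may be all you need"

assert triage({"employment"}).startswith("potentially high-risk")
assert triage({"cat_image_generation"}).startswith("limited risk")
```

A real assessment needs legal review of the actual use case; the point here is only the precedence of the three checks.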
Building a product on GPT, Claude or Gemini? The free Sprinkling Act diagnostic tells you exactly which regime you're in — in 9 questions, 60 seconds, no account.
Sources
- [1] EUR-Lex (July 12, 2024) — Regulation (EU) 2024/1689 — Artificial Intelligence Act (full text), eur-lex.europa.eu/eli
- [2] EU AI Act — Article 3 — Definitions (AI system, GPAI model, provider, deployer), artificialintelligenceact.eu/article
- [3] EU AI Act — Article 16 — Obligations of providers of high-risk AI systems, artificialintelligenceact.eu/article
- [4] EU AI Act — Article 25 — Responsibilities along the AI value chain, artificialintelligenceact.eu/article
- [5] EU AI Act — Article 26 — Obligations of deployers of high-risk AI systems, artificialintelligenceact.eu/article
- [6] EU AI Act — Article 50 — Transparency obligations for certain AI systems, artificialintelligenceact.eu/article
- [7] EU AI Act — Article 53 — Obligations for providers of GPAI models, artificialintelligenceact.eu/article
- [8] CEPS (Laurer, Renda & Yeung) (September 2021) — Clarifying the Costs for the EU's AI Act, www.ceps.eu/clarifying-the-costs-for-the-eus-ai-act
Art. 5 prohibitions and GPAI rules apply today. Transparency follows in 105 days. The question is not when — it’s whether you’ve documented your position.