The EU AI Act introduces specific obligations for companies that develop general-purpose AI ("GPAI") models. These are powerful AI models trained on broad data sets and capable of performing a wide range of tasks (e.g., generating text, images or other content) rather than being limited to a specific application. Such models can be integrated into a variety of downstream AI systems.
The AI Office has published two key documents that clarify how GPAI-related obligations will be applied in practice: the GPAI Code of Practice ("CoP") (see the chapters on Transparency, Copyright, and Safety and Security) and the GPAI Guidelines (the "Guidelines"). These documents give insight into how the AI Office will supervise GPAI models and what companies need to prepare for.
This article outlines the legal obligations under the AI Act, what the CoP offers in terms of practical implementation, how the Guidelines clarify the scope of the obligations and what companies can do to prepare.
- Legal background: What the AI Act requires from GPAI model providers
The AI Act distinguishes between providers of GPAI models in general and providers whose GPAI models are designated by the European Commission as presenting systemic risks.
All GPAI model providers must:
- Prepare technical documentation on training and testing;
- Provide key information to downstream providers to support their own compliance and responsible use of the model;
- Implement a policy to comply with copyright rules, including text and data mining ("TDM") opt-outs; and
- Publish a summary of the training data used.
Open-source GPAI models are exempted from the obligations to prepare technical documentation and to share key information with downstream providers, unless they are considered to present systemic risks.
Additional obligations apply if a model is considered to present systemic risks. In that case, the provider must perform model evaluation, risk assessments and mitigation, report serious incidents and implement adequate cybersecurity measures.
These obligations apply regardless of whether a company signs the CoP. The next section sets out what the CoP offers in terms of practical implementation.
- The CoP: How to implement the GPAI-related obligations
The CoP, published in July 2025, is a voluntary compliance instrument developed with industry. Signing it is optional, but it offers an operational framework for implementing the AI Act's obligations.
The CoP is structured around three chapters:
- Transparency: The CoP introduces a Model Documentation Form to help signatories present key information about their models, including on architecture, data sources, compute and energy consumption.
- Copyright compliance: The CoP sets out how to implement the copyright policy obligation in practice. Signatories must ensure that their web crawlers respect TDM opt-outs, exclude websites identified at EU level as copyright-infringing and do not bypass paywalls or other protection measures. Signatories are also expected to reduce the risk of infringing outputs through technical safeguards and contractual restrictions.
- Safety and Security: For GPAI models presenting systemic risk, the CoP sets out how providers can meet their obligations through a structured Safety and Security Framework. This includes identifying and analysing systemic risks (e.g., model misuse or loss of control), implementing mitigations like input/output filtering or fine-tuning and maintaining adequate cybersecurity. Signatories must submit a Model Report to the AI Office with their risk assessment, justifications and evaluation results. They must also assign responsibility for systemic risk internally and document serious incidents and corrective measures throughout the model’s lifecycle.
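To make the copyright bullet above concrete: in practice, one widely used machine-readable way a rightsholder expresses a crawling reservation is a robots.txt file, which a compliant crawler should check before fetching content. The sketch below uses Python's standard-library `urllib.robotparser` for that check; the user-agent name `ExampleTrainingBot` and the robots.txt content are purely hypothetical illustrations, and robots.txt is only one of several opt-out mechanisms a full TDM policy would need to cover.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt of a publisher that blocks a (hypothetical)
# AI training crawler while allowing all other user agents.
ROBOTS_TXT = """\
User-agent: ExampleTrainingBot
Disallow: /

User-agent: *
Allow: /
"""

def may_crawl_for_training(user_agent: str, url: str, robots_txt: str) -> bool:
    """Return True if the given robots.txt permits fetching `url` with `user_agent`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# The training crawler is blocked; an ordinary browser-like agent is not.
print(may_crawl_for_training("ExampleTrainingBot", "https://example.com/article", ROBOTS_TXT))  # False
print(may_crawl_for_training("Mozilla/5.0", "https://example.com/article", ROBOTS_TXT))         # True
```

A real implementation would also need to honour other reservation signals (e.g., metadata-based or contractual opt-outs), since the CoP does not limit TDM opt-outs to robots.txt.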
Companies that do not sign must still comply but will need to develop and justify their own approach, without the benefit of a shared benchmark.
The Commission maintains a list of signatories. Current signatories include Amazon, Google, IBM, Mistral AI and OpenAI.
- Guidelines: Scope and clarification
The AI Office’s Guidelines, also published in July 2025, give important clarifications on how the GPAI rules will be applied. While the CoP is a voluntary instrument, the Guidelines apply to all companies that develop GPAI models and to downstream providers. They clarify how the AI Office interprets the rules and how these will be enforced in practice.
The key points are:
- Training compute as a trigger: The Guidelines clarify that if a model was trained using more than 10²³ FLOP and can generate language or images, it will likely qualify as a GPAI model. This means that companies should track compute usage to determine whether GPAI obligations apply.
- Placing on the market includes open access: The concept of “placing a GPAI model on the market” is a key legal trigger under the AI Act. The Guidelines give examples of what counts as such, e.g., uploading a model to a public repository, providing access via API or integrating it into an app or service for EU users.
- Modifying a GPAI model may change your role: A company that fine-tunes or retrains a model using more than one-third of the original training effort may itself become a GPAI provider. Examples include retraining a GPAI model on a large volume of proprietary data to optimise it for high-stakes domains such as healthcare, or intensively fine-tuning a model to build a new multilingual assistant with expanded capabilities.
- Open-source exemption is narrow: To qualify for the open-source exemption, a model must be genuinely free to use, non-monetised and accompanied by sufficient technical documentation. Models that are labelled as open source while use or access is restricted are unlikely to meet these criteria.
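The compute-based triggers above can be sketched with back-of-the-envelope arithmetic. The sketch below uses the common "C ≈ 6 × N × D" heuristic (training FLOP ≈ 6 × parameters × training tokens) to estimate whether a run crosses the 10²³ FLOP indicator and whether a fine-tune exceeds one-third of the original training effort. The heuristic and all model sizes are illustrative assumptions, not figures from the Guidelines, and a real assessment should rely on actual measured compute.

```python
GPAI_THRESHOLD_FLOP = 1e23     # indicative GPAI trigger per the Guidelines
MODIFICATION_FRACTION = 1 / 3  # share of original training effort that may make
                               # a fine-tuner a GPAI provider in its own right

def training_flop(parameters: float, tokens: float) -> float:
    """Rough training compute via the C ≈ 6·N·D heuristic (an assumption)."""
    return 6.0 * parameters * tokens

# Hypothetical base run: 7B parameters on 2T tokens ≈ 8.4e22 FLOP.
base = training_flop(7e9, 2e12)
print(base >= GPAI_THRESHOLD_FLOP)   # False: below the indicative trigger

# Hypothetical larger run: 70B parameters on 2T tokens ≈ 8.4e23 FLOP.
large = training_flop(70e9, 2e12)
print(large >= GPAI_THRESHOLD_FLOP)  # True: above the indicative trigger

# A fine-tune spending 3e23 FLOP exceeds one-third of 8.4e23 (≈ 2.8e23),
# so the modifier could itself be treated as a GPAI provider.
print(3e23 >= MODIFICATION_FRACTION * large)  # True
```

This kind of tracking is the practical upshot of the Guidelines' compute triggers: logging parameter counts and token counts (or, better, measured FLOP) per training run makes both threshold questions answerable.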
The Guidelines also indicate that signing the CoP may lead to more predictable supervision and fewer ad hoc requests from the AI Office. In practice, however, it also brings increased visibility and effort. Companies that do not sign remain accountable and may face greater scrutiny. This trade-off should be weighed carefully.
- Next steps for companies
The AI Act's provisions on GPAI models, together with the Commission's related enforcement powers, became applicable on 2 August 2025. Companies that develop, fine-tune, publish or integrate GPAI models should take steps now to assess their exposure and prepare for compliance. Key recommended actions include:
- Assess your activities: Are you training, fine-tuning, publishing or integrating a GPAI model into your own systems? This may trigger obligations under the AI Act.
- Prepare documentation: Record how the model was trained, how copyright compliance was assessed, what compute was used and what risk mitigations are in place.
- Evaluate the CoP: Consider whether signing the CoP is suitable for your organisation. It may support engagement with the AI Office but also requires internal resources and commitment.
- Stay informed: Further guidance is expected. Companies should continue to monitor developments.
For further questions on AI compliance or support with the AI Act, please reach out to your usual contacts in our Tech & Data team.