11 July 2025
Radar – July 2025 – Viewpoint
The GPAI Code of Practice, published on 10 July 2025, provides a crucial, albeit voluntary, framework for providers of General Purpose AI models to comply with the EU AI Act. For businesses adhering to the Code, this is a straightforward way to demonstrate compliance with their obligations under the AI Act. Still, this is not a 'safe harbour', but requires diligent implementation of detailed transparency, copyright, and safety protocols. Uncertainty remains, as key complementary documents, including definitive guidelines and a data disclosure template, are still pending internal review by the EU Commission.
On 10 July 2025, the AI Office published the final version of the GPAI Code of Practice (“Code”). The Code gives businesses and other stakeholders guidance on complying with the GPAI-related rules of the EU AI Act (Regulation (EU) 2024/1689, “AI Act”).
Under the AI Act, providers of “General Purpose AI” (“GPAI”) models face a broad set of obligations. GPAI model providers must fulfil certain documentation and transparency obligations and must implement a policy on compliance with copyright law (Art. 53 AI Act). The more limited number of providers of GPAI models with systemic risk additionally must fulfil certain safety and security requirements.
The Code of Practice is primarily relevant for providers of GPAI models, such as – but not limited to – the well-known large language models GPT (OpenAI) and Gemini (Google), or the image generators Midjourney (Midjourney, Inc.) and DALL-E (OpenAI). The Safety and Security Chapter is specifically relevant for providers of general-purpose AI models with systemic risk. However, the measures in the Transparency Chapter generally do not apply to providers of GPAI models released under a free and open-source licence, unless the model poses a systemic risk. The Commission assumes that systemic risk applies to a very small group of five to fifteen providers worldwide.
“Downstream providers”, i.e. businesses implementing GPAI models, should familiarise themselves with the Code, too. The Code will likely shape what developers of AI systems can and cannot expect from GPAI models, and influence contract negotiations with GPAI model providers. For example, signatories must enable downstream providers to comply with their obligations under the AI Act and provide the necessary information within a reasonable timeframe, but no later than 14 days.
The Code of Practice aims to strengthen the internal market, foster human-centric and trustworthy AI, and safeguard health, safety, and fundamental rights while supporting innovation. It serves as a compliance guide for Articles 53 and 55 of the AI Act and helps the AI Office assess providers who adopt the Code. Key chapters cover transparency (including documentation obligations and the Model Documentation Form), copyright (ensuring adherence to EU copyright law), and safety/security (focused on systemic risk mitigation for high-impact models). Applicability varies depending on licensing and risk level.
The Code is not legally binding; it was developed by GPAI industry stakeholders with support from the EU Commission's AI Office. Its main goal is to guide providers on complying with the AI Act, but it expressly states that it does not affect compliance with other laws, such as copyright regulations.
The EU Commission can, but does not have to, approve the Code, thereby giving it general validity within the EU. It is not entirely clear what “general validity” means, but in one way or another, providers adhering to the Code of Practice can demonstrate compliance with the obligations of the AI Act. Additionally, the Commission has now officially acknowledged that it will offer a grace period to ease implementation. The AI Office has stated that if providers “do not fully implement all commitments immediately after signing the code,” they will not be considered in breach, but will rather be seen as “acting in good faith” in working towards full compliance.
Furthermore, a "pick-and-choose" approach, allowing companies to adopt only select measures from the Code, is under consideration. While this could attract more signatories, the Commission views it as a last resort, fearing it could undermine the Code's integrity if sensitive elements are uniformly opted out of.
Providers of GPAI models are required to draw up and keep up to date technical information. A key change in the final text is that the level of detail required in the technical documentation should be proportionate to the size of the model provider. The Model Documentation shall be updated regularly, but older versions shall be kept for at least 10 years after the model has been withdrawn from the market. While the documentation is primarily intended for the supervisory authority, providers are encouraged to make information and documentation available to downstream providers of AI systems that intend to integrate the GPAI model into their systems. Providers of free and open-source GPAI models are exempted from the transparency requirements unless the model poses a systemic risk (Art. 53(2) AI Act).
Notably, while Signatories are encouraged to disclose the Model Documentation (or parts thereof), there is no general obligation to publish it. Primarily, the documentation is intended to be provided to the supervisory authority (on request) and to downstream providers of AI Systems, subject to certain confidentiality requirements. However, if necessary to assess and/or mitigate systemic risks, signatories will publish a summarised version of their Framework and Model Report(s). One of the most significant changes in the final draft is a new exemption: providers are exempt from disclosing the amount of energy used to train a model if they lack "critical information from a compute or hardware provider".
Signatories fulfil these requirements by having in place a single document titled “Information and Documentation about the General-Purpose AI Model”, the so-called Model Documentation. A template, the "Model Documentation Form", has been designed for this purpose, requiring details on, among other things, the method of data acquisition, the computing power used for training, and the model's energy consumption.
Signatories must draw up, keep up-to-date and implement a copyright policy on compliance with EU copyright law.
While Signatories are generally allowed to use web crawlers for the purposes of text and data mining (“TDM”) to obtain training data for their GPAI models, they shall ensure that their web crawlers identify and comply with a TDM opt-out declared by rightsholders. EU copyright law does not generally prohibit TDM, but rightsholders may declare an opt-out, thereby excluding their copyrighted works from use by third parties for TDM, such as the training of GPAI models. The Code specifies that providers shall respect the widely used robots.txt protocol or other appropriate machine-readable protocols. However, this specification is subsequently softened by the acknowledgement that other appropriate TDM opt-out mechanisms can also be used.
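To illustrate how a machine-readable TDM opt-out can look in practice, here is a minimal robots.txt sketch; the crawler name "ExampleAIBot" is a hypothetical placeholder, not a real user agent:

```text
# robots.txt on a rightsholder's website.
# "ExampleAIBot" stands in for an AI-training crawler; real crawlers
# publish their own user-agent strings.
User-agent: ExampleAIBot
Disallow: /

# Other crawlers (e.g. search-engine indexing) remain unaffected.
User-agent: *
Allow: /
```

Under the Code, a signatory's crawler would have to fetch and parse such a file and skip the site's content for training purposes where its user agent is disallowed.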
Signatories must take appropriate measures to publish information on web crawlers employed and provide means to automatically update affected rightsholders.
The obligation for GPAI model providers to make reasonable efforts to obtain information about protected content not web-crawled by the Signatory (and their compliance with TDM opt-outs) has been removed from the final version.
A risk in using GPAI models is that they may generate output that infringes copyright, e.g. by duplicating code or an image that was found online but is subject to copyright protection. Signatories must implement “appropriate and proportionate” technical safeguards to prevent their models from generating copyright-infringing reproductions. While the third draft only required “reasonable efforts” to implement such measures, the final version stipulates an actual obligation. This may challenge providers, as the Code does not provide further details on which technical safeguards are deemed appropriate to prevent such output. Further, signatories must prohibit copyright-infringing use of their GPAI models in their general terms and conditions for downstream AI providers.
Signatories must provide a point of contact for rightsholders and must implement a mechanism to allow rightsholders to submit complaints about copyright infringements. Notably, this commitment does not affect the measures, remedies and sanctions available to enforce copyright and related rights under Union and national law – namely under the Digital Services Act.
Providers of GPAI models with systemic risk additionally have to comply with certain safety and security requirements set out in Art. 55 AI Act. A GPAI model only becomes a GPAI model with systemic risk if certain requirements are met. In particular, a systemic risk is present when the model has high-impact capabilities, which is generally the case when the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10²⁵.
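As a rough, non-authoritative illustration of the 10²⁵ FLOP threshold: a widely used community heuristic estimates training compute as approximately 6 × parameters × training tokens. This heuristic is not part of the AI Act or the Code, and the model figures below are hypothetical:

```python
# Back-of-the-envelope check against the AI Act's systemic-risk threshold.
# The "6 * N * D" estimate (roughly 6 FLOPs per parameter per training
# token) is a common heuristic, not an official method.

THRESHOLD_FLOPS = 1e25  # presumption of high-impact capabilities


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate of cumulative training compute in FLOPs."""
    return 6 * n_params * n_tokens


# Hypothetical model: 70 billion parameters, 30 trillion training tokens.
flops = estimated_training_flops(70e9, 30e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above systemic-risk threshold:", flops > THRESHOLD_FLOPS)
```

The point of the sketch is only that the threshold is a presumption tied to training compute; whether a concrete model qualifies depends on the Commission's designation and the AI Act's criteria, not on this heuristic.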
The assessment and mitigation of systemic risks should be proportionate to the risks. This means the degree of scrutiny and detail in documentation and reporting should match the systemic risks throughout the model's lifecycle. Simplified ways of compliance for Small and Medium-sized Enterprises (SMEs) and Small Mid-Cap Enterprises (SMCs), including startups, should be possible and proportionate to their size and capacity. For example, SMEs and SMCs may be exempted from some reporting commitments.
Signatories are encouraged to advance the "state of the art" in AI safety and security and related processes and measures. If they can demonstrate equal or superior safety or security outcomes through alternative, more efficient means, such innovations should be recognised.
The Q&A on the GPAI Code foresees a sort of “enforcement grace period” by the AI Office until 2 August 2026 for signatories, although its exact scope remains unclear.
In addition to the GPAI Code of Practice, the European Commission is currently drafting guidelines (Art. 96 AI Act) on the scope and interpretation of the AI Act's rules on GPAI. However, their publication is delayed, and the crucial data disclosure template may not be finalised before the GPAI rules take effect on 2 August 2025. This uncertainty makes it difficult for companies to commit to the Code, as they lack the full compliance picture. It is important to note that while adherence to the Code demonstrates compliance with AI Act obligations, it “does not constitute compliance with Union law on copyright and related rights”.
Aligning early with the Code's guidance offers clear advantages. Adhering to the Code can significantly ease the compliance burden, because the EU Commission will focus its enforcement activities on monitoring compliance with the Code. It is vital to note that adherence is not conclusive evidence of compliance. Instead, it is a tool to demonstrate efforts towards compliance, helping reduce the risk of enforcement actions. With the effective date fast approaching, companies should familiarise themselves with both the AI Act's core obligations and the GPAI Code of Practice and begin integrating these standards into their processes.
29 July 2025
By Alexander Schmalenberger, LL.B. and Dr. Jakob Horn, LL.M. (Harvard)