The Role of the General-Purpose AI Code of Practice
Published on 15 November 2024, the draft General-Purpose AI Code of Practice (GPAI Code of Practice) is a cornerstone of the EU's efforts to ensure compliance with the AI Act, particularly in navigating provisions that are complex and often open to interpretation. The Code focuses on transparency, safety, and risk management, aligning with Articles 53, 55, and 56 of the AI Act. Importantly, adherence to such codes can create a presumption of compliance with the AI Act, a significant advantage for businesses.
This article focuses on a few key issues, though it’s important to recognize that the draft contains much more detail and raises numerous open questions. Moreover, the final document, due by May 2025, is expected to be even more comprehensive, reflecting stakeholder feedback and addressing evolving challenges.
Why the Code Is Critical for Compliance
Under Article 56 of the AI Act, the EU AI Office is tasked with encouraging and facilitating the development of codes of practice to guide AI providers in complying with the regulation. These codes are particularly valuable for ensuring clarity in areas where the AI Act’s requirements might otherwise seem ambiguous.
For businesses, adhering to a code of practice not only provides a clear compliance pathway but also signals a commitment to ethical and responsible AI development. The codes are designed to:
- Update compliance practices in line with market and technological developments.
- Provide detailed guidance on managing systemic risks across the AI value chain.
- Standardize the documentation of systemic risks and their mitigation measures.
By adopting the GPAI Code of Practice, companies can demonstrate proactive compliance, reducing the likelihood of regulatory scrutiny and penalties.
Key Features of the Draft Code
The General-Purpose AI Code of Practice covers a broad range of topics related to AI transparency, safety, and governance. Given the depth and complexity of the draft, this section concentrates on the points most relevant for businesses and legal advisors; the draft treats these areas in far greater detail and addresses many topics not discussed here. Because the final version is expected to be even more comprehensive, businesses should treat this summary as a starting point and consult the full document for a complete picture.
Transparency and Documentation Obligations
The draft sets out robust obligations aligned with Article 53 of the AI Act:
- Data Transparency: Businesses must document the provenance of training data, disclose collection methods, respect opt-out mechanisms such as robots.txt files, make best efforts to honor other opt-out technologies, and contribute to the development and adoption of relevant standards (a minimal example of a robots.txt check follows this list).
- Copyright Compliance: Providers must ensure adherence to EU copyright law, both in their operations and through downstream contracts.
- Reporting and Accessibility: Transparency reports, detailing AI systems’ purposes, data sources, and compliance measures, must be accessible to stakeholders and regulators.
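To make the opt-out point concrete, here is a minimal Python sketch, assuming a hypothetical crawler identity (TRAINING_CRAWLER_UA) and an example URL, that checks a site's robots.txt with the standard library's urllib.robotparser before a page is collected for a training corpus. The draft Code does not prescribe any particular implementation; this is only one way such a check could look.

```python
# Illustrative sketch: honoring robots.txt opt-outs before collecting a page
# for a training corpus. The user-agent string and URL are assumptions for
# this example; the draft Code does not prescribe a specific implementation.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

TRAINING_CRAWLER_UA = "ExampleTrainingBot"  # hypothetical crawler identity

def allowed_for_training(url: str) -> bool:
    """Return True only if the site's robots.txt permits our crawler."""
    parsed = urlparse(url)
    parser = RobotFileParser()
    parser.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    try:
        parser.read()  # fetch and parse the site's robots.txt
    except OSError:
        return False  # be conservative if robots.txt cannot be retrieved
    return parser.can_fetch(TRAINING_CRAWLER_UA, url)

if __name__ == "__main__":
    candidate = "https://example.com/articles/some-page.html"
    if allowed_for_training(candidate):
        print("robots.txt permits collection:", candidate)
    else:
        print("opt-out respected, skipping:", candidate)
```

In practice, providers would also need to handle other opt-out signals, such as metadata-based reservations of rights, which this sketch does not cover.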
Risk Management and Safety Frameworks
Aligned with Article 55, the Code emphasizes:
- Systemic Risk Identification: Businesses must categorize risks (e.g., cyber threats, manipulation, discrimination) and address their root causes.
- Proportional Mitigation Measures: Safeguards and countermeasures should reflect the severity and probability of risks (one way to make this concrete is sketched after this list).
- Continuous Monitoring: Risk management must span the entire lifecycle of AI models, supported by regular Safety and Security Reports (SSRs).
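The draft does not prescribe a scoring method for proportionality, so the following is only a hedged sketch: it assumes a simple 1-to-5 severity and probability scale and an arbitrary escalation threshold, and flags the highest-scoring risks for escalation into a Safety and Security Report.

```python
# Hedged sketch: a simple severity-times-probability risk register.
# Category names, scales, and the escalation threshold are illustrative
# assumptions, not requirements taken from the draft Code.
from dataclasses import dataclass

@dataclass
class SystemicRisk:
    name: str          # e.g. "model-enabled cyber offence"
    severity: int      # 1 (minor) to 5 (critical), assumed scale
    probability: int   # 1 (rare) to 5 (frequent), assumed scale

    @property
    def score(self) -> int:
        return self.severity * self.probability

ESCALATION_THRESHOLD = 12  # assumed cut-off for escalation

risks = [
    SystemicRisk("cyber threats", severity=4, probability=3),
    SystemicRisk("large-scale manipulation", severity=5, probability=2),
    SystemicRisk("discriminatory outputs", severity=3, probability=4),
]

for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    action = "escalate to SSR" if risk.score >= ESCALATION_THRESHOLD else "monitor"
    print(f"{risk.name}: score {risk.score} -> {action}")
```

Whatever method is used, the key point is that mitigation effort should scale with the assessed risk and that the assessment itself is documented and repeated across the model's lifecycle.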
Governance and Oversight
Governance structures are central to managing compliance with the AI Act:
- Executive and Board-Level Responsibilities: The draft stresses the importance of assigning risk ownership at senior levels, with boards potentially creating dedicated risk committees.
- Decision Protocols: The Code requires procedures for determining whether to proceed with, halt, or modify AI systems based on risk assessments.
Adherence to the Code: A Strategic Advantage
Article 56 explicitly states that adherence to a code of practice can establish a presumption of compliance with the AI Act. This presumption significantly reduces the burden on businesses by:
- Offering clarity in ambiguous areas of the AI Act, such as systemic risk documentation.
- Reducing the risk of regulatory penalties by demonstrating good faith efforts to comply.
- Providing a framework for internal governance and risk management that aligns with EU expectations.
Looking Ahead: Finalizing the Code
While the draft provides valuable insights, the final GPAI Code of Practice is expected to address unresolved issues and incorporate extensive stakeholder input. For instance:
- Definitions and Scope: The draft leaves open key questions for further discussion and refinement, such as what constitutes a “serious incident” or “systemic risk.”
- Specific Measures: Future iterations will likely provide more detailed guidance on risk categorization, mitigation, and reporting procedures.
If the Code is not finalized by August 2025, or is deemed inadequate, the European Commission has the authority under Article 56 to establish common rules via implementing acts. This underlines the importance of stakeholder engagement during the consultation period, which ends on 28 November 2024.
Practical Steps for Businesses
- Review AI Systems: Audit current AI practices to ensure alignment with draft provisions, particularly in data transparency and systemic risk management.
- Engage in the Draft Process: Provide feedback on the Code to influence its final structure and ensure it meets your business needs.
- Strengthen Governance: Adapt governance frameworks to include clear roles for managing AI risks and overseeing compliance efforts.
- Document Everything: Maintain detailed records of data sources, risk assessments, and compliance measures to demonstrate adherence to both the Code and the AI Act (a minimal example record follows this list).
- Prepare for Updates: Monitor developments to anticipate additional obligations in the final Code or future implementing acts.
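As a minimal sketch of what "document everything" could look like in machine-readable form, the snippet below records the provenance of one training data source as JSON. The field names, values, and structure are assumptions for illustration; the draft Code does not mandate this format.

```python
# Minimal sketch of a machine-readable provenance record for one training
# data source. The schema is an assumption for illustration; the draft Code
# does not mandate this particular format.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class DataSourceRecord:
    source_name: str
    collection_method: str            # e.g. "web crawl", "licensed dataset"
    opt_out_checked: bool             # robots.txt / other opt-outs honored?
    copyright_basis: str              # e.g. "TDM exception", "licence"
    mitigation_notes: list[str] = field(default_factory=list)

record = DataSourceRecord(
    source_name="example.com news archive",   # hypothetical source
    collection_method="web crawl",
    opt_out_checked=True,
    copyright_basis="EU TDM exception with opt-out respected",
    mitigation_notes=["PII filtering applied", "deduplicated before training"],
)

# The JSON output can be stored alongside transparency reports and risk assessments.
print(json.dumps(asdict(record), indent=2))
```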
Final Thoughts and Recommendations
The GPAI Code of Practice offers businesses a practical and strategic pathway to comply with the AI Act’s complex requirements. While the draft provides a useful starting point, companies should prepare for significant expansions in the final version. Adopting the Code early not only simplifies compliance but also positions businesses as leaders in responsible AI development.