Authors

Mareike Christine Gehrmann

Partner


Fritz-Ulli Pieper, LL.M.

Salary Partner


Dr. Benedikt Kohn, CIPP/E

Senior Associate


Alexander Schmalenberger, LL.B.

Knowledge Lawyer


11 December 2023

Metaverse December 2023 – 3 of 5 Publications

Analysis of the AI Act trilogue breakthrough

  • Briefing

After 36 hours of intensive negotiations, a breakthrough in the trilogue on the EU AI Act was reached on the evening of 8 December. This agreement marks a crucial moment in the development and regulation of Artificial Intelligence (AI) in the European Union. The compromise covers key issues such as the definition of AI, transparency obligations, governance structures and the use of biometric categorisation systems. Of particular note is the focus on free and open source software, the regulation of large language models such as ChatGPT, and the introduction of environmental standards.

Key discussions and the final agreement

  • Systems covered (as stated at the press conference): Only finished systems and models will be covered, not systems that are still under development.
  • Definition of AI: Based on the OECD definition, ensuring consistency with international standards.
  • Free/Open Source Software: Generally excluded from the regulation, but copyright must still be respected and training data made transparent. There is also no exemption from the prohibitions or from the requirements for high-risk systems and for AI models with systemic risks.
  • General Purpose AI Models (GPAI): Introduction of a tiered approach with automatic classification for models, and systems based on them, that are trained with extremely high computational power. The Commission will be given the possibility to adapt the parameters to take account of technical progress (an illustrative sketch follows this list).
  • Transparency obligations: Obligation to disclose training data for all general purpose AI models.
  • Responsibilities for top-tier models: Including model assessment, systemic risk assessment and cybersecurity.
  • Governance: Establishment of an AI office within the Commission.
  • Prohibited practices: List of prohibited uses, including manipulative techniques.
  • National security exception: Exceptions, but with safeguards against abuse.
  • Application to existing systems: Whether the rules will apply to AI systems that were already in place before the AI Act enters into force has been controversial. It is not yet clear what was agreed on this point.
  • Territorial application of prohibitions: The outcome of the debate on the scope of the prohibitions is also unknown.
  • Biometric categorisation systems: Debate on banning systems based on sensitive personal characteristics. There will be exceptions, but their scope remains unknown at this point.
  • Predictive policing: Discussion of bans in specific contexts. There will be allowed applications, but their scope remains unknown at this point.
  • Emotion recognition: Disagreement on banning in certain areas. There will be exceptions, but their scope remains unknown at this point.
  • Remote biometric identification: Allowed for certain catalogue offences and subject to judicial control. This applies both to "real-time" applications and to the use of such systems after the fact.
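
To make the tiered GPAI approach easier to picture, the minimal sketch below classifies a general-purpose AI model by its training compute and open-source status. It is purely illustrative: the 1e25-FLOP threshold, the tier labels and the GPAIModel structure are assumptions drawn from figures reported around the trilogue, not the wording of the final legal text.

```python
# Illustrative only: a toy classification of general-purpose AI models along the
# tiered approach sketched above. The 1e25-FLOP training-compute threshold is an
# assumption based on figures reported around the trilogue, not the legal text.

from dataclasses import dataclass

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # assumed parameter; the Commission may adapt it


@dataclass
class GPAIModel:
    name: str
    training_compute_flops: float
    free_and_open_source: bool


def obligation_tier(model: GPAIModel) -> str:
    """Return a simplified obligation tier for a general-purpose AI model."""
    if model.training_compute_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD:
        # Top tier: model evaluation, systemic-risk assessment, cybersecurity duties.
        return "GPAI with systemic risk"
    if model.free_and_open_source:
        # Lighter regime, but copyright and training-data transparency still apply.
        return "free/open-source GPAI (reduced obligations)"
    return "GPAI (baseline transparency obligations)"


print(obligation_tier(GPAIModel("large-model", 3e25, False)))     # GPAI with systemic risk
print(obligation_tier(GPAIModel("small-oss-model", 1e22, True)))  # free/open-source GPAI
```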

Impact and implications

The agreement marks a turning point in the regulation of AI. It will have far-reaching implications for developers, businesses and end users. The definition of AI and the specific regulations for large language models such as ChatGPT will have a significant impact on the development and deployment of AI technologies. The exemptions for free and open source software and the counter-exemptions underline the importance of research and openness of the new technology on the one hand, and the need for transparency and respect for copyright on the other. With regard to copyright in particular, there are already calls for EU copyright and data protection law to be adapted in the next legislative period (2024-2029).

The introduction of transparency obligations and environmental standards sets new benchmarks for accountability and sustainability in AI development. Governance structures and prohibited practices will define the ethical boundaries of AI use and may well spark controversial debates. The rules on existing systems and on territorial application show the complexity of the global AI landscape. The discussions on biometric categorisation systems and predictive policing highlight the sensitive aspects of AI in the areas of data protection and public safety.

Comparative analysis of previous versions of the draft law

Compared to previous drafts of the AI law and international standards, this compromise shows a progressive and detailed approach to the regulation of AI. The adoption of the OECD definition of AI creates harmonisation with global standards. The differentiated treatment of open source software and the introduction of environmental standards are examples of modern and responsible technology policy. The regulations on large language models and top-tier models reflect a growing awareness of the complexity and potential of AI. However, the discussion on biometric systems and predictive policing also shows that there is still a need for clarification in some areas and that conflicts between data protection and security interests may arise.

Timetable after press conference

The timetable was also addressed at the press conference: surprisingly, the AI Office is to be set up immediately. The regulation will enter into force in spring 2024, with the prohibitions taking effect after six months. After 12 months, the rules on transparency and governance will apply; after 24 months, the rest of the legislation will apply.
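
As a worked illustration of this staggered timeline, the short sketch below derives milestone dates from a hypothetical entry-into-force date. The placeholder date and the add_months helper are assumptions for illustration; only the month offsets come from the press conference.

```python
# Minimal sketch of the staggered application timeline described above, assuming
# a hypothetical entry-into-force date in spring 2024 (placeholder only).

from datetime import date


def add_months(d: date, months: int) -> date:
    """Add whole months to a date (day clamped to the 1st for simplicity)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)


ENTRY_INTO_FORCE = date(2024, 4, 1)  # hypothetical placeholder, not an official date

milestones = {
    "Prohibitions apply": 6,
    "Transparency and governance rules apply": 12,
    "Remaining provisions apply": 24,
}

for label, offset_months in milestones.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, offset_months)}")
```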

Outlook and recommendations

Given the dynamic development of AI, it is important that the EU continuously reviews and adapts its regulations. The establishment of an AI office is an important step towards effective implementation and monitoring of the rules. The AI developer community should actively participate in drafting the codes of conduct and in complying with them. It is essential that future developments in AI meet ethical, legal and social standards while fostering innovation and progress. The EU should continue to participate actively in the global dialogue to promote international harmonisation of AI regulation.

In this series

Artificial intelligence

Analysis of the AI Act trilogue breakthrough

11 December 2023

by multiple authors

Technology, Media and Communications (TMC)

Technology and media predictions 2024

Graham Hann looks at predictions for technology and media in 2024.

4 December 2023

by Graham Hann
