11 December 2023
Metaverse December 2023 – 3 of 5 Insights
After 36 hours of intensive negotiations, a breakthrough in the trilogue on the EU AI Act was reached on the evening of 8 December. This agreement marks a crucial moment in the development and regulation of Artificial Intelligence (AI) in the European Union. The compromise covers key issues such as the definition of AI, transparency obligations, governance structures and the use of biometric categorisation systems. Of particular note is the focus on free and open source software, the regulation of large language models such as ChatGPT, and the introduction of environmental standards.
The agreement marks a turning point in the regulation of AI and will have far-reaching implications for developers, businesses and end users. The definition of AI and the specific rules for large language models such as ChatGPT will significantly shape the development and deployment of AI technologies. The exemptions for free and open source software, and the counter-exemptions to them, underline, on the one hand, the importance of research and openness for the new technology and, on the other, the need for transparency and respect for copyright. With regard to copyright in particular, there are already calls for EU copyright and data protection law to be adapted in the next legislative period (2024-2029).
The introduction of transparency obligations and environmental standards sets new standards for accountability and sustainability in AI development. Governance structures and prohibited practices will define the ethical boundaries of AI use and potentially spark controversial debates. The regulation of existing systems and territorial application show the complexity of the global AI landscape. Discussions on biometric categorisation systems and predictive policing highlight the sensitive aspects of AI in the areas of data protection and public safety.
Compared to previous drafts of the AI Act and international standards, this compromise takes a progressive and detailed approach to the regulation of AI. The adoption of the OECD definition of AI creates harmonisation with global standards. The differentiated treatment of open source software and the introduction of environmental standards are examples of a modern and responsible technology policy. The rules on large language models and top-tier models reflect a growing awareness of the complexity and potential of AI. However, the discussion on biometric systems and predictive policing also shows that some areas still require clarification and that conflicts between data protection and security interests may arise.
The timetable was also discussed at the press conference: surprisingly, the AI Office is to be set up immediately. The regulation will enter into force in spring 2024, with the prohibitions taking effect after six months. After 12 months, the rules on transparency and governance will apply; after 24 months, the rest of the legislation will apply.
Given the dynamic development of AI, it is important that the EU continuously reviews and adapts its regulations. The establishment of an AI office is an important step towards effective implementation and monitoring of the rules. The AI developer community should actively participate in the development and compliance of the codes of conduct. It is essential that future developments in AI meet ethical, legal and social standards while fostering innovation and progress. The EU should continue to actively participate in the global dialogue to promote international harmonisation of AI regulation.
by Dr. Nicolai Wiegand, LL.M. (NYU) and Alexander Schmalenberger, LL.B.