Author

Dr. Benedikt Kohn, CIPP/E

Senior Associate


27 November 2023

AI Act at risk? – the regulation of foundation models and general-purpose AI

  • In-depth analysis

The current trilogue negotiations between the EU Council, Parliament and Commission are focusing on the regulation of foundation models and general-purpose AI. After initial progress on a tiered regulatory approach, under which stricter requirements would apply to more powerful AI models, the Spanish Council Presidency met with resistance in November 2023. Under pressure from their domestic AI companies, Germany, France and Italy in particular are now rejecting comprehensive regulation and favour self-regulation through a code of conduct instead. This U-turn could jeopardise the entire AI Act. The European Commission has since proposed a compromise text that retains the tiered approach but weakens the regulation overall.

This text now forms the basis for further deliberations, although resistance is to be expected, particularly from the Parliament. Time is pressing: a final trilogue is due to take place in December, before Belgium takes over the Council Presidency in January 2024 and a new European Parliament is elected in the summer. Either changeover could considerably delay the legislative process or even cause it to fail. The EU sees itself as a global pioneer in AI regulation, but is under pressure to find a solution in light of global developments and the upcoming European elections.

Background – Previous consideration of foundation models and general-purpose AI in the AI Act

ChatGPT is now familiar to almost everyone, as particularly powerful foundation models and general-purpose AI – such as GPT-3 and GPT-4 from OpenAI, on which ChatGPT is based – have spread rapidly in recent months. According to the definition recently introduced by the Council of the European Union ("Council"), a foundation model is

"a large AI model that is trained on a large amount of data, which is capable to competently perform a wide range of distinctive tasks, including, for example generating video, text, images, conversing in lateral language, computing or generating computer code".

According to the Council, the related term "general-purpose AI" refers to systems

"that may be based on an AI model, can include additional components such as traditional software and through a user interface has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems".

Accordingly, a general-purpose AI system can be based on a foundation model, implementing it and thus sitting downstream of it. Ultimately, however, both terms refer to AI models or AI systems that can serve as the basis for a wide range of different applications.

At the time of the European Commission's ("Commission") original proposal for the AI Act in April 2021, foundation models and general-purpose AI were not yet widely recognised by the general public, which is why the AI Act initially contained no provisions on them. The first voices calling for their regulation were heard as early as August 2021. Shortly afterwards, the Slovenian Council Presidency added Article 52a to the AI Act, although this still stated that general-purpose AI – the term "foundation model" had not yet been introduced – was not to be covered per se by the provisions of the AI Act. Only the placing on the market, putting into service or use of general-purpose AI for a specific application falling under the AI Act would trigger corresponding obligations.

Under the French Presidency, however, the Council significantly modified the provisions on the regulation of general-purpose AI. Certain obligations were now to apply to general-purpose AI that may be used as a high-risk AI system or as a component of such a system. The subsequent Czech Council Presidency went one step further and proposed that high-risk general-purpose AI should fulfil all the obligations applicable to high-risk AI systems.

Finally, the European Parliament ("Parliament") published its position in June 2023, inserting Article 28b into the AI Act and thus introducing the term "foundation model" into the legal text for the first time. The Parliament provided for a number of obligations for foundation models regardless of their risk category.

Recent developments

The regulation of general-purpose AI and foundation models continues to play a central role in the current trilogue negotiations between the Council, Parliament and the Commission and is the subject of controversial debate. Following the last political trilogue on 24 October 2023, an agreement on a tiered approach to the regulation of foundation models initially appeared to be on the cards. According to this, stricter obligations would apply in particular to the most powerful AI models with a greater impact on society. As a result, these would primarily affect leading – mostly non-European – AI providers. The Parliament thus abandoned its original plan to introduce horizontal rules for all foundation models without exception.

Subsequently, on 5 November 2023, the Council under the Spanish Presidency presented a corresponding draft text setting out a series of obligations – again controversial in their details. Under this draft, providers of foundation models would have to fulfil transparency obligations, for example by providing technical documentation on the performance and limits of their systems and proof of compliance with copyright law. Providers of the most powerful foundation models would additionally have to register their AI models in the EU's public database, carry out an assessment of their systemic risks and submit to auditing obligations. A debate broke out in particular over the criteria for determining the "most powerful" AI within the tiered approach. While the Council wanted to require the Commission to define these criteria through secondary legislation within 18 months of the AI Act entering into force, the Parliament called for them to be laid down in the AI Act itself so that such an important decision would remain with the legislator.

In the published text, the Spanish Council Presidency also laid down obligations for general-purpose AI in the event that its provider concludes licence agreements with downstream economic operators that use the AI system for purposes classified as high-risk. The provider would then have to specify possible high-risk areas of use and provide information enabling the downstream actor to fulfil the requirements of the AI Act.

Pushback from the Member States

The trilogue negotiations therefore progressed, even though key points remained hotly debated. On 9 November 2023, however, the negotiations unexpectedly suffered a major setback. At a meeting of the Telecommunications Working Group, a technical committee of the Council, voices were raised against any plans to regulate foundation models. Parliament representatives reportedly ended the meeting two hours earlier than planned as a result – there was nothing left to discuss. The reason: political heavyweights such as Germany, France and Italy – under pressure from national AI companies – had made a sudden U-turn on the regulation of foundation models.

German company Aleph Alpha and French start-up Mistral in particular fear that excessive regulation of foundation models in the EU could put them at a massive competitive disadvantage compared with their American and Chinese competitors. Admittedly, non-European AI giants – such as OpenAI, Meta and Google – are already far ahead of EU companies in terms of computing resources, funding, data and talent. But European companies are gaining ground: Aleph Alpha, for example, recently received a funding commitment totalling 500 million dollars. Regulating foundation models in the EU, the argument runs, would hinder the development of European AI providers, slowing down the race to catch up at a crucial time and causing the EU to fall further behind the global AI leaders. On this view, the tiered approach for foundation models amounts to a "regulation within the regulation" and jeopardises both innovation and the risk-based approach on which the AI Act rests.

Instead of a binding regulation, corresponding obligations and sanctions, France, Germany and Italy are now in favour of self-regulation based on a code of conduct for foundation models.

Is the AI Act in danger?

What does the sudden change of direction by influential Member States on the regulation of foundation models mean for the AI Act? Is the Act itself now in danger?

The outcome is currently difficult to predict. In light of the strong dissent, the Spanish Council Presidency now wants to reconsider the regulatory plans for foundation models and endeavour to find an acceptable solution directly with the Member States concerned – not least because the regulation of foundation models is a central aspect of the AI Act.

In response, the Commission presented a possible compromise text on 19 November 2023. Although it maintains Parliament's tiered approach, it also significantly softens the regulation. Firstly, the term "foundation model" no longer appears in the text. Instead, the Commission distinguishes between "general-purpose AI models" and "general-purpose AI systems" – although, according to the Commission's definitions, these terms continue to correspond to the terms "foundation model" and "general-purpose AI" introduced by the Parliament. Under the proposal, providers of general-purpose AI models would, among other things, be obliged to document the functionality of their AI models by means of so-called "model cards". If an AI model poses a systemic risk – initially to be measured in terms of computing power – its provider is subject to additional monitoring obligations. The text also contains an article under which the Commission is to draw up non-binding codes of practice: practical guidelines, for example on the implementation of model cards, with which operators can ensure their compliance with the AI Act. Possible sanctions, however, are not mentioned.

On 21 November, MEPs and representatives of the Council and Commission then met to discuss the Commission's proposal. Although no agreement has yet been reached, it appears that the text is now the new basis for negotiations. However, resistance from Parliament, which has called for much stricter rules, is to be expected.

It remains to be seen what the outcome of further negotiations will be. In any case, an agreement should be reached as soon as possible, as time is pressing. The next – and, according to the original plan, final – trilogue will take place on 6 December. After that, the Spanish Council Presidency will only have a short time left before Belgium takes over the presidency in January 2024. Under Belgian leadership, there would then be particular pressure to reach an agreement. This is because the European elections are due in June 2024, which will result in a new Parliament.

A failure of the AI Act project would probably be a bitter blow for everyone involved, as the EU has long seen itself as a global pioneer with its plans to regulate artificial intelligence. Since the Commission's draft of April 2021, however, other jurisdictions have also taken steps to regulate AI: US President Joe Biden issued an executive order on AI, the United Kingdom hosted the AI Safety Summit and the G7 countries published an AI Code of Conduct. It therefore remains to be seen whether, how and when those responsible will be able to bridge the entrenched positions and live up to the EU's pioneering role. In any case, the negotiations over a compromise on the regulation of foundation models are continuing at full speed.

Co-author

Lennart van Neerven

Paralegal, Taylor Wessing Germany

