12 May 2023
Yesterday, the EU Parliament’s committees for the Internal Market and Consumer Protection (IMCO) and for Civil Liberties, Justice and Home Affairs (LIBE) approved compromise amendments to the draft report on the Artificial Intelligence Act (2021/0106(COD)). The draft is now ready for the plenary vote in the EU Parliament, scheduled for mid-June. After that, the final stage of the trilogue will start, during which the three main players, i.e. the EU Council, the EU Commission and the EU Parliament, hammer out the last details of the AI Act. As the current legislative term is elapsing soon, they will hurry, so it is likely that the Act will enter into force before the end of 2023.
The compromise includes new details on the limits of AI systems for biometric identification and categorization and for related surveillance and policing purposes. Reflecting public discussions about interference with elections, such systems are now explicitly added to the list of high-risk systems. Furthermore, a stronger alignment with the GDPR has been woven into various provisions and a multitude of recitals. “Users” are now called “deployers”, Art. 2 (1)b, to better differentiate them from the end users of such systems… but before losing your attention, dear reader, let’s turn to the hotter stuff:
The Parliament suggests including, as a new Art. 4a, general principles for the development and use of all AI systems, such as “human agency and oversight”, “technical robustness and safety”, “privacy and data governance”, “transparency”, “diversity, non-discrimination and fairness” as well as “social and environmental well-being”. While most of those will probably not trigger contradiction anywhere in the world, the Parliament apparently felt prompted to tie the understanding of those values to a “coherent human-centric European approach”. The meaningfulness of this explicit reference should be critically questioned: it has the potential to ignite unnecessary discussions about whether the European understanding of, e.g., “technical robustness and safety” differs from the American or Asian understanding. Regulatory requirements should aim to be crystal clear and avoid references to a territorial understanding thereof.
However, the core of the latest public discussion was mostly about generative AI systems, a discussion kick-started at the end of last November when OpenAI released ChatGPT to the general public. Consequently, the latest compromise amendments include new rules on generative AI. The definition of this newly introduced term is not part of the definition catalogue in Art. 3 but hidden in Art. 28b (4): there, reference is made to “foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (“generative AI”)”. The central component, the foundation model, is defined as an “AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”, Art. 3 (1c). The Council’s draft – published five days before the release of ChatGPT – seems to cover at least some corresponding aspects through its then newly introduced term “general purpose AI system”, which continues to pop up in the Parliament’s draft in a shorter version. The trilogue will hopefully aim to condense and specify the content and delimitation of the final terms.
Art. 28b will set out the duties for providers of a foundation model. Its paragraph 2 lists seven fundamental requirements, all of which look plausible, even though their rawboned level of detail reflects the ongoing efforts to develop reasonably specified standards.
For generative AI systems, Art. 28b paragraph 4 lists three additional duties. Providers must (1) comply with the transparency obligations pursuant to Art. 52 (1), (2) ensure adequate safeguards against the generation of content violating EU law, in line with the state of the art and without prejudice to fundamental rights such as the freedom of expression, and (3) publicly document a summary of the use of training data protected under copyright. This last requirement, together with the inconspicuously appearing details on the description and registration of the data and training resources, is a fundamental change. It is possibly the starting point for future discussions to define the reach and limits of copyright protection more sharply: Does such a system have to recognize the standards for copyright protection of a given work, and to understand and respect the scope of protection and the statutory regimes for adaptations and transformations thereof? And all of that in a context of low global harmonization in copyright and of national particularities of territorially limited rights, where an end user types in a request in a given country and within seconds receives a result the AI system has generated based on input probably taken from publications made available in a variety of other countries ... A lot of wishful thinking appears to be involved. While the legislators’ motives and the justification of the underlying concern are beyond question, one might want to review the practicability of the implied solution. Otherwise, we might be witnessing the dawn of a new golden age for copyright litigation. At least in the long run, concepts of fair use or an adjusted levy system could provide a less elaborate and less litigious solution for balancing the interests of those creating the substance from which the generative AI system probably derives at least a considerable share of its results.
Those amendments shall not become a toothless tiger: the penalty regime now includes foundation models in the system of administrative fines for non-compliance, amounting to up to EUR 10 million or, where the offender is a company, up to 2 % of the total worldwide annual turnover, whichever is higher. Nevertheless, it is remarkable that the EU Council’s proposal was shooting for EUR 30 million and/or up to 6 % of the total worldwide annual turnover, so the Parliament apparently believes that one third of the stick ensures sufficient encouragement for compliance.
To ensure proper governance, the Parliament will probably aim to replace the EU AI Board with a respective EU AI Office. The latter shall not only provide advice to and support the EU Commission and the national authorities but shall also be in charge of monitoring and ensuring the effective and consistent application of the AIA. Consequently, the list of duties of the new AI Office is considerably longer than that of the less ambitiously designed EU AI Board. Let’s wait and see whether the Parliament’s attempt to draw further competences to new EU bodies will stay in the game when the Member States in the EU Council play their cards during the upcoming trilogue.
Hence, we are in for an interesting summer. A lot of natural intelligence and sanity will be necessary to finalize a proper set of rules for regulating artificial intelligence.