On 21 April 2021, the European Commission published a far-reaching draft regulation on AI systems (Draft AI Regulation), which aims to create a framework for the development and use of artificial intelligence (AI). The goal is to regulate the development and use of AI in such a way that this promising technology stays in line with the values, fundamental rights and principles of the European Union (EU).
What exactly are AI systems?
Whether in autonomous driving or in talent recruiting using chatbots: artificial intelligence is increasingly becoming part of our everyday lives and will significantly shape our future. This immediately raises the question of what AI actually is.
The EU Commission defines an “artificial intelligence system” (AI system) as software that, on the one hand, has been developed using machine learning, logic- and knowledge-based concepts or statistical approaches and, on the other hand, is capable, for a given set of human-defined objectives, of generating outputs such as content, predictions, recommendations or decisions that influence the environment it interacts with (Article 3 No. 1 Draft AI Regulation). This definition is deliberately broad: it offers the flexibility to keep pace with rapid technical progress in AI systems, but it also creates legal uncertainty for developers, operators and users of AI systems.
Risk-based approach
The Draft AI Regulation pursues a risk-based approach: the more significant the risks an AI system poses to the health, safety or fundamental rights of persons, the stricter the regulatory requirements. Particularly dangerous AI systems are prohibited outright (Article 5 Draft AI Regulation). Beyond that, a distinction is made between AI systems with minimal, low or high risk. The latter, the so-called “high-risk AI systems”, are the focus of the Draft AI Regulation; more than half of its provisions relate to them.
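Purely as an illustration, this tiered logic can be condensed into a short Python sketch. The tier names follow the paragraph above; the consequence attached to each tier is a simplified paraphrase of the draft, not its wording:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers distinguished by the Draft AI Regulation (simplified)."""
    UNACCEPTABLE = "prohibited practice (Article 5 Draft AI Regulation)"
    HIGH = "high-risk AI system, subject to the bulk of the draft's duties"
    LOW = "mainly transparency duties"
    MINIMAL = "essentially no additional duties"

def regulatory_consequence(tier: RiskTier) -> str:
    """Return the simplified consequence attached to a risk tier."""
    return tier.value

print(regulatory_consequence(RiskTier.HIGH))
```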
Classification as a high-risk AI system
Article 6 Draft AI Regulation determines in which cases the risk to the health and safety or fundamental rights of persons is so pronounced that the AI system must be classified as a “high-risk AI system”.
- According to Article 6 (1) Draft AI Regulation, this covers AI systems that are used as safety components within the meaning of Article 3 No. 14 Draft AI Regulation in products subject to third-party conformity assessment in areas relevant to EU product safety (such as medical devices, toys or transport infrastructure).
- According to Article 6 (2) Draft AI Regulation, AI systems listed in Annex III Draft AI Regulation are also to be classified as “high-risk AI systems”. In Annex III, the European Commission names areas in which the use of AI systems is considered high-risk per se – i.e. through their mere use. To keep pace with technical progress, the Commission may regularly amend or supplement the list in Annex III (Article 7 Draft AI Regulation). Annex III currently lists the following areas (a schematic sketch of this classification follows the list):
- Biometric identification and categorisation of natural persons;
- Critical infrastructure management and operation;
- Education and training;
- Employment, personnel management and access to self-employment;
- Access to and use of essential private and public services and benefits;
- Law enforcement;
- Migration, asylum and border control;
- Administration of justice and democratic processes.
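The mechanism of Article 6 (2) can be pictured as a simple membership test: if the intended area of use appears in Annex III, the high-risk label attaches automatically. The following sketch is purely illustrative – the area strings mirror the list above, and an actual classification of course calls for legal analysis rather than string matching:

```python
# Areas named in Annex III Draft AI Regulation (mirroring the list above).
ANNEX_III_AREAS = {
    "biometric identification and categorisation of natural persons",
    "critical infrastructure management and operation",
    "education and training",
    "employment, personnel management and access to self-employment",
    "access to and use of essential private and public services and benefits",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def is_high_risk_per_annex_iii(area_of_use: str) -> bool:
    """Article 6 (2): use in an Annex III area triggers the high-risk label."""
    return area_of_use.strip().lower() in ANNEX_III_AREAS

assert is_high_risk_per_annex_iii("Law enforcement")
```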
Requirements for a high-risk AI system
If such a “high-risk AI system” is to be used, it must above all meet the following requirements:
- Establishing a risk management system that ensures adequate risk assessment and risk minimisation or elimination (Article 9 Draft AI Regulation);
- Ensuring data quality, in particular through appropriate data governance and data management procedures[1] (Article 10 Draft AI Regulation);
- Technical documentation of the “high-risk AI system” (Article 11 Draft AI Regulation);
- Automatic recording of operations and events in the “high-risk AI system” (Article 12 Draft AI Regulation) – illustrated in the sketch after this list;
- Ensuring transparency and provision of information towards users (Article 13 Draft AI Regulation);
- Obligation to develop and design the “high-risk AI system” in such a way that effective human oversight is ensured for the duration of its use (Article 14 Draft AI Regulation);
- Compliance with an appropriate level of robustness, cybersecurity and accuracy of the respective “high-risk AI system” (Article 15 Draft AI Regulation).
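How the record-keeping duty of Article 12 might look in practice can be sketched very roughly as follows; the log format and field names are illustrative assumptions, not requirements of the draft:

```python
import json
import time

def log_event(logfile: str, event: str, details: dict) -> None:
    """Append one timestamped event record to a JSON-lines audit log.

    Sketch of the idea behind Article 12: every relevant operation of a
    (hypothetical) high-risk AI system leaves an automatic, append-only trace.
    """
    record = {"timestamp": time.time(), "event": event, "details": details}
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Hypothetical usage: record that a prediction was served.
log_event("ai_audit.log", "prediction_served",
          {"model_version": "1.0", "input_id": "example-42"})
```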
The provider is responsible for ensuring that the requirements set out in Articles 9 to 15 Draft AI Regulation are met (Article 16 (a) Draft AI Regulation). A provider is any natural or legal person, public authority, agency or other body that develops an AI system, or has one developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether in return for payment or free of charge (Article 3 No. 2 Draft AI Regulation). Providers are subject to the Draft AI Regulation whenever they place an AI system on the market or put it into service in the EU, regardless of whether they are established in the EU or in a third country (Article 2 (1) (a) Draft AI Regulation). The Draft AI Regulation thus follows the market-location principle – just as the General Data Protection Regulation does.
However, it is not only the provider who will be bound by the future AI Regulation. Articles 16 to 29 Draft AI Regulation define further rules of conduct for users and other actors along the value chain, such as importers or distributors. For example, under Article 24 Draft AI Regulation, a product manufacturer is subject to the same obligations as the provider if the AI system is placed on the market under the manufacturer's name. Users of “high-risk AI systems” are also obliged to operate them in accordance with the instructions for use (Article 29 Draft AI Regulation). Distributors and importers of “high-risk AI systems” are likewise subject to obligations of their own (Articles 26 to 28 Draft AI Regulation).
Sanctions
Article 71 (1) Draft AI Regulation leaves it to the Member States to lay down rules on sanctions, for example in the form of fines, applicable to infringements of the AI Regulation. The sanctions provided for must be effective, proportionate and dissuasive, while also taking into account the interests of small providers and start-ups and their economic viability. At the same time, the Draft AI Regulation sets a rough framework: violations of the AI Regulation can be punished with fines of, in particularly serious cases, up to 30 million euros or 6 percent of the company's worldwide annual turnover, whichever is higher.[2]
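For illustration, the interplay of the two ceilings can be worked through in a few lines; the turnover figure is hypothetical:

```python
def max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper fine limit for the most serious infringements:
    EUR 30 million or 6 % of worldwide annual turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# A company with EUR 1 billion turnover: 6 % = EUR 60 million > EUR 30 million.
print(max_fine(1_000_000_000))  # 60000000.0
```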
EU database
Finally, Article 60 (1) Draft AI Regulation provides that the European Commission, in cooperation with the Member States, shall establish and maintain an EU database listing the stand-alone “high-risk AI systems” referred to in Article 6 (2) Draft AI Regulation. Above all, this is intended to make it easier for the European Commission and the national authorities of the Member States to fulfil their supervisory responsibilities (see inter alia Articles 63 to 68 Draft AI Regulation).
Conclusion
Experience shows that it will still take some time before the AI Regulation enters into force. With the draft, however, the EU is already taking a clear stance and playing a pioneering role: technical progress must not come at the expense of people.
Companies that want to (continue to) use AI in the future, especially in the form of “high-risk AI systems”, must engage intensively with those systems. Only those who understand an AI system can implement the regulatory requirements. Manufacturers and other actors will also no longer be able to shirk their responsibilities so easily. As with data protection and other compliance topics, the rule is: first take stock, then identify which measures still need to be implemented to avoid infringing the AI Regulation.
[1] Data governance in the AI Regulation – in conflict with the GDPR?
[2] Fines under the AI Act – a bottomless pit?