On 21 April 2021, the European Commission published far-reaching draft legislation on the regulation of AI systems (Draft AI Regulation). Its goal is to create a framework that ensures the development and use of artificial intelligence (AI) remain in line with the values, fundamental rights and principles of the European Union (EU), without stifling this promising technology.
Whether in autonomous driving or in talent recruiting using chatbots: artificial intelligence is increasingly becoming part of our everyday lives and will have a significant influence on our future. This immediately raises the question of what AI actually is. The EU Commission defines an “artificial intelligence system” (AI system) as software that, on the one hand, has been developed using machine learning, logic- and knowledge-based concepts or statistical approaches and, on the other hand, is capable, for a given set of human-defined objectives, of producing results such as content, predictions, recommendations or decisions that influence the environment with which it interacts (Article 3 No. 1 Draft AI Regulation). This definition is deliberately broad: it offers flexibility in the face of rapid technical progress in AI systems, but it will also be associated with legal uncertainty for developers, operators and users of AI systems.
The Draft AI Regulation pursues a risk-based approach: the more significant the risks an AI system poses to the health and safety or the fundamental rights of persons, the stricter the regulatory requirements. Particularly dangerous AI systems are even to be prohibited outright (Article 5 Draft AI Regulation). Beyond that, a distinction is made between AI systems posing minimal, low or high risk. The last category, the so-called “high-risk AI systems”, is the focus of the Draft AI Regulation; more than half of its provisions concern these systems.
Article 6 Draft AI Regulation determines in which cases the risk to the health and safety or fundamental rights of persons is so serious that an AI system is to be classified as a “high-risk AI system”.
If such a “high-risk AI system” is to be used, it must above all meet the requirements of:

- a risk management system (Article 9 Draft AI Regulation),
- data and data governance (Article 10 Draft AI Regulation),
- technical documentation (Article 11 Draft AI Regulation),
- record-keeping (Article 12 Draft AI Regulation),
- transparency and provision of information to users (Article 13 Draft AI Regulation),
- human oversight (Article 14 Draft AI Regulation), and
- accuracy, robustness and cybersecurity (Article 15 Draft AI Regulation).
The provider is responsible for ensuring that these requirements set out in Articles 9 to 15 Draft AI Regulation are met (Article 16 (a) Draft AI Regulation). A provider is any natural or legal person, public authority, agency or other body that develops an AI system, or has one developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether in return for payment or free of charge (Article 3 No. 2 Draft AI Regulation). Providers fall within the scope of the Draft AI Regulation whenever they place an AI system on the market or put it into service in the EU, regardless of whether they are established in the EU or in a third country (Article 2 No. 1 (a) Draft AI Regulation). The Draft AI Regulation thus follows the marketplace principle – just as the General Data Protection Regulation does. However, it is not only the provider who will be bound by the future AI Regulation. Articles 16 to 29 Draft AI Regulation lay down further rules of conduct for users and other actors along the value chain, such as importers or distributors. For example, under Article 24 Draft AI Regulation, the product manufacturer is subject to the same obligations as the provider if the AI system is placed on the market under the manufacturer's name. Users of “high-risk AI systems” are in turn obliged to operate them in accordance with the instructions for use (Article 29 Draft AI Regulation). Importers and distributors of “high-risk AI systems” must verify, among other things, that the system has undergone the required conformity assessment and bears the required conformity marking before making it available on the EU market (Articles 26 and 27 Draft AI Regulation).
Article 71 (1) Draft AI Regulation assigns to the Member States the task of laying down rules on sanctions, for example in the form of fines, applicable to infringements of the AI Regulation. The sanctions provided for must be effective, proportionate and dissuasive, while also taking into account the interests of small providers and start-ups and their economic viability. At the same time, the Draft AI Regulation sets a rough framework: infringements of the AI Regulation can be punished with fines which, in particularly serious cases, can amount to up to EUR 30 million or 6 percent of the company's total worldwide annual turnover, whichever is higher.
Finally, Article 60 (1) Draft AI Regulation provides that the European Commission, in cooperation with the Member States, shall set up and maintain an EU database listing stand-alone “high-risk AI systems” within the meaning of Article 6 (2) Draft AI Regulation. This is intended above all to make it easier for the European Commission and the national authorities of the Member States to fulfil their responsibilities (see, inter alia, Articles 63 to 68 Draft AI Regulation).
Experience shows that it will still take some time before the AI Regulation comes into force. With the draft, however, the EU is already taking a clear stance and playing a pioneering role: technical progress must not come at the expense of people. Companies that want to (continue to) use AI in the future, especially “high-risk AI systems”, must engage intensively with these systems: only those who understand an AI system can implement the regulatory requirements for it. Manufacturers and other actors will also no longer be able to shirk their responsibilities so easily. As with data protection and other compliance topics, the rule is: first take stock, then identify which measures still need to be implemented in order to avoid infringing the AI Regulation.