AI Hands

17 November 2021

KI-Verordnung / AI Act (German/English) – 1 of 9 Insights

The AI Act – does this mark a turning point for the regulation of artificial intelligence? An overview


Fritz-Ulli Pieper, LL.M.

Salary Partner


For some time now, the EU has been preoccupied with the question of “artificial intelligence”. This includes, in particular, the creation of an appropriate legal framework. At the end of April 2021, the EU Commission finally presented a “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts” (hereinafter “AI Act”), which constituted, in its own words, the “world’s first legal framework for AI”. But what is behind it? What is to be regulated, and how? What effects could the AI Act have? These are the questions we will explore in this Plugin edition dedicated to the AI Act. We will start off with an overview.

Background to the legislation

The European Parliament and the European Council have in the past explicitly and repeatedly called for legislative action on, or adopted resolutions in relation to, artificial intelligence (“AI”) systems. In 2018, the EU Commission published its European AI strategy entitled “Artificial Intelligence for Europe” and a “Coordinated Plan on Artificial Intelligence”, and set up a High-Level Expert Group on Artificial Intelligence, which on this basis published its “Ethics Guidelines for Trustworthy AI” in 2019. In 2020, against this background, the EU Commission finally published its “White Paper on Artificial Intelligence – A European approach to excellence and trust” (“White Paper”), which for the first time developed a specific concept for regulating AI. These measures in particular form the basis for the current proposal of the AI Act.

Basic information on the AI Act

The rapid development of AI technologies is witnessed daily. On the one hand, they are said to bring multiple benefits to the economy and society across the entire spectrum of industrial and social activities. On the other hand, their use can also potentially result in new or changed risks or disadvantages for individuals or society, for example in connection with AI-based “social scoring” or biometric facial recognition. In this respect, the AI Act is basically intended to balance the benefits and risks of AI technologies. According to the explanatory memorandum to the proposal, the AI Act contains a regulatory approach to AI that “respects proportionality and is limited to the minimum requirements necessary to address the risks and problems associated with AI without unduly restricting or hindering technological development or otherwise disproportionately increasing the costs of placing AI solutions on the market”. The proposal accordingly sets out harmonised rules for the development, placing on the market and use of AI systems in the Union. Its main objective is to ensure the smooth functioning of the internal market by establishing such uniform rules. Weighing up various policy options, the EU Commission opted in the proposal for “a horizontal EU legislative instrument based on proportionality and a risk-based approach, complemented by a code of conduct for AI systems that do not pose a high risk”. In this respect, the proposal follows a risk-based approach already laid out in the White Paper, according to which AI systems are grouped into categories according to their potential risk: unacceptable risk, high risk and low or minimal risk.

Overview of the main regulatory content

The AI Act first defines its scope of application. Two aspects of regulation are particularly striking here: the definition of AI systems and the extensive territorial scope of application. AI systems are legally defined as software that is developed with one or more of the techniques and approaches listed in Annex I and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with. The techniques and approaches listed in Annex I include machine learning, logic- and knowledge-based approaches, as well as statistical approaches, Bayesian estimation, and search and optimisation methods. Critics recognise several features here that require interpretation and specification. In a sense, all kinds of more complex software could be included here, meaning that the definition could be described as rather imprecise. The AI Act also applies the so-called “place of market” principle to determine the territorial scope. Providers that place AI systems on the market or put them into service in the Union are covered by the territorial scope of the AI Act, regardless of whether those providers are established in the Union or in a third country. Furthermore, users of AI systems located in the Union, as well as providers and users of AI systems established or located in a third country, are covered if the output produced by the system is used in the Union. Consequently, there is a kind of “extraterritorial claim” of EU law and a wide territorial scope – the AI Act reflects the General Data Protection Regulation in this respect. A core element of the AI Act is the risk-based approach: some AI practices classified as particularly harmful are to be banned (please see the article “Prohibited practices under the draft AI Act – Does the European Commission want to ban Instagram?”).
Furthermore, the proposal contains extremely extensive regulation of high-risk AI systems, i.e. those systems that pose significant risks to the health and safety or fundamental rights of individuals (see the article on this subject “High-risk systems: A danger foreseen is half avoided – or is it?”). For certain AI systems that do not fall under the aforementioned risk categories, only minimal transparency obligations are proposed, in particular for the use of chatbots or so-called “deepfakes”. Finally, AI systems without an inherent risk that requires regulation are not to be covered by the AI Act at all. The EU Commission assumes that the vast majority of AI systems fall into this category and cites applications such as AI-supported video games or spam filters as examples. The regulation of high-risk AI systems could be considered the centrepiece of the proposal. Such systems will have to comply with horizontal requirements for trustworthy AI and undergo conformity assessment procedures before being placed on the market in the Union. In order to ensure safety and compliance with existing legislation protecting fundamental rights throughout the lifecycle of AI systems, the obligations imposed on providers and users of these systems are extremely comprehensive. These include, for example, conformity assessment, risk management systems, technical documentation, record-keeping obligations, transparency and provision of information to users, human oversight, accuracy, robustness and cybersecurity, quality management systems, post-market monitoring, notification of serious incidents and malfunctions, and corrective actions. In this context, special attention must also be given to compliance with data quality criteria and data governance (see the article on this topic “Data governance in the AI Regulation – in conflict with the GDPR?”). Affected companies are likely to face an extremely comprehensive and complex implementation effort here.
The AI Act also clearly aims to establish a comprehensive framework for “AI product compliance” (see the article on this topic “CE mark for AI systems – extension of product safety law to artificial intelligence”). With regard to high-risk AI systems that are safety components of products, the proposal will be integrated into existing sector-specific safety legislation to maintain coherence, avoid overlap and reduce administrative burden. Thus, the requirements for high-risk AI systems associated with products covered by the New Legislative Framework (NLF) (such as machinery, medical devices, toys) will be assessed under the existing conformity assessment procedures of the relevant NLF legislation. According to the explanatory memorandum, the interplay of the requirements is that the safety risks specific to the respective AI systems are to be covered by the requirements of the AI Act, while the NLF legislation is to ensure the safety of the final product as a whole. Furthermore, the rules of the AI Act are to be enforced by the Member States through a governance structure, complemented by a cooperation mechanism at Union level through the establishment of a European Artificial Intelligence Board. In addition, measures to support innovation are proposed, especially in the form of AI regulatory sandboxes (see the article on this “Innovation meets regulation: A sandbox for artificial intelligence (AI)”). On the basis of the AI Act, the Member States will also adopt rules on sanctions applicable to infringements of the AI Act. Fines are specifically mentioned here. The sanctions provided for must be effective, proportionate and dissuasive.
Depending on the infringement, the fines range from up to 10,000,000 Euro or – in the case of companies – up to 2 percent of the total worldwide annual turnover of the preceding business year, to up to 30,000,000 Euro or – in the case of companies – up to 6 percent of that turnover, whichever is higher (see the article “Fines under the AI Act – A bottomless pit?”). Article 10 of the AI Act plays a special role here: there are obvious parallels with the GDPR and the system of heavy fines that data protection authorities have recently imposed.
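For illustration only, the “whichever is higher” mechanism for companies can be sketched as a simple calculation. The helper function below is hypothetical (not part of the AI Act or any official tool); the fixed caps and percentages are taken from the draft as described above:

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper limit of a fine for a company under the draft AI Act:
    the fixed cap or the given percentage of total worldwide annual
    turnover of the preceding business year, whichever is higher."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Most serious infringements: up to EUR 30 million or 6 percent of turnover.
# For a company with EUR 1 billion turnover, the percentage prevails:
print(fine_cap(1_000_000_000, 30_000_000, 0.06))  # 60,000,000.0

# For a smaller company with EUR 100 million turnover, the fixed cap prevails:
print(fine_cap(100_000_000, 30_000_000, 0.06))  # 30,000,000.0
```

The same function applies to the lower tier (EUR 10 million or 2 percent) by changing the parameters; the cap is always the higher of the two amounts, mirroring the GDPR's fine mechanics.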


With the proposed regulation, the EU Commission has laid an important foundation for the regulation of AI in the EU. It has the potential to subject the development, placing on the market and use of a large proportion of AI systems in the Union to comprehensive and complex regulation. This applies both very generally and to specific sectors, for example in the area of “Work 4.0” (see the article on this “The impact of the AI Act on HR technology”) or “InsurTech” (see the article on this “Regulation of the use of Big Data and AI by insurance undertakings”). Accordingly, the proposal is already facing harsh criticism in its first draft, for example from industry associations. For others, however, the draft does not go far enough: they criticise, for example, that far too few applications fall under the prohibited practices in the field of artificial intelligence. The EU still has a mammoth task ahead of it before a final legal framework is achieved: the draft regulation must now pass through the European Parliament and the other EU bodies in the legislative process, which is likely to bring amendments and years of tough negotiation.
