12 November 2024
With the AI Act, the EU has created an instrument that is essential for regulating artificial intelligence (AI). It has been in force since 1 August 2024. The AI Act takes a risk-based approach.
This means that the legal requirements placed on an AI system increase with the risk it poses. It is therefore not surprising that most of the provisions of the AI Act are devoted to so-called “high-risk AI systems”. But what does the EU mean by the term “high-risk AI system”? And how will high-risk AI systems be regulated in the future? An overview by Mareike Christine Gehrmann and Dr. Anne Förster.
Classifications
The AI Act does not establish an all-encompassing legal framework for AI, but instead takes a horizontal, risk-based approach that focuses primarily on product safety aspects of AI systems and of general-purpose AI models and systems (GPAI). Particular attention is paid to AI applications that are subject to stricter requirements because of their risk potential for fundamental rights and sensitive legal interests. These include, in particular, high-risk AI systems.
The AI Act does not define formal risk classes. However, the following categories can be identified to aid understanding of the different regulatory approaches:
- prohibited AI practices (Article 5 of the AI Act), such as social scoring;
- high-risk AI systems (Article 6 of the AI Act);
- AI systems subject to specific transparency obligations (Article 50 of the AI Act), such as chatbots;
- all other AI systems with minimal risk, for which the AI Act imposes no specific obligations.
Article 6 of the AI Act defines the conditions under which an AI system is considered “high-risk”. High-risk AI systems are AI systems that the EU considers to pose a high risk to the health, safety or fundamental rights of EU citizens, but whose major socio-economic benefits outweigh these risks (see recital 46), which is why they are not banned.
The methodology for classifying AI systems as high-risk is based on a combination of product safety requirements and specific areas of application. To this end, the AI Act lists in Annexes I and III the cases in which the use of AI systems is considered high-risk.
For embedded AI systems, the high-risk classification is explained by the dangers of a failure or malfunction of the AI system: the feared functional impairments may be so severe that they pose a threat to the health, safety or fundamental rights of EU citizens.
Example 1: AI systems that enable autonomous driving in a motor vehicle are to be classified as high-risk AI systems in accordance with Article 6 (1) in conjunction with Annex I No. 19 of the AI Act in conjunction with Regulation 2018/858 on the approval of vehicle types.
The reasoning is different for non-embedded AI systems. Here, the danger to the health, safety or fundamental rights of natural persons, and thus the classification of the AI system as high-risk, arises from the specific, rule-compliant use of the AI system in an area of life that is sensitive to fundamental rights.
One area in which an AI system is to be considered “high-risk” is, according to Annex III No. 4 of the AI Act, “employment, workers management and access to self-employment”. This covers AI systems that are intended to be used for the recruitment or selection of natural persons (for example, to place targeted job advertisements, to analyse and filter job applications or to evaluate candidates) or to make decisions affecting work-related relationships (for example, promotion or termination, the allocation of tasks or the monitoring and evaluation of performance and behaviour).
Any company that wants to use an AI system to facilitate the search for candidates, for example, must therefore first check whether it is subject to the strict rules for high-risk AI. In addition, companies must bear in mind that other regulations may apply alongside the AI Act, particularly in the area of human resources. These include the provisions of the General Data Protection Regulation and, in the future, the Platform Work Directive and the corresponding national implementations.
Example 2: If an AI system is used to assess an employee's work performance, it is considered a high-risk AI system in accordance with Article 6 (2) in conjunction with Annex III No. 4 lit. b of the AI Act.
For non-embedded AI systems, Article 6 (3) of the AI Act provides for the possibility of rebutting the high-risk classification. This applies whenever the use of the AI system “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.”
Essentially: if the AI system does not have “final decision-making authority” and is only used to support, but not to replace, a human being, the exception is highly likely to apply, for example where an AI system in the HR area scans a CV for grades or merely categorises the documents received in the application process. In such cases, the AI system performs a task that is so narrowly defined that it does not increase the risk to the fundamental rights of EU citizens. If the exception applies, the assessment must be documented and the documentation handed over to the competent authorities on request.
Article 6 (3) of the AI Act itself makes an exception to the exception: an AI system is always considered high-risk if it performs profiling of natural persons.
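The layered logic of Article 6 (high-risk by default for embedded systems and Annex III use cases, an exception under Article 6 (3), and an exception to the exception for profiling) can be pictured as a simple decision sequence. The following Python sketch is our own, strongly simplified illustration; all names and attributes are hypothetical, and it is no substitute for a legal assessment of the individual case:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical, strongly simplified attributes for illustration only."""
    annex_i_safety_component: bool  # embedded system, Article 6 (1) / Annex I
    annex_iii_use_case: bool        # listed use case, Article 6 (2) / Annex III
    performs_profiling: bool        # profiling of natural persons
    significant_risk: bool          # result of the documented Article 6 (3) assessment

def is_high_risk(system: AISystem) -> bool:
    """Sketch of the layered classification logic of Article 6 of the AI Act."""
    if system.annex_i_safety_component:
        # Article 6 (1): safety components of Annex I products are high-risk
        return True
    if system.annex_iii_use_case:
        if system.performs_profiling:
            # Exception to the exception: profiling is always high-risk
            return True
        # Article 6 (3): no significant risk -> high-risk classification rebutted
        return system.significant_risk
    return False

# Example 2 from the text: assessing employees' work performance
# (Annex III No. 4 lit. b) without the Article 6 (3) exception applying
print(is_high_risk(AISystem(False, True, False, True)))  # True
```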
If none of the exceptions mentioned applies, a company's obligations differ depending on its role in relation to the AI system used:
Providers of high-risk AI systems are natural or legal persons that develop an AI system themselves or have one developed and place it on the market or put it into service under their own name or trademark, whether for payment or free of charge. They must ensure that the requirements set out in Article 8 et seq. of the AI Act are met throughout the entire life cycle of the AI system, which is to be ensured by an appropriate risk management system (see Articles 9 and 16 of the AI Act). These obligations include, among other things:
- establishing a risk management system (Article 9 of the AI Act);
- data governance and data quality requirements (Article 10 of the AI Act);
- drawing up technical documentation (Article 11 of the AI Act);
- record-keeping through automatic logging (Article 12 of the AI Act);
- transparency and the provision of instructions for use (Article 13 of the AI Act);
- enabling effective human oversight (Article 14 of the AI Act);
- ensuring accuracy, robustness and cybersecurity (Article 15 of the AI Act);
- carrying out the conformity assessment procedure, affixing the CE marking and registering the system in the EU database (Articles 43, 48 and 49 of the AI Act).
Deployers of high-risk AI systems, i.e. entities that use the AI system under their own authority, are subject to less extensive regulation (see Article 26 of the AI Act). They have to, among other things:
- use the AI system in accordance with the provider's instructions for use;
- assign human oversight to natural persons who have the necessary competence, training and authority;
- ensure, to the extent they exercise control over the input data, that this data is relevant and sufficiently representative;
- monitor the operation of the AI system and inform the provider and, where applicable, the competent authorities of risks and serious incidents;
- keep the automatically generated logs;
- inform workers and their representatives before putting a high-risk AI system into use at the workplace.
Practical tip: Besides the AI Act, other laws must be observed. In addition to data protection, the requirements of labour law, in particular, are likely to play a decisive role. In Germany, the works council must be involved when hardware or software that is capable of monitoring the performance or behaviour of employees is to be introduced in the workplace. The co-determination rights under the German Works Constitution Act, in particular under Section 90 (1) No. 3, Section 95 (2a) and Section 87 (1) No. 6 of the German Works Constitution Act, must continue to be observed. In addition, the works council may, in accordance with Section 80 (3) of the German Works Constitution Act, call in an expert if this is necessary for the proper fulfilment of its tasks. Further significant changes are likely to be brought about by the expected Employee Data Protection Act and the implementation of the Platform Work Directive.
Practical tip: According to Recital 83, an operator may be subject to different obligations at the same time. This means that a provider that uses an AI system it has developed itself for its own purposes may at the same time also be subject to the deployer obligations of Article 26 of the AI Act. It is also possible that a deployer becomes a provider through fine-tuning, in which case it must then also fulfil the more extensive provider obligations. However, it is not yet legally settled whether fine-tuning has such far-reaching consequences.
Importers of AI systems whose providers are based outside the EU have the task of verifying the conformity of the AI system before placing it on the EU market. In particular, they must verify that the provider has carried out the conformity assessment procedure, that the technical documentation is available, that the AI system bears a CE marking and that the provider has appointed an authorised representative (see Article 23 (1) of the AI Act). In addition, importers must ensure that the conformity of the AI system is not jeopardised by storage or transport conditions (see Article 23 (4) of the AI Act). Furthermore, importers are obliged to provide certain information on the packaging or in the accompanying documentation (see Article 23 (3) of the AI Act). They must also cooperate with the competent authorities, provide information on the conformity of the AI system and keep this information available for a period of ten years (see Article 23 (5) and (6) of the AI Act).
The obligations of distributors of high-risk AI systems extend primarily to verifying that the upstream actors (manufacturer, provider and importer) have fulfilled their respective obligations. Distributors must verify the presence of the CE marking, of a copy of the EU declaration of conformity and of the instructions for use (Article 24 (1) of the AI Act). In the event of a lack of conformity, a distributor may not make the AI system available on the EU market (Article 24 (2) of the AI Act). Distributors must also cooperate with the competent authorities and provide information to mitigate any risks posed by a high-risk AI system that they have made available on the EU market (Article 24 (6) of the AI Act).
If an AI system also falls within the scope of Article 50 of the AI Act, for example because it interacts directly with natural persons, the transparency obligations standardised therein must be complied with as well. According to Article 50 (6) of the AI Act, these transparency obligations apply in addition to the obligations for high-risk AI systems.
Although the AI Act has been in force since 1 August 2024, its provisions will apply in stages: the prohibitions apply from 2 February 2025, the rules on general-purpose AI models from 2 August 2025, the bulk of the provisions, including those on high-risk AI systems under Annex III, from 2 August 2026, and the rules on high-risk AI systems under Annex I from 2 August 2027 (see Article 113 of the AI Act).
For operators of high-risk AI systems that have been placed on the market or put into service before 2 August 2026, the AI Act applies only if, as from that date, those systems are subject to significant changes in their designs (see Article 111 (2) of the AI Act).
Non-compliance with the provisions of the AI Act on high-risk AI systems can lead to administrative fines of up to EUR 15 000 000 or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher (see Article 99 (4) of the AI Act).
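The “whichever is higher” mechanism boils down to simple arithmetic. A minimal sketch in Python (our own illustration; the function name and input are hypothetical):

```python
def fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper limit of the administrative fine for an undertaking:
    EUR 15 million or 3 % of worldwide annual turnover, whichever is higher."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# An undertaking with EUR 1 billion in turnover faces a ceiling of EUR 30 million:
print(fine_ceiling_eur(1_000_000_000))  # 30000000.0
```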
Companies should therefore address the requirements in good time. It is important to check whether the AI systems they provide or use qualify as high-risk AI systems and which measures need to be taken in order to be compliant by the end of the respective transition period.