10 September 2024
It's done! The AI Act came into force on 1 August 2024. The AI Act is the world's first comprehensive set of rules for artificial intelligence (AI) and aims to establish standardised requirements for the development and use of AI in the European Union. Similar to the introduction of the General Data Protection Regulation, the AI Act will have a significant impact on companies, especially in the HR sector. HR departments will have to observe and comply with the requirements of the AI Act when using AI systems, e.g. for the pre-selection of applications. Particularly with regard to AI systems already in use, a (rapid) inventory and an "AI Act check" are therefore required to ensure that the AI systems in use are legally compliant.
The AI Act does not define an all-encompassing legal framework for AI, but pursues a horizontal, risk-based approach that focuses primarily on product safety aspects for AI systems and general-purpose AI. Particular attention is paid to AI systems that are subject to stricter regulation due to their risk potential for fundamental rights and sensitive legal interests. This approach differentiates obligations based on the level of risk of the use or potential use of AI systems, regardless of the underlying technology. AI systems are classified into five risk classes:
- prohibited AI practices (unacceptable risk),
- high-risk AI systems,
- AI systems subject to specific transparency obligations (limited risk),
- AI systems with minimal risk, and
- general-purpose AI models, which are subject to a separate set of obligations.
The AI Act applies regulatory measures specifically where there is a risk to public order or fundamental rights and regulates the placing on the market, the putting into service, and the use of AI systems in order to ensure their safe and legally compliant use within the EU.
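To illustrate the tiered logic of this risk-based approach, here is a minimal Python sketch; the class names and obligation summaries are our own simplification for illustration, not wording taken from the AI Act:

```python
from enum import Enum

class RiskClass(Enum):
    """Simplified illustration of the AI Act's risk tiers (not an official taxonomy)."""
    PROHIBITED = "prohibited practices - may not be used at all"
    HIGH_RISK = "high-risk - extensive provider and deployer obligations"
    TRANSPARENCY = "limited risk - specific transparency obligations"
    MINIMAL = "minimal risk - no specific obligations beyond general law"
    GPAI = "general-purpose AI - separate obligations for model providers"

def obligations_for(system: RiskClass) -> str:
    # In practice, classification requires a legal assessment of the
    # concrete use case; this lookup merely illustrates the tiered approach.
    return system.value

print(obligations_for(RiskClass.HIGH_RISK))
```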
Depending on the risk class into which an AI system falls, the company is subject to different obligations. Providers of AI systems are the main addressees of the AI Act. However, the law also addresses deployers, i.e. the users of AI systems, unless the AI system is used privately and not as part of professional activities. This means that if companies use an AI system as part of their own activities, they are deployers and fall within the scope of the AI Act.
Practical tip: Employers will generally qualify as deployers if they use AI systems in the HR area. However, if they change the purpose of an AI system or make another substantial modification to it, they may themselves become providers within the meaning of the AI Act.
Despite the high complexity of its requirements, especially for high-risk AI systems, the AI Act is not a "toothless tiger": if a company fails to comply with the requirements, it faces fines of up to 35 million euros or 7% of its total worldwide annual turnover in the preceding financial year, whichever is higher.
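The arithmetic of this fine cap can be stated in a few lines; a minimal sketch (the function name and the example turnover figure are our own):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper fine limit for the most serious AI Act infringements:
    EUR 35 million or 7% of total worldwide annual turnover of the
    preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Example: a company with EUR 1 billion in worldwide annual turnover
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```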
In addition to fines, market surveillance authorities can also withdraw non-compliant high-risk AI systems from the market by revoking the conformity assessment and prohibiting their use until conformity is restored.
One of the most discussed points during the legislative process was what constitutes an AI system. According to Art. 3 No. 1 of the AI Act, an AI system is a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This covers both "machine learning" and "deep learning" approaches.
According to recital 12 of the AI Act, the definition should be based on the key characteristics of AI systems that distinguish them from simpler conventional software systems or programming approaches, and should not cover systems that are based solely on rules defined by natural persons for the automatic execution of operations. The key criterion is therefore the term "infers": an AI system differs from "non-intelligent" software in that it operates with a varying degree of autonomy and can "infer" outputs such as predictions, content, recommendations, or decisions from the input it receives. However, it remains to be seen how authorities and courts will interpret the concept of an AI system.
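This distinction can be illustrated with a small, purely hypothetical contrast between a hand-coded screening rule and a model that learns its decision rule from data (the example uses scikit-learn and invented training data; it is not taken from the AI Act):

```python
# A rule fixed in advance by a natural person: automatic execution of a
# human-defined operation, i.e. not an AI system under Art. 3 No. 1 AI Act.
def rule_based_screening(years_experience: int) -> bool:
    return years_experience >= 3

# By contrast, a model that learns its decision rule from example data
# "infers" how to generate outputs and would fall under the definition.
from sklearn.linear_model import LogisticRegression

X = [[1], [2], [4], [6]]   # years of experience (invented training data)
y = [0, 0, 1, 1]           # past screening decisions
model = LogisticRegression().fit(X, y)

print(rule_based_screening(5))   # True - follows the hand-coded rule
print(model.predict([[5]]))      # [1] - an inferred, not hand-coded, rule
```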
Numerous AI systems are already being used in the HR sector, particularly in recruiting. For these AI systems, it is now necessary to check which risk group they belong to and, depending on the employer's role – provider or deployer – to comply with the obligations set out for that risk group.
According to the AI Act, AI-based emotion recognition systems are prohibited in the workplace unless they are placed on the market or put into service for medical or safety reasons. Such emotion recognition systems are already widely used in the workplace today. These are often AI systems that recognise fatigue or concentration problems in order to prevent accidents – e.g. for pilots or lorry drivers – or AI systems that serve to verify identity (e.g. for access controls). Such AI systems will remain permitted under the AI Act, as the exemption for safety-related reasons applies here. Although they are permitted, they are classified as high-risk AI systems and must therefore comply with the requirements of Art. 16 et seq. AI Act. AI systems that can recognise and evaluate people's feelings more generally are a different matter: in the workplace, for example, the computer might recognise overload or boredom and react accordingly, or, conversely, support a good workflow, e.g. by muting calls. None of the exemptions are likely to apply to such AI systems, meaning that they fall into the highest risk group under the AI Act and are therefore prohibited from 2 February 2025.
AI systems that generally qualify as high-risk AI systems in the HR context are, according to Annex III No. 4 AI Act:
- AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates; and
- AI systems intended to be used to make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships.
Art. 6 para. 3 of the AI Act provides an exemption for high-risk AI systems. It states that an AI system referred to in Annex III is not considered high-risk "where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making." The cases in which this applies are specified in the following subparagraph. In the HR sector, there are certainly conceivable applications that could fall under the exemption: for example, an AI system is not high-risk if it is intended to perform a narrow procedural task (Art. 6 para. 3 subpara. 2 lit. a AI Act). A "narrow procedural task" could, for example, include an AI system that performs a CV analysis (so-called parsing) according to grades.
Practical tip: It is therefore always necessary to check whether the (planned) use of an AI system that would in principle be classified as high-risk falls outside that category due to the exemption. If the exemption applies, the assessment must be documented, and the documentation must be provided to the competent authorities on request; a simple way of structuring such an assessment is sketched below.
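A hedged sketch of how such a self-assessment could be structured and documented; the attribute names paraphrase the conditions of Art. 6 para. 3 subpara. 2, and the dataclass itself is our own illustration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Article6ExemptionCheck:
    """Illustrative checklist for the Art. 6 para. 3 AI Act exemption.

    Whether a condition is met always requires a case-by-case legal
    assessment; this class only structures the documentation.
    """
    narrow_procedural_task: bool         # lit. a, e.g. CV parsing by grades
    improves_prior_human_activity: bool  # lit. b
    detects_patterns_only: bool          # lit. c, without replacing human assessment
    preparatory_task: bool               # lit. d
    involves_profiling: bool             # profiling of natural persons stays high-risk

    def is_exempt(self) -> bool:
        if self.involves_profiling:
            return False
        return any([
            self.narrow_procedural_task,
            self.improves_prior_human_activity,
            self.detects_patterns_only,
            self.preparatory_task,
        ])

# Example: AI-based CV parsing according to grades
check = Article6ExemptionCheck(True, False, False, False, involves_profiling=False)
print(check.is_exempt())  # True -> document the assessment and keep it on file
```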
However, if none of the above exceptions apply, the company is subject to different obligations depending on the role it plays in relation to the AI system used:
Providers of high-risk AI systems must set up and maintain a risk management system, test the AI systems for compliance with their intended function and the requirements of the AI Act before they are put into operation, ensure that they can be supervised by natural persons during use and disclose the interaction with an AI system.
Deployers of high-risk AI systems – which are likely to mostly include employers – are subject to a wide range of requirements. Among other things, they must take appropriate technical and organisational protective measures to ensure that the AI system is used in accordance with the instructions for use. The AI system must be supervised by competent, trained persons and monitored in accordance with the instructions for use.
Practical tip: The development of "AI literacy" and human oversight within the organisation is key to meeting this requirement. "AI literacy" is even mandatory as from 2 February 2025. Companies should therefore ensure at an early stage that personnel are available who have the necessary expertise, training and authorisation to supervise high-risk AI. Due to the wording ("natural persons"), external service providers can probably also be used for this purpose.
The company must also ensure the high quality of the input data by only entering information into the AI that is relevant and sufficiently representative for the intended purpose. There are additional reporting, documentation and storage obligations that must be observed, and employees, including their representatives, must be informed in advance of any use in the workplace.
Practical tip: In Germany, "employee representatives" refers in particular to works councils, even if they are not affected by the use of the AI system in their capacity as a works council. The co-determination rights existing under the Works Constitution Act, in particular under Section 90 (1) No. 3, Section 95 (2a) or Section 87 (1) No. 6 BetrVG, must continue to be observed by the employer in addition to the AI Act.
There is also a special obligation to provide information when high-risk AI systems make decisions about natural persons or assist in such decisions. In these cases, data subjects have a new right to an explanation of the individual decision.
If companies use AI systems that are not high-risk, they must take less stringent, but still specific, measures. For example, they must ensure that the staff who work with AI have a sufficient understanding of AI. Transparency obligations must be observed when using certain AI systems. If image, audio or video content is created via AI, it must be disclosed that this content has been created or modified by AI. The same applies to texts that are published for public information purposes. A company that uses ChatGPT for this purpose must comply with these transparency obligations in future.
When using AI, data protection must also be taken into account. If personal data is entered into the AI, all requirements of the General Data Protection Regulation continue to apply after the AI Act comes into force; there are no relaxations. This means that data processing is only permitted if the data subject – i.e. the employee or customer – has consented to it or the processing can be based on a legal ground. Comprehensive information must also be provided about the data processing. The biggest challenge is likely to be implementing the so-called data subject rights. These include, for example, the employee's right to have data erased or blocked and the right to obtain information about the data processing. Once personal data has been entered into the AI, it is likely no longer possible to erase it without resetting the system. This means that a company will regularly run the risk of not being able to fully fulfil the rights of data subjects. Above all, this entails a risk of fines and claims for damages from data subjects. Companies must be aware of this and decide whether and which data records they enter into the AI and what risks they are prepared to take.
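One conceivable technical safeguard is to redact obvious personal identifiers before any text is entered into an AI system; a minimal sketch, assuming a simple regex filter (production use would require far more robust anonymisation tooling and, as noted above, a legal basis under the GDPR):

```python
import re

# Hypothetical pre-input redaction step; the patterns only catch
# obvious identifiers and are no substitute for proper anonymisation.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers before the text is
    entered into an AI system, reducing the personal data that could
    later prove impossible to erase."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach the applicant at jane.doe@example.com or +49 171 2345678."))
# -> "Reach the applicant at [EMAIL] or [PHONE]."
```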
The AI Act poses major challenges for companies. Close cooperation between HR, legal and data protection departments will be essential in order to fulfil the requirements of the AI Act.