It's done! The EU AI Act came into force on 1 August 2024.
The AI Act is the world's first comprehensive set of rules for artificial intelligence and aims to establish standardised requirements for the development and use of AI in the European Union. Similar to the introduction of the General Data Protection Regulation, the AI Act will have a significant impact on organisations, especially in the HR sector. HR departments will have to observe and comply with the requirements of the AI Act when using AI systems, e.g. for the initial stages of the recruitment process. They will also need to review their existing AI systems to ensure they are compliant as the provisions of the AI Act come into effect.
Framework conditions of the AI Act
The AI Act regulates the market launch, commissioning, and use of AI systems in the EU. It does not set out an all-encompassing legal framework for AI, but takes a horizontal, risk-based approach that focuses primarily on product safety aspects for AI systems and general-purpose AI. AI systems which present a potential risk to fundamental rights and sensitive legal interests are subject to stricter regulation. Obligations are based on the level of risk the use or potential use of an AI system presents, regardless of the underlying technology, with AI systems classified into five risk classes:
- unacceptable risk (prohibited AI)
- high risk (high-risk AI)
- systemic risk (general-purpose AI with systemic risks)
- limited risk (specific and general purpose AI)
- low risk (all other AI systems).
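The risk-based logic above can be sketched as a simple decision routine. This is purely illustrative: the class names, function signature and ordering are assumptions for the sketch, not terms defined by the AI Act itself.

```python
from enum import Enum

class RiskClass(Enum):
    """Illustrative labels for the five AI Act risk classes (assumed names)."""
    UNACCEPTABLE = "prohibited AI"
    HIGH = "high-risk AI"
    SYSTEMIC = "general-purpose AI with systemic risks"
    LIMITED = "specific and general-purpose AI"
    LOW = "all other AI systems"

def classify(prohibited_practice: bool,
             annex_iii_use: bool,
             gpai_systemic: bool,
             transparency_relevant: bool) -> RiskClass:
    # Checks run from most to least restrictive, mirroring the Act's approach:
    # the strictest applicable category determines the obligations.
    if prohibited_practice:       # e.g. emotion recognition at work (Article 5)
        return RiskClass.UNACCEPTABLE
    if annex_iii_use:             # e.g. recruitment screening (Article 6(2), Annex III)
        return RiskClass.HIGH
    if gpai_systemic:             # very capable general-purpose models
        return RiskClass.SYSTEMIC
    if transparency_relevant:     # e.g. AI-generated content requiring disclosure
        return RiskClass.LIMITED
    return RiskClass.LOW

# A CV-screening tool used in recruitment falls into the high-risk class:
print(classify(False, True, False, False).value)  # → high-risk AI
```

In practice the classification turns on a legal analysis of the system's intended purpose, not on a checklist like this, but the ordering of the checks reflects how obligations escalate with risk.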
How does the AI Act impact HR AI systems?
AI systems are already common in the HR sector, particularly in recruiting. Existing AI systems now need to be assigned to the relevant risk group, and employers need to prepare to comply with the obligations that apply to them as either a provider or a deployer under the AI Act.
AI with unacceptable risk (Article 5 AI Act) – emotion recognition systems in the workplace
The AI Act prohibits AI-based emotion recognition systems in the workplace unless they are placed on the market or put into service for medical or safety reasons. Where, for example, an AI system is used in the workplace to recognise fatigue or concentration problems and prevent accidents (e.g. in pilots or lorry drivers), or to verify identity (e.g. for access controls), it will fall under the safety exemption. Such systems are not banned, but they are classified as high-risk AI systems.
Other AI systems used in the workplace to recognise and evaluate people's feelings – for example, to detect overload or boredom and react accordingly, or, conversely, to support a good workflow, e.g. by muting calls – are unlikely to benefit from the exemptions. They will therefore be classified as presenting an unacceptable risk and be banned from 2 February 2025.
High-risk AI in the HR area (Article 6(2) in conjunction with Annex III)
Many AI systems used in the employment sector will qualify as high-risk, so organisations need to familiarise themselves with the resulting obligations. This is likely to include AI systems used for:
- the recruitment or selection of natural persons (e.g. to place targeted job adverts, to screen or filter applications and to evaluate applicants)
- decisions that affect the conditions or termination of employment relationships or promotions
- the assignment of tasks on the basis of individual behaviour or personal characteristics or traits
- the observation and evaluation of people's performance and behaviour.
Requirements for providers and deployers of high-risk AI
The most onerous obligations under the AI Act apply to providers of AI systems. However, the law also focuses on deployers, i.e. the users of AI systems, unless the AI system is used privately and not as part of professional activities. This means that if organisations use an AI system as part of their own activities, they are deployers and fall within the scope of the AI Act.
Employers will usually qualify as deployers if they use AI systems in the HR area. However, if they change the purpose of an AI system or make another significant change to the AI system, they can change from deployer to provider.
Providers of high-risk AI systems
Providers of high-risk AI systems must:
- set up and maintain a risk management system
- test the AI systems for compliance with their intended function and the requirements of the AI Act before putting them into operation
- ensure that they can be supervised by natural persons during use
- disclose the interaction of individuals with an AI system.
Deployers of high-risk AI systems
Deployers of high-risk AI systems – the category most likely to apply to employers – are subject to a wide range of requirements. Among other things, they must take appropriate technical and organisational protective measures to ensure that the AI system is used in accordance with the instructions for use. The AI system must be supervised by competent, trained persons and monitored in accordance with the instructions for use.
Practical tips
- The development of "AI literacy" and human oversight within the organisation is key to meeting this requirement. "AI literacy" is mandatory from 2 February 2025. Organisations should therefore ensure at an early stage that personnel are available who have the necessary expertise, training and authorisation to supervise high-risk AI. Due to the wording ("natural persons"), external service providers can probably also be used for this purpose.
- The organisation must also ensure the high quality of input data by only entering information into the AI that is relevant and sufficiently representative for the intended purpose. This is particularly important to prevent bias and can be challenging, especially in the HR area. In addition, there are reporting, documentation and storage obligations that must be observed, and employees, including their representatives, must be informed in advance of any use in the workplace.
- In Germany, "employee representatives" refers in particular to works councils, even if they are not affected by the use of the AI system as a works council. The co-determination rights existing under the Works Constitution Act, in particular under sections 90(1) No. 3, 95(2)(a) or 87(1) No. 6 BetrVG, must continue to be observed by the employer in addition to the AI Act.
- There is also a special obligation to provide information when high-risk AI systems make decisions about natural persons or assist in these decisions. In such cases, deployers need to tell people about the use of high-risk AI in decisions to which they are subject.
Is there an exemption for high-risk AI systems?
Article 6(3) of the AI Act provides exemptions from the designation as high-risk for some systems which initially fall within the high-risk category. An AI system referred to in Annex III is not considered high-risk "where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making". In the HR sector, an AI system intended to perform a narrow procedural task – for example, one that analyses CVs (so-called parsing) according to grades – could fall within the Article 6(3)(a) criterion and be exempt.
Organisations need to analyse whether their AI systems that appear to fall within the high-risk category benefit from the Article 6 exemption. If they consider the exemption does apply, they need to document this and be prepared to hand over the assessment to the relevant authorities on request.
AI with limited or low risk
If organisations use lower risk AI systems, they will still be subject to compliance obligations, albeit less stringent ones. For example, they must ensure that the staff who work with the AI have a sufficient understanding of AI. Transparency obligations must be observed when using certain AI. If image, audio or video content is created via AI, it must be disclosed that this content has been created or modified by AI. The same applies to texts that are published for public information purposes. If a company uses ChatGPT or any kind of similar AI system for this purpose, it will have to comply with these transparency obligations in future.
AI and data protection
When using AI, data protection must also be taken into account. If personal data is entered into the AI, all requirements of the GDPR apply with no exceptions – even after the AI Act comes into force. This means that a lawful basis is required to justify the processing and, where special data is processed, an exemption to the general prohibition on processing special data must apply. Transparency obligations on top of those in the AI Act will apply and comprehensive information must be provided about the data processing.
The biggest challenge is likely to be implementing the GDPR data subject rights. This includes, for example, the employee's right to have data erased and to obtain information about the data processing. If personal data is entered into the AI, it may not be possible to erase it without resetting the system entirely, which would leave the employer unable to fully fulfil the rights of data subjects. Another hurdle for the use of many AI systems in HR will be Article 22 GDPR, which states that everyone has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or similarly significant effects on them. This can apply to the use of many AI systems, especially those that make decisions about the conditions or termination of employment relationships or promotions.
Above all, non-compliance carries a risk of fines and claims for damages from data subjects. Organisations must be aware of this and decide whether and which data records they enter into the AI, and what risks they are prepared to accept.
What next?
The AI Act poses major challenges for deployers and providers using AI in the HR sector, particularly in relation to high-risk AI systems and especially where personal data is processed. Close cooperation between HR, legal and data protection as well as commercial functions will be essential in order to prepare for and fulfil the requirements of the AI Act.