Artificial intelligence (AI) is becoming increasingly important for the German economy. Companies are using it more and more to simplify and accelerate processes. In particular, its use in the HR sector, for example in application processes, has expanded significantly. While the technology has been advancing steadily, the law has been lagging behind. Recently, however, a lot has happened: In Germany, the Works Council Modernisation Act (Betriebsrätemodernisierungsgesetz) was passed, which (among other things) covers the participation rights of the works council concerning the use of AI. Moreover, the EU Commission has proposed new rules on AI. The AI Act will have significant implications for companies wishing to use AI in their HR work. The following article shows what companies need to be prepared for.
Use of AI in human resources management
AI should make HR management easier, faster and more objective. In HR management, too, increasing datafication enables the use of learning algorithms and similar AI applications. As a result, more and more HR processes can be automated, for example the processing of employee inquiries, the search for suitable candidates or the coordination of training content. In addition, probability predictions can be generated, which can be helpful in HR decision-making. In the future, the technology should enable companies to make predictions about "what will happen." However, some recent cases have shown that there are also dangers associated with the use of AI. For example, algorithms can be "biased" and discriminatory. Due to a flawed data basis or incorrect programming, algorithms can lead to indirect discrimination against applicants, e.g. because of their gender. One case involving an online retailer became particularly well known: the application tool used by the company tended to filter out female applicants because the underlying algorithm had concluded that it was mainly men who were enthusiastic about the company and that they were therefore the more suitable applicants.
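To make the mechanism behind such indirect discrimination tangible, the following minimal Python sketch trains a simple scoring model on synthetic "historical" hiring data in which past hires skew male. It is purely illustrative and is not the retailer's actual system; all variable names and the data are invented for the example.

```python
# Purely illustrative sketch of how biased training data produces a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic "historical" applicant data: skill is what should matter ...
skill = rng.normal(size=n)
# ... but the data also contains a proxy for gender (1 = female, 0 = male).
is_female = rng.integers(0, 2, size=n)

# Biased historical labels: past hiring favoured men regardless of skill.
hired = (skill + 1.5 * (1 - is_female) + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

print("learned weights (skill, female proxy):", model.coef_[0])
# The weight on the female proxy comes out strongly negative: equally skilled
# female applicants receive systematically lower scores - indirect discrimination
# learned from the data, not explicitly programmed.
```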
The draft AI Act now presented by the Commission aims to regulate the legally compliant (non-discriminatory) use of AI. What does this mean for AI systems used in the HR sector?
The key takeaways of the AI Act from an HR perspective
Which HR AI systems are covered by the AI Act?
The Commission classifies AI systems into four risk levels, ranging from minimal to high risk. According to the AI Act, AI systems intended to be used for the recruitment or selection of natural persons, for making decisions on promotion and termination of work-related contractual relationships, for task allocation, or for monitoring and evaluating the performance and behaviour of persons in such relationships are classified as high risk, as these systems can have a tangible impact on the future career prospects and livelihoods of the individuals concerned.
Given the broad definition of AI in the AI Act, which covers all types of algorithmic decision-making and recommendation systems, a large number of HR systems already in use today are likely to fall within the scope of the AI Act. There is hardly a company left that does not use at least "simple" algorithmic decision-making systems in the HR area, for example to optimise job advertisements or analyse CVs. Providers and users are therefore well advised - as a lesson learned from the GDPR reform - to check at an early stage which HR systems they use and whether these conform with the AI Act.
Who is subject to the obligations arising from the AI Act?
The AI Act imposes obligations on all participants along the AI value chain, with most obligations applying to providers (e.g. the developer of a CV analysis programme) and users (e.g. a bank purchasing this programme). It should be noted that the AI Act does not apply to purely private, non-commercial use.
What requirements must HR AI systems satisfy?
Before high-risk AI systems can be placed on the market or otherwise put into service in the EU, they must undergo a conformity assessment procedure. For high-risk AI, risk management systems must be established, applied, documented and maintained throughout the lifecycle of the AI system. Providers must comply with strict requirements in this respect before they are allowed to place high-risk AI systems on the market: Are adequate risk assessment and mitigation systems in place? Has the AI software been fed with high-quality data sets to avoid discriminatory results? Is there detailed documentation on the AI system and its purpose for the authority, as well as instructions for users? Are the operations of the system recorded so that unusual results can be traced? Is there human oversight that can monitor the system and switch it off if necessary? Does the system have an appropriate level of accuracy, robustness and cybersecurity?
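As a rough idea of what the record-keeping and human-oversight requirements could look like in practice, the following Python sketch wraps a hypothetical CV-scoring model with an audit log and a switch-off mechanism. All class, method and file names are illustrative assumptions, not terminology from the AI Act or from any real product.

```python
# Hypothetical sketch of logging and human oversight for a high-risk HR AI system.
import logging
from dataclasses import dataclass

logging.basicConfig(filename="cv_screening_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

@dataclass
class ScreeningDecision:
    applicant_id: str
    score: float
    recommended: bool

class MonitoredScreeningSystem:
    """Wraps a scoring model with an audit log and a kill switch for human oversight."""

    def __init__(self, model, threshold: float = 0.5):
        self.model = model        # any callable returning a suitability score
        self.threshold = threshold
        self.enabled = True       # a human overseer can take the system out of service

    def disable(self, reason: str) -> None:
        """Human oversight: switch the system off and record the reason."""
        self.enabled = False
        logging.info("SYSTEM DISABLED by human overseer: %s", reason)

    def screen(self, applicant_id: str, features) -> ScreeningDecision:
        if not self.enabled:
            raise RuntimeError("System disabled; the decision must be taken by a human.")
        score = float(self.model(features))
        decision = ScreeningDecision(applicant_id, score, score >= self.threshold)
        # Record every operation so that unusual results can be traced afterwards.
        logging.info("applicant=%s score=%.3f recommended=%s",
                     applicant_id, score, decision.recommended)
        return decision
```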
But users also have obligations. In particular, they must use such AI systems in accordance with the instructions for use accompanying the systems. In addition, to the extent that the user exercises control over the input data, the user must ensure that the input data is relevant in view of the intended purpose of the high-risk AI system. The information from the instructions for use must also be used to carry out data protection impact assessments in accordance with Article 35 of the GDPR.
Does the AI Act contain provisions on co-determination?
As it stands, the AI Act does not provide for any participation or co-determination rights of the works council with regard to the operational use of AI systems. However, this does not mean that the works council does not have to be involved in the introduction and application of HR AI systems. Of course, the co-determination rights set out in the Works Constitution Act (Betriebsverfassungsgesetz) continue to apply. In this respect, the Works Council Modernisation Act has once again strengthened the rights of works councils by stipulating, among other things, that the involvement of an expert is deemed necessary when AI systems are introduced.
The AI Act, should it be adopted in its current form, will pose major challenges for HR departments - even if they are "only" users of AI systems. Companies should therefore gain an early overview of which systems they use and what action, if any, the AI Act requires as a result.