The TUC's report on Artificial Intelligence, published jointly with the AI Consultancy in May 2021, highlights the many ways in which the use of Artificial Intelligence in the employment field could lead to discrimination, and calls for urgent legal reform in this area. The first problem is knowing what you are dealing with: there is no common definition of what AI is and, for an area which has such a pervasive influence on our everyday lives, this needs to be addressed.
Secondly, there is the problem of the 'black box': because the way in which algorithms are designed is opaque, workers do not always know what factors are being used in decisions made about them. This lack of transparency is at the heart of the report's critical appraisal.
What feels like a fringe, possibly academic, area is already making waves in tangible ways. Take, for example, the decision of the Dutch courts against Uber, in which the dismissal by machine of several English drivers was held to be unlawful and their reinstatement ordered. Although the case was brought under the GDPR, which restricts purely automated decision-making, it highlights the traps that exist for both employers and employees in this area. Although AI gives businesses greater opportunities for efficiency, slavish reliance on outcomes produced by machines can undermine that very efficiency, as the Uber case illustrates, as well as being unfair to employees.
The report mentions an Italian employment law decision against Deliveroo (the judgment is not available in English) in which criteria used to select workers for jobs were found to be indirectly discriminatory against women. Although the details of the case are not available, it is not hard to imagine how criteria like 'availability' and 'responsiveness', whilst apparently neutral, might disadvantage women more than men. Employers are used to considering whether redundancy selection criteria are in any way discriminatory, but are less used to thinking about whether terms of engagement, as applied to workers, have a discriminatory impact.
Since the employment relationship is one in which the employer's obligation of trust and confidence plays a key part, the report questions whether AI is fit to deliver it. Many decisions require empathy and a human touch. In many scenarios employees also have a right to challenge decisions made about them and to know that their employer has not acted irrationally or perversely. Unless a machine can be trained to observe the duty of trust and confidence (think of the many ways in which handling human resources issues requires a nuanced approach), there is inevitably going to be a clash between human values and a machine-designated "field of values".
Key recommendations from the report include the following:
- a "right to explainability" with regard to 'high risk' decision made by AI and a right to challenge any decision in which a machine plays a part
- an amendment to the Employment Rights Act 1996 to provide a right not to be dismissed or subjected to a detriment as a result of the processing of inaccurate data
- making all those in the 'value chain' of provision of AI jointly and severally liable for any discriminatory AI
- statutory guidance on how to prevent discriminatory use of AI in the workplace
- a statutory right to disconnect from work technologies
- the development of ethical principles in relation to AI use in the workplace
- a duty to consult unions in relation to 'high risk' decisions involving AI in the workplace.
Given that the EU is also proposing a regulation on the safe use of AI, seeking to ensure an EU-wide ethical framework and to limit the use of 'high risk' AI, we are likely to see something akin to the GDPR evolve in this sphere. Because the area is complex, regulation will not happen overnight, but change is definitely coming.