In contrast to the EU, which approved the Artificial Intelligence Act in March 2024, the UK government recently reiterated that it has no plans to introduce domestic AI legislation in its response, published in February this year, to its own 2023 AI white paper.
Instead, it confirmed that it would continue with its intention to bring in a voluntary regulatory regime for AI, while acknowledging that "the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured". Since then, the Department for Science, Innovation and Technology has published new guidance on procuring and deploying AI technology and systems responsibly in the HR and recruitment sector.
Providing an alternative view, and focussing on the potential effects of AI in the workplace, the TUC formed a taskforce in 2023 which brought together experts in politics, HR, law, technology and the voluntary sector to prepare a draft bill aimed at protecting workplace rights. The draft Artificial Intelligence (Employment and Regulation) Bill (Bill) was published last month and the background to the Bill can be found here.
The Bill aims to set out protections and rights for workers, employees, jobseekers and trade unions, as well as obligations for employers and prospective employers, in relation to decision-making at work that is based on artificial intelligence systems. It aims to provide for the fair and safe operation of AI systems where there is 'high-risk' decision-making, an approach similar to that taken by the EU in its AI Act. The Bill defines high risk as being where there are "legal effects or other similarly significant effects".
Key provisions of the Bill include that:
- the employer ensures that only safe AI systems are introduced into the workplace, carries out detailed risk assessments of AI decision-making and publishes a register of the AI decision-making systems in operation
- employees, workers and unions are fully consulted, involved and informed before high-risk AI decision-making systems affecting employees are introduced, and again on a rolling 12-month basis. This statutory right would mirror the existing collective redundancy consultation obligations. All parties would have access to information about how the AI system is operating and would have a right to human review of AI decision-making
- emotion recognition technology used to the detriment of workers, employees and jobseekers is banned.
Legal rights in the Bill which would go beyond current UK employment law include:
- a right for unions to be given data about union members that is being used in relation to workplace AI decision-making
- employers would bear the burden of showing that there had been no AI-based discrimination, but would have a defence if they could show that they had properly audited the AI system
- guidance on AI and the workplace to be issued by the EHRC, ACAS and the ICO
- a right for employees not to be unfairly dismissed by an AI system
- a potential right for employees to disconnect outside agreed working hours, drawing on precedents from Europe and Australia.
The Bill, while seeking to protect the rights and interests of employees, appears to go further by providing new rights for trade unions in relation to AI systems and employee data and, somewhat tangentially, by suggesting a new 'right to disconnect'. It is not clear whether, or in what form, the Bill will make its way through Parliament, but more detailed regulation is likely to be needed as the role of AI in the workplace develops. This has been considered recently by our Technology, Media and Communications group: Is UK AI regulation on the way in after all?
For now, other, potentially more pressing issues face the government, including the prospect of a general election in the months ahead.