What's the issue?
In February, the UK government confirmed it had no plans to introduce AI-specific legislation, as we discussed here. Two months is, however, a long time when it comes to AI, and there are now reports that work has begun on draft legislation, most likely addressing issues relating to large language models rather than AI applications themselves.
What's the development?
According to the Financial Times, details are thin on the ground, but there is speculation that the government may seek to make mandatory the current voluntary agreements to submit large language model (LLM) algorithms to a safety assessment process. There are also suggestions that the UK will consider amending copyright legislation to allow organisations and individuals to opt out of having their content scraped by LLMs.
So why the change in approach now? Prime Minister Sunak has been adamant that the UK will not rush to regulate AI, and a spokesman confirmed that this remains the government's policy. However, the global mood on AI has been changing as safety concerns increase. The EU is already leading the way: its AI Act has almost completed the legislative process, with only a few formalities remaining before its publication. It will then take three years to apply in full, although some obligations will apply from six months, and others from a year, after it comes into force.

Meanwhile, some are too impatient to wait for government or Parliamentary action. Notably, the TUC has taken the unusual step of publishing its own AI Bill, drafted by multiple stakeholders. The TUC Artificial Intelligence (Employment and Regulation) Bill sets out proposals for regulating AI in an employment context and is no doubt intended to influence any incoming Labour government as well as the current one.
Is the UK seeking to catch up with what the EU likes to think will be global standard-setting legislation? Or has it become more sceptical about realising the goal of a global arrangement? Perhaps the upcoming AI Safety Summit in France will provide more insight.
Even if nothing comes of these rumours, the UK continues to focus on AI in the way the government intends – ie sector by sector and regulator by regulator, with varying degrees of a joined-up approach. Recent UK policy developments (in March and April 2024) include:
- A House of Lords Library briefing published on 18 March 2024 discusses the key elements of the Artificial Intelligence (Regulation) Bill 2023-24. This is a private member's Bill and, as such, is unlikely to progress. Among other things, it seeks to establish a new body, the AI Authority, which would have various functions designed to help address AI regulation in the UK. The Bill had its second reading on 22 March.
- On 25 March, DSIT published a responsible AI toolkit of guidance developed by the Responsible Technology Adoption Unit (formerly the CDEI). The toolkit aims to support organisations and practitioners in developing and deploying AI systems safely and responsibly by housing guidance, resources and research in one place. It collates some pre-existing guidance, including on AI assurance, and will be added to over time. The government has also published guidance on Responsible AI in recruitment, which has now been added to the toolkit. That guidance looks at potential risks and suggests a range of assurance mechanisms to manage them, including putting an AI governance framework in place, carrying out impact assessments and bias audits, performance testing, risk assessments, model cards, and training and upskilling employees. Separately, on 11 April, the OECD announced a partnership between its own AI assurance catalogue, which provides a global exchange for AI tools and metrics, and the UK's AI Assurance Portfolio.
- On 27 March, the Institute for Public Policy Research published a report on how generative AI could impact employment in the UK. The report warns that up to 8 million jobs could be at risk due to generative AI and urges the government to develop a jobs-centric industrial strategy to facilitate job transition and ensure the benefits of automation are spread across society.
- The UK and US AI Safety Institutes signed an MoU on 1 April 2024 under which they will work together on AI safety research, including developing tests for advanced AI models and aligning their approaches and evaluation suites for AI models, systems and agents.
- On 11 April, the CMA published an update paper on AI foundation models as part of its review launched in May 2023. The update paper follows an initial report, published in September 2023, which outlined proposed principles to guide the development and deployment of foundation models towards positive competition and consumer protection outcomes. The update paper looks at key changes since the initial report and confirms the final guiding principles. It also sets out three key risks to competition posed by foundation model AI and explains how the CMA's proposals will mitigate them. To help address those risks, the CMA proposes stepping up its use of merger control and taking account of developments in foundation model-related markets when deciding its enforcement priorities under its incoming powers under the DMCC Bill. The CMA's update paper was accompanied by a technical report published on 16 April, and the CMA plans to publish a further update in autumn 2024.
What does this mean for you?
We are certainly a long way off actual draft legislation, and of course we expect a general election, probably in autumn 2024. The outcome may well be a change of government, potentially leading to revised AI policy. In the meantime, regulators, and the government more widely, continue to work on the issues and opportunities raised by AI in their areas.