25 April 2024
The UK government confirmed in February that it had no plans to introduce AI-specific legislation, as we discussed here. Two months is, however, a long time when it comes to AI, and there are now reports that work has begun on draft legislation, most likely addressing issues relating to large language models rather than AI applications more broadly.
According to the Financial Times, details are thin on the ground, but there is speculation that the government may seek to make mandatory the current voluntary agreements to submit large language model (LLM) algorithms to a safety assessment process. There are also suggestions that the UK will consider amending copyright legislation to allow organisations and individuals to opt out of having their content scraped for LLM training.
So why the change in approach now? Prime Minister Sunak has been adamant that the UK will not rush to regulate AI, and a spokesman confirmed that this remains the government's policy. However, the global mood on AI has been shifting as safety concerns increase. The EU is already leading the way: its AI Act has almost completed the legislative process, with only a few formalities remaining before publication. It will then take three years to apply in full, although some obligations will apply from six months, and others from a year, after it comes into force.

Meanwhile, some are too impatient to wait for government or Parliamentary action. Notably, the TUC has taken the unusual step of publishing its own AI Bill, drafted by multiple stakeholders. The TUC Artificial Intelligence (Employment and Regulation) Bill sets out proposals for regulating AI in an employment context and is no doubt intended to influence any incoming Labour government as well as the current one.
Is the UK seeking to catch up with what the EU likes to think will be global standard setting legislation? Or has it become more sceptical about realising the goal of a global arrangement? Perhaps the upcoming AI Safety Summit in France will provide more insight.
Even if nothing comes of these rumours, the UK is continuing to focus on AI in the way the government intends – that is, sector by sector and regulator by regulator, with varying degrees of a joined-up approach. Recent UK policy developments (in March and April 2024) include:
The ICO published a third call for evidence on generative AI, focusing on the accuracy of training data and model outputs, which closes at 17:00 on 10 May 2024. The call considers what accuracy means in a generative AI and data protection context, the impact of accuracy and its link to purpose, and the effect of training data on the accuracy of outputs.
We are certainly a long way off actual draft legislation, and of course we expect a general election, probably in autumn 2024. The outcome of that may well be a change of government, potentially leading to a revised AI policy. In the meantime, regulators, and the government more widely, continue to work on the issues and opportunities raised by AI in their areas.
By Debbie Heywood and Louise Popple