What's the issue?
The UK government published its long-awaited White Paper on AI in March 2023, as we discuss here. Unlike the EU, which is working on top-down legislation in the form of the AI Act, the UK decided not to introduce legislation to regulate AI. Instead, it plans to take a sector-based approach, under which existing regulators, including the ICO, the CMA and the FCA, will be required to take five principles into account to inform their work on AI-related issues, and to publish non-statutory guidance on AI.
What's the development?
Barely had the White Paper been published when it seemed that the government's attitude might be shifting, as calls for urgent regulation of AI increased at both national and global level. Most recently, scientists and tech leaders, including the CEO of OpenAI, published a statement warning that managing the risks posed by AI should be a global priority. In the last month or so, we've seen so much on AI that it's hard to keep track. Here are some of the latest developments, with a focus on the UK, EU and US.
UK
The government's tone appears to be changing to one which is more cautious on the need for regulation, with Rishi Sunak saying "guardrails" are needed. The Prime Minister is now positioning the UK as charting a 'middle way' between over- and under-regulation, and is hoping to make the UK "not just the intellectual home, but the geographical home of global AI safety regulation". The UK is also reportedly advocating setting up a global AI watchdog based in the UK, modelled on the International Atomic Energy Agency, which oversees the safe use of nuclear energy. To this end, the UK will hold a global summit on AI safety in the autumn. It's being touted as the AI version of the COP summits on climate change - we can only hope it makes more progress.
In April, the government announced a Foundation Model Taskforce to carry out research on AI safety with £100m in funding, later backed with further funding across the AI and data science workforce. Two months later, Tony Blair and William Hague (not an obvious pairing) published a report which is highly critical of the government's approach and of what it terms the government's failure "to anticipate the trajectory of progress". They call for the government's advisors on the AI Council and at the Alan Turing Institute to be replaced, saying "the Alan Turing Institute has demonstrably not kept the UK at the cutting edge of international AI developments", and they argue the Taskforce is under-funded and should report directly to the Prime Minister. They recommend a new AI lab modelled on CERN (the European Organization for Nuclear Research) to research and test safe AI "with the aim of becoming a 'brain' for both a UK and an international AI regulator".
Amidst the criticism, there are reports that the AI Council has had its last meeting and that the government is looking to appoint new advisors.
In the interim, the Centre for Data Ethics and Innovation (CDEI) has published a portfolio of AI assurance techniques in collaboration with techUK. It is intended for anybody involved in designing, developing, deploying or procuring AI-enabled systems, and sets out real-world examples of AI assurance techniques being used to support the development of trustworthy AI. The techniques have been mapped onto the principles set out in the government's AI White Paper. They include:
- impact assessments
- impact evaluations
- bias audit
- compliance audit
- certification
- conformity assessment
- performance testing
- formal verification
These are applied across a number of sectors and will be added to over time. To make one of the listed techniques more concrete, a simple sketch of a bias audit follows below.
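The CDEI portfolio is a set of case studies rather than code, but the quantitative core of a bias audit can be illustrated briefly. The following is a minimal sketch, assuming a binary decision system and two demographic groups: it computes each group's selection rate and a disparate impact ratio. The data, group names and 0.8 benchmark are illustrative assumptions, not part of the CDEI portfolio.

```python
from collections import defaultdict

def bias_audit(outcomes):
    """Compute per-group selection rates and a disparate impact ratio.

    `outcomes` is a list of (group, decision) pairs, where `decision`
    is 1 for a favourable outcome (e.g. loan approved) and 0 otherwise.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        favourable[group] += decision

    # Selection rate per group: share of favourable outcomes.
    rates = {g: favourable[g] / totals[g] for g in totals}

    # Disparate impact ratio: lowest rate divided by highest rate.
    # The US 'four-fifths rule' treats a ratio below 0.8 as a red flag;
    # it is used here purely as an illustrative benchmark.
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Illustrative, fabricated decisions - not real data.
sample = [("group_a", 1)] * 80 + [("group_a", 0)] * 20 \
       + [("group_b", 1)] * 55 + [("group_b", 0)] * 45

rates, ratio = bias_audit(sample)
print(rates)                             # {'group_a': 0.8, 'group_b': 0.55}
print(f"disparate impact: {ratio:.2f}")  # 0.69 -> below the 0.8 benchmark
```

In practice, a bias audit of the kind described in the portfolio would go well beyond a single metric, covering data provenance, documentation and governance as well as statistical testing.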
In mid-June, the CDEI also published a report, Enabling responsible access to demographic data to make AI systems fairer. It sets out approaches to accessing demographic data responsibly for bias detection and mitigation. The report suggests that data intermediaries and proxy data may help manage risks, although they will not always be suitable. It underlines that, in the short term, direct collection of demographic data is likely to be the best option in most circumstances, saying this can usually be done lawfully provided care is taken to comply with applicable data protection law. However, the CDEI also sees an opportunity for an ecosystem involving intermediaries and proxies to emerge that offers better options.
On 19 June, the UK's ICO called on businesses to address the privacy risks of generative AI before adopting the technology, and said it will carry out tougher checks on whether organisations have complied with data protection law before and when using generative AI. Businesses are cautioned to "spend time at the outset to understand how AI is using personal information, mitigate any risks… and then roll out your AI approach with confidence that it won't upset customers or regulators". The ICO is signalling that this will be a priority area, saying "businesses need to show us how they've addressed the risks that occur in their context – even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different question (sic) compared with one for a sexual health clinic, for instance".
EU
In mid-June, the European Parliament adopted its negotiating position on the AI Act without further amendment to the provisional version. A number of last-minute amendments were tabled, leading to fears that consensus would collapse, but none were adopted and the vote progressed smoothly. Trilogues between the Council of the EU and the European Parliament are likely to begin in earnest after Spain takes over the presidency of the Council in July. The aim is to reach agreement by November and to get the Act passed by the end of 2023. Read more about the changes proposed by both legislators to the original proposal here.
ENISA, the EU cybersecurity agency, published four reports on AI and cybersecurity in early June:
- A multi-layer framework for good cybersecurity practices for AI.
- AI and Cybersecurity research.
- Cybersecurity and privacy in AI – forecasting demand on electricity grids.
- Cybersecurity and privacy in AI – medical imaging diagnosis.
Separately, the European Commission is reportedly working with Google on an AI standards agreement.
The USA and the G7
Both the UK and the EU are in talks with the USA to align policy. The EU has said it expects to draft a voluntary code of conduct on AI with the USA within weeks, which will be open to other 'like-minded countries' to sign up to.
AI was also on the agenda when the Prime Minister visited President Biden in Washington DC earlier this month. The Atlantic Declaration: a framework for a twenty-first century US-UK Economic Partnership, announced on 8 June 2023, confirms that the US and UK have agreed to accelerate co-operation on the "safe and responsible development" of new technologies including AI.
More interestingly, the US is also reportedly considering setting up a risk-based system of regulation similar to that envisaged by the EU's AI Act. Details are thin on the ground, but it would presumably be more extensive than the US's non-binding Blueprint for an AI Bill of Rights, although Bloomberg reports that there is no agreement on approach.
Meanwhile, the US, supported by the UK and Canada, is reportedly trying to water down a binding worldwide treaty on AI proposed by the Council of Europe. The Council of Europe (not to be confused with the European Council) is a pan-European (not EU) human rights body with 46 Member States, in addition to observers from countries including the USA, Israel, Japan and Canada. The US is proposing that while all public organisations in signatory countries would automatically be covered, each country would have to opt in its own companies. Other US suggestions could weaken the text through national security carve-outs and would focus the treaty more on a set of principles than on restrictions.
Not to be outdone, the G7 announced it will "advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values". Another big statement which is not yet supported by detail.
What does this mean for you?
Obviously, these are just some of the many initiatives under way globally over the last few weeks. And there's the rub. While there is a lot of movement and a growing belief that something needs to be done, there is no common view on whether or how to regulate AI. Given how rapidly the technology is developing and its potential impact on society (for good and/or bad), global consensus is needed but is unlikely to emerge any time soon. We are almost certainly facing a fragmented approach for some time to come, which will inevitably mean legislation and international agreements playing catch-up.