20 November 2023
Radar - November 2023 – 1 of 3 Insights
There remains widespread disagreement as to whether, and if so when, AI will pose an existential threat, but it's undeniable that safety issues, particularly around disinformation, privacy and cyber security, are already in evidence. Consequently, and perhaps inevitably with such new and rapidly evolving technology, there is also disagreement as to how best to address current and potential risks without stifling innovation.
The first international AI Safety Summit hosted by the UK at Bletchley Park took place on 1-2 November 2023. It attracted political heavyweights including the EU's Ursula von der Leyen, UN Secretary-General António Guterres, US Vice President Harris (although not President Biden himself), as well as representatives from China's Ministry of Science and Technology. Academics and tech leaders, notably OpenAI's Sam Altman and X's Elon Musk, were also in attendance. There were, though, notable absences including the President of France and the German Chancellor, and there have been complaints that civil society and campaign groups were not afforded a sufficient presence.
A number of key developments and initiatives were announced around the summit, including:
The Prime Minister announced the world's first AI Safety Institute to advance knowledge of AI safety, evaluate and test new AI and explore a range of risks. In his speech, the Prime Minister also reiterated the UK's approach to regulating AI set out in its AI White Paper. DSIT published a discussion paper to support the summit and a report evaluating the six-month pilot of the UK's AI Standards Hub. In addition, leading frontier AI firms responded to the government's request to outline their safety policies.
President Biden issued an Executive Order on safe, secure and trustworthy AI (EO). The EO calls on Congress to pass bipartisan data privacy legislation and sets out a number of privacy-related directions alongside requirements across a range of other areas.
Vice President Harris subsequently announced a range of commitments and policy developments at the summit, including the establishment of an AI Safety Institute intended to operationalise NIST's AI risk management framework, creating guidelines, tools, benchmarks and best practice recommendations to identify and mitigate AI risk. It will also enable information sharing and research, including with the UK's planned AI Safety Institute. The VP also announced draft policy guidance on US government use of AI, and the US made a political declaration on the responsible military use of AI and autonomy.
The G7 leaders have agreed International Guiding Principles for all actors in the AI ecosystem and an International Code of Conduct for developers of advanced AI systems as part of the Hiroshima AI process.
The guiding principles document is intended to be a 'living document' building on the existing OECD AI principles. It currently sets out 11 non-exhaustive principles to help "seize the benefits and address the risks and challenges brought by AI". They are intended to apply to all AI actors when and as applicable, covering the design, development, deployment and use of advanced AI systems. They include commitments to mitigate risks and misuse and to identify vulnerabilities, to encourage responsible information sharing, incident reporting and investment in security, and to create a labelling system enabling users to identify AI-generated content.
The G7 suggests organisations follow the voluntary Code of Conduct which sets out a list of actions to help maximise benefits and minimise risks of advanced AI systems with actions for all stages of the AI lifecycle.
The latest round of trilogues on the EU's draft AI Act was held on 24 October 2023. Agreement was reportedly reached on provisions for classifying high-risk AI applications and on general guidance for using enhanced foundation models. Since then, there have been reports suggesting new disagreements around regulation of foundation models which threaten to derail the legislation. The next and potentially final round of trilogues is planned for 6 December. In her speech at the summit, Ursula von der Leyen not only highlighted the EU's progress with the AI Act but also focused on the EU's plans to set up a European AI Office to deal with the most advanced AI models with an oversight and enforcement capacity. A high-level meeting in Brussels is planned in January 2024 to strengthen EU cooperation on AI development.
In the meantime, the European Data Protection Supervisor (EDPS) has published an Opinion on the AI Act setting out its final recommendations. Much of the Opinion relates to the EDPS's role as notified body, market surveillance authority and competent authority for the supervision of the provision or use of AI systems, in respect of which it asks for a number of clarifications. The EDPS also calls for privacy protections to be at the forefront of the legislation, and for a right for individuals to lodge complaints about the impact of AI systems on them, with the EDPS explicitly recognised as competent to receive complaints alongside DPAs. The EDPS recommends that DPAs be designated as the national supervisory authorities under the AI Act, cooperating with authorities that have specific expertise in deploying AI systems.
The UN announced the launch of a high-level advisory body on AI. This is a multi-stakeholder body intended to undertake analysis and make recommendations for the international governance of AI. The 38 participating experts are drawn from government, the private sector and civil society. They will consult widely to "bridge perspectives across stakeholder groups and networks".
Many will agree with UK Prime Minister Sunak's view that global consensus is the only genuinely effective path to managing potential AI-related doomsday scenarios, but it's important to ask what the summit has really achieved. Getting a wide range of power brokers to sit down and discuss the issues is certainly an important step, and the positioning of the UK as rainmaker has been moderately successful. However, Sunak's communiqué, now signed by politicians from a wide range of countries including the US, China, Nigeria, Canada and Singapore, stops short of calling for specific AI regulation.
This is in line with the UK government's policy outlined in its 2023 White Paper on AI, but at odds with the EU's approach which is to introduce AI-specific legislation. The communiqué is ambitious (calling for international co-operation and the need to be inclusive) but there is no call for specific AI regulation or enforcement, and the 20+ countries which have signed obviously falls far short of global coverage.
However, the summit does appear to be the start of something big – a change in mood music, perhaps. For example, there are commitments for further summits in the years ahead. Significantly, the pledges to establish AI Safety Institutes in the UK and the US and to test AI technology before its release onto the market also indicate a desire for cross-border collaboration on evaluating risks and promoting safety, as well as – in theory at least – collaboration between Big Tech and governments.
Getting to a place of global agreement on AI regulation at this point was always going to be a tough ask. In the first place, there is disagreement as to the nature of the safety issues posed by AI and whether we should be focusing on future existential threats or on the currently destabilising potential of deepfakes and disinformation (or indeed how to effectively focus on both concerns). It's also hard to envisage progress on AI safety regulation keeping up with the pace of technological advances.
The Prime Minister himself acknowledged that the rapid development of technology is in tension with the time and resources required to consult, draft and implement legislation, but there are strong voices calling for some form of international oversight body. Perhaps what form such a body should take will be high on the agenda of the next summit, but for the foreseeable future, a fragmented approach to the safety concerns around AI will persist.
by Emma Allen