
9 May 2023

AI – are we getting the balance between regulation and innovation right?

The UK's approach to regulating AI

Debbie Heywood (not ChatGPT) looks at the evolution of the UK's policy on regulating AI.

Author

Debbie Heywood

Senior Counsel – Knowledge


The UK government's White Paper on AI, published in March 2023, sets out the ambition of being "the best place in the world to build, test and use AI technology".  The European Commission's 'European approach to artificial intelligence' sets out the aim of "making the EU a world-class hub for AI".  The UK and EU are not alone in their ambitions, so the AI race is on.  So much so that the Future of Life Institute recently published an open letter from signatories including Elon Musk and Steve Wozniak, calling for a six-month moratorium on the development of high-functionality AI to allow the world to decide how to ensure that AI serves rather than destroys humanity.

Opinions differ as to how far off we are from sentient or 'superintelligent' AI, capable of outwitting and potentially wiping out humans, but the potential for AI to radically change our world is undisputed and evolving rapidly.  Given how long it's taking individual jurisdictions to develop AI policy (it seems every day brings a new consultation or report), the thought that six months would be sufficient to gain worldwide consensus is wildly optimistic, particularly when there is so much to be gained economically by coming in first.  And yet some governments are trying to ensure that AI is developed within an ethical framework – they just don't necessarily agree on what that looks like – you can find out more about the various approaches here.

The European Commission proposed the Artificial Intelligence Act in April 2021 (find out more here).  It sets out an overarching framework for governing AI at EU level, imposing requirements and obligations on developers, deployers and users of AI, together with regulatory oversight.  The framework is underpinned by a risk-categorisation system under which 'high risk' systems are subject to the most stringent obligations and 'unacceptable risk' AI is banned.  The EU is hoping to pass the legislation by the end of the year.

China's approach, by contrast, is both more fragmented and potentially more controlling, with rules being introduced requiring prior security approval for consumer-facing generative AI.

The UK has taken longer to arrive at its own approach, and it has turned out to be rather different from the EU's.  One of the most difficult aspects of the EU's AI Act is how to define AI, and then how to allocate risk categories.  The UK's answer is to take a principles-based, sector-focused, regulator-led approach instead of creating umbrella legislation which requires a host of definitions that may quickly become outdated.

The UK's road to regulating AI

The UK has been increasingly focusing on AI over the last decade.  The Alan Turing Institute, the UK's largely government-funded institute for data science and AI, was founded in 2015 to "make great leaps in data science and AI research in order to make the world better".  Since its inception, it has worked not only on the research and technical side, but also on ethical questions, liaising closely with government and regulators like the UK's Information Commissioner.

The UK published its National AI Strategy in September 2021, setting out a ten-year plan to "make Britain a global AI superpower".  In line with this, in January 2022, DCMS announced that the Alan Turing Institute, supported by the British Standards Institution and the National Physical Laboratory, would pilot a new AI Standards Hub intended to increase the UK's contribution to the development of global AI technical standards.

In July 2022, DCMS announced its AI Action Plan, again as part of its National AI Strategy.  An accompanying AI paper set out proposed rules based on six principles for regulators to apply with flexibility, in order to support innovation while ensuring the use of AI is safe and avoids unfair bias.  Rather than centralising AI regulation, the government proposed allowing different regulators to take a tailored, more contextual approach to the use of AI, based on sandboxes, guidance and codes of practice.

Separately from the AI Action Plan, the UK government published a response to its 2020 consultation on AI and IP in July 2022, as we discussed here.  The consultation looked at three areas:

  • copyright protection for computer-generated works (CGWs) without a human author
  • licensing or exceptions to copyright for text and data mining (TDM)
  • patent protection for AI-devised inventions.

The government decided:

  • It will not propose changes to the law regarding CGWs.  Proper evaluation is not yet possible, so this area will be kept under review.
  • There will be a new copyright and database exception to allow TDM for any purpose.
  • For now, no changes are planned to patent law to protect AI-devised inventions.  This area will also be kept under review.  Read more about AI issues and IP here.

In November 2022, the House of Commons Science and Technology Committee launched an inquiry into the governance of AI.  The Committee is looking at how to address risks to the public from the use of AI, and at how to ensure AI is used ethically and responsibly.  Written submissions to a call for evidence were invited by 25 November 2022, including on the effectiveness of the UK's current AI governance framework, areas for improvement, and how AI should be regulated.  The Committee was still gathering evidence at the time of writing and has yet to report.

The AI White Paper

In March 2023, after some delay, the UK government published its White Paper – 'A pro-innovation approach to AI regulation' – which sets out a framework for the UK's approach to regulating AI.  The government has decided not to legislate to create a single function to govern the regulation of AI.  Instead, it has elected to support existing regulators in developing a sector-focused, principles-based approach.  Regulators including the ICO, the CMA, the FCA, Ofcom, the Health and Safety Executive, the MHRA and the Equality and Human Rights Commission will be required to consider the following five principles to build trust and provide clarity for innovation:

  • safety, security and robustness
  • transparency and explainability
  • fairness
  • accountability and governance
  • contestability and redress.

UK regulators will publish non-statutory guidance over the next year, which will also include practical tools such as risk assessment templates and standards.  The guidance will need to be pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative, underpinned by the following four core elements of the government's AI framework:

  • defining AI based on its unique characteristics to support regulator coordination
  • adopting a context-specific approach
  • providing a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities – the government expects to introduce a statutory duty on regulators to have due regard to the five AI principles, following an initial period of implementation
  • delivering new central government functions to support regulators in delivering the AI regulatory framework, including by horizon scanning and supporting an iterative regulatory approach.

Further elements to be considered by regulators are set out in Annex A. 

The government also supports the findings of the Vallance Review published earlier in March, which looked at the approach to regulating emerging and digital technologies.  With regard to AI, Sir Patrick Vallance recommended:

  • the government work with regulators to develop a multi-regulator sandbox, to be operational within six months, supported by the Digital Regulation Cooperation Forum or DRCF (comprising the ICO, CMA, Ofcom and the FCA)
  • the government announce a clear policy position on the relationship between intellectual property law and generative AI to provide confidence to innovators and investors.

Interestingly, while providing for a regulatory sandbox, the AI White Paper does not set out further policy on the relationship between IP and generative AI, although the Intellectual Property Office is working on a code of practice which is expected to be ready by the summer.

The government has also published:

  • a report setting out the evidence supporting its analysis of the impacts of AI governance (supported by the CDEI, which conducted public engagement)
  • a letter from DSIT to the DRCF setting out the DRCF's role under the AI framework: facilitating cross-regulator engagement on developing the AI framework principles, horizon scanning, and establishing a cross-sectoral AI sandbox.

What does a regulator-led, sector approach look like?

Many AI systems are trained using personal data and many generate personal data.  This means that data protection regulators (in countries which have them) are at the forefront of ensuring that AI data is used in a way which protects individuals.  The way they engage with AI issues provides an indication of how the government's approach might work.

The UK GDPR (as with its EU predecessor) is principles-based and many of the principles overlap with the government's five AI governance principles.  This makes current data protection law a powerful tool for policing AI – something clearly recognised in the government's policy choices.

In fact, the UK's regulator, the ICO, has already produced significant guidance on AI, including an AI risk toolkit, guidance on using live facial recognition technology in public places (published in July 2021), and an AI auditing framework and glossary.  This sits alongside two major pieces of guidance – Explaining decisions made with AI (developed with the Turing Institute), which covers transparency issues in some depth, and the ICO's guidance on AI and Data Protection, which was updated in March 2023 to include a significantly expanded section on fairness (among other changes).

The power of data protection regulators to step in to protect individuals recently became clear when the Italian data protection regulator, the Garante, announced an immediate ban (since lifted) on the LLM chatbot ChatGPT, and an investigation into the GDPR compliance of its developer, OpenAI.  Following a variety of responses from national data protection regulators, the European Data Protection Board convened a task force to share information and ensure a consistent enforcement approach.

Of course, as the White Paper recognises, the ICO is not the only regulator in town.  Competition law, financial services law, human rights law and other areas can all play their part – the FCA, for example, is about to publish the response to its consultation on the impact of AI on its work.  Similarly, the MHRA has published a roadmap clarifying, through guidance, the requirements for AI and software used in medical devices, and is already developing further guidance (find out more here).  The regulators themselves are broadly in favour of playing a leading role in regulating AI, although both the ICO and the CMA have expressed concerns around funding and coordination.

The government's approach does raise concerns about contradictory guidance being produced by different regulators, and about what happens when an AI system falls within the purview of more than one regulator.  And what about a system for which there is no obvious regulator?  The government is looking to address these coherence questions and is currently consulting on how best to create a coordinated approach, with the DRCF likely to play a leading role.

Another issue is that the guidance produced by relevant regulators will not be statutory, which means it will not be legally binding; while it may be influential in court proceedings, there will be no obligation to take it into account.  This leads to questions around enforcement and what incentive there will be for businesses to comply.  Enforcement will only be possible where other laws (rather than a specific AI law) have been breached, and the pace of AI development is arguably considerably faster than, say, an ICO or CMA enforcement action.

What happens next?

The government will monitor the effectiveness of this policy and of the resulting guidance, and consider whether it is necessary to introduce legislation to support compliance with the guidance.  It intends to publish an AI regulatory roadmap which will set out plans for establishing central government functions for the four elements of the AI framework.  The government also plans to publish a draft AI risk register for consultation, an updated roadmap and a monitoring and evaluation report some time after March 2024.

Time will tell whether the EC's top-down approach is more successful than the UK's lighter-touch one.  While too much regulation can certainly hamper innovation, there's a lot to be said for certainty, but this is difficult to achieve in such a rapidly developing environment.  The EU's AI Act does attempt to future-proof itself but is naturally more prescriptive when set against a jurisdiction with no dedicated AI law.

There is, however, more than one way to measure success.  An ethical framework for AI is vitally important – not just for economic success, but also for trust and adoption.  There has already been pushback against the UK government's approach, with calls for more specific legislation.  Plenty of people argue that it is better and safer to have a firm, holistic framework than a patchwork of non-binding guidance underpinned by laws which are not directly related to AI.  Ideally, this would be agreed at a global level – many argue that we will ultimately need a supranational body to regulate AI in the same way we try to tackle nuclear proliferation, but achieving that, particularly in today's geopolitical climate, currently seems rather less likely than creating sentient AI.
