10 July 2023

AI and data – 1 of 4 Insights

Will the ICO be the 'lead regulator' for AI in the UK?

Debbie Heywood looks at the UK government's vision for regulating AI and the role of the ICO and data protection law.


Debbie Heywood

Senior Counsel – Knowledge

The UK government's plans for regulating AI

In March 2023, after some delay, the UK government published its White Paper – 'A pro-innovation approach to AI regulation' – which sets out a framework for the UK's approach to regulating AI. The government has decided not to legislate to create a single function to govern the regulation of AI. It has elected instead to support existing regulators in developing a sector-focused, principles-based approach. Regulators including the ICO, the CMA, the FCA, Ofcom, the Health and Safety Executive, the MHRA and the Human Rights Commission will be required to consider the following five principles to build trust and provide clarity for innovation:

  • safety, security and robustness
  • transparency and explainability
  • fairness
  • accountability and governance
  • contestability and redress.

UK regulators will publish non-statutory guidance, potentially jointly, over the next year, which will also include practical tools like risk assessment templates and standards. The guidance will need to be pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative, underpinned by the following four core elements of the government's AI framework:

  • defining AI based on its unique characteristics to support regulator coordination
  • adopting a context-specific approach
  • providing a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities. The government expects to introduce a statutory duty on regulators to have due regard to the five AI principles, following an initial period
  • delivering new central government functions to support regulators in delivering the AI regulatory framework, including by horizon scanning and supporting an iterative regulatory approach.

Barely had the White Paper been published when it seemed that the government's attitude might be shifting, as calls for urgent regulation of AI increased at both a national and a global level. The government's tone appears to be changing to one of greater caution, with Rishi Sunak saying "guardrails" are needed. The Prime Minister is now positioning the UK as charting a 'middle way' between over- and under-regulation, and is hoping to make the UK "not just the intellectual home, but the geographical home of global AI safety regulation". The UK is also reportedly advocating setting up a global AI watchdog in the UK, modelled on the International Atomic Energy Agency which oversees the safe use of nuclear energy. To this end, the UK will host a global summit on AI safety in the autumn, and it is not inconceivable that policy might change at that point.

Data protection law - an inspiration for AI regulation?

While a number of regulators are in the frame in the White Paper, given that many AI systems are trained using personal data and many generate personal data, the UK's data protection regulator, the ICO, looks set to be at the forefront of ensuring that data is used in AI in a way which protects individuals. The ICO's involvement with AI issues to date provides an indication as to how the government's approach might work.

The UK GDPR (as with its EU predecessor) is principles-based and many of the principles overlap with the government's five AI governance principles. It is striking quite how much the essential elements of data protection law inform the AI governance principles. Crucially fairness, transparency, accountability and security are central to both (as we explore in more detail in the other articles in this edition) although they may not always mean exactly the same thing. This potentially makes current data protection law a powerful tool for policing AI – something clearly recognised in the government's policy choices.

In fact, the ICO has already produced significant guidance on AI, which is a strategic priority under its ICO25 strategy. This is in addition to its work on various panels and boards nationally and internationally, its regulatory sandbox, its workshops, and its role in advising the government on incoming legislation. On 19 June, the ICO called on businesses to address the privacy risks of generative AI before adopting the technology, saying it will carry out tougher checks on whether organisations have complied with data protection law before and when using generative AI.

Data protection law - a road block to AI innovation?

Familiar though the government's AI principles may be to those used to dealing with data protection law, there are principles fundamental to EU and UK data protection law which are not included – notably those of data minimisation, purpose limitation and storage limitation – although arguably fairness may come into play in relation to them. These principles are at the core of data protection regulation but are likely to stand in the way of many AI models, potentially placing AI innovation and data protection in direct opposition.

Another issue is whether it is realistic to expect organisations to comply with data protection principles in the context of AI. As we point out here, it is hard enough for a simple e-commerce website to comply with transparency obligations, or for a small business to respond to an employee subject access request. How hard, then, is it to explain to users how their data is being processed and on what lawful basis, when complex AI models are involved? Many organisations with multiple data operations or using innovative technology are already having to grapple with these issues, but they are only set to become more complex as use of AI becomes more prevalent.

The ICO's views on the government's approach

In its response to the AI White Paper, the ICO underlined its suitability for participating in regulating AI, stressing its independence, its history of providing guidance, enforcing the law, running sandboxes and working with other regulators, and its commitment to a pro-innovation, risk-based approach to regulation and enforcement. 

While broadly welcoming the government's approach, the ICO also asked for clarification on a number of issues including:

The role of regulators

The AI White Paper suggests creating a central function to oversee the AI regulatory landscape, convening regulators to produce joint regulatory guidance and a joint regulatory sandbox. The ICO stresses that it is the regulators themselves who must produce guidance in order to provide organisations with certainty, and asks for clarification on the respective roles of government and regulators in issuing guidance and advice. The ICO suggests the Digital Regulation Cooperation Forum (DRCF) should play a central role.

AI principles

As noted, the AI principles are similar to, but not entirely the same as, some of the principles in Article 5 UK GDPR. The ICO underlines the need for the AI principles to be interpreted in a way compatible with the data protection principles, to which end it suggests:

  • The fairness principle should apply not only to the use of an AI system but also to its development.
  • In the context of contestability and redress, the ICO suggests that organisations using AI, rather than regulators, should be tasked with clarifying existing routes to contestability and redress and with implementing proportionate measures to ensure that the outcome of the use of AI can be contested, since they have oversight of their own systems. Regulators, the ICO suggests, are better placed to make people aware of their rights in the context of AI.
  • The interaction with Article 22 UK GDPR, which relates to decisions made by solely automated means which have a significant effect on individuals, should be clarified. The ICO suggests making clear that where Article 22 is engaged, providing individuals with a justification is a requirement, not merely a consideration.

Interestingly, the ICO's response did not discuss in any detail the possibility of data protection law coming into conflict with AI innovation, for example around issues like data minimisation and retention (although the ICO has raised this elsewhere).

Guidelines and sandboxes

The AI White Paper envisages regulators working together on guidelines and regulatory sandboxes. The ICO suggests that sector-specific or case-specific guidelines would be most useful and recommends that the government carry out research to help prioritise what organisations would value. 

Based on experience with its own Regulatory Sandbox, Innovation Advice and Innovation Hub, the ICO recommends the following elements be incorporated into the AI sandbox:

  • The sandbox should cover all digital innovation not just AI as inquiries are unlikely to be limited to AI.
  • The sandbox should be focused on providing timely advice aligned with AI development lifecycles, and should be designed to support all businesses seeking clarity on the law, rather than a handful of those looking for specific regulatory authorisation prior to launch.
  • To help prioritise support to innovators, there should be a focus on: the degree of innovation relative to existing products and services; the degree of regulatory barriers faced or support needed; and the potential for wider economic, social or environmental benefit.

A new consistency and cooperation procedure?

It is worth noting that the ICO supports the level of inter-regulatory cooperation called for under the AI White Paper, while also suggesting an even more sector-based approach, and recommending that "sector- or case-specific guidance will be of greater usefulness to AI developers than joined-up guidance on each non-statutory principle. The latter may be too high level, and therefore require a large degree of interpretation…".

The ICO, unsurprisingly, highlights the DRCF as the focal point for regulatory cooperation but, while raising the question of the funding and costs associated with increased work on AI, it says little about two other potential issues. First, although the ICO comments on the potential for the AI principles to conflict with data protection principles, it does not say much about what would happen if the separate sector guidance it advocates were inconsistent. Joint regulation would presumably produce a coherent approach, whereas it may be more difficult to ensure that separate guidance produced by the various regulators involved does not result in conflict or a lack of clarity. Second, there is no suggestion that the guidance be placed on a statutory footing. Without this, it remains guidance and cannot be relied on when enforcing in the courts (although it may prove influential). Perhaps the ICO feels this is unnecessary given the considerable enforcement tools already at its disposal.

In terms of how different regulators can work cooperatively together, the GDPR's consistency and cooperation mechanism (which the ICO was part of until the end of the Brexit transition period), is arguably an example of good practice. At any rate, the ICO is well versed in having to cooperate and reach common positions with other regulators, albeit involving the same piece of legislation. 

It is clear then that the ICO is familiar not only with a principles-based approach (and with the concepts behind the AI principles) but also with regulatory cooperation. Add to this the fact that (as the ICO points out in its response to the White Paper): "even though not all AI systems process personal data, a substantial portion, and particularly the ones implicit in the Government’s framing of the AI White Paper principles, will", and it seems entirely possible that the ICO will effectively be the UK's 'lead' AI regulator.
