In March 2023, after some delay, the UK government published its White Paper, 'A pro-innovation approach to AI regulation', which sets out a framework for the UK's approach to regulating AI. The government has decided not to legislate to create a single function to govern the regulation of AI. Instead, it has elected to support existing regulators in developing a sector-focused, principles-based approach. Regulators including the ICO, the CMA, the FCA, Ofcom, the Health and Safety Executive, the MHRA and the Human Rights Commission will be required to consider the following five principles to build trust and provide clarity for innovation:
UK regulators will publish non-statutory guidance, potentially jointly, over the next year, which will also include practical tools such as risk assessment templates and standards. The guidance will need to be pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative, underpinned by the following four core elements of the government's AI framework:
Barely had the White Paper been published when it seemed that the government's attitude might be shifting, as calls for urgent regulation of AI increased at both a national and a global level. The government's tone appears to be changing to one which is more cautious on the need for regulation, with Rishi Sunak saying "guardrails" are needed. The Prime Minister is now positioning the UK as charting a 'middle way' between over- and under-regulation, and is hoping to make the UK "not just the intellectual home, but the geographical home of global AI safety regulation". The UK is also reportedly advocating setting up a global AI watchdog in the UK, modelled on the International Atomic Energy Agency, which oversees the safe use of nuclear energy. To this end, the UK will be holding a global summit on AI safety planned for the autumn, and it is not inconceivable that policy might change at that point.
While a number of regulators are in the frame in the White Paper, given that many AI systems are trained using personal data and many generate personal data, the UK's data protection regulator, the ICO, looks set to be at the forefront of ensuring that AI data is used in a way which protects individuals. The ICO's involvement with AI issues to date provides an indication as to how the government's approach might work.
The UK GDPR (like its EU predecessor) is principles-based, and many of its principles overlap with the government's five AI governance principles. It is striking quite how much the essential elements of data protection law inform the AI governance principles. Crucially, fairness, transparency, accountability and security are central to both (as we explore in more detail in the other articles in this edition), although they may not always mean exactly the same thing. This potentially makes current data protection law a powerful tool for policing AI – something clearly recognised in the government's policy choices.
This is in addition to work on various panels and boards nationally and internationally, its regulatory sandbox, running workshops, and its role in advising the government on incoming legislation. On 19 June, the ICO called on businesses to address the privacy risks of generative AI before adopting the technology, saying it will carry out tougher checks on whether organisations have complied with data protection law before and when using generative AI.
Familiar though the government's AI principles may be to those used to dealing with data protection law, there are principles fundamental to EU and UK data protection law which are not included – notably data minimisation, purpose limitation and storage limitation – although arguably fairness may come into play in relation to them. These principles are at the core of data protection regulation but are likely to stand in the way of many AI models, potentially placing AI innovation and data protection in direct opposition.
Another issue is whether it is realistic to expect organisations to comply with data protection principles in the context of AI. As we point out here, it is hard enough for a simple e-commerce website to comply with transparency obligations, or for a small business to respond to an employee subject access request. How hard then is it to explain to users how their data is being processed and on what lawful basis, when complex AI models are involved? Many organisations with multiple data operations or using innovative technology are already having to grapple with these issues but they are only set to become more complex as use of AI becomes more prevalent.
In its response to the AI White Paper, the ICO underlined its suitability for participating in regulating AI, stressing its independence, its history of providing guidance, enforcing the law, running sandboxes and working with other regulators, and its commitment to a pro-innovation, risk-based approach to regulation and enforcement.
While broadly welcoming the government's approach, the ICO also asked for clarification on a number of issues including:
The role of regulators
The AI White Paper suggests creating a central function to oversee the AI regulatory landscape, convening regulators to produce joint regulatory guidance and a joint regulatory sandbox. The ICO stresses that it is the regulators themselves who must produce guidance in order to provide organisations with certainty, and asks for clarification on the respective roles of government and regulators in issuing guidance and advice. The ICO suggests the Digital Regulation Cooperation Forum (DRCF) should play a central role.
As noted, the AI principles are similar to, but not entirely the same as, some of the principles in Article 5 UK GDPR. The ICO underlines the need for the AI principles to be interpreted in a way compatible with the data protection principles, to which end it suggests:
Interestingly, the ICO's response did not discuss in any detail the possibility of data protection law coming into conflict with AI innovation, for example around issues like data minimisation and retention (although the ICO has raised this elsewhere).
Guidelines and sandboxes
The AI White Paper envisages regulators working together on guidelines and regulatory sandboxes. The ICO suggests that sector-specific or case-specific guidelines would be most useful and recommends that the government carry out research to help prioritise what organisations would value.
Based on experience with its own Regulatory Sandbox, Innovation Advice and Innovation Hub, the ICO recommends the following elements be incorporated into the AI sandbox:
It is worth noting that the ICO supports the level of inter-regulatory cooperation called for under the AI White Paper, while also suggesting an even more sector-based approach, and recommending that "sector- or case-specific guidance will be of greater usefulness to AI developers than joined-up guidance on each non-statutory principle. The latter may be too high level, and therefore require a large degree of interpretation…".
The ICO, unsurprisingly, highlights the DRCF as the focal point for regulatory cooperation but, while raising the question of funding and costs associated with increased work on AI, it says little about two other potential issues. The ICO comments on the potential for the AI principles to conflict with data protection principles, but does not say much about what would happen if the separate sector guidance it advocates were inconsistent. Joint regulation would presumably produce a coherent approach, whereas it may be more difficult to ensure that separate guidance produced by the various regulators involved does not result in conflict or a lack of clarity. In addition, there is no suggestion that the guidance be placed on a statutory footing. Without this, it remains guidance and cannot be relied upon when enforcing in the courts (although it may prove influential). Perhaps the ICO feels this is unnecessary given the considerable enforcement tools already at its disposal.
In terms of how different regulators can work cooperatively together, the GDPR's consistency and cooperation mechanism (of which the ICO was part until the end of the Brexit transition period) is arguably an example of good practice. At any rate, the ICO is well versed in having to cooperate and reach common positions with other regulators, albeit in the context of a single, shared piece of legislation.
It is clear then that the ICO is familiar not only with a principles-based approach (and with the concepts behind the AI principles) but also with regulatory cooperation. Add to this the fact that (as the ICO points out in its response to the White Paper): "even though not all AI systems process personal data, a substantial portion, and particularly the ones implicit in the Government’s framing of the AI White Paper principles, will", and it seems entirely possible that the ICO will effectively be the UK's 'lead' AI regulator.
Victoria Hordern looks at the UK's proposed regulatory AI principles of transparency, explainability and fairness in the context of the UK GDPR.
Sally Annereau looks at the pillars of accountability and governance for AI systems using or generating personal data
Jo Joyce looks at the UK ICO's approach to security, safety and robustness.