
10 July 2023

AI and data – Part 2 of 4

The UK's proposed regulatory AI principles of transparency, explainability and fairness

Victoria Hordern looks at the UK's proposed regulatory AI principles of transparency, explainability and fairness in the context of the UK GDPR.

Author

Victoria Hordern

Partner


Two of the five core principles set out in the UK government's White Paper on AI focus on, respectively, the information provided to individuals who are subject to AI, and the impact AI can have on them. These principles are (i) appropriate transparency and explainability, and (ii) fairness. It is worth considering them together since they clearly interrelate, much as the first data protection principle under the (UK) GDPR groups together lawfulness, fairness and transparency as underlying principles for the processing of personal data.

What does the White Paper say about these principles?

In its White Paper, the UK government indicates that it expects UK regulators to apply the principles proportionately to address risks posed by AI within their remits. The requirement for transparency and explainability is not absolute – the requirement is to provide "appropriate" transparency and explainability. The government indicates that transparency and explainability are two different aspects. Transparency refers to the communication of appropriate information about an AI system to relevant individuals, whereas explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system.

The White Paper suggests there are at least two audiences that a developer needs to consider – those individuals who may be subject to the AI system and the regulator who needs sufficient information to carry out its regulatory remit. Individuals directly affected by an AI system should be able to access enough information to enable them to enforce their rights. The White Paper also flags the role of technical standards that can provide guidance on available methods to assess, design and improve transparency and explainability.

The White Paper recognises that, as a concept, fairness affects many areas of law including equality, human rights, data protection, consumer and competition law. But, at its heart, fairness means that AI systems should not undermine the legal rights of individuals or organisations and should not discriminate unfairly against individuals. 

Where an AI decision has a high impact outcome (for example, where it is used to manage job, loan or insurance applications), it should be justified. Consequently, the government expects regulators to be able to describe and illustrate what fairness means within their sectors and remits. In particular, regulators should issue guidance on fairness which incorporates compliance with laws such as the Equality Act 2010 and the Human Rights Act 1998, so that AI systems do not produce discriminatory outcomes.

Transparency, explainability and fairness under current law

Transparency and fairness have always been part of EU data protection law (and therefore UK law currently). Data protection law is premised on the requirement that individuals have a right to be told how their personal data is used and a right that their personal data be used fairly. This is set out in the (UK) GDPR under Article 5(1)(a) and expanded upon specifically in Articles 13 and 14, which require a controller to provide a privacy notice with specific information so that an individual can know how their personal data is used.

Fairness is a less concretely defined concept in the (UK) GDPR, which links fairness to the importance of using accurate data and preventing discriminatory effects on individuals (recital 71). These requirements would equally apply to any AI system using personal data now or in the future. Significantly, just in the last few weeks the Centre for Data Ethics and Innovation (part of the UK government's Department for Science, Innovation and Technology) published its report 'Enabling responsible access to demographic data to make AI systems fairer' as a call to enable greater access to diverse data in order to detect and mitigate bias.

There is no express right to an explanation concerning the processing of personal data in all scenarios under the (UK) GDPR. The closest approximation to a right to an explanation is found in Article 22, which gives individuals the right not to be subject to a decision based solely on automated processing where that decision has a significant impact on them.

Clearly, Article 22 is a key right which may be engaged where AI is processing personal data and there is no human intervention or oversight of the decision. It is in this context that an individual has a right to know (under Article 13(2)(f) and, for subject access requests, Article 15(1)(h)) about the 'logic' involved in the decision-making process, and the significance and envisaged consequences of such processing for that individual. While this right is not explicitly labelled as a right to an 'explanation', the Article 29 Working Party (now the European Data Protection Board), in its 2018 guidelines on automated individual decision-making and profiling, interpreted this provision as a requirement for the controller to provide meaningful information about the logic involved. This means that, instead of including a complex explanation of the algorithms used, the information should be sufficiently comprehensive for the individual to understand the reasons for the decision, e.g. the categories of data that have been used, why those categories are considered pertinent, how a profile is built, etc.

Beyond the requirements under Articles 13 and 15, however, data protection law doesn't require a controller to explain in detail why it processes personal data about an individual. The question, therefore, is whether current UK data protection legislation meets the level of transparency, fairness and explainability envisaged by the government's White Paper, particularly given that the requirements under Articles 13 and 15 only kick in where an AI decision engages Article 22 – so only where it results in a solely automated decision which has a legal or similarly significant effect on the individual.

The UK regulator, the Information Commissioner's Office (ICO), has published online guidance on AI and data protection which includes sections on transparency and fairness. In particular, the guidance indicates how the principles of transparency, explainability and fairness are closely linked. It also examines the impact of Article 22 on the fairness principle. The ICO has also commented, in its response to the White Paper, that it expects the UK government to ensure that the AI principles are interpreted in a way that is compatible with the data protection principles, to avoid further burdens for business, and also to ensure that principles such as fairness cover the whole data lifecycle.

Transparency, explainability and fairness in other proposed AI legal frameworks

EU

While the EU is still debating the shape of the AI Act which is currently in trilogue, we can make a number of observations on the place of transparency, explainability and fairness in the draft EU law. In general, the EU's approach is more detailed than the emerging UK approach.

All three European institutions see transparency as vital to the new AI legal framework. High-risk AI systems will be subject to greater transparency obligations. The Commission's draft Article 13 requires sufficient transparency to enable users to interpret the system's output and use it appropriately. The European Parliament's draft aims for transparency that enables providers and users to have a reasonable understanding of the high-risk system's functioning, as well as all technical means concerning the AI system, so that the AI system's output is interpretable by the provider and user. Additionally, the Parliament requires that users of high-risk AI systems are provided with information so that they can explain the decisions taken by the AI system to persons affected by them.

In Article 52 of the Commission's draft, there is a general transparency obligation for AI systems that are intended to interact with individuals: individuals must be told they are interacting with an AI system. Likewise, generative AI systems must inform users that content has been artificially generated or manipulated if it could falsely appear to be authentic or genuine. The Parliament's draft goes further than the Commission's to include a general principle, applying to all AI systems, that AI systems are developed and used in a way that allows appropriate traceability and explainability, while ensuring individuals know they are communicating with an AI system and flagging its capabilities and limitations (Article 4a). Additionally, the Parliament expands on the Article 52 transparency obligations to require disclosure of information concerning who is responsible for decision-making, and information about the rights and processes available to individuals to object or seek judicial redress. Unsurprisingly, the Parliament's general direction of travel in its amendments is to widen and deepen the transparency obligation.

While the Commission's draft does not include significant references to explainability or fairness, the Parliament's draft adds the general principle for all AI systems of "diversity, non-discrimination and fairness", so that AI systems are used and developed in a way that includes diverse actors, promotes equal access etc., while avoiding discriminatory impacts and unfair biases (Article 4a).

USA

On the other side of the Atlantic, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights in October 2022, which is framed around five core principles to guide the design, use and deployment of AI systems in the US. A couple of these principles overlap with the two we are considering here from the UK White Paper, although the Blueprint does not yet flesh out in detail what they will mean in practice. These two principles are Algorithmic Discrimination Protections, and Notice and Explanation.

The Algorithmic Discrimination Protections principle states that individuals should not face discrimination by algorithms and systems.  To comply with this principle, designers, developers and deployers are required to carry out proactive equity assessments to ensure accessibility for vulnerable groups.  In other words, individuals should be subject to fair processing.

With respect to Notice and Explanation, designers, developers and deployers should provide generally accessible, plain language documentation so that an individual can know that an automated system is being used. Furthermore, the information provided must include explanations of outcomes that are clear, timely and accessible. Automated systems are required to provide explanations that are technically valid, meaningful and useful to individuals.

Explainability in more detail

While transparency and fairness are reasonably well understood principles due to their presence in data protection law, the concept of explainability is less well developed. However, the UK has taken steps to help organisations understand how they can explain decisions made by AI. In May 2020, the ICO and the Turing Institute published co-badged guidance as part of Project Explain to support organisations in explaining to individuals the processes, services and decisions affected by AI. The guidance was the culmination of research involving citizens' juries, which examined the contexts in which an explanation was vital and those in which accuracy was more important.

The Project Explain guide provides considerable detail to help organisations with the basics of explaining AI, explaining AI in practice and what explaining AI means for an organisation. Within the guidance, six different types of explanation are identified: rationale explanation, responsibility explanation, data explanation, fairness explanation, safety and performance explanation, and impact explanation. We expect the output from Project Explain to be developed and built on by the regulators referred to in the White Paper who will be responsible for regulating AI in their respective sectors.

So what are the requirements around these principles likely to be in the UK?

Transparency and explainability will be core to any AI regulatory programme that develops in the UK, EU and US. Where an AI system is high risk, we can expect a more sustained emphasis on these requirements. Clearly if the risk to individuals is greater, there is a greater requirement for transparency, explainability and fairness, and the UK regulators charged with developing guidance will be expected to reflect this.

There is, however, a danger that individuals are provided with so much information, and such overwhelming technical explanations, that they are unable to understand it and make informed decisions. How can non-specialists be expected to understand a complex AI system through the transparency and explainability requirements? And, perhaps controversially, does it matter if they do not? If there is a register of AI systems, or some sort of traffic light system (indicating the level of risk associated with an AI system) which helps to show at a glance what the implications are for an individual, should that be enough? Then there is the question of whether the requirement for explanation should include or extend to justification. Where it can be difficult for a vulnerable individual to understand an explanation about a decision, should they have the right to ensure that any decision made about them using AI is justified (in other words, that it was correct and fair)?

Producing guidance requiring businesses to demonstrate justification for the decisions an AI system makes could well become complex for regulators. We can expect that the requirements that will emerge under the UK AI framework will expand beyond the existing (UK) GDPR obligations for transparency, fairness and explainability.
