The accountability principle under the (UK) GDPR places a responsibility on controllers of personal data to ensure the compliance of their processing and to demonstrate that compliance. Accountability becomes particularly important in the context of AI in the UK, as accountability and governance is one of the five core principles set out in the government's March 2023 AI White Paper which sector regulators, including the ICO, will be required to consider when regulating AI that uses or generates personal data.
From a personal data processing perspective, accountability can be supported by the application of established technical and organisational measures over processing operations to safeguard against compliance failures.
The rapid rise of AI systems presents a fundamental challenge to the traditional understanding of what 'good compliance' looks like and how this is evidenced, not least by embedding principles of data protection by design and by default. Accountability within AI systems is a complex and evolving discipline that extends far beyond the scope of this summary. Notwithstanding the challenges, however, an existing understanding of accountability frameworks can help with structuring an approach to responsible personal data governance and risk management in the context of AI.
We can't ignore the technical complexity of AI, and indeed part of accountability from a data protection standpoint is about showing that the issues and risks with AI when processing personal data are properly understood and addressed across an organisation. A potentially wide range of expertise will need to feed in across disciplines including management, technology, software, data, legal, engineering and security, to name a few. It will be important that each works collaboratively and openly with the rest of the group to foster a broader appreciation of the issues outside their own specialism, which should include an understanding of the data protection issues.
Building teams that collaborate with and support each other requires effective top-down communication and direction by decision makers. In this respect it is also the responsibility of management to upskill themselves on the issues with AI systems and data protection, to clearly define the organisation's values and objectives across the organisation and build recognition for defined roles and reporting structures.
This may extend to establishing an AI ethics committee or board (as either an internal or external body). Such a body would have a variety of functions supporting governance considerations (including those set out below), but key among these would be furthering education on the issues, supporting cross-organisation engagement and helping decision makers to frame the organisation's principles and values with regard to AI development and use (including the protections for individuals in respect of any processing of personal data within proposed AI systems).
A core component of data protection compliance involves identifying and assessing the risks posed by the processing of personal data. In the case of AI systems this will mean conducting a prior Data Protection Impact Assessment (DPIA) by which the proportionality of a decision to use AI is explored and justified, and the risks to the rights and freedoms of individuals identified, assessed, mitigated and documented.
Organisations may also wish to consider adapting their existing DPIA process to capture AI system specific risk factors. This would include the approach to training data, the risk of harms from any disproportionate classification of different data subjects (which has implications for the allocation of goods or opportunities within a group to the detriment of others) or the risks of an AI system reinforcing identity bias or prejudice in the discriminatory treatment of different groups of individuals.
Part of risk remediation will include having policies and procedures in place that ensure operational staff have sufficient direction as to their roles and responsibilities. These should be readily available and supported by training.
Risk management policies will need to be implemented, or existing policies updated, to address AI-specific considerations: for example, obtaining and handling AI training and test data, procuring and assessing external software, and allocating roles and responsibilities for validation and independent sign-off of AI system development, deployment and updates (which may also include a role for an ethics committee), as well as ensuring that policies relevant to automated decision making address risks of bias, prejudice or lack of interpretability.
The UK GDPR requires controllers to be transparent with individuals about how their personal data will be collected and processed within AI systems, including how and why such data will be processed, how any decisions made with AI are explained, how long any personal data will be retained and who it will be shared with. For further information about transparency in AI systems see here.
Transparency involves being clear as to the rights individuals have in relation to their personal data, rights which can be relevant across the different stages of the development and operation of AI systems (whether in respect of the collection and use of personal data within training data, for predictive modelling, or as part of subject rights in respect of any automated decision making).
It will be necessary in all contexts of a subject challenge or rights request to assess whether individuals' rights are engaged in respect of specific data. This may be because they are identifiable from the data, or because the nature of the processing has the effect of 'singling out' the individual (for example, by assessing a pattern of behaviour that is specific to one individual even if that person is not directly identifiable from the data). Subject rights will equally apply to AI outputs held by reference to individuals, although in the case of the right to rectification, whether such data is 'inaccurate' may depend on whether the record held is merely a prediction as opposed to a statement of fact.
Last but not least, human review and ongoing monitoring of personal data processing in the context of AI system decision making will be an integral part of risk management across the entire AI system lifecycle. This will include in the training of AI and, where relevant, in taking steps to override specific outputs that are inaccurate or misleading. Oversight of system performance should also occur post deployment to ensure protocols remain consistent (for example, that there is no outcome bias within the system that only emerges over time).
Those organisations which have worked hard to set up effective policies, procedures and accountability trails under the (UK) GDPR will have a major advantage when it comes to using AI involving personal data. In the UK, the approach taken by the government is principles-led and the core principles will be familiar to anyone used to engaging with data protection compliance. The approach under the (UK) GDPR can be rolled out whether or not an AI system uses personal data.
This does not mean there will be no challenges ahead. As discussed here, it is not straightforward to meet standards of explainability and transparency in the context of complex new technology, for example, and some of the data protection principles, notably data minimisation, are likely to be an issue for many AI models. Sandboxes are a key element of the government's approach to regulating AI and, in its response to the AI White Paper, the ICO has stressed the need to be able to provide timely advice both to organisations throughout the development of AI functions and to organisations looking to use AI, which would certainly be helpful.
The advice to organisations developing or looking to use AI, though, is much the same as the advice for general data protection compliance. You may not always get it absolutely right with new technologies and new requirements, but if you can demonstrate that you are trying your best to get it right, and cooperate with the ICO, you should go a long way towards meeting accountability and governance requirements.