The ICO continues its focus on particular sectors, publishing blogs on the use of AI to make decisions, and on the use of biometric data.
What's the issue?
The ICO has signalled that it will be concentrating on a number of sectors over the coming months. These include Adtech, the use of children's data, AI, data brokers and special category data.
What's the development?
Following on from its blog on Adtech, the ICO has produced two further blogs: one on using AI to make decisions and when that will be considered a solely automated process, and one on the use of biometric data.
What does this mean for you?
The ICO is picking out sectors it sees as potential flashpoints for GDPR compliance. Not all of these focus on new and developing technology, but the blogs give high-level insight into areas where guidance or relevant codes of practice may not be complete.
AI and automated decision making
The ICO looks at automation bias and lack of interpretability as factors which may determine whether or not the use of an AI application to make decisions is a solely automated process.
Article 22 GDPR gives individuals the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects or similarly significant effects for them.
There are exceptions where the decision is necessary for entering into, or performing, a contract between the data subject and the controller; where it is authorised by Union or Member State law to which the controller is subject (subject to suitable safeguards); or where it is based on the data subject's explicit consent.
The blog looks at the importance of meaningful human review and decision making powers when demonstrating that a decision is not taken by solely automated means.
Automation bias describes the situation in which human users routinely rely on the output of a computer decision-support system and stop using their own judgment or questioning its decisions. In these situations, there is a risk that the system may unintentionally be classed as a solely automated decision making process.
Lack of interpretability occurs when the inputs and outputs of AI systems are difficult for humans to understand and other explanation tools are not readily available or reliable. In that case, there is a risk that a human will not be able to review the output of the AI system in a meaningful way.
Unsurprisingly, the ICO recommends that organisations consider both automation bias and interpretability at the outset of a project, supporting meaningful human input from the start where that is intended.
The ICO recommends that organisations specify and document clearly whether AI will be used to enhance human decisions or to take them by solely automated means. Management should review and sign off on the use of AI systems, making sure they are in line with risk appetite. DPIAs should be carried out in advance.
Controls to mitigate automation bias should be built in at the outset. If human reviewers do not have access to additional data, their reviews may not be sufficiently meaningful and the decision may be considered to have been taken solely by automated means.
The human reviewers should capture additional factors, possibly by interacting directly with the person the decision is about.
Interpretability should also be considered from the design phase: in particular, whether the human reviewer can predict how outputs would change if inputs were different, whether they can identify the most important inputs contributing to a particular output, and whether they can identify when the output may be wrong.
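By way of a purely illustrative sketch (not drawn from the ICO blog), the checks described above might be surfaced to a human reviewer along the following lines, assuming a simple scoring model with named input factors; the model, weights, factor names and threshold are all hypothetical.

```python
# Illustrative only: a hypothetical, simplified scoring model used to show the
# kinds of interpretability aids a human reviewer might be given.

# Hypothetical weights for a simple linear score (not a real model).
WEIGHTS = {"income": 0.4, "years_at_address": 0.2, "missed_payments": -0.8}
THRESHOLD = 0.5  # hypothetical approval threshold


def score(applicant: dict) -> float:
    """Return the model's raw score for an applicant."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)


def top_contributions(applicant: dict) -> list[tuple[str, float]]:
    """Show the reviewer which inputs contributed most to this output."""
    contribs = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)


def counterfactual(applicant: dict, factor: str, new_value: float) -> str:
    """Show the reviewer how the output would change if one input were different."""
    changed = {**applicant, factor: new_value}
    before = "approve" if score(applicant) >= THRESHOLD else "refer"
    after = "approve" if score(changed) >= THRESHOLD else "refer"
    return f"{factor}: {applicant[factor]} -> {new_value}: decision {before} -> {after}"


applicant = {"income": 2.0, "years_at_address": 1.0, "missed_payments": 1.0}
print("Score:", round(score(applicant), 2))
print("Most important inputs:", top_contributions(applicant))
print(counterfactual(applicant, "missed_payments", 0.0))
```

Even a rough aid of this kind gives the reviewer a basis for questioning an output rather than simply accepting it, which is the point the ICO is making about meaningful review.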
Training is essential, and human reviewers must have the authority to override the output if the organisation does not want the decision to be treated as one made solely by automated means.
Monitoring, including why and how many times a human reviewer accepted or rejected an AI decision, will also be important.
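As a hypothetical sketch (again, not taken from the ICO blog), that monitoring might be captured in a simple audit log such as the one below; the field names and example records are invented for illustration. It also reflects the earlier points that reviewers should capture additional factors and have the authority to override the output.

```python
# Illustrative only: a hypothetical audit log for human review of AI-assisted
# decisions, recording whether and why a reviewer overrode the system's output.
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReviewRecord:
    case_id: str
    ai_recommendation: str          # output of the AI system
    final_decision: str             # decision after human review
    reason: str                     # why the reviewer accepted or rejected the output
    additional_factors: list[str] = field(default_factory=list)  # info beyond the model's inputs
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def overridden(self) -> bool:
        return self.final_decision != self.ai_recommendation


def override_summary(records: list[ReviewRecord]) -> dict:
    """Aggregate how many times reviewers accepted or rejected AI decisions, and why."""
    overrides = [r for r in records if r.overridden]
    return {
        "total_reviews": len(records),
        "overrides": len(overrides),
        "override_rate": len(overrides) / len(records) if records else 0.0,
        "override_reasons": Counter(r.reason for r in overrides),
    }


log = [
    ReviewRecord("C-001", "refuse", "refuse", "agreed with risk flags"),
    ReviewRecord("C-002", "refuse", "approve", "new evidence from applicant",
                 additional_factors=["phone interview"]),
]
print(override_summary(log))
```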
Using biometric data
The ICO published a blog on using biometric data following its enforcement notice against HMRC relating to its use of voice-authenticated passwords. HMRC asked callers to some of its helplines to record their voice as their password but did not give them information about how the data would be used or advise them that they did not have to sign up to the service.
There was no clear option for those who did not want to register. This meant that HMRC did not have adequate consent (which it needed, as voice recordings are biometric data, classed as special category data under the GDPR). In addition, HMRC had not carried out a DPIA before beginning the voice recordings.
The ICO reminds data controllers about the requirement to be transparent and accountable when processing personal data. It identifies a number of key points and also cross-refers to its guidance on informed consent:
- Carry out DPIAs where processing is likely to result in a high risk to the rights and freedoms of individuals. Incorporate data protection by design and default.
- Ensure you act on any risks identified in the DPIA and demonstrate that you have taken its findings into account.
- Accountability is essential and you must be able to demonstrate compliance by putting appropriate technical and organisational measures in place.
If you are relying on consent as your lawful basis, remember that biometric data is special category data which attracts additional protections, including that any consent must be explicit.