As the regulatory landscape governing artificial intelligence (AI) continues to take shape, clearer distinctions are emerging between the UK's and the EU's respective approaches to governance.
One such area is the role AI plays in the use of facial recognition technology (FRT) and biometric technologies more broadly.
What are biometric technologies and FRTs?
Biometric technologies are generally used to identify, verify or categorise individuals based on their physiological (eg fingerprints, retina, DNA) or behavioural (eg vocal tone, facial gestures and walking gait) characteristics. The assessment and analysis of these characteristics facilitate biometric technologies such as voice recognition, fingerprint recognition, DNA matching and facial recognition.
FRTs are a specific sub-type of biometric technology concerned with the analysis and assessment of images of individuals' faces, but they can vary significantly in their underlying technologies, intended uses and complexity.
FRTs and AI
From fairly modest beginnings in the early 1990s, FRTs have developed significantly over the last few decades through access to substantially larger datasets and, more recently, the incorporation of AI technologies such as machine learning and computer vision algorithms.
AI-driven FRTs can collect and process vast amounts of data (which often include highly sensitive data) from the content of images and videos at exceptional speed and accuracy. This capability has placed this sub-type of biometric technologies at the forefront of public and political debate around public surveillance, the right to privacy, online image scraping and AI regulation, yet the approaches to regulating AI-driven FRT taken by the UK and EU are far from similar.
The EU AI Act
The EU's draft AI Act is set to become the first comprehensive regulatory framework designed specifically to govern the use of AI technologies.
The AI Act is currently in the final stages of the EU's legislative process. Both the Council of the EU and the European Parliament have submitted their proposed changes to the Commission's original proposal, and trilogue negotiations between the EU co-legislators are ongoing to reconcile their proposals. The outcome will be an agreed text for the AI Act and, ultimately, a new regulatory regime for AI technologies in the EU market.
The AI Act takes a risk-based approach to regulating AI, classifying AI technologies according to the level of risk they pose to the fundamental rights and freedoms, or the health and safety, of individuals. The risk classifications cover unacceptable, high, limited and minimal risk AI systems:
- Unacceptable AI systems are prohibited under Article 5 and are deemed particularly harmful to individuals' rights and safety.
- High-risk AI systems are permitted, but are subject to strict compliance obligations.
- Limited and minimal risk AI systems are subject to the fewest obligations, largely involving transparency requirements towards users.
While the exact definition of AI systems under the AI Act remains an area of debate in the trilogue negotiations, modern-day FRTs will undoubtedly be caught by the scope of the AI Act, and its provisions give particular consideration to biometric data and associated technologies.
Prohibited FRTs
The European Parliament has made several amendments to the Commission's original proposal which may affect how the AI Act regulates FRTs (including an expanded list of AI technologies classified as unacceptable).
The Commission's initial proposal already prohibits certain FRT-related AI systems, such as those used for the social scoring evaluation or classification of individuals (in a way that leads to their detriment or unfavourable treatment) and 'real-time' remote biometric identification systems in publicly accessible spaces. Notably, however, these prohibitions relate only to use by public authorities (for social scoring) and to law enforcement purposes (for biometric identification), and are subject to relatively wide exceptions.
The European Parliament's proposals include a substantial push to expand the Commission's list of prohibited AI systems utilising certain biometric technologies, including by adding:
- social scoring evaluation or classification (not limited to use by public authorities)
- "real-time" remote biometric identification systems in publicly accessible spaces (not limited to use for the purpose of law enforcement)
- "post" remote biometric identification systems (unless subject to judicial authorisation and necessary for law enforcement for the prosecution of serious criminal offences)
- biometric categorisation systems using sensitive characteristics (gender, race, ethnicity, citizenship status, religion, political orientation)
- predictive policing systems (based on profiling, location or past criminal behaviour)
- untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases, and
- emotion recognition systems in law enforcement, border management, the workplace and educational institutions.
These additions clearly widen the range of FRTs which may be prohibited under the AI Act, and the Parliament's approach follows calls from academics, data protection authorities, the European Data Protection Board and the European Data Protection Supervisor for stricter regulation of biometric technologies and FRTs than was originally proposed.
It remains to be seen which additions will survive the trilogue negotiations, bearing in mind that the Council of the EU took a far less restrictive approach in its own proposal. Recent reports suggest the November trilogues led to the Parliament circulating compromise proposals which move away from a total ban on remote biometric identification applications by providing narrow exemptions for specified law enforcement activities.
High-risk FRTs
FRTs which fall outside the list of prohibited AI systems may be classified as high-risk. Article 6 outlines the obligations for high-risk AI systems and Annex III includes a list of specific types of AI systems that are designated high-risk.
The requirements relating to high-risk AI systems can differ according to the type of entity developing or deploying them but include:
- implementing risk management systems that identify, analyse, evaluate and mitigate the risks associated with the high-risk AI system
- implementing appropriate data governance and management practices in respect of AI model training
- drawing up the technical documentation of the high-risk AI system before it is placed on the market or put into service and ensuring this documentation demonstrates compliance with applicable obligations under the Act
- record keeping obligations, including designing the high-risk AI system with capabilities enabling the automatic recording of events (ie logs) while the system is operating
- transparency obligations and the requirement to provide specified transparency information to users
- ensuring high-risk AI systems can be subject to effective human oversight during their use, and
- ensuring high-risk AI systems achieve an appropriate level of accuracy, robustness and cybersecurity.
Both the Council of the EU and the European Parliament have made changes to the list of high-risk AI systems in Annex III. New distinctions relate to "remote biometric identification systems" and "AI systems intended to be used for biometric identification of individuals". The Parliament has also included reference to AI systems intended to be used to make inferences about the personal characteristics of individuals on the basis of biometric data, including emotion recognition systems (where these are not prohibited under Article 5). An exception is made for AI systems intended to be used for biometric verification for the sole purpose of confirming that a specific person is who that person claims to be.
At this stage, there is still uncertainty around the appropriate methodology for determining which AI systems fall under the high-risk classification. However, it is clear that FRTs other than those used solely for identity verification purposes have the potential to fall under the high-risk classification (if not to be completely prohibited) and will attract substantial compliance obligations under the AI Act.
The UK Approach
The UK's current approach to regulating AI is set out in its White Paper on AI, published by the UK government in March 2023, which states the ambition of making the UK "the best place in the world to build, test and use AI technology".
UK White Paper on AI
The White Paper, 'A pro-innovation approach to AI Regulation', sets out a framework for the UK's envisaged approach to AI governance and takes a rather different direction to the EU and its AI Act, with a clear focus on facilitating innovation.
In fact, the UK government has elected not to legislate to create a single body to govern the regulation of AI, instead deciding to support existing regulators in developing a sector-focused, principles-based approach. UK regulators will focus on publishing non-statutory guidance to address context-specific uses of AI technologies.
The White Paper does not outline specific prescriptive provisions for the regulation of AI, and certainly not for FRT, stating: "we are not creating blanket new rules for specific technologies or application of AI, like facial recognition". There is no further reference to FRT or biometric technologies in the White Paper, which is a statement in itself. Instead, it proposes that the application of existing laws and the actions of current regulators will be sufficient.
The role of the ICO
The UK's data protection regulator, the ICO, is undoubtedly seen as critical to this approach. UK data protection law (including the UK GDPR) applies to FRTs to the extent that personal data is processed (which will almost certainly be the case), and biometric data is afforded additional protections given its "special category" status under such laws.
As such, the ICO has already been active in this space, having produced guidance on the use of AI generally and an Opinion on the use of live FRT in public places, which considers the relevant requirements of data protection law. The ICO is also currently consulting on draft guidance on biometric data and biometric technologies, as we discuss here. However, these publications do not have statutory force and, while they will help organisations looking to develop and deploy FRTs to understand their data protection obligations, it remains to be seen how effective such an approach will be at regulating this space.
The ICO's largest regulatory action to date relating to FRT was against Clearview AI Inc., which was fined £7.5m in May 2022 for breaches of UK data protection law.
Clearview was fined for using images of individuals in the UK (and worldwide), collected from the internet and social media, to create a global online database of more than 20 billion images that could be used for facial recognition purposes (largely by Clearview's law enforcement customers).
The ICO found that Clearview breached UK data protection laws by:
- failing to use the information of individuals in the UK in a fair and transparent manner, given that individuals were not made aware of this use and would not reasonably expect their personal data to be used in this way
- failing to have a lawful basis for collecting personal data
- failing to have a process in place to stop the data being retained indefinitely
- failing to meet the higher data protection standards required for biometric data (ie special category data), and
- asking for additional personal data, including photos, when asked by individuals if they are on Clearview's database (possibly acting as a disincentive for individuals seeking to exercise their rights).
However, Clearview recently won its appeal in the First-tier Tribunal (FTT), which overturned the ICO's fine and accompanying enforcement notice. The appeal succeeded not because of a detailed consideration of Clearview's alleged breaches of the UK GDPR, but because the processing activities in question fell outside the material and territorial scope of the (UK) GDPR. This was due to the nature of Clearview's clients as (solely) law enforcement agencies and the fact that the acts of foreign governments would not be within the scope of the (UK) GDPR.
In the course of its decision, the FTT offered helpful guidance on the application of Article 3(2)(b) (UK) GDPR and on what constitutes "monitoring the behaviour" of data subjects, including:
- that Article 3(2)(b) can apply where the monitoring of behaviour is carried out by a third party rather than the controller, and
- that the term "behaviour" means more than simply identification or descriptive terms (such as name, age, hair colour, date of birth etc) and refers to something about what an individual does (such as their location, habits, occupation, relationship status and what they wear etc).
Organisations carrying out similar activities to Clearview (such as mass image-scraping from the internet) should not see the FTT's decision as blanket permission to do the same; they should note the specific facts of this case and the nuanced reasoning behind the finding that the ICO acted beyond its jurisdiction. The ICO's initial decision remains indicative of its approach to compliance obligations in respect of this type of AI-driven FRT.
Will the DPDI Bill change anything?
The UK's Data Protection and Digital Information (No. 2) Bill (DPDI Bill) continues to make its way slowly through the UK's legislative process and is currently at the report stage in the House of Commons. The DPDI Bill operates by amending existing legislation such as the Data Protection Act 2018 and the UK GDPR, and you can find out more here.
The DPDI Bill does not specifically refer to FRTs but does include provisions concerning the oversight of biometric data. The Bill abolishes the existing office of the UK's Biometrics and Surveillance Camera Commissioner, absorbing the oversight functions of the dual role, and removes the requirement for the government to publish a Surveillance Camera Code of Practice. The former Commissioner (who recently resigned, recognising that his role had effectively ceased to exist) has warned that, while the provisions in the DPDI Bill appear to be based on the premise that public space surveillance is simply a subset of wider data protection and privacy, key issues exist beyond this remit and will need to be addressed to achieve a clear regulatory landscape.
What does this mean?
One of the challenges of regulating AI is that there is no uniform global approach. In China, for example, FRT can be and is used for social scoring. Deciding how and to what extent to regulate FRT AI systems is just one of the many issues for governments looking to establish a balance between protecting fundamental rights and freedoms, and developing AI to positive effect. That this is a far from easy task is illustrated by the different approaches of the EU and UK to regulating AI-driven FRTs.