Author

Victoria Hordern

Partner


22 May 2023

Horizons and sandboxes: emerging technology and data protection law


One of the complaints repeatedly made about the law is that it consistently lags behind real-world developments. While we cannot necessarily expect data protection authorities to be at the bleeding edge of technological developments, they themselves recognise they are less effective when unable to understand the environments they are seeking to regulate.

One way to improve their technological nous is through a regulatory sandbox environment where organisations can openly discuss innovative technologies with the regulator, allowing both business and regulator to get ahead of potential data privacy issues. Regulators are also proactively providing guidance and views on emerging technologies that interact with data protection requirements – notably the UK Information Commissioner's AI guidance, updated as recently as 15 March 2023. However, these interactions and interventions can sometimes highlight that there are currently no clear answers where new technologies intersect with the data protection legal framework.

The ICO’s Tech Horizons Report

The first annual Tech Horizons Report from the UK Information Commissioner's Office (ICO), published in December 2022, is an indication of the ICO's serious intent to grapple with disruptive technologies. It covers four emerging technologies: consumer healthtech, next-generation Internet of Things (IoT), immersive technology and decentralised finance (DeFi).

The ICO considered a list of 60+ emerging technologies and produced a shortlist of the 11 most likely to impact privacy in the near future. From this group of 11, it selected the four identified above, choosing to focus on applied technologies rather than foundational technologies (such as AI) or use-case technologies (such as smart speakers). Neurotechnology is seen as sufficiently significant to warrant a separate (forthcoming) report, and the ICO will continue to assess other technologies (such as behavioural analytics, quantum computing and generative AI) in future reports.

For each of the four emerging technologies selected, the Report examines what the technology involves, sets out the data protection and privacy implications, and provides concluding comments and any next steps for the ICO. The ICO sees all four technologies as presenting a common set of challenges:

  • Lack of transparency and meaningful control for individuals.
  • Complexity of data ecosystems preventing individuals from understanding what is happening to their data.
  • The danger of collecting excessive amounts of data.
  • The likelihood of sensitive personal information being collected.

Consumer healthtech

Many of us already use consumer healthtech. This includes smart fabrics, wearable fitness trackers and apps that help monitor health and wellness (e.g. fertility apps and mental health support apps). The emphasis is on improving an individual's well-being, which distinguishes these products from medical devices, although the ICO recognises that this distinction is likely to blur as more consumer healthtech devices come to be considered as providing medical treatment.

Increasingly sophisticated sensors on healthtech devices will lead to the capture of more granular health data, particularly as they begin to compete with traditional healthcare channels. AI-driven automated therapy apps will need to train on user data to improve their natural language processing. While the ICO sees the benefits of giving individuals greater understanding of their health (since this could, for example, lead to increased physical activity and healthier eating), it flags concerns that the average user will not be able to understand the health data they can access.

All these aspects raise data protection compliance requirements, which are further complicated by the volume and sensitivity of the data collected. The ICO identifies three main issues:

  • The large amount of special category data (typically health data about a person's body and behaviour, e.g. heart rate or eye movement) processed by the devices, resulting in the need to identify an Article 9 exception to the prohibition on processing special category data (explicit consent may be the only option) and to put in place additional safeguards.
  • Issues around transparency and control, in particular, concerning the use of personal data for additional purposes which may lead to sharing data with third parties, as well as to tracking and profiling.
  • Concerns over the accuracy of the data produced by wearables, including that some produce less reliable results for people with darker skin as a result of insufficient diversity in product test data.

The ICO recommends using clear privacy notices, including around the use of special category data, using Data Protection Impact Assessments where appropriate, and checking for bias where AI is used.

Next generation IoT

IoT devices are networked physical objects that connect and share information over the internet. Examples include smart kettles, smart speakers and smart televisions. By 'next generation', the ICO means devices that can respond to people's needs in real time, wherever they are. By their very nature, IoT devices collect vast amounts of data (e.g. voice commands, user location, movement, home layout, temperature, humidity) and, in the next-generation era, devices will be better able to connect with each other and offer greater personalisation.

Not all data collected and processed by IoT devices will be personal data, of course. But environmental data linked to an individual worker, or data about a unique environment associated with only a single individual, will be. Where IoT devices are used to support the elderly, vulnerable or incapacitated, they can assist with safety features but may also collect sensitive data.

The implications of next-generation IoT highlighted by the ICO include a likely increase in cybersecurity risks, partly due to human error as well as system defects (e.g. device manufacturers not prioritising data protection by design). Just as with consumer healthtech, there is a real challenge in providing individuals with meaningful transparency and control over their data. There are also concerns that data collected through IoT could be used for further purposes not obvious to the individual, and could additionally lead to more data being collected than necessary.

The ICO makes a series of security-focused recommendations and commits to producing guidance on IoT devices and data protection.

Immersive technologies

In examining immersive technologies, the ICO concentrates on augmented reality (AR) and virtual reality (VR), recognising that these experiences are becoming more mainstream (e.g. in entertainment, training or employment environments) through the use of smartphones and tablets. Since the use of AR and VR devices is intrinsic to experiencing the technologies, the devices will collect a range of user information: biological, audio, spatial and location data, plus the user's interactions. While entertainment and media delivery are likely to be the main areas where individuals experience immersive technologies, the ICO also highlights that they may be used in wellness activities and the workplace.

These technologies can collect information about sensitive human characteristics (e.g. eye movements and heart-rate spikes). Detailed inferences (whether accurate or not) can be made about an individual's response to certain stimuli. The ICO also expresses concerns about the potential integration of brain-computer interface technology, which allows a direct link between the brain and a computer and can be embedded within AR or VR wearables. Alongside concerns about data minimisation, the ICO underlines the need for transparency, not just for the user of the wearable but also for any third parties whose data may be collected (e.g. when devices are used in public spaces). The ICO focuses on embedding privacy by design and default and exploring technical solutions alongside policy solutions.

Decentralised finance (DeFi)

By decentralised finance, the ICO means financial systems that remove centralised intermediaries from financial transactions, products and services. Essentially, banks and traditional exchanges are replaced by decentralised networks. Trust and security are provided through distributed ledger technology, where peer-to-peer transactions take place and are verified by those participating in the distributed network. Once information is verified, it is recorded within the ledger and forms an unalterable chain of transaction information.
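
To see why a verified record is effectively unalterable, the following minimal Python sketch (an illustrative toy, not a model of any specific DeFi protocol) shows the hash-linking at the heart of distributed ledger technology: each block's hash incorporates the hash of the block before it, so rewriting any recorded transaction breaks every subsequent link and is detected when participants re-verify the chain.

    import hashlib
    import json
    from dataclasses import dataclass

    @dataclass
    class Block:
        index: int
        transactions: list   # verified peer-to-peer transactions
        previous_hash: str   # hash of the preceding block, linking the chain

        def hash(self) -> str:
            # A block's hash covers its contents AND the previous block's hash,
            # so altering any earlier block changes every hash that follows it.
            payload = json.dumps(
                {"index": self.index,
                 "transactions": self.transactions,
                 "previous_hash": self.previous_hash},
                sort_keys=True)
            return hashlib.sha256(payload.encode()).hexdigest()

    def append_block(chain: list, transactions: list) -> None:
        previous_hash = chain[-1].hash() if chain else "0" * 64
        chain.append(Block(len(chain), transactions, previous_hash))

    def verify_chain(chain: list) -> bool:
        # Any participant in the distributed network can re-run this check;
        # tampering with a recorded transaction breaks the hash links.
        return all(chain[i].previous_hash == chain[i - 1].hash()
                   for i in range(1, len(chain)))

    ledger: list = []
    append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
    append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
    print(verify_chain(ledger))                 # True

    ledger[0].transactions[0]["amount"] = 500   # attempt to rewrite history
    print(verify_chain(ledger))                 # False: the alteration is detected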

The potential impact of DeFi on existing financial systems is substantial, although it currently represents a very small part of financial services activity. It offers security and accessibility benefits and appeals to those seeking to disrupt the financial system's status quo. In future, it could be used for everyday payments and may become the usual way of doing business or carrying out consumer transactions.

The impact of data protection rules on DeFi is complicated. The concepts of 'controller' and 'processor' do not easily map onto the roles within DeFi. It follows that it is difficult to provide a privacy notice and to enable individuals to exercise their rights. It is also not clear how the rules on international data transfers will apply when data flows across borders (who signs the SCCs?) and, while as a decentralised and distributed technology DeFi may be more resilient than a traditional bank against cyber-attacks, there is no guarantee of full protection.

The ICO focuses more on the potential for technological developments to resolve some of these issues, rather than on suggesting ways for the current technology to comply with data protection law, raising the question – is it the data protection legal framework which needs to change to accommodate this technology or vice versa?

The rise of the regulatory sandbox 

A regulatory sandbox offers a controlled environment in which an organisation can work with a Data Protection Authority to consider the privacy risks associated with a new idea and test how these can be mitigated or resolved.

Every year since 2020, the ICO has welcomed applications to its sandbox from certain key areas of focus pre-determined by the ICO. This year, the focus areas include the four set out in the Report plus biometrics and exceptional innovations. The ICO is not alone in encouraging sandboxes; other regulators, such as the CNIL in France and the Datatilsynet in Norway, also run sandbox programmes.

The ICO's sandbox team works closely with successful applicants for up to one year, providing advice and guidance (although the ICO does not provide IT infrastructure or assist with procuring data). While the ICO expects the data used as part of the sandbox to comply with data protection law, a participating organisation will receive a statement of comfort from the ICO's enforcement team confirming that any inadvertent contravention of data protection law resulting from product/service development while participating in the sandbox will not immediately lead to regulatory action.

In order to participate in the sandbox, an organisation must consent to its participation being made public and must not tell any external parties about its sandbox participation without the ICO's express written consent. The ICO publishes an exit report on its webpage once a participant leaves the programme.

More sandboxes to come

The European Commission's draft AI Act specifically includes proposals for AI regulatory sandboxes across the EU to provide a controlled environment that facilitates the development, testing and validation of innovative AI systems (Article 53). The precise eligibility criteria for involvement in a sandbox are not specified in the draft, although there is a requirement for Member States to provide small-scale providers and start-ups with priority access. There have already been calls for the widely publicised generative AI tool ChatGPT to be subject to a regulatory sandbox similar to that envisaged under the draft AI Act.

In parallel, the UK Government's White Paper on AI (published in March 2023) indicated that the Government will support testbed and sandbox initiatives to help AI development, with the initial pilot to be focused on a single-sector, multi-regulator sandbox.

Busier data protection regulators

For the future economy to grow in a responsible and innovative way, we need regulators who can understand technology, and mechanisms like regulatory sandboxes that facilitate dialogue between regulators and business.

This becomes all the more pressing given the fragmented global approach to regulating disruptive technologies. Many countries (including the USA) do not have an equivalent to the GDPR. Both the EU and the UK are legislating, or have legislated, to step up the security of IoT devices. But while the EU is looking to regulate AI with an overarching piece of legislation, the UK government prefers a sector-based, principles-focused approach and has no plans to legislate, placing the burden firmly on the shoulders of existing regulators to develop guidance and statutory codes of practice. As a result, the ICO will need to continue, if not step up, its focus on emerging technologies.

This article was first published in Privacy Laws & Business.
