22 May 2023
One of the complaints repeatedly made about the law is that it consistently lags behind real-world developments. While we cannot necessarily expect data protection authorities to be at the bleeding edge of technological developments, they themselves recognise they are less effective when unable to understand the environments they are seeking to regulate.
One way to improve their technological nous is through a regulatory sandbox environment where organisations can openly discuss innovative technologies with the regulator, allowing both business and regulator to get ahead of potential data privacy issues. Regulators are also proactively providing guidance and views on emerging technologies which will interact with data protection requirements – notably the UK Information Commissioner's AI guidance updated as recently as 15 March. However, these interactions and interventions can sometimes highlight that there are currently no clear answers where new technologies intersect with the data protection legal framework.
The first annual Technological Horizons Report from the UK Information Commissioner's Office (ICO), published in December 2022, is an indication of the ICO's serious intent to grapple with disruptive technologies. It covers four emerging technologies: consumer healthtech, next-generation Internet of Things (IoT), immersive technology and decentralised finance (DeFi).
The ICO considered a list of 60+ emerging technologies and produced a shortlist of the 11 most likely to impact privacy in the near future. From that shortlist it selected the four identified above, choosing to focus on applied technologies rather than foundational technologies (such as AI) or use-case technologies (such as smart speakers). Neurotechnology is seen as sufficiently significant to warrant a separate (forthcoming) report, and the ICO will continue to assess other technologies (such as behavioural analytics, quantum computing and generative AI) in future reports.
For each of the four emerging technologies selected, the Report examines what the technology involves, sets out the data protection and privacy implications, and provides concluding comments and any next steps for the ICO. The ICO sees all four technologies as presenting a common set of challenges.
Many of us already use consumer healthtech. This includes smart fabrics, wearable fitness trackers and apps that help monitor health and wellness (e.g. fertility apps and mental health support apps). The emphasis is on improving an individual's wellbeing rather than providing medical treatment, although the ICO recognises that this distinction is likely to blur and that more consumer healthtech devices will come to be considered as providing medical treatment.
Increasingly sophisticated sensors on healthtech devices will lead to the capture of more granular health data, particularly as they begin to compete with traditional healthcare channels. AI-driven automated therapy apps will need to train on user data to improve their natural language processing. While the ICO sees the benefits of giving individuals greater understanding of their health (since this could, for example, lead to increased physical activity and healthier eating), it flags concerns that the average user will not be able to understand the health data they can access.
All these aspects raise data protection compliance requirements, which are further complicated by the volume and sensitivity of the data collected. The ICO identifies three main issues.
The ICO recommends using clear privacy notices (including around the use of special category data), carrying out Data Protection Impact Assessments where appropriate, and checking for bias where AI is used.
IoT devices are networked physical objects that connect and share information over the internet. Examples include smart kettles, smart speakers and smart televisions. By 'next generation', the ICO means devices that can respond to people's needs in real time, wherever they are. By their very nature, IoT devices collect vast amounts of data (e.g. voice commands, user location, movement, home layout, temperature, humidity) and, in the next generation, devices will be better able to connect with each other and offer greater personalisation.
Not all data collected and processed by IoT devices will be personal data, of course. But environmental data linked to an individual worker, or data about a unique environment associated with a single individual, will be. Where IoT devices are used to support the elderly, vulnerable or incapacitated, they can assist with safety features but may also collect sensitive data.
The implications of next-generation IoT highlighted by the ICO include a likely increase in cybersecurity risks, driven partly by human error and partly by system defects (e.g. device manufacturers not prioritising data protection by design). Just as with consumer healthtech, there is a real challenge in providing individuals with meaningful transparency and control over their data. There are also concerns that data collected through IoT could be used for further purposes not obvious to the individual, and that more data could be collected than is necessary.
The ICO makes a series of security-focused recommendations and commits to producing guidance on IoT devices and data protection.
In examining immersive technologies, the ICO concentrates on augmented reality (AR) and virtual reality (VR), recognising that these experiences are becoming more mainstream (e.g. in entertainment, training or employment environments) through the use of smartphones and tablets. Since the use of AR and VR devices is intrinsic to experiencing the technologies, the devices collect a range of user information – biological, audio, spatial and location data, plus the user's interactions. While entertainment and media delivery are likely to be the main areas where individuals experience immersive technologies, the ICO also highlights that they may be used in wellness activities and the workplace.
These technologies can collect information about sensitive human characteristics (e.g. eye movement and heart-rate spikes). Detailed inferences (whether accurate or not) can be made about an individual's response to certain stimuli. The ICO also expresses concerns about the potential integration of brain-computer interface technology, which allows a direct link between the brain and a computer and can be embedded within AR or VR wearables. Alongside concerns about data minimisation, the ICO underlines the need for transparency, not just for the user of the wearable but also for any third parties whose data may be collected (e.g. when devices are used in public spaces). The ICO focuses on embedding privacy by design and default and on exploring technical solutions alongside policy solutions.
By decentralised finance, the ICO means financial systems that remove centralised intermediaries from financial transactions, products and services. Essentially, banks and traditional exchanges are replaced by decentralised networks. Trust and security are provided through distributed ledger technology, where peer-to-peer transactions take place and are verified by those participating in the distributed network. Once information is verified, it is recorded within the ledger and forms an unalterable chain of transaction information.
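To make the 'unalterable chain' point concrete, the minimal Python sketch below shows a hash-linked ledger: each block embeds a hash of its predecessor, so any retroactive change to a recorded transaction breaks the chain and is detectable. The names and structure here are purely illustrative assumptions, not the design of any particular DeFi protocol; real systems additionally rely on consensus among the participants who verify transactions.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Append a new block linked to the current chain tip."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev,  # the link that makes tampering detectable
    })

def chain_is_valid(chain: list) -> bool:
    """Check that every block still points at its true predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(chain_is_valid(ledger))                 # True
ledger[0]["transactions"][0]["amount"] = 500  # retroactive tampering
print(chain_is_valid(ledger))                 # False - detected
```

This is also why the 'right to erasure' sits so awkwardly with such ledgers: deleting or amending a recorded entry is precisely what the structure is built to expose.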
The potential wider impact of DeFi on existing financial systems is substantial, although it is currently a very small part of financial services activity. It offers security and accessibility benefits and appeals to those seeking to disrupt the financial system's status quo. In future, it could be used for everyday payments and may become the usual way of doing business or carrying out consumer transactions.
The impact of data protection rules on DeFi is complicated. The concepts of 'controller' and 'processor' do not map easily onto the roles within DeFi. It follows that it is difficult to provide a privacy notice and to facilitate the exercise of individual rights. It is also not clear how the rules on international data transfers will apply when data flows across borders (who signs the SCCs?) and, while as a decentralised and distributed technology DeFi may be more resilient against cyber-attacks than a traditional bank, there is no guarantee of full protection.
The ICO focuses more on the potential for technological developments to resolve some of these issues, rather than on suggesting ways for the current technology to comply with data protection law, raising the question – is it the data protection legal framework which needs to change to accommodate this technology or vice versa?
A regulatory sandbox offers a controlled environment in which an organisation can work with a Data Protection Authority to consider the privacy risks associated with a new idea and test how these can be mitigated or resolved.
Every year since 2020, the ICO has welcomed applications to its sandbox from certain pre-determined key areas of focus. This year, the focus areas include the four set out in the Report, plus biometrics and exceptional innovations. The ICO is not alone in encouraging sandboxes: other regulators, such as the CNIL in France and the Datatilsynet in Norway, also run sandbox programmes.
The ICO's sandbox team works closely with successful applicants for up to one year, providing advice and guidance (although the ICO does not provide IT infrastructure or assist with procuring data). While the ICO expects the data used as part of the sandbox to comply with data protection law, a participating organisation will receive a statement of comfort from the ICO's enforcement team confirming that any inadvertent contravention of data protection law arising from product or service development while participating in the sandbox will not immediately lead to regulatory action.
In order to participate in the sandbox, the organisation must consent to its participation being made public and must not tell any external parties about its sandbox participation without the ICO's express written consent. The ICO publishes an exit report on its webpage once a participant leaves the programme.
The European Commission's draft AI Act specifically includes proposals for AI regulatory sandboxes across the EU to provide a controlled environment that facilitates the development, testing and validation of innovative AI systems (Article 53). The precise eligibility criteria for involvement in the sandbox are not specified in the draft, although there is a requirement for Member States to give small-scale providers and start-ups priority access. There have already been calls for the widely publicised generative AI tool ChatGPT to be subject to a regulatory sandbox similar to that envisaged under the draft AI Act.
In parallel, the UK Government's White Paper on AI (published in March 2023) indicated that the Government will support testbeds and sandbox initiatives to help AI development, with the initial pilot focused on a single-sector, multi-regulator sandbox.
For the future economy to grow in a responsible and innovative way, we need regulators who can understand technology and mechanisms like regulatory sandboxes to facilitate dialogue between regulators and business.
This becomes all the more pressing given the fragmented global approach to regulating disruptive technologies. Many countries (including the USA) do not have an equivalent to the GDPR. Both the EU and the UK have legislated, or are legislating, to step up the security of IoT devices. But while the EU is looking to regulate AI with an overarching piece of legislation, the UK government prefers a sector-based, principles-focused approach and has no plans to legislate, placing the burden firmly on the shoulders of existing regulators to develop guidance and statutory codes of practice. As a result, the ICO will need to continue, if not step up, its focus on emerging technologies.
This article was first published in Privacy Laws & Business.