
13 January 2020

Life sciences – 1 of 6 Insights

AI, machine learning and data analytics in the UK healthcare sector: data protection considerations

We look at the growth of AI, machine learning and data analytics in the UK healthcare sector and at key data protection compliance issues.

Author

Christopher Jeffery

Partner


The growing capability of Artificial Intelligence, big data, analytic methods and machine learning (which for ease of reference we will call AI) has paved the way for successful deployment in the healthcare sector. AI has the potential to transform the way the health system works, to support clinical research and to improve clinical care. However, poor data practices and management can lead AI applications "to intrude into private life and effect human behaviour by manipulating personal data" (as stated by the ICO in its technology strategy report for 2018-2021), and the ICO has made it a priority to help facilitate the lawful use of AI as part of that strategy.

AI in the UK healthcare sector

Recent initiatives show that AI has been the focus of various stakeholders across the healthcare ecosystem. Some examples include:

Growing investment in UK AI companies

A Tech Nation report shows that 2018 was a record year for investment in UK AI companies, with startups raising $1.3 billion, more than the rest of Europe combined. The healthcare sector continues to be an attractive investment prospect, as demonstrated by successful funding rounds this year in UK-based AI startups and scale-ups including Babylon Health, CMR Surgical and Kheiron Medical Technologies.

AI and life sciences: a priority of the UK government's industrial strategy

Since 2017, the UK government has intensified its support of AI initiatives, and the healthcare sector is a key part of its life sciences strategy. The AI and data grand challenge mission, which aims to use data, artificial intelligence and innovation to transform the prevention, early diagnosis and treatment of chronic diseases by 2030, further reflects the government's commitment to the healthcare sector.

The UK government works in collaboration with businesses, private investors and partners, the NHS, academics and healthcare practitioners in the UK and elsewhere in the world. Examples include:

  • Partnerships to promote the use of AI in the healthcare sector in the UK and globally (eg DigitalGenius).
  • Investments in health-related AI projects, for example a £250 million investment to create a national AI laboratory and a grant of £740,000 from the Regulators' Pioneer Fund for the MHRA to work with NHS Digital on developing a pilot to test and validate algorithms and other AI used in medical devices.
  • Improving the environment for startups (eg through Global Britain, the Mayor's International Business Programme).
  • Expanding AI clusters in the UK (eg BT/Ulster University).
  • NHSX, which brings together the Department of Health and Social Care, NHS England and NHS Improvement and is responsible for delivering the Health Secretary's Tech Vision.
  • The creation of dedicated AI bodies and working groups, including the AI Committee, AI Council, and the Centre for Data Ethics and Innovation.
  • Conducting studies and producing reports (eg developing effective policy to support AI in health, and the Code of conduct for data-driven health and care technology).
  • Engaging in dialogue with regulators, including the ICO, to create a regulatory and policy framework for the application of good data protection practice in AI.

In its latest AI report, the NHS sets out the foundational policy work done in developing the plans for the NHS AI Lab (run collaboratively by NHSX and the Accelerated Access Collaborative) to ensure that AI is used in a safe, effective and ethically acceptable manner. The report also describes the challenges facing the use of AI, including data-related issues.

ICO's initiatives and other AI regulatory frameworks

The ICO is developing an AI auditing framework and is committed to working with other bodies including the National Data Guardian and Health Research Authority "to improve guidance and support to the sector so that healthcare organisations like NHS Trusts can implement data-driven technology solutions safely and legally".

The UK is collaborating on various international policy papers and working groups, and a number of national regulators outside the UK have also issued position papers relating to AI data protection issues. These include the guidelines on trustworthy AI published by the EU Commission's High-Level Expert Group on AI (which state that AI should be lawful, respecting all applicable laws and regulations), AI guidelines from the CNIL, and OECD standards on AI. Cross-border cooperation has also been initiated (eg UK-Japan cooperation in the field of robotics, and a partnership between the Alan Turing Institute and DATAIA in France).

Practical data protection issues

The use of AI carries with it data protection and cybersecurity risks that need to be assessed and mitigated at the outset and then throughout the data life cycle.

Following an investigation in 2017, the ICO expressed concerns over the use of Google DeepMind's Streams application at the Royal Free NHS Foundation Trust. The ICO concluded that the processing of approximately 1.6 million patients' personal data by DeepMind for the purpose of clinical safety testing of the Streams application did not fully comply with the requirements of the Data Protection Act 1998.

The ICO obtained an undertaking from the Royal Free Trust (acting as controller) to fulfil a number of requirements and to implement the recommendations of the third party audit commissioned by the Trust. In July 2019, the ICO stated that it was satisfied that the Trust had fulfilled those requirements. The case highlights some of the main data protection and security issues that AI companies face when implementing AI applications in the healthcare sector.

Fair, lawful and transparent processing

The first principle of the GDPR requires that personal data be processed in a fair, lawful and transparent manner.

The lawful basis for processing personal data is a recurring issue for organisations that process health-related data and often involves identifying a separate condition for processing special category data under the GDPR and/or the Data Protection Act 2018. The ICO, the IGA and the HRA have issued guidance on the lawful basis for processing health-related data in a healthcare context.

Issues may also arise at the point of collection of personal data, mainly because healthcare data is now more widely shared and can be drawn from a number of sources, including not only clinicians but also private and public organisations. In particular, organisations which collect personal data from public sources may find it difficult to identify the original controller of the data, or the lawful basis that controller relies on, where that controller is not based in the EU and processes personal data on the basis of non-EU data protection principles. It then becomes challenging for such organisations to find a lawful basis for processing the data for their own purposes, or for sharing it with third party controllers or processors to train AI systems.

Controllers should also fulfil their GDPR transparency and information obligations, including informing data subjects of the purposes for which they are processing the data. Where sensitive personal data is used for a purpose that a data subject would not reasonably expect, or to which they have not directly consented (where applicable), controllers should take steps to explain the new purposes to the data subjects concerned. Consent and information forms used by organisations (especially those provided by public bodies) should also be reviewed to ensure they meet the GDPR and other regulatory requirements (eg in the area of clinical trials) and do not rely on consent where it is not required.

Data minimisation: collect what you really need

AI systems need to be trained on data generated from clinical activities (eg screening, diagnosis) so they can learn data patterns. The data used for such training (including data used in machine learning and natural language processing) comes from various sources, but mainly from physical examination notes and clinical laboratory results, and the majority of applications still rely on patient data provided either by Acute Hospital Trusts (55%) or by patients themselves (23%) through the use of third-party apps.

Organisations are often tempted to collect more data than they really need, which can affect the accuracy and quality of their data and lead to bias and discriminatory outcomes. Organisations should therefore collect only what is necessary and proportionate to meet their processing purposes, and implement rigorous data minimisation practices to mitigate the risks of re-identification.
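As an illustration only, a data pipeline can enforce minimisation by allowing through only the attributes an organisation has decided are genuinely needed for a stated purpose before records reach an AI training set. The minimal Python sketch below shows the idea; the field names and the screening purpose are hypothetical and are not drawn from any of the guidance referred to above.

# Minimal sketch of purpose-based data minimisation (illustrative only).
# The field names and the screening purpose are hypothetical.

# Attributes judged necessary and proportionate for each processing purpose.
ALLOWED_FIELDS = {
    "retinopathy_screening": {"age_band", "hba1c", "retinal_image_id"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields needed for the stated purpose and drop the rest."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw_record = {
    "nhs_number": "000 000 0000",  # direct identifier, not needed for training
    "full_name": "Jane Doe",       # direct identifier, not needed for training
    "age_band": "60-69",
    "hba1c": 58,
    "retinal_image_id": "img_0042",
}

training_record = minimise(raw_record, "retinopathy_screening")
print(training_record)  # {'age_band': '60-69', 'hba1c': 58, 'retinal_image_id': 'img_0042'}

Applying a rule of this kind at the point of ingestion, rather than after the data has been pooled, also makes it easier to evidence proportionality in a DPIA.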

Assess and mitigate the data security risks specific to AI

The healthcare sector is one of the industry sectors most affected by cyberattacks and personal data breaches. As the WannaCry attack on the NHS demonstrated, data security and cyber incidents can disrupt healthcare systems and directly contribute to patient harm. Data breaches also expose organisations to third party claims, loss of profits or commercial opportunities, and significant reputational damage.

Data security should, therefore, be a priority for healthcare organisations. They should implement robust technical and security measures to secure the data they process and make available to third parties, and conduct regular cybersecurity audits and data breach tests. They should also take into account the sensitive nature of health data processed as part of an AI ecosystem, the vulnerabilities inherent to AI models (the ICO blog provides useful examples of security attacks against AI models), the purposes for processing, and the state of the art in the healthcare sector.

Before processing personal data using new technologies or novel applications of existing technologies including AI, data controllers should carry out a full Data Protection Impact Assessment (DPIA) and take steps to mitigate the risks identified by the DPIA by implementing data protection by design and default. Where appropriate, AI developers should also seek ethical approval at the beginning of the development process and be involved in the DPIA.

At the very least, organisations are expected to pseudonymise and encrypt health data and ensure that access controls are carefully scrutinised (eg by ensuring the security of portable devices and the transmission of data to them, restricting access to raw patient-identifiable data, and implementing audit trails and logs). The ICO also recommends that all health service bodies in England should use the data security and protection incident reporting tool.
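By way of a minimal sketch only, the baseline measures described above might look like the following in Python. It assumes the open source cryptography library for encryption at rest; the record structure, key handling and service name are hypothetical, and in practice keys would be held in a key management system rather than in code.

import hashlib
import hmac
import json
import logging

from cryptography.fernet import Fernet  # third-party 'cryptography' package

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("access-audit")

# Assumption: in a real deployment both keys would come from a key vault.
PSEUDONYM_KEY = b"replace-with-secret-held-in-a-key-vault"
fernet = Fernet(Fernet.generate_key())

def pseudonymise(nhs_number: str) -> str:
    # Replace the direct identifier with a keyed hash; re-identification is
    # only possible via a separately held, access-controlled lookup table.
    return hmac.new(PSEUDONYM_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

patient = {"nhs_number": "000 000 0000", "hba1c": 58, "retinal_image_id": "img_0042"}

# 1. Pseudonymise before the data enters the AI pipeline.
patient["patient_ref"] = pseudonymise(patient.pop("nhs_number"))

# 2. Encrypt the clinical payload so only ciphertext is stored or transmitted.
ciphertext = fernet.encrypt(json.dumps(patient).encode())

# 3. Keep an audit trail of access without logging the data itself.
audit_log.info("record %s encrypted by service=training-ingest", patient["patient_ref"][:12])

Restricting access to raw patient-identifiable data is then largely a matter of keeping the pseudonymisation key, and any re-identification lookup, in a separate and more tightly controlled system.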

A clear chain of responsibility

Commercial agreements which envisage the processing and sharing of healthcare data should be carefully drafted to ensure that each party complies with its responsibilities and that risk and liabilities are dealt with appropriately.

It's not all about data

The risks associated with the use of AI, big data analytics and machine learning in the healthcare sector go beyond data protection. Other issues should be taken into consideration, such as data ethics and the challenges of bias and discrimination, which could have a critical effect on clinical research and on decisions about clinical care. Regulatory scrutiny of the use of AI and new technologies will help create an ethical AI framework structured around data protection, cybersecurity and other regulatory aspects.
