9 March 2023
AI – are we getting the balance between regulation and innovation right?
At a time of nurses' strikes, lengthening operation waiting lists and mounting pressure on the NHS, relief for the UK's current healthcare system is being sought from all angles. In the medium to long term, one possible source of relief is the increased use of artificial intelligence (AI). The inevitable development of this technology and its growing implementation will support staff and offer the potential to drive a revolution in healthcare.
Globally, it has been predicted that the market for AI health technologies will expand at a compound annual growth rate of 38.5% from 2022 to 2030, by which point it will be worth USD208.2 billion.
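To put that growth rate in perspective, the arithmetic behind the forecast can be sketched as follows (an illustrative back-calculation from the figures above, not part of the cited prediction): a 38.5% compound annual growth rate over the eight years from 2022 to 2030 implies a roughly 13.5-fold expansion, and therefore a 2022 starting point of around USD 15 billion.

```python
# Illustrative sketch of the compound annual growth rate (CAGR) arithmetic.
# The 38.5% rate and the USD 208.2 billion 2030 figure come from the forecast
# cited above; the implied 2022 base is back-calculated for illustration only.

cagr = 0.385            # compound annual growth rate
years = 2030 - 2022     # eight-year forecast window
value_2030 = 208.2      # forecast market size, USD billions

growth_multiple = (1 + cagr) ** years        # ~13.5x over eight years
implied_2022_base = value_2030 / growth_multiple

print(f"Growth multiple over {years} years: {growth_multiple:.1f}x")
print(f"Implied 2022 market size: USD {implied_2022_base:.1f} billion")
```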
The excitement surrounding the advancement and uptake of AI derives not only from its ability to maximise efficiency and accuracy but also from the wide range of areas in which it can be applied, such as pattern identification and information synthesis. When used correctly, AI can facilitate tasks throughout the healthcare sector, with its current capabilities ranging from the automation of basic administrative tasks through to assisting with more accurate diagnoses, surgeries, drug development and treatment plans. This support will free healthcare workers to focus on human-orientated tasks that cannot (yet?) be automated, making the overall healthcare system more efficient and productive.
This potential has been recognised by both the EU and the UK, which have been preparing their respective regulatory strategies to control and promote the use of AI. For example, in its 2017 Industrial Strategy, the UK government stated its aim to use data and AI to "transform the prevention, early diagnosis and treatment of chronic diseases by 2030".
There is currently no legislation in place in the UK specifically governing the use of AI. Instead, AI is regulated by a patchwork of more general legislation, such as the UK Medical Device Regulations 2002 and the Data Protection Act 2018, each of which covers certain uses of AI.
However, the UK government hopes that, by directing the regulation of AI, it will drive the technology's evolution and implementation in a way that places the UK at the cutting edge of AI development. To this end, the government published the National AI Strategy in September 2021, setting out a plan for the development of AI in the UK over the next ten years.
Given the rapid growth of AI's multifarious capabilities, predicting upcoming developments in the technology is a continual challenge. Therefore, the UK's focus is on developing a "proportionate, light-touch and forward-looking" regulatory framework that can respond quickly and effectively to new opportunities and risks. The hope is that such a framework will "drive growth while also protecting our safety, security and fundamental values". Careful drafting will be required to balance the encouragement of innovative development with the prioritisation of the safety and security of citizens.
The fundamental starting point for any regulation is to define what is being regulated. Therefore, the UK's approach will be to define the core characteristics of AI to establish the scope of its regulatory framework. Once this base definition has been prepared, individual regulators (such as the Information Commissioner's Office, the Competition and Markets Authority, Ofcom, the Medicines and Healthcare products Regulatory Agency (MHRA) and the Equality and Human Rights Commission) will build on it as appropriate for the context of their regulatory domain. The government hopes that limiting its definition to the core characteristics of AI will be sufficient to enable the regulators to understand the framework's scope, whilst permitting them flexibility in each sector. In effect, the framework will guide and regulate the application of AI according to its use or sector, rather than regulating the technology itself.
Any regulation implemented will also need to be clear and transparent. Relying entirely on the discretion of multiple regulators would risk inconsistent and contradictory advice. This must be avoided, both so that people can feel safe knowing they are working within the framework and so that innovators can understand how future developments are likely to be regulated. To address this risk, the government published a policy paper in July 2022 establishing a pro-innovation approach to AI regulation. It sets out six proposed cross-sectoral principles to ensure cohesion across the regulators' responses and to inform their approach to the framework.
In the context of healthcare, in its June 2022 response to the consultation on the future regulation of medical devices in the UK, the MHRA outlined that AI as a Medical Device (AIaMD) would be treated as a subcategory of Software as a Medical Device (SaMD), meaning that "robust guidance" will be provided, but not separately from the guidance for software. This is part of the MHRA's wider work to redesign the regulation of medical devices in the UK.
The MHRA's Software and AI as a Medical Device Change Programme was published in September 2021, with its accompanying Roadmap, building on the consultation response, following in October 2022. The Roadmap establishes that the framework will be structured by guidance in addition to secondary legislation. One advantage of guidance over legislation is that it allows a flexible and reactive approach to change. An independent report to the government in November 2022 on the regulation of AI as a medical device places a similar emphasis on the importance of a regulatory framework that has the "capacity, capability and agility to deal with increasing demand and emerging challenges in AIaMD". As AI branches out into new areas, a successful framework will be one that can quickly and efficiently address the rise of as yet unknown technologies and sectors. Only time will tell whether the envisioned regulatory framework will be sufficient to match the pace of AI's development.
The UK government has already invested significantly in this sector. For example, the GBP140 million AI in Health and Care Award has funded a wide range of AI health technologies at different stages of development, with a view to accelerating innovation and bringing these technologies into routine use.
Of the GBP2.3 billion the government has invested in AI generally since 2014, GBP250 million has gone to creating the NHS AI Lab. The Lab sits within NHSX, the body designed to drive digital transformation and lead IT policy across the NHS. Its aim is to tackle challenges in health and care using AI-driven technologies, including, for example, improving early cancer detection, automating routine administrative tasks, and predicting upcoming pressures on the workforce.
The NHS AI Lab is developing a National Strategy for AI in Health and Social Care, within the context of the National AI Strategy, setting the direction for AI in health and social care up to 2030.
Communication with the public and clinicians will be crucial to gaining trust and encouraging take-up of AI technology. Tricky questions, such as the extent to which a healthcare professional can rely on a diagnosis made by software, will need to be explicitly addressed. Guidance and legislation must also be sufficiently accessible that the various relevant regulatory threads are not spread across assorted websites and documents. To this end, the National Institute for Health and Care Excellence has been working to put together a multi-agency advisory service (MAAS) for AI and data-driven technologies, funded by the NHS AI Lab. The MAAS is a partnership with the Care Quality Commission, the Health Research Authority and the MHRA, which will use it to help developers and adopters of new technologies navigate the regulatory system. Until regulation and accompanying guidance are clearly implemented, clinicians may be reluctant to place any reliance on AI technologies.
The core of the EU's AI strategy seeks to find a similar balance, so that people and businesses can "enjoy the benefits of AI while feeling safe and protected". However, the EU is following a comparatively prescriptive pathway to AI regulation. A detailed legislative framework is important for the EU given its composition: a UK-style, regulator-led approach would be far more challenging to implement across the 27 member states.
As part of its AI package announced in April 2021, the European Commission presented a proposal for harmonised rules on AI, known as the AI Act. As an EU regulation, it would be directly applicable in all member states and, given the Northern Ireland Protocol, in Northern Ireland. At the earliest, it is expected to become applicable to operators in the second half of 2024. A European AI Board, comprising competent authorities from member states, will be established to facilitate consistent implementation of the AI package.
The regulatory proposal is to categorise AI systems into four levels of risk:

- unacceptable risk
- high risk
- limited risk
- minimal or no risk
AI systems posing an "unacceptable risk" are those considered "a clear threat to the safety, livelihoods and rights of people", and will be banned outright. Examples include social scoring systems used by public authorities and certain uses of real-time remote biometric identification in public spaces.
Conformity assessments will be required for high-risk AI systems which, in a healthcare context, could include robot-assisted surgery, AI medical devices and in vitro diagnostic devices. Such systems may only be placed on the EU market if certain conditions are met, such as establishing a risk management system, complying with data governance requirements, and drawing up technical documentation. Other regulations, such as the EU's Medical Device Regulation, would still apply, so the onus is on the EU to make sure these regimes dovetail rather than contradict one another.
Systems in the limited risk category, such as AI software that processes data received from a fitness or heart rate tracking device and produces an output, will be subject to certain transparency obligations. There are no restrictions on AI systems in the "minimal or no risk" category, such as spam filters.
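For orientation, the tiered structure described above can be summarised as a simple mapping from risk category to regulatory consequence (a hypothetical sketch of the proposal's structure only, not a compliance or classification tool):

```python
# Hypothetical summary of the proposed AI Act's four risk tiers and the
# regulatory consequence each triggers, as described above. A sketch of
# the structure only, not legal advice.

RISK_TIERS = {
    "unacceptable risk": "banned outright",
    "high risk": "conformity assessment and conditions before EU market entry",
    "limited risk": "transparency obligations",
    "minimal or no risk": "no restrictions",
}

for tier, consequence in RISK_TIERS.items():
    print(f"{tier}: {consequence}")
```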
One of the major concerns highlighted is the breadth of the AI Act's scope and territorial applicability. A broad-brush approach, required in the attempt to capture all possible AI systems, risks the legislation being vague and lacking targeted nuances addressing specific AI systems. In addition, a broad definition of AI designed to capture future technologies risks overregulating, requiring compliance from those who would otherwise not have been caught within the scope of the regulation. This is potentially a serious issue, as compliance will be enforced by competent authorities within the member states, which will have the power to issue fines of up to EUR30 million or 6% of a company's global turnover, whichever is higher.
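The practical effect of the "whichever is higher" cap is worth illustrating (a hypothetical calculation with made-up turnover figures): the 6% limb only overtakes the EUR 30 million floor once a company's global turnover exceeds EUR 500 million.

```python
# Illustrative sketch of the proposed fine cap: the higher of EUR 30 million
# or 6% of global annual turnover. The turnover figures below are
# hypothetical, chosen to show where the 6% limb takes over.

def max_fine_eur_m(global_turnover_eur_m: float) -> float:
    """Maximum fine in EUR millions under the proposed cap."""
    return max(30.0, 0.06 * global_turnover_eur_m)

for turnover in (100, 500, 2000):  # hypothetical turnovers, EUR millions
    print(f"Turnover EUR {turnover}m -> max fine EUR {max_fine_eur_m(turnover):.0f}m")
```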
The success of the two regulatory approaches - the detailed, future-focused legislation of the EU and the flexible regulatory framework of the UK - will depend not only on how they are implemented and enforced, but also on whether they find a suitable balance between innovation and protection of the public.
One of the most significant challenges for the frameworks will be engendering trust amongst healthcare professionals and confidence in their use of AI. They will want to know that they can work with well-regulated AI systems whose risks have been appropriately assessed and addressed.
The general population will also need to be convinced that any new AI systems are safe, given that many people will be unaware of the extent to which AI systems are already ingrained in everyday life, whether in social media algorithms or voice assistants. A flawed integration of new AI, whether to complement or replace current systems, risks creating new challenges or causing concerns about related technologies.
If issues of discrimination or safety are found in an AI system, misgivings are likely to be amplified where a patient's or their family member's health is at stake. People are generally much less forgiving of technology than of individuals, such as doctors, who are "only human". Yet an imperfect implementation of AI-enabled technology risks perpetuating and exacerbating our human failures and biases. As such, regulation should provide for the ongoing assessment of AI healthcare technologies, whilst also acting as an enabler, realising the potential of technologies that could represent a huge leap forward from our current treatment and diagnostic capabilities for all patients.
Nicholas Vollers and Alison Dennis compare and contrast the UK and EU approaches to regulating the use of AI in healthcare.