4 July 2023
The term 'medical device' is broadly defined and widely used. It has been estimated that around 2 million different kinds of medical devices are currently available worldwide, with the market expected to reach a valuation of USD 964.9 billion by 2030. Although many medical devices are low tech or have no electronic components, the ever-increasing integration of technology into our daily lives is equally apparent in the medical device industry.
The combination of growing software capabilities and shrinking hardware has led to a proliferation of electronic medical technology incorporating, or in some cases consisting exclusively of, software. For example, software that calculates clinical risk or offers a prognosis of future disease risk would constitute a medical device in its own right, even without accompanying hardware. This is referred to as software as a medical device (SaMD).
Artificial intelligence (AI) can use machine learning to enhance the capabilities of software by analysing and interpreting data. In healthcare practice, this includes AI systems that can be trained to diagnose medical conditions, spot early cancer symptoms or predict cardiac arrest.
As explored in our overview of AI in healthcare, the UK's national AI strategy focuses on a light-touch and flexible approach to AI regulation, relying on a framework that permits regulators to introduce sector-specific rules. In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) is responsible for regulating the medical devices market. It is currently preparing detailed guidance on the regulation of AI as a medical device (AIaMD).
In September 2021, the MHRA announced its development of a work programme, the Software and AI as a Medical Device Change Programme, that will review the regulatory environment applicable to medical software and AI in the UK. This announcement was followed in October 2022 by a Regulatory Roadmap, which sets out how the MHRA intends to regulate medical software and AI in the UK, summarised in eleven 'work packages'. AIaMD will be regulated as a part of SaMD, with no additional legislative medical device requirements being imposed on AI beyond those for software.
Although secondary legislation will be issued, most of the Change Programme will be delivered in the form of clarificatory guidance, standards and streamlined processes that build upon legislation. For example, the definitions of software and AI will be established through guidance rather than legislation. This grants regulators the agility to react quickly to technological developments, proactively and reactively shaping the regulatory environment, without having to navigate legislative procedure.
The MHRA is collaborating with the National Institute for Health and Care Excellence (NICE) and NHS England with the goal of co-ordinating digital health regulation so that it is coherent and consistent, with each agency applying harmonised standards and classification rules. The MHRA has also stated that it will work towards international harmonisation by participating in the International Medical Device Regulators Forum (IMDRF) with other regulators and aligning with the IMDRF's Framework.
Any medical devices that comprise or incorporate software or AI, whether the software/AI is standalone or incorporated into hardware, will fall within the remit of the Change Programme, which will cover both existing and new products.
Wellbeing and lifestyle software products, such as a step counter or a meditation guidance app, will not be within scope. Smart wearables tread the fine line between being a wellness product and a medical device (with more information on that boundary here). Other products falling outside the scope include IVD software, medicines and companion diagnostics, SaMD for research-only purposes, custom-made devices, software in a kit/system/procedure pack or as a service, medical device/IVD accessories and devices with no medical function.
Software code can also qualify as a medical device on its own. Where open-source code has been modified so that it qualifies as a medical device, the entity making the modifications may be classified as the manufacturer and would therefore be responsible for that code as a medical device.
Safety and accountability are at the core of the government's Change Programme, which focuses on the protection of patients and the development of responsible innovation. Manufacturers can expect to have to meet comprehensive safety, compliance and accountability obligations. Although this may sound daunting, one of the MHRA's key aims in implementing the Change Programme is the provision of clear guidance on how to comply with the regulatory conditions and effectively demonstrate conformity.
Manufacturers will be relieved to hear that the MHRA intends the implementation and content of the Change Programme to be pragmatic, with regulatory obligations carefully targeted and deployed in order to encourage investment and responsible innovation in the UK. This will hopefully streamline the regulatory approval process and prevent onerous or unnecessary obligations.
The MHRA's regulation will cover the life cycle of medical devices, starting from qualification and classification. As with all medical devices, the intended purpose of the product will have to be clearly and specifically defined.
The standards for medical devices will be drawn up and published by the British Standards Institution (BSI). These standards will help steer manufacturers towards producing compliant medical devices and will aid in meeting the Change Programme's regulatory requirements.
Currently, medical devices in the UK are categorised as Class I (low risk), Class IIa or IIb (medium risk), or Class III (high risk). Further guidance will also be issued as part of the Change Programme on the classification of software and AI as medical devices.
Once the medical device has been qualified as such under the laws of Great Britain (see our EU MDR Medical Device Checker tool, albeit for EU qualification) and classified, manufacturers will be required to provide sufficient assurance to the MHRA that their product is acceptably safe and functions as intended. Specialised tools will be rolled out which can be used by manufacturers to demonstrate compliance.
One common issue with AI in healthcare is the risk of inbuilt bias. If non-representative data sets are used to train the systems, such as an AI medical device trained to diagnose disease based on data that does not reflect the diversity of the population, this could affect the efficacy of the device on unrepresented or under-represented individuals. In recognition of this issue, the MHRA will likely require manufacturers to prove that they have appropriately identified, measured, managed and mitigated risks arising from any bias. The MHRA has stated its intention to work with the STANDING Together Project when establishing standards for data inclusivity and generalisability. Inclusivity is not limited to software and AI devices, but extends to all life sciences products, as is stated in the MHRA proposals on clinical trial regulations.
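By way of illustration only, one simple starting point for measuring this kind of bias is to compare a diagnostic model's performance across demographic subgroups in its evaluation data. The Python sketch below is a minimal, hypothetical example (the subgroup labels, data and choice of metric are assumptions made for illustration, not an MHRA-prescribed method):

```python
# Illustrative only: compare diagnostic sensitivity across demographic
# subgroups to surface potential bias. Subgroups and data are hypothetical.
from collections import defaultdict

def sensitivity_by_subgroup(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples,
    where labels are 1 for 'disease present' and 0 otherwise."""
    true_pos = defaultdict(int)
    false_neg = defaultdict(int)
    for subgroup, truth, prediction in records:
        if truth == 1:
            if prediction == 1:
                true_pos[subgroup] += 1
            else:
                false_neg[subgroup] += 1
    groups = set(true_pos) | set(false_neg)
    return {g: true_pos[g] / (true_pos[g] + false_neg[g]) for g in groups}

# Hypothetical hold-out results: (subgroup, ground truth, model prediction).
results = sensitivity_by_subgroup([
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0),
])
for group, sensitivity in sorted(results.items()):
    print(f"{group}: sensitivity = {sensitivity:.2f}")
```

A marked gap between subgroups (here, 0.67 against 0.33) would prompt further investigation of the training data's representativeness; real-world assessments would of course use richer metrics and statistical testing.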
Manufacturers will also be required to produce data to demonstrate the safety, effectiveness and quality of the device before it can be placed on the market. For manufacturers looking to prepare for demonstrating legal compliance, understanding the ten guiding principles in the Good Machine Learning Practice (GMLP), agreed between the U.S. Food and Drug Administration, Health Canada and the MHRA, will be a good starting point for high-level guidance.
For diagnostic AI medical devices, the MHRA outlined in June 2022 in its response to a regulatory consultation that such devices would be required to undergo performance evaluations similar to those currently used for IVDs, based on IMDRF's proposed Clinical Evaluation for SaMD. Scientific validity, analytical performance and clinical performance would all have to be demonstrated.
Pre-market requirements for medical devices are likely to be introduced through secondary legislation, in the form of updates to the Medical Devices Regulations 2002. BSI standards will be introduced to instruct on best practice for the development and deployment of AIaMD and SaMD. For interactive systems, guidance on human factors will be issued to accompany these standards, encouraging 'human-centred design' through consideration of usability, ergonomics and behavioural science.
A regulatory 'airlock' will be introduced to help devices intended to meet an unaddressed clinical need to generate the necessary pre-market phase evidence. This is effectively a sandbox that will allow the manufacturer to generate real-world evidence for the device over a limited time period, while being subject to constant monitoring.
Specific consideration will be given in the Change Programme to the qualification and notification requirements for retrospective non-interventional performance studies for SaMD, the research governance processes for medical devices and the nomenclature of SaMD.
Once the medical device has been made available on the market, evidence of safe use and ongoing safety surveillance will be required. Guidance will be issued on monitoring safety data sources, curating and analysing safety data, appropriately mitigating risks and responding to incidents. The MHRA currently uses the Yellow Card Scheme for the reporting of medical device adverse events and will integrate this reporting system into the Change Programme, clarifying reporting obligations and next steps.
Ongoing change management requirements will be implemented, based on the type of change, to maintain safe performance of devices throughout their period of sale. How to respond to concept drift (ie a shift in the relationship between the input data and the system output), including changes beyond a manufacturer's control, will also be addressed.
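As a purely illustrative sketch of what monitoring for concept drift might involve (the threshold, data and comparison method below are assumptions, not regulatory requirements), a manufacturer could compare the error rate observed in the field against the error rate validated before launch:

```python
# Illustrative only: flag possible concept drift when the error rate observed
# post-market exceeds the pre-market validated rate by more than a tolerance.
def drift_alert(baseline_error_rate, field_outcomes, tolerance=0.05):
    """field_outcomes: iterable of (prediction, confirmed_outcome) pairs."""
    outcomes = list(field_outcomes)
    if not outcomes:
        return False  # nothing to compare yet
    mismatches = sum(1 for predicted, actual in outcomes if predicted != actual)
    observed_error_rate = mismatches / len(outcomes)
    return observed_error_rate > baseline_error_rate + tolerance

# Hypothetical example: device validated at a 10% error rate; 3 of 10 recently
# confirmed cases disagree with the device's output (30% observed error).
recent_cases = [(1, 1)] * 7 + [(1, 0)] * 3
print(drift_alert(baseline_error_rate=0.10, field_outcomes=recent_cases))  # True
```

In practice, a post-market surveillance plan would rely on far more sophisticated statistical monitoring, but the underlying principle of comparing field performance against the validated baseline is the same.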
Furthermore, many medical devices will, over their life cycle, undergo expansion of their medical purpose, the intended population, or the intended users of a device. This is to be expected and guidance will address such evolution of devices. Secondary legislation will also be introduced to provide for predetermined change control plans, pre-empting upcoming developments in software and AI.
Guidance will be produced to instruct manufacturers, system providers and local deploying organisations, where necessary, on how to correctly and safely remove products from the market.
Engendering trust in, and thereby encouraging use of, AI is fundamental in the healthcare sector. The development of trustworthy AIaMD is therefore a key target of the Change Programme. As such, tools and a framework will be issued to ensure trustworthiness.
Patients and users are more likely to trust technology that they can understand. In line with the 'Appropriate transparency and explainability' principle in the Government's policy paper on AI, guidance will therefore be produced on the interpretability of AIaMD (details to follow on the MHRA's 'Project Glass Box' on AI interpretability), highlighting its relationship with usability, to ensure that AI models are sufficiently transparent to be reproducible and testable. The policy paper's cross-sectoral principles will also be integrated more generally, along with ethical principles for the use of AI in health and social care.
For manufacturers looking to prepare for the upcoming audit requirements, The Lancet's publication on The Medical Algorithmic Audit is a helpful reference. The Government also clarified in its consultation response that it does not intend to mandate logging of outputs to enable auditability.
Cyber security is a key factor for any software, and SaMD is no exception. The Change Programme will reflect the evolving state of the art for risk mitigation and will implement secondary legislation, supported by interpretive guidance, to impose minimum cyber security and IT requirements. For those involved in the production and use of SaMD, reporting vulnerabilities and incidents will be key to the Change Programme.
Finally, there will be further requirements for SaMD or AIaMD that processes personal data, to ensure that such data is processed in line with current UK data protection principles. The MHRA is working with the Information Commissioner's Office and the National Data Guardian to ensure that data protection is appropriately addressed.
In pursuing a regulatory strategy for AI that is distinct from that of the EU, the MHRA will be keenly aiming for a post-Brexit UK to be 'recognised globally as a home of responsible innovation for medical device software'. The MHRA's formulation of the Change Programme, informed by domestic and international partnerships and collaborations with other regulators, suggests that the upcoming regulation will fit within and around the existing regulatory environment. This will be critical as too much divergence from other international systems will effectively create additional regulatory hurdles for manufacturers, disincentivising their entry to the UK's AI ecosystem.
From the current roadmap, and its emphasis on a flexible regulatory framework driven by guidance and standards, we can expect the upcoming regulation to be practical, effective and internationally aligned. This will hopefully provide a clear enough steer to give manufacturers the confidence to innovate and develop new software and AI products in the knowledge that they are working within a clear, understandable and robust framework that encourages innovation.
An article on this subject has been published by MedTech Dive. Read Alison's take on the importance of building trust with patients and regulators in the adoption of AI in medical devices here.
by Alison Dennis and Alice Matthews