The current position of the MHRA on regulating software and AI included in medical devices
Legislative position in Great Britain
The UK has long promised new legislation to replace the 'old' regulatory framework, which is largely derived from the European Union's three now largely superseded medical devices directives. While the new post-market surveillance regulations are on the UK statute book, the remainder of the regulations has yet to be published.
Meanwhile, innovation in the sector, including AI innovation, continues at pace and the regulatory landscape of Great Britain needs to keep up. In the absence of new legislation for Great Britain, the MHRA has published principles and guidance, summarised here, showing the MHRA's current thinking in relation to software as a medical device, including AI applications.
Principles, not laws (for now)
The MHRA has published sets of principles applicable to medical devices that include AI, developed jointly with the FDA and Health Canada:
- Ten guiding principles for good machine learning practice to support the development of safe, effective and high-quality artificial intelligence/machine learning technologies, which were updated in June 2024 to expand on the principle of transparency.
- Guiding principles for predetermined change control plans for machine learning published in October 2023.
Faster-to-market programmes
The MHRA launched the AI Airlock pilot programme in 2024; five candidates have been selected and are participating in the pilot project, which is anticipated to run until April 2025. The AI Airlock brings together the MHRA and experts from UK Approved Bodies and other regulators. Lessons from the pilot project will inform the design of a future AI Airlock. The AI Airlock is akin to the regulatory sandbox found in the EU's AI Act but is specifically for medical devices and is intended to accelerate innovative AI medical devices getting to market in Great Britain.
New legislation principles
In April 2024 the MHRA published a document setting out how it would implement the Conservative Government's AI White Paper principles, which comprise five overarching key principles for the regulatory use of AI and are not specific to medical devices. In its response, the MHRA pointed out that the UK's Medical Device Regulations 2002 include provisions covering the lifecycle of the product from clinical investigation through to conformity assessment, including requirements to mitigate risk, to address safety and performance concerns and to have an appropriate quality management system (QMS).
The MHRA plans to focus on creating clear requirements for software and AI to ensure that such devices are acceptably safe and function as intended. The MHRA proposes to achieve this through guidance, more streamlined processes for software, and the up-classification of such devices from Class I. They will also designate standards to be complied with, preferably standards agreed internationally, such as those published by the International Medical Device Regulators Forum (IMDRF). The MHRA intends for authorisation processes to be streamlined through a combined process with the MHRA, NICE and NHS England for digital health products.
Anticipated content of new medical device laws
The MHRA is working on the new regulations, including specific legislation for software as a medical device (SaMD) and for devices which include AI (AIaMD).
Qualification as software as a medical device
The MHRA has promised guidance on distinguishing SaMD from wellbeing and lifestyle software products. As stated above, SaMD, particularly that including AI, will be up-classified from its current Class I designation. The framework likely to be followed is that of the IMDRF. The MHRA will provide guidance on defining the intended purpose for SaMD and connecting that with patient populations and clinical evidence, as well as with the QMS and risk management system. The MHRA will also clarify the position of those deploying SaMD via third-party websites.
We expect to see the publication of guidance for medical device AI development and deployment, as well as cyber security, by the end of Q2.
Premarket requirements
The MHRA intends to smooth the path to market for SaMD and AIaMD through making the data requirements proportionate to the risk of the device. The intention is to include essential requirements that are specific to SaMD and AIaMD.
The MHRA is working with the British Standards Institution (BSI) on the standards that should be applicable to SaMD and AIaMD. We presume that these will largely follow the IMDRF principles. They will highlight areas where current best practice may not meet regulatory requirements or the regulatory definition of 'state of the art'.
The MHRA will provide guidance on human-centred design to avoid systematic misinterpretation of software outputs. This will focus on human factors, usability, and ergonomic or behavioural sciences evidence, and will be linked to the essential requirements. A 'Project Glass Box' is underway that considers human interpretability and its consequences for the safety and effectiveness of AIaMD; it will produce guidance on the interpretability of AIaMD to ensure safety and effectiveness and to ensure that AI models are sufficiently transparent to be reproducible and testable. Trustworthiness for patients and healthcare professionals will be reviewed with Health Education England, the General Medical Council and patient groups, working with the University of Oxford's Trustworthiness Auditing for AI group.
Secondary legislation will be published on predetermined change control plans for SaMD and guidance on the gradual expansion of the intended purpose of SaMD and how to maintain regulatory compliance during that process.
Cybersecurity requirements will be prioritised and included in the pre- and post-market requirements for SaMD.
AIaMD will also be the subject of supplementary guidance. The MHRA plans for international standards to be applied to AIaMD and to include practices such as medical algorithmic audits. The MHRA will provide best practice guidance on bias, breaking it into three challenges:
- Performance across populations and in different real-world conditions (with a standards framework to identify, measure, manage and mitigate bias).
- Ensuring data is contextualised to avoid perpetuating inequalities or poorer performance in sub-populations.
- Ensuring that AIaMD meets the needs of the communities in which it is deployed.
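As a concrete illustration of the first challenge, measuring performance across sub-populations, the sketch below computes per-group sensitivity (true-positive rate) from labelled predictions so that performance gaps between groups become visible. The function name and data layout are our own assumptions for illustration; they do not come from MHRA guidance:

```python
def sensitivity_by_group(records):
    """Per-subgroup sensitivity (true-positive rate).

    Each record is (group, true_label, predicted_label), where label
    1 means the positive (e.g. disease-present) class. A large gap in
    sensitivity between groups is one simple, auditable signal of bias.
    """
    stats = {}
    for group, y_true, y_pred in records:
        counts = stats.setdefault(group, [0, 0])  # [true positives, false negatives]
        if y_true == 1:
            counts[0 if y_pred == 1 else 1] += 1
    return {
        g: tp / (tp + fn) if (tp + fn) else None
        for g, (tp, fn) in stats.items()
    }

# Hypothetical evaluation records for two demographic groups.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0),
]
rates = sensitivity_by_group(records)
print(rates)  # group A detects 2 of 3 positives; group B only 1 of 3
```

A real audit would extend this to other metrics (specificity, calibration) and test whether observed gaps are statistically significant, but the principle of stratifying every headline metric by sub-population is the same.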
The MHRA will consider statistical and machine learning methods to detect, measure and correct for bias in datasets. They will look at existing approaches such as the Synthetic Minority Over-sampling Technique (SMOTE) and Adaptive Synthetic Sampling (ADASYN), which rebalance imbalanced datasets by generating synthetic minority-class samples, improving the detection of rare events.
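To make the rebalancing idea concrete, here is a minimal, illustrative sketch of SMOTE-style oversampling in pure Python: new minority-class points are synthesised by interpolating between an existing minority sample and one of its nearest neighbours. This is our own simplification, not an MHRA-endorsed method; a production system would use a maintained implementation such as the one in the imbalanced-learn library:

```python
import random

def smote_like_oversample(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic points by interpolating between each
    chosen minority sample and one of its k nearest neighbours
    (the core idea behind SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of `base` by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(base, nb)))
    return synthetic

# Three minority-class samples in a 2-D feature space (hypothetical data).
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
new_points = smote_like_oversample(minority, n_new=4)
print(len(new_points))  # 4 synthetic samples, each on a segment between real ones
```

Because each synthetic point lies on a line segment between two real minority samples, the technique enlarges the minority class without simply duplicating records, which is why it can improve a model's sensitivity to rare events.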
An example of MHRA guidance on SaMD is that published on 3 February 2025 on mental health apps. This explains three concepts: when a mental health app might be considered a medical device, how to define the intended purpose of a digital mental health technology, and how risk classification applies to mental health applications.
The MHRA plans to produce guidance on the management of risks of unsupported devices being in use after the manufacturer has withdrawn them from the market.
Finally, the processes and requirements around the management of change will be streamlined for AIaMD.
Post-market requirements
The MHRA has already published legislation on post-market surveillance and will analyse data to generate safety signals for SaMD. However, they note that, to date, there has been little reporting of adverse events for SaMD. The MHRA published updated guidance in January 2025 on what might constitute an adverse event for SaMD, with the stated aim of encouraging more reporting.