Companies developing pharma or biotech products that use AI in any proximity to their clinical trials (whether in the design or administration of the trial, patient recruitment, devices or IVDs used in the trial, the creation and use of synthetic data, or data analysis) must keep in mind that the consequences of those uses are likely to be of interest to regulators, both with respect to the trial itself and the subjects in it, and with respect to the outputs used in applications for authorisation. While the AI Act is not applicable to certain research uses of AI, these exclusions provide little or no comfort for AI uses in clinical trials, where a myriad of regulations and guidance has potential application.
Data, data everywhere – can any of it be used?
AI can be an amazing tool for sorting and analysing clinical data, and even for sorting patients. The design of clinical trials and the creation and analysis of the data arising from them are all critical parts of product development, feeding into the regulatory process for authorisation of pharma/biotech products. For regulators to trust clinical trials in which AI is deployed, and the data arising from them, the following are required:
- A regulatory impact assessment of whether the AI/machine learning (ML) use is low or high risk, judged in the context of the trial and its impact on the regulatory process or on patients in the trial (not the same as the risk assessment under the AI Act). EMA examples of high-risk uses involving patients are assigning patients to treatment or making dosing decisions.
- Transparency with regulators on the use of AI. Regulators want to judge for themselves whether the use is appropriate and leads to conclusions that can be relied upon for any authorisation. Thus, in its reflection paper on the use of artificial intelligence in the medicinal product life cycle, the EMA considers that a use of AI in a clinical trial with high regulatory impact requires a comprehensive regulatory assessment, including disclosure of the full model architecture (frozen prior to database lock and unblinding), logs from model development, validation and testing, the training data, and a description of the data processing pipeline (a sketch of how such a disclosure package might be assembled appears after this list).
- Compliance with data standards such as:
(i) ICH E6 GCP chapter 3.16 on data and records and chapter 4.3 on computerised systems, which speak to the principles of data integrity (including physical integrity and coherence), quality control and validation, to ensure the completeness, accuracy and reliability of the clinical data generated in the trial
(ii) Draft ICH E6(R3), Annex 2, published on 20 November 2024, which includes GCP principles for specific aspects of clinical trials relevant to the use of AI models, such as the handling of real-world data relating to patient health status collected from sources outside clinical trials
(iii) ICH E9: statistical principles for clinical trials – Step 5.
- Thoughtful use of AI, for example when using large language models (LLMs) to support tasks and processes in the medicines regulatory system, as addressed in the EMA's guiding principles on the use of large language models in regulatory science and for medicines regulatory activities. This guidance includes general advice that could be applicable to any use of LLMs, such as:
(i) Avoiding the input of sensitive personal data or confidential IP into LLMs that are not locally deployed or controlled (see the screening sketch after this list)
(ii) Applying critical thinking to LLM outputs, checking for veracity, reliability and fairness before applying them in any regulatory documents.
The EMA does not want unreliable LLM outputs to be used to complete regulatory documentation. We presume the unstated fear is that such outputs might be both wrong and not obviously so to the regulator.
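To make the reflection paper's disclosure expectations more concrete, here is a minimal sketch (in Python) of how a sponsor might freeze model artefacts and assemble an auditable disclosure manifest before database lock. The EMA does not prescribe any format or tooling; the file names and manifest structure below are hypothetical, purely for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, so a frozen artefact can be verified later."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_disclosure_manifest(artefacts: dict[str, Path]) -> dict:
    """Record a hash and freeze timestamp for each artefact class named in the EMA
    reflection paper: model architecture/weights, development and validation logs,
    training data and the data processing pipeline description."""
    return {
        "frozen_at_utc": datetime.now(timezone.utc).isoformat(),  # must predate database lock and unblinding
        "artefacts": {
            name: {"path": str(path), "sha256": sha256_of(path)}
            for name, path in artefacts.items()
        },
    }

if __name__ == "__main__":
    # Hypothetical paths; a real trial would draw these from its own document store.
    manifest = build_disclosure_manifest({
        "model_architecture": Path("model/architecture.json"),
        "model_weights": Path("model/weights.bin"),
        "training_log": Path("logs/training.log"),
        "validation_report": Path("logs/validation_report.pdf"),
        "training_data_snapshot": Path("data/training_snapshot.parquet"),
        "pipeline_description": Path("docs/data_pipeline.md"),
    })
    Path("disclosure_manifest.json").write_text(json.dumps(manifest, indent=2))
```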
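Similarly, the guiding principles' advice against feeding sensitive personal data or confidential IP into externally hosted LLMs could be operationalised with a pre-submission guard along the following lines. The patterns and the wrapped client call are hypothetical placeholders; a real deployment would use a vetted de-identification tool tuned to the trial's own data formats, not a handful of regexes.

```python
import re

# Hypothetical patterns, for illustration only; real systems should rely on a
# validated de-identification library rather than ad hoc regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+\d{2}[\s\d]{8,}"),
    "patient_id": re.compile(r"\bPAT-\d{6}\b"),  # assumed in-house subject ID format
}

def screen_prompt(prompt: str) -> str:
    """Block a prompt from leaving local control if it appears to contain
    sensitive personal data or trial identifiers; otherwise pass it through."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if hits:
        raise ValueError(f"Prompt blocked before reaching the hosted LLM: matched {hits}")
    return prompt

# Usage: wrap whatever client actually calls the externally hosted model, e.g.
# response = hosted_llm_client.complete(screen_prompt(draft_text))
```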
Use of medical devices including an AI system
Pharma and biotech trials might make use of medical devices or in vitro diagnostic medical devices, which means that other legislation must be complied with in addition to the Clinical Trials Regulation (EU) 536/2014. Any use of a medical device in a pharma/biotech clinical trial in the EU/EEA is considered either 'placing on the market' or 'putting into service'. Where the device is not yet CE marked, the applicable regulation (2017/745 (MDR) or 2017/746 (IVDR)) requires at minimum a notification to the competent authority for a clinical or performance study, and in many cases an authorisation.
Where the device includes an AI system that is high-risk according to the definition in the AI Act (Regulation 2024/1689), it is unlikely that actual use on patients will fall within the AI Act's Article 2(6) exclusion for 'scientific research and development'. Instead, the device would need to additionally comply with the requirements for testing high-risk AI systems in real-world conditions in Articles 60 and 61 of the AI Act. See this article on the application of the AI Act and EU medical device regulation: Medical devices and the EU AI Act - how will two sets of regulations work together? The layering of these two not entirely aligned regimes adds complications which might only be resolved through guidance from the EU, which we hope will be drafted by those with a thorough understanding of the way in which clinical trials operate as well as a working knowledge of medical devices and in vitro diagnostic medical devices.
See our other articles dealing with data privacy in a clinical setting, including Purpose limitation and data minimisation: key considerations for AI training in the life sciences sector.