Challenges and Opportunities with Artificial Intelligence and Machine Learning: a regulatory perspective
Paul H. Schuette (FDA)
Highlights
The article briefly summarizes FDA efforts to address the challenges and opportunities of using Artificial Intelligence and Machine Learning (AI/ML) within a risk-based regulatory framework.
FDA, together with other health authorities, has published Good Machine Learning Practices.
FDA has published a plan to collaboratively address AI/ML management and development.
1. Introduction
Artificial Intelligence and Machine Learning (AI/ML) have become topics of intense interest to the scientific community at large and to biostatistics in particular. The purpose of this article is to describe efforts of the Food and Drug Administration (FDA) in AI/ML as well as highlighting some of the opportunities and challenges that FDA is facing in this rapidly evolving field.
2. Growing Role of AI/ML
Liu et al. [1] describe exponential growth in AI/ML-related regulatory submissions of drug and biological products to FDA during 2016-2021, from 1 submission in 2016 to 132 submissions in 2021, with use cases ranging from drug discovery to postmarket surveillance. To facilitate discussion with stakeholders, in May 2023 the Center for Drug Evaluation and Research (CDER), in collaboration with the Center for Biologics Evaluation and Research (CBER) and the Center for Devices and Radiological Health (CDRH), including the Digital Health Center of Excellence (DHCoE), published a discussion paper [2] outlining the application of AI/ML across the medical product development process. The potential uses of AI/ML considered in [2] include drug discovery, nonclinical research, clinical research, postmarket safety surveillance, and advanced manufacturing. The paper also discusses best practices for using AI/ML and the potential use of verification and validation frameworks to assess model credibility. Overall, key areas of interest from [2] include:
human-led governance, accountability, and transparency;
quality, reliability, and representativeness of data; and
model development, performance, monitoring, and validation.
Best practices for using AI/ML will be considered below.
3. Good Machine Learning Practices (GMLP)
In October 2021, FDA, Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) identified 10 principles for GMLP, with an emphasis on devices [3]. These guiding principles are listed below.
Multi-Disciplinary Expertise Is Leveraged Throughout the Total Product Life Cycle.
Good Software Engineering and Security Practices Are Implemented.
Clinical Study Participants and Data Sets Are Representative of the Intended Patient Population.
Training Data Sets Are Independent of Test Sets.
Selected Reference Datasets Are Based Upon Best Available Methods.
Model Design Is Tailored to the Available Data and Reflects the Intended Use of the Device.
Focus Is Placed on the Performance of the Human-AI Team.
Testing Demonstrates Device Performance during Clinically Relevant Conditions.
Users Are Provided Clear, Essential Information.
Deployed Models Are Monitored for Performance and Re-training Risks Are Managed.
Most of these guiding principles are sufficiently general that one can replace “device” with “medical product.” The guiding principles, while not all-inclusive, form a framework for development. We may note, for example, that the need for representative data is listed in both [2] and [3] (principle #3). Representative data are manifestly important for a data-driven approach: if the data fail to represent the intended patient population, bias may enter the model and adversely affect its conclusions. We may also note that principle #4 would appear to exclude techniques such as K-fold cross-validation, described in [4].
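For readers unfamiliar with the technique, the essence of K-fold cross-validation can be sketched in a few lines of Python. The function name, fold count, and toy data below are illustrative only (not drawn from [3] or [4]): each fold holds out one block of observations as a test set and trains on the remainder, so every observation is used for testing exactly once.

```python
# Minimal K-fold cross-validation sketch (illustrative toy example).
# Within each fold, the test block and the training block are disjoint,
# but across folds every observation eventually appears in a test set.

def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) pairs for K-fold CV."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for fold in range(k):
        start = fold * fold_size
        # The last fold absorbs any remainder when n_samples % k != 0.
        end = n_samples if fold == k - 1 else start + fold_size
        test = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, test

for train, test in k_fold_indices(10, 5):
    assert not set(train) & set(test)  # disjoint within each fold
    print(f"train={train} test={test}")
```

Whether such fold-wise reuse of data satisfies a strict reading of principle #4 is precisely the kind of question the guiding principles leave open.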
4. Challenges and Opportunities
AI/ML methods have developed quickly, and implementation costs can be nontrivial. In the race to be first, software products may be launched that have not been fully tested or validated. The data used to train AI/ML models may be repurposed data or similar convenience samples, which may introduce unintended biases. Some of the ethical concerns regarding the use of AI/ML for regulatory purposes include incomplete or unrepresentative training data, algorithmic bias, privacy concerns, and adversarial machine learning. Regulators, sponsors, and AI/ML developers will need to work together to meaningfully address the ethical concerns.
AI/ML poses particular challenges for regulators. Ideally, AI/ML methods would possess desirable attributes such as traceability, transparency, reproducibility, robustness, veracity, and explainability. Unfortunately, the stochastic nature of many AI/ML algorithms, as well as the complexities of distributed computing environments, introduces inherent randomness, and consistently differentiating the underlying signal from random noise can be difficult. As noted in [5], “AI management requires a risk-based regulatory framework built on robust principles, standards, best practices, and state-of-the-art regulatory science tools that can be applied across AI applications and be tailored to the relevant medical product.”
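To make the reproducibility concern concrete, the sketch below (a hypothetical toy computation, not any particular AI/ML method) shows one standard mitigation: fixing the pseudo-random seed turns an otherwise stochastic procedure into a traceable, repeatable one.

```python
# Illustrative sketch: seeding the pseudo-random generator makes a
# stochastic procedure (here, a random subsample-and-average) reproducible.
import random

def noisy_estimate(data, seed):
    rng = random.Random(seed)       # dedicated, explicitly seeded generator
    sample = rng.sample(data, k=5)  # the stochastic step
    return sum(sample) / len(sample)

data = list(range(100))
run_1 = noisy_estimate(data, seed=42)
run_2 = noisy_estimate(data, seed=42)
assert run_1 == run_2  # identical seed, identical result
print(run_1)
```

Seeding addresses only the simplest source of randomness; nondeterminism arising from distributed or parallel execution is generally harder to control, which is part of the regulatory challenge noted above.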
In [5], the FDA outlines an ambitious plan to address AI/ML management and development by focusing on four key areas:
1) Fostering Collaboration to Safeguard Public Health
2) Advancing the Development of Regulatory Approaches that Support Innovation
3) Promoting the Development of Standards, Guidelines, Best Practices, and Tools for the Medical Product Life Cycle
4) Supporting Research Related to the Evaluation and Monitoring of AI Performance.
Achieving these laudable objectives will likely require time, resources, and effort. Some deliverables, such as guidance documents, are anticipated in the fall of 2024, while other initiatives, such as fostering collaboration and supporting research, may take significantly longer to come to fruition. Given the limited resources available to regulators, as well as fierce competition for a limited pool of talent with AI/ML development experience, regulators such as FDA will need to be strategic and creative in their approaches.
5. Conclusions
AI/ML has the potential to enable revolutionary changes in medical product development, ranging from discovery and development to manufacturing and post-market evaluation. Yet changes are likely to emerge incrementally and in piecemeal fashion. To be commercially viable, AI/ML will need to be cost effective, reliable, robust, and sustainable. Regulators will face challenges in evaluating whether both the data and the algorithms are fit for purpose for a given context of use. Federal policy requires that AI/ML be safe, secure, and trustworthy [6]. AI/ML holds great promise, but will require governance, transparency, and resources to implement successfully.
References
1. Liu Q, Huang R, Hsieh J, Zhu H, Tiwari M, Liu G, Jean D, ElZarrad MK, Fakhouri T, Berman S, Dunn B, Diamond MC, Huang SM. Landscape Analysis of the Application of Artificial Intelligence and Machine Learning in Regulatory Submissions for Drug Development From 2016 to 2021. Clin Pharmacol Ther. 2023 Apr;113(4):771-774. doi: 10.1002/cpt.2668.
2. Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products, https://www.fda.gov/science-research/science-and-research-special-topics/artificial-intelligence-and-machine-learning-aiml-drug-development.
3. Good Machine Learning Practice for Medical Device Development: Guiding Principles, https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles.
4. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edition, Springer, 2009.
5. Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together, https://www.fda.gov/science-research/science-and-research-special-topics/artificial-intelligence-and-medical-products.
6. Executive Order 14110, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.