
FDA AI Program: AI/ML Medical Device Research




The FDA’s AI Program in the Center for Devices and Radiological Health conducts regulatory science research to ensure patient access to safe and effective medical devices using AI/ML. AI technologies are transforming healthcare by providing diagnostic, therapeutic, and prognostic recommendations based on vast amounts of data. The program focuses on evaluating the safety and effectiveness of AI-based medical devices, addressing several regulatory challenges such as:

    • Lack of methods to enhance AI algorithm training when labeled training and test data are limited
    • Lack of methods to analyze training and test data to understand, measure, and minimize bias of
      AI-enabled devices
    • Lack of metrics for performance estimation, reference standards, and uncertainty of AI devices
    • Lack of methods to evaluate the safety and effectiveness of continuously learning AI algorithms
    • Lack of methods to evaluate the safety and effectiveness of emerging clinical applications of AI-enabled
      medical devices
    • Lack of methods for post-market monitoring of AI devices

 

What does this AI research hope to accomplish?

Several ongoing research activities under the FDA’s AI program aim to develop robust test methods and evaluation techniques for novel AI algorithms in both premarket and real-world settings. These activities include research in the key areas listed below:

1. Addressing the Limitations of Medical Data in AI:

The research under this topic explores the use of synthetic data to supplement patient datasets for developing AI models in healthcare. Because real patient data can be difficult to obtain, synthetic data offers a safer and more efficient way to generate labeled examples. One such project is REALYSM: Regulatory Evaluation of Artificial Intelligence using Physics Simulation.
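The appeal of simulation-based data is that the ground-truth label comes for free. As a toy illustration of that principle (this is not REALYSM's method, and every name and parameter below is hypothetical), consider generating 1-D "images" in which a simulated lesion is injected at a known location:

```python
import random

def synthesize_example(rng, lesion_prob=0.5, size=32, noise=0.2):
    """Generate one synthetic 1-D signal with a known ground-truth label.

    A 'lesion' is simulated as a bump of extra intensity at a random
    location; because we control the simulation, the label is exact.
    """
    signal = [rng.gauss(0.0, noise) for _ in range(size)]
    has_lesion = rng.random() < lesion_prob
    if has_lesion:
        center = rng.randrange(4, size - 4)
        for i in range(center - 3, center + 4):
            signal[i] += 1.0  # simulated lesion intensity
    return signal, int(has_lesion)

def synthesize_dataset(n, seed=0):
    """Build n labeled examples with a fixed seed for reproducibility."""
    rng = random.Random(seed)
    return [synthesize_example(rng) for _ in range(n)]

dataset = synthesize_dataset(100)  # 100 (signal, label) pairs
```

Real physics simulations (of imaging hardware, tissue, and acquisition protocols) are vastly more sophisticated, but the regulatory question they raise is the same: how well do models trained on simulated examples transfer to real patients?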

2. Identifying and Measuring Artificial Intelligence (AI) Bias for Enhancing Health Equity:

The goal of this regulatory science research is to understand and measure bias and to improve assessment of AI model generalizability. In the Artificial Intelligence Program, bias is defined as a systematic difference in treatment of certain objects, people, or groups in comparison to others, where treatment is any kind of action, including perception, observation, representation, prediction, or decision. A project addressing these issues is Unsupervised Deep Clustering for Subgroup Identification within Medical Image Datasets.
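A basic building block of bias measurement is simply comparing model performance across subgroups. The sketch below (a minimal illustration, not an FDA-endorsed metric) computes per-subgroup accuracy and the worst-case gap between subgroups:

```python
def subgroup_accuracy(y_true, y_pred, groups):
    """Per-subgroup accuracy and the largest accuracy gap between subgroups.

    A large gap is one simple signal that a model may treat some
    subgroups systematically worse than others.
    """
    stats = {}
    for t, p, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (t == p), total + 1)
    acc = {g: correct / total for g, (correct, total) in stats.items()}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy example with two subgroups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
acc, gap = subgroup_accuracy(y_true, y_pred, groups)  # A: 0.75, B: 0.50
```

In practice the hard part is the one this research targets: the relevant subgroups are often unknown in advance, which is why unsupervised clustering is used to discover them.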

3. Evaluation Methods for Artificial Intelligence (AI)-Enabled Medical Devices: Performance Assessment and Uncertainty Quantification:

This regulatory science research aims to help device developers, reviewers, and other stakeholders determine and use least burdensome metrics for appropriate evaluation of AI-enabled medical devices. The research addresses challenges such as variability in defining reference standards, lack of data, and random effects in machine learning that affect device performance assessment. The goal is to provide validated uncertainty outputs that improve decision making by clinicians and benefit patients and public health.
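One widely used way to attach uncertainty to a performance estimate is the percentile bootstrap: resample cases with replacement and report the spread of the metric across resamples. This is a generic statistical sketch, not a method prescribed by the FDA, and the names and thresholds below are illustrative:

```python
import random

def accuracy(y_true, y_pred):
    """Fraction of cases where the prediction matches the reference."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def bootstrap_ci(metric, y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for any case-level metric.

    Resamples cases with replacement and returns the (alpha/2,
    1 - alpha/2) percentiles of the metric across resamples.
    """
    rng = random.Random(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(metric([y_true[i] for i in idx],
                             [y_pred[i] for i in idx]))
    scores.sort()
    return (scores[int(n_boot * alpha / 2)],
            scores[int(n_boot * (1 - alpha / 2)) - 1])

# Toy example: a point estimate plus its 95% bootstrap interval.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] * 5
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1] * 5
point = accuracy(y_true, y_pred)  # 0.8
lo, hi = bootstrap_ci(accuracy, y_true, y_pred)
```

Reporting the interval rather than the bare point estimate makes the effect of a small test set visible: a device that scores 0.80 on 50 cases carries much wider uncertainty than one scoring 0.80 on 5,000.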

4. Performance Evaluation Methods for Evolving AI-Enabled Medical Devices:

This regulatory science research aims to develop methods for evaluating model updates to AI/ML-enabled devices with a Predetermined Change Control Plan (PCCP). On March 30, 2023, the FDA issued a draft guidance on PCCPs. The guidance addressed issues with reusing the same test dataset to evaluate AI model updates and identified knowledge gaps in evaluating devices with PCCPs. The research outcomes can help device manufacturers include a plan in FDA submissions for evolving devices within controlled boundaries while ensuring safety and effectiveness. Further technical analysis is needed to define a least burdensome path to market for devices with PCCPs, and future work should focus on developing methods to safely reuse evaluation datasets and on closing the remaining knowledge gaps in evaluating devices with PCCPs.
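The core idea of "evolving within controlled boundaries" can be pictured as a predeclared acceptance rule that every model update must pass before deployment. The check below is a deliberately simplified sketch of that idea (the criteria, names, and thresholds are invented for illustration; an actual PCCP specifies its own modifications and verification protocol):

```python
def update_within_pccp(baseline, updated, *, floor=0.85, max_regression=0.02):
    """Accept a model update only if it meets a predeclared performance
    floor AND does not regress more than the allowed margin versus the
    currently cleared baseline. Both thresholds would be fixed in
    advance as part of the change control plan.
    """
    return updated >= floor and (baseline - updated) <= max_regression

# Baseline accuracy 0.90: a small dip passes, larger drops do not.
ok = update_within_pccp(0.90, 0.89)        # True  (0.89 >= floor, dip 0.01)
too_low = update_within_pccp(0.90, 0.84)   # False (below the 0.85 floor)
regressed = update_within_pccp(0.90, 0.87) # False (dip 0.03 > 0.02 margin)
```

The open research problem the section describes is orthogonal to the rule itself: if every update is scored on the same held-out test set, repeated evaluation gradually leaks information about that set, so methods for safely reusing evaluation data are needed.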

5. Regulatory Evaluation of New Artificial Intelligence (AI) Uses for Improving and Automating Medical Practices:

The FDA’s Center for Devices and Radiological Health has a regulatory approach for AI-enabled devices, but new types of AI may require novel assessment paradigms. Different applications of AI models have varying regulatory implications: devices for prognosis, treatment response prediction, and other purposes require different assessment metrics. Combining data sources in AI devices requires research on data harmonization, and natural language processing and large language models in medical devices raise new evaluation challenges. Several projects are ongoing to address these challenges, such as Assessment of Video-based Detection AI for Endoscopy and Multi-omics Prediction of Metastatic Breast Cancer Progression and Drug Response.

6. Monitoring the Performance of AI-Enabled Medical Devices:

The goal of this regulatory science research is to develop methods and practical tools that detect changes to the inputs of AI-enabled medical devices, monitor the performance of their outputs, and understand the causes of performance variations. Changes in data collection systems, protocols, and patient groups can impact the performance of AI models, and exposure to unfamiliar data during deployment can lead to unexpected results. Tools for monitoring and auditing data and outputs are crucial for ensuring the quality of AI-enabled medical devices. Several FDA-sponsored projects are ongoing to address these challenges.
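Detecting "changes to the inputs" in production often starts with comparing the distribution of incoming data against a reference sample from development. One common drift statistic is the Population Stability Index (PSI); the version below is a minimal stdlib sketch, not an FDA-specified tool, and the 0.5-standard-deviation shift in the example is invented for illustration:

```python
import math
import random

def psi(expected, observed, bins=10, eps=1e-6):
    """Population Stability Index between a reference sample and a live
    sample of a scalar input feature. 0 means identical histograms;
    larger values indicate drift (rule of thumb: > 0.25 is substantial).
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)  # clamp overflow
            counts[max(i, 0)] += 1                    # clamp underflow
        # Smooth with eps so empty bins don't produce log(0).
        return [(c + eps) / (len(xs) + bins * eps) for c in counts]

    e, o = hist(expected), hist(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

# Toy example: the live data has drifted upward by half a standard deviation.
rng = random.Random(1)
reference = [rng.gauss(0.0, 1.0) for _ in range(1000)]
live = [rng.gauss(0.5, 1.0) for _ in range(1000)]
drift_score = psi(reference, live)  # clearly above psi(reference, reference)
```

In a deployed device, a statistic like this would run per input feature on a schedule, with alert thresholds chosen during validation, so that protocol or population shifts are flagged before they silently degrade output quality.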

You can read more about the FDA’s AI research here.

______________________________________________

If you’d like to discuss the FDA’s latest research on AI/ML-enabled medical devices with our regulatory consultants, reach out to us here.

Learn how MedEnvoy can assist you: