
US Pharma and Biotech Summit 2024: When FDA Needs to Regulate AI

Feature Article

Tala Fakhouri spoke at the conference about FDA’s approach to determining when the agency even needs to regulate the use of AI in the pharma industry.

US Pharma and Biotech Summit 2024, New York City

FDA is working on how it’s going to regulate AI.

The Financial Times hosted the US Pharma and Biotech Summit on May 16 in New York City. During the conference, Tala Fakhouri, associate director for policy analysis at FDA, participated in a session focused on how the agency is handling AI. According to Fakhouri, this is no easy task, as the technology cuts across areas that FDA does and doesn’t regulate.

As such, the biggest issue FDA is weighing is when it actually needs to make rules about the use of AI.

“We actually started tracking submissions that come into the agency with AI and machine learning components in 2016,” Fakhouri said. “And today, we've seen over 300 submissions that have come in for drug approvals, and I'm just talking about drug development. This is different from AI-enabled devices that have AI and machine learning components, and they traverse the spectrum. They are in areas that we regulate and in areas that we don't necessarily regulate. But in discovery, most of the uses are in clinical research. And we're actually very excited about the submissions that we're getting.”

When it comes to drug discovery, the main question with AI is where in the process it is implemented. If the technology is used early on to find new molecules, that’s not something FDA necessarily needs to be concerned with. Once it starts being used to assess, analyze, or organize clinical data, however, FDA may need to get involved.

“We don't regulate linear regression, we don't regulate logistic regression,” she explained. “And that's not how we would address the use of these technologies in drug development. We want to make sure that the use is credible, and that we can trust the data that is coming out of these models. But we're open to a variety of types of technologies that come in. And we're actually, again, very excited to walk through some of the spectrum of R&D, starting maybe at the simpler, earliest stages of drug discovery.”

She continued, saying that if AI is used prior to any serious testing, the resulting molecules must still go through the same toxicology and safety studies.

“Last year, we published a landscape analysis describing the submissions that we've received. And we know that we've received submissions where the sponsor is telling us that they've used AI in discovery. They're excited to give us this information. But it's not necessarily something that we would look at, because, like you said, eventually that molecule, that drug will enter the traditional testing pathways.”

As for situations where FDA would regulate the use of AI and machine learning technology, Fakhouri discussed the conversations that members of the agency are having about how the rules will be written. According to her, FDA is focused on several questions.

“What are the questions that we should think about when we think about data that is used to train machine learning or AI models?” she asked. “The second is related to model performance, and this trade-off with explainability. We also get asked that question a lot: does FDA require that all models be explainable? I mean, generally AI is not explainable. It depends, again, on how you're using the technology. We would emphasize transparency over issues of explainability. We understand that there's this trade-off; in fact, a lot of unexplainable models are significantly better in performance than others.”
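
The trade-off Fakhouri describes can be made concrete with a minimal, hypothetical sketch, not drawn from any FDA submission or method: a transparent logistic regression, whose learned coefficients can be read directly, is compared against a black-box random forest on synthetic data with a nonlinear class boundary, where the less explainable model typically scores higher.

    # Hypothetical sketch of the explainability/performance trade-off.
    # Synthetic data only; illustrative, not an FDA-endorsed approach.
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    # Synthetic data with a nonlinear boundary a linear model can't capture
    X, y = make_moons(n_samples=2000, noise=0.3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Explainable model: each feature's effect is a single readable coefficient
    linear = LogisticRegression().fit(X_train, y_train)

    # Black-box model: usually higher accuracy here, but no simple equation to inspect
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    print("logistic regression accuracy:", accuracy_score(y_test, linear.predict(X_test)))
    print("random forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))
    print("logistic coefficients (inspectable):", linear.coef_[0])

On data like this, the forest typically wins on raw accuracy while the regression stays fully auditable, which is the tension behind FDA’s emphasis on transparency over strict explainability.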

Fakhouri continued, “The third issue that comes up frequently when we talk to our colleagues is related to governance and oversight, specifically the language that we use to define terms related to AI. This is something that I think the entire field is struggling with, because traditionally the people doing drug development and drug approvals are in more traditional fields, like statistics, epidemiology, or clinicians, or pharmacists. And now, enter data scientists, who speak a very different language, a lot of them with a background in engineering.”

FDA is set to release guidance on AI use later this year. The agency has not yet committed to a specific date, but the guidance is expected by the end of the year, likely in the fall or early winter.
