Marcel Botha discusses where the technology is most effective in the FDA approval process.
One of the goals of the current administration is to implement the use of AI in FDA’s approval process. Proponents of the policy argue that the technology can increase the speed at which FDA can review materials and approve new drugs. Critics, however, worry both that the technology will be used to replace existing workers at the agency and that its results will be unreliable. Marcel Botha, CEO of 10XBeta, spoke with Pharmaceutical Executive about the technology and the risks and benefits of its use at FDA.
Pharmaceutical Executive: One of the concerns people have when they hear FDA is using AI in the drug approval process is that many people don't have a proper understanding of the technology’s capabilities. What role can AI realistically play in FDA’s approval process?
Marcel Botha: In order to have a successful AI strategy within the FDA, the organization must start with the training data set for the model being used to build the LLM specific to drug discovery. What's the framework that it gives this large language model to reference best past practice, and what are the guardrails being put in place to ensure it's not just repeating the status quo from the last 70 years?
We must make sure that we are, in fact, not just replicating past practice but improving on it. I'll give an example: FDA is now banning red dyes as a food additive. My kids freaked out, and now they read every package label to see if there's still red dye in it. They choose for themselves not to consume it. But I think that for too long, FDA has let a lot of borderline additives go through unchecked and made it our problem to decide whether they're safe or not.
So, if our AI models and language models are based on past bad decisions, what value will they bring in the future?
We can curate that data input. What we need to make sure is that we build a system that is, in fact, making a credible assessment based on more structured data that is in line with current policy thinking. One of the challenges that came out of the initial release of the Elsa model for FDA is that it was prone to hallucination. By that, I mean it was making stuff up. We can't have a science-based AI query system make stuff up. What you save in review time, you lose to the human in the loop that's now required to check all the AI's work, and that's exactly the opposite of what we want.