Future-Proofing with AI in Mind: Q&A with Laura Lotfi

Key Takeaways

  • Future-proofing data for AI involves standardizing data practices, investing in data lineage tools, and fostering collaboration between technology and science.
  • AI is unlikely to fully replace human regulatory decision-making due to its limitations in ethical judgment and contextual awareness.

With AI expected to play a significant role in drug approvals, how can companies build data sets that are ready for AI?

Laura Lotfi, MSc.
Director of product management,
digital projects, discovery, and
safety assessment services
Charles River Laboratories

As more and more organizations in and around pharma announce plans to implement AI, questions are being raised about what the future of the industry looks like. While many are embracing a potential future in which AI solves a wide variety of problems and makes multiple processes more efficient, others are concerned about whether the technology can feasibly be used as widely as it's expected to be. Laura Lotfi of Charles River Laboratories spoke with Pharmaceutical Executive about AI and how she believes companies should begin future-proofing their data.

How can pharma companies future-proof their data for AI?

Pharmaceutical Executive: How can R&D leaders future-proof their science for a potential AI-driven regulatory landscape?
Laura Lotfi: To future-proof science for an AI-driven regulatory landscape, R&D leaders must adopt a dual lens of scientific rigor and digital maturity. At CROs like Charles River, where we support multiple sponsors across diverse therapeutic areas and development phases, it’s essential to create robust data and process frameworks that can withstand both human and machine scrutiny.

Key strategies include:

  • Standardizing data capture and metadata practices to ensure interpretability and traceability for machine-driven analysis.
  • Investing in data lineage and provenance tools, which enable auditability and transparency—crucial for regulatory confidence in AI-generated insights (a sketch of both of these ideas follows this list).
  • Building a bridge between technology and science by creating a shared space where scientific domain expertise and engineering insight circulate continuously, boosting trust in AI and accelerating real-world impact.
  • Partnering early with regulatory agencies, not just to understand evolving AI guidelines, but to help shape them through collaborative pilots or real-world evidence (RWE) initiatives.
  • Embedding explainability into AI models, especially in clinical or safety contexts. This ensures decisions aren't just accurate, but understandable—a core requirement in regulatory science.
  • Embedding information security and privacy-by-design principles into AI and data platforms, and ensuring compliance with evolving frameworks such as GxP, HIPAA, GDPR, and AI risk management guidance from NIST and FDA.
  • And perhaps most importantly, building interdisciplinary teams that blend data science, regulatory expertise, and deep therapeutic insight—a model CROs like ours are increasingly structured to deliver.
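
To make the first two strategies concrete, here is a minimal sketch of what provenance-aware data capture might look like. It assumes a deliberately simplified record model; the field names and required-metadata set are illustrative assumptions, not an actual Charles River or GxP schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical required provenance fields; real GxP systems define
    # these per SOP. The names here are illustrative only.
    REQUIRED_METADATA = {"source_system", "instrument_id", "sop_version", "operator"}

    @dataclass
    class StudyRecord:
        study_id: str
        analyte: str
        value: float
        units: str
        metadata: dict = field(default_factory=dict)
        captured_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def missing_provenance(record: StudyRecord) -> list[str]:
        """Return missing metadata keys; an empty list means the record is traceable."""
        return sorted(REQUIRED_METADATA - record.metadata.keys())

    record = StudyRecord(
        study_id="STUDY-001", analyte="ALT", value=42.0, units="U/L",
        metadata={"source_system": "LIMS-A", "instrument_id": "ANLZ-07"},
    )
    gaps = missing_provenance(record)
    if gaps:
        # Quarantine records whose lineage cannot be reconstructed, rather
        # than letting them flow into model training or analysis.
        print(f"Record quarantined; missing provenance fields: {gaps}")

The point is less the code than the policy it encodes: data that cannot explain where it came from never enters the AI pipeline.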

PE: Reports have come out about the FDA's AI tools struggling with hallucinations. How likely is it that AI can fully take over the regulatory landscape?
Lotfi: It’s highly unlikely that AI will fully take over the regulatory landscape—at least not in the foreseeable future. Regulatory decision-making requires not only data interpretation but also ethical judgment, contextual awareness, and stakeholder accountability—areas where AI still has significant limitations.

The issue of AI hallucinations—generating convincing but false information—underscores the need for hybrid models in which AI augments and assists human reviewers rather than replacing them. We see the best success when AI is applied to narrow, well-defined tasks (like automating data cleaning or signal detection), while final decisions are left to experienced scientists and regulatory professionals.

In fact, CROs can play a pivotal role here—by piloting AI-driven processes in controlled, auditable environments that generate evidence for regulatory confidence. Each pilot should incorporate human-in-the-loop checkpoints and capture model explanations for every decision. This de-risks innovation while building a roadmap for AI adoption.
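
As a rough sketch of such a checkpoint, the routine below never lets a model disposition a finding on its own: every AI-generated finding is logged with its explanation and routed to a human review queue, with model confidence only setting priority. The data structure, field names, and threshold are assumptions for illustration, not a description of any specific regulatory workflow.

    from dataclasses import dataclass

    @dataclass
    class AIFinding:
        record_id: str
        label: str          # e.g., "potential adverse-event signal"
        confidence: float   # model score in [0, 1]
        explanation: str    # captured rationale, e.g., top contributing features

    def route_finding(finding: AIFinding, audit_log: list[dict]) -> str:
        """Log every finding with its explanation and send it to a human reviewer."""
        audit_log.append({
            "record": finding.record_id,
            "label": finding.label,
            "confidence": finding.confidence,
            "explanation": finding.explanation,
        })
        # No auto-approval path exists: confidence only affects queue priority,
        # so the model proposes and the human reviewer disposes.
        return "urgent-review" if finding.confidence >= 0.9 else "standard-review"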

PE: What does it mean for data to be AI-ready?
Lotfi: AI-ready data goes far beyond being digital. It means data is:

  • Structured and standardized (e.g., following CDISC formats for clinical data),
  • High quality and complete, with minimal gaps or ambiguity,
  • Context-rich, meaning metadata and provenance are well documented,
  • Interoperable, enabling linkage across different systems and sources (like labs, imaging, EHRs),
  • And importantly, ethically sourced and privacy-compliant, especially in patient-centric applications.

From our perspective, we often inherit data from multiple clients or systems—so we invest heavily in harmonization pipelines, data wrangling automation, and annotation tools to make disparate datasets usable by AI models. We also increasingly use synthetic data and virtual control groups to extend the value of limited datasets. The industry also increasingly values intentional dataset generation for AI modeling, which represents another area of opportunity for CROs.
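
As an illustration of what a first-pass readiness check against those criteria might look like, the sketch below summarizes structural and completeness gaps in a lab dataset using SDTM-style column names. It is an assumption-laden toy: real CDISC conformance is checked with dedicated validation tooling (e.g., Pinnacle 21), not a script like this.

    import pandas as pd

    # Illustrative SDTM-style lab columns: subject ID, test code, result, units.
    REQUIRED_COLUMNS = {"USUBJID", "LBTESTCD", "LBORRES", "LBORRESU"}

    def readiness_report(df: pd.DataFrame) -> dict:
        """Summarize gaps that would block AI use: missing columns, nulls, duplicates."""
        missing_cols = sorted(REQUIRED_COLUMNS - set(df.columns))
        present = [c for c in REQUIRED_COLUMNS if c in df.columns]
        completeness = 1.0 - df[present].isna().mean().mean() if present else 0.0
        return {
            "missing_columns": missing_cols,
            "completeness": round(float(completeness), 3),
            "duplicate_rows": int(df.duplicated().sum()),
        }

    df = pd.DataFrame({
        "USUBJID": ["S1", "S2", "S2"],
        "LBTESTCD": ["ALT", "ALT", "ALT"],
        "LBORRES": [42.0, None, 38.5],
        "LBORRESU": ["U/L", "U/L", "U/L"],
    })
    print(readiness_report(df))  # flags the missing LBORRES value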

PE: What is the risk of not preparing for AI?
Lotfi: In my opinion, the biggest risk is irrelevance. Organizations that fail to prepare for AI will quickly find themselves outpaced—not only in speed and efficiency, but in their ability to meet future regulatory expectations around traceability, reproducibility, and digital transparency.

More specifically, risks include:

  • Lower-quality therapeutic products.
  • Regulatory lag: Falling behind as agencies begin to expect AI-readiness in submissions.
  • Operational inefficiencies: Continuing to rely on manual, siloed systems while competitors scale faster with AI.
  • Missed scientific insights: Overlooking complex patterns or early signals that AI could uncover — especially in omics, imaging, or real-world data.
  • Increased costs: Delays and redundancies from having to reformat or reprocess data for AI compatibility later.
  • Talent drain: Top talent gravitates to organizations that actively leverage their expertise. Without an AI roadmap, organizations face brain drain just as competition for these profiles peaks.

At Charles River, we’re already seeing sponsors select partners based on digital enablement, including AI-readiness. So, preparing for AI isn't just a technical upgrade—it’s a strategic imperative for competitive positioning and regulatory trust.
