
Using GenAI as a Co-Pilot for Regulatory Intelligence

Feature Article

GenAI is on the precipice of breaking new ground in regulatory intelligence, making strategists more efficient by streamlining regulatory research and submission processes.

Venkatraman “Bala” Balasubramanian, PhD, Senior VP and Global Head of HLS, Orion Innovation

Large Language Models (LLMs) are the foundation behind GenAI and became the topic of the year in 2023. In the healthcare and life sciences sectors, GenAI has been a game changer, advancing new medicines and ushering in a new renaissance in the pharmaceutical industry. While traditional AI was used to speed the development of the lifesaving COVID-19 vaccine,1 GenAI is having an impact on many other areas across the R&D and commercialization value chains. It is now on the precipice of breaking new ground in regulatory intelligence. A McKinsey report2 suggests that GenAI could unlock potential value of about 2.6% to 4.5% of revenues ($60 billion to $110 billion) annually in the pharmaceutical and medical-product industry.

Historically, regulatory strategists and submission managers kept abreast of global regulatory guidelines and changes by searching health authority websites or scanning subscription databases. These professionals spend significant time and effort researching and summarizing regulatory guidelines and changes in order to define regulatory pathways and strategies and to prepare submissions that comply with health authority regulations. Can GenAI alleviate these challenges by reducing manual effort and by automating and streamlining regulatory research and submission processes, making these professionals more productive and efficient?

Quest for answers

Our team explored the current state of LLMs for regulatory use cases. We focused on supporting regulatory intelligence needs through Orion’s RegIntel Chatbot, which uses GenAI to provide an interactive interface for querying global health regulations for decision support. Our initiative aimed to identify potential productivity improvements in the search and retrieval of regulatory guidelines to speed up filing and submission processes.

We gathered 100 regulatory guidelines and position papers from eight health authorities around the globe and ingested them into Orion’s RegIntel Chatbot for assessment, restricting the chatbot to these 100 documents in its storage repository and cutting off access to external sources and websites. We then posed 100 questions and evaluated the responses for accuracy on a six-point scale. Examples of the questions included: What are the requirements for registering a new drug application in Singapore? List the FDA’s criteria for clinical trial protocol design to encourage diversity in clinical trials. Create a detailed checklist for licensing healthcare facilities in Bahrain.
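
For illustration, the sketch below shows one way a retrieval-restricted setup like this can be wired together. It is not Orion’s RegIntel implementation: the corpus path, the TF-IDF retrieval step, and the generate_answer() stub are assumptions, and the stub would need to be connected to an actual LLM endpoint.

```python
# Minimal sketch of retrieval-restricted question answering over a fixed corpus.
# Illustrative only: the corpus path, the TF-IDF retrieval, and the
# generate_answer() stub are assumptions, not Orion's RegIntel implementation.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# 1. Load only the curated documents (e.g., the 100 guidelines and position papers).
corpus_dir = Path("regulatory_corpus")  # hypothetical local repository
docs = {p.name: p.read_text(encoding="utf-8") for p in sorted(corpus_dir.glob("*.txt"))}
doc_names = list(docs)

# 2. Index the corpus so every answer is grounded in these documents only.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform([docs[name] for name in doc_names])

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the names of the k corpus documents most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix).ravel()
    return [doc_names[i] for i in scores.argsort()[::-1][:k]]

def generate_answer(prompt: str) -> str:
    """Placeholder: connect this to whichever LLM your organization has approved."""
    raise NotImplementedError("wire this call to an LLM endpoint")

def answer(question: str) -> str:
    """Build a prompt grounded only in retrieved corpus text, then query the LLM."""
    context = "\n\n".join(docs[name] for name in retrieve(question))
    prompt = (
        "Answer using ONLY the regulatory excerpts below. "
        "If the answer is not present, say that no response is available.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return generate_answer(prompt)

# Example query from the assessment:
# answer("What are the requirements for registering a new drug application in Singapore?")
```

In practice, embedding-based retrieval typically replaces the TF-IDF step; the essential design point is that the prompt is built only from the closed document set, which keeps answers grounded in the ingested guidelines.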

Positive results

Approximately 77% of responses were deemed accurate or closely aligned with the original source documents, a level of accuracy that translates into a substantial productivity improvement. Approximately 21% of responses were entirely off base. Less than 2% of the questions received no response at all, an outcome preferable to generating “hallucinations.” A hallucination occurs when a GenAI model presents inaccurate information as if it were correct. Hallucinations are often caused by limitations or biases in training data and algorithms and can produce wrong or harmful content.
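
As a rough illustration of how graded results like these can be tallied, the sketch below assumes each response has already been scored by a reviewer; the mapping of six-point grades onto the three reported categories is an assumption, not the rubric used in our assessment.

```python
# Illustrative tally of reviewer-graded responses. The grade bands below are an
# assumed mapping onto the reported categories, not the actual six-point rubric.
from collections import Counter

# Question ID -> reviewer grade on a six-point scale (0 = no response returned).
reviewer_grades = {"Q001": 6, "Q002": 5, "Q003": 1, "Q004": 0}  # sample data only

def categorize(grade: int) -> str:
    if grade == 0:
        return "no response"
    if grade >= 4:  # assumed band for "accurate or closely aligned"
        return "accurate or closely aligned"
    return "off base"

counts = Counter(categorize(g) for g in reviewer_grades.values())
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n / total:.0%}")
```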

While 77% accuracy delivers a significant productivity gain, it is crucial to keep a human subject matter expert in the loop to review the generated content for accuracy and relevance to the underlying business need. Validating responses becomes challenging when dealing with millions of documents and with variation in the responses themselves: regulatory professionals need to be aware that responses can vary even across consecutive runs of the same query. As more enterprises adopt GenAI for regulatory intelligence and other purposes, ensuring explainability, referenceability, and repeatability becomes a priority, and the effort involved can offset some of the efficiency gains in search, retrieval, and summarization.
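
One lightweight way to surface that run-to-run variation is to re-issue the same query several times and flag diverging answers for priority review by a subject matter expert. The sketch below is a hypothetical illustration: the similarity threshold and the ask() callable (any function that maps a question to a model response) are assumptions, not part of our assessment.

```python
# Hedged sketch of a repeatability check: re-run the same query and flag
# divergent responses for priority SME review. The 0.9 similarity threshold
# and the ask() callable are illustrative assumptions.
from difflib import SequenceMatcher
from typing import Callable

def repeatability_check(ask: Callable[[str], str], question: str,
                        runs: int = 3, threshold: float = 0.9) -> dict:
    responses = [ask(question) for _ in range(runs)]
    # Compare each later response against the first; low ratios signal divergence.
    ratios = [SequenceMatcher(None, responses[0], r).ratio() for r in responses[1:]]
    consistent = all(r >= threshold for r in ratios)
    return {
        "responses": responses,
        "consistent": consistent,
        # A human expert still reviews everything; this flag only prioritizes the queue.
        "priority_review": not consistent,
    }

# Example (using the answer() function from the earlier sketch):
# repeatability_check(answer, "List the FDA's criteria for encouraging diversity in clinical trials.")
```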

Co-piloting with GenAI

Our GenAI experiment generated largely accurate responses to natural language queries over a limited set of regulatory guidance documents from health authorities across the globe. However, based on our observations around explainability, repeatability, and referenceability, it is clear the industry cannot rely solely on LLM output, whether summaries or answers to queries, for regulatory intelligence. Instead, LLMs can serve as co-pilots that assist in the search, retrieval, and summarization of regulatory guidelines, with human oversight to validate and refine the generated content. As GenAI takes flight in the healthcare, life sciences, and pharma industries, the role of LLMs as co-pilots becomes paramount. Striking a balance between automation and human expertise will be crucial to leveraging the full potential of GenAI in regulatory intelligence.

Venkatraman “Bala” Balasubramanian, PhD, Senior VP and Global Head of HLS, Orion Innovation

References

  1. https://www.pfizer.com/news/articles/how_a_novel_incubation_sandbox_helped_speed_up_data_analysis_in_pfizer_s_covid_19_vaccine_trial
  2. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier