While the AI story continues to unfold across the pharma industry, one aspect may be causing more issues than solutions. Dave Rusher, CCO at Aravo, joined Pharmaceutical Executive to discuss the usage of unauthorized AI tools and the risks that may be impacting pharma companies already. He also explains how certain types of LLM models may be better suited for the industry.
Pharmaceutical Executive: What is shadow AI and how is it impacting pharma companies?
Dave Rusher: Shadow AI is the unauthorized use of AI tools by employees at a company as they try to support and do their day jobs. While we're seeing that across all industries, there's heightened concern when it comes to pharmaceutical organizations due to the nature of the information they may have.
That could be intellectual property about the drugs being produced, or patient and clinical trial data involving personal health information. The concern is that employees using AI solutions outside the scope of what their companies have authorized may unintentionally leak that type of information into the public realm, where it can serve as training data for public large language models.
The challenge with unauthorized use is that employees don't necessarily know which data is in bounds to share in order to produce legitimate findings. The gap comes in when the engines are left to make those judgments themselves, which can result in hallucinations.
This happens because the engines don't have the full data set, which they won't find in the public realm. If employees are using those tools and that gap exists, it clouds the data. With private LLMs, there's more comfort and confidence in the observations and recommendations they provide.
PE: Can broad LLMs and smaller, focused LLMs coexist?
Rusher: I think they'll coalesce. It's already happening: there are very specific versions of certain LLMs for pharmaceutical research, which require a different type of processing, scrutiny, and control.
Those could continue to evolve, and vendors could continue to provide private LLMs for pharma companies to use at their own discretion. I can see them evolving further in that direction to resolve the concerns over shadow AI.
Any new technology disruption carries some risk, and companies need to understand what that risk is and educate their employees on the dos and don'ts. Employees need tools that help them do their jobs better.
PE: What governance resets should pharma companies take in response to AI?
Rusher: We see that more heavily in certain regions than others. The notion of hard compliance or regulatory governance comes and goes in the United States. In Europe, there's much more of a regulatory enforcement approach, with many more regulations coming out of the EU that take a variety of approaches.
I can see ongoing governance and regulatory guidance influencing AI. Certain industries, including pharmaceuticals, are more likely to see that happen; they're heavily regulated because of the type of information they process and the products they produce.