
Delivering Actual Value: Q&A with Sandy Tammisetty
Key Takeaways
- AI performs best by triaging review and surfacing early risk signals in large clinical datasets, rather than attempting end-to-end automation of regulated decisions.
- Clinical R&D gains include faster identification of data quality defects, cross-site anomalies, and operational risk across EDC, ePRO, and vendor data flows.
As the pharma industry continues to experiment with AI implementation, certain areas are showing more promise than others.
Sandy Tammisetty, VP of the Veeva Services Practice Group at Conexus Solutions, spoke with Pharmaceutical Executive about where AI implementation is working for the pharma industry and where it does not appear to be a good fit. According to her, research shows that the technology provides the best results when used in a decision-support role.
Pharmaceutical Executive: Where is AI delivering actual value in clinical data management?
Sandy Tammisetty: AI is delivering real value today in highly focused areas of clinical data management where the scale is large, patterns matter, and human oversight remains essential. The strongest use cases are not about automation for its own sake, but about helping teams identify risks and signals more quickly in large, complex data environments.
In Clinical R&D, this means using AI to identify data quality issues, unusual patterns across sites or subjects, and operational risks in EDC, ePRO, and vendor data flows. In Quality, it’s being applied to identify trends across deviations, CAPAs, and change records, and to surface early signals of compliance risk.
In both areas, the value comes from directing expert attention to the right places at the right time. AI is most effective when it sharpens human judgment — not when it tries to replace it.
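As a concrete illustration of that decision-support pattern (not an example drawn from the interview), here is a minimal Python sketch: it flags sites whose data-query rate deviates sharply from the cross-site norm and routes them to a human reviewer rather than acting on them automatically. The function name, metric, site IDs, and rates are all hypothetical.

```python
from statistics import mean, stdev

# Toy per-site metric: data-query rate per 100 CRF pages, as might be
# pulled from an EDC system. Site IDs and values are illustrative only.
query_rates = {
    "SITE-001": 2.1, "SITE-002": 1.8, "SITE-003": 2.4,
    "SITE-004": 9.7,  # unusually high -- a potential data quality signal
    "SITE-005": 2.0, "SITE-006": 1.6,
}

def flag_outlier_sites(rates: dict[str, float], z_threshold: float = 2.0) -> list[str]:
    """Return sites whose metric deviates strongly from the cross-site norm.

    The function only *surfaces* candidates for attention; a qualified
    data manager still reviews each flag and owns the final decision.
    """
    mu = mean(rates.values())
    sigma = stdev(rates.values())
    return [site for site, r in rates.items() if abs(r - mu) / sigma > z_threshold]

if __name__ == "__main__":
    for site in flag_outlier_sites(query_rates):
        print(f"{site}: flagged for human review")
```

On the toy data above, only SITE-004 crosses the threshold, which is the point of the pattern: the model narrows the field, and the human judgment stays where the interview says it must.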
PE: Where is AI going wrong?
Tammisetty: In regulated life sciences environments, AI most often goes wrong not because the models fail, but because governance fails. Problems arise when AI is deployed outside validated GxP systems, built on fragmented or poorly governed data, or introduced without clear ownership between IT, Quality, and the business.
Another common failure point is a lack of transparency. When AI outputs cannot be explained, traced, or defended in an inspection or audit, confidence erodes quickly. In highly regulated settings, trust in the process is as important as the technology's performance.
PE: What makes AI a tool best used in decision-support capacities?
Tammisetty: Clinical data management operates in inspection-driven environments where explainability, data lineage, and individual accountability are non-negotiable. AI can surface patterns, prioritize review, and accelerate analysis, but it cannot own regulatory accountability or defend decisions to inspectors.
That distinction matters. AI strengthens human decision-making by helping teams identify risks and signals earlier, but final judgment must remain with qualified professionals who can explain how the conclusions were reached. In regulated settings, human accountability is not optional—it is the foundation of compliance.
PE: Why is AI so often sold as an end-to-end solution, beyond where it is actually effective?
Tammisetty: AI is often marketed as a broad transformation solution because success stories from non-regulated industries are applied too freely to life sciences. The complexity of GxP environments is often underestimated, and organizations feel pressure to modernize quickly, creating unrealistic expectations about how rapidly and broadly AI can be deployed.
In practice, sustainable value derives from targeted, well-governed use cases that align with existing regulatory frameworks. When AI is positioned as an end-to-end solution, expectations outpace operational reality. That gap between promise and delivery is what causes programs to stall and credibility to suffer.
PE: How will issues with improper AI usage impact adoption?
Tammisetty: In regulated environments, improper use of AI primarily undermines trust. When teams experience outputs they cannot explain, insights they cannot defend during inspection, or rework caused by compliance findings, skepticism grows quickly.
The result is slower adoption, increased resistance from Quality and Compliance stakeholders, and higher governance barriers for future initiatives. Teams move from experimentation to a “prove it first” posture. In clinical data management, trust is the currency of adoption, and once it’s compromised, every future AI initiative faces a much higher bar.