Are the technologies and assumptions at the heart of healthcare AI trustworthy?
The vocabulary, methodologies and culture of AI are spreading quickly in healthcare. Each month, we see new use cases in drug discovery and clinical development, collection of real world evidence, drug safety monitoring, treatment planning, insurance claims analysis, natural language apps aimed at health-conscious consumers, and more.
The pace of progress in each of these areas is spellbinding. But are the technologies and assumptions at the heart of healthcare AI trustworthy? And which groups or individuals should be grappling with the perplexities of trust?
Recently, while moderating a lunch panel on artificial intelligence at the Digital Medicine & Medtech Showcase event in San Francisco, I realized there are at least three distinct challenges researchers and marketers face in healthcare AI.
First, we must somehow ensure the evidentiary AI systems we devise are saying what we think they’re saying. Second, we must earn patients’ confidence as we race forward with new applications. Third, and not least, those of us who spend our lives exploring this fast-evolving terrain must create an environment of shared trust, so we can collaborate and co-create without allowing IP to become an obstruction.
Building trustworthy tech
On the first challenge, I was struck by an anecdote from a fellow panelist, Dan Riskin, CEO of Verantos. He and some business partners were recently interrogating an AI system that can draw unexpected conclusions by examining a retinal scan. Dan didn’t reveal what, exactly, except to say that some of the machine’s perceptions were frightening and “had nothing to do with the eye.” Researchers asked why the machine discerned such things. The answer had to do with a deep learning system comprising 50 hidden layers, not accessible to human researchers because, in effect, we’re not smart enough.
Oh…kay, said Dan and his friends. But not entirely okay. “Asking why of an AI is harder than asking a person, but it’s important,” Dan explained. There are ways to do it, and we’re going to have to get very good at it.
Earning trust of patients
The second challenge emerged from survey data my own company, Syneos Health, will publish very soon. We interviewed about 1,000 patients and caregivers in the EU and the U.S., asking detailed questions about their comfort with AI in various health-related contexts. Many patients, unsurprisingly, were anxious the new systems might expose their personal information to hackers. The responses also suggested patients who are most comfortable with consumer technology today are the most likely to embrace next-generation AI routines.
But there was one thing we couldn’t have predicted. Asked who could be trusted to develop the AI systems patients will soon encounter, neither tech giants nor biopharma companies nor payers scored particularly high. It turns out patients want their physicians to play an active role. Our report documents the parameters of patient trust, and healthcare AI startups and their financial backers would do well to respect those wishes.
Trusting One Another
Finally, there’s the matter of how healthcare AI researchers, marketers and evangelists all play together. Naturally, the field eventually will need consensus on standards of data, frameworks, models, algorithms, etc. But we also must work together to sidestep wasteful and redundant work and expedite our goals.
During the panel, Ed Mingle, Celgene’s executive director for global safety, put his finger on the problem. As an industry, Ed said, “We don’t share well.” And this failure undercuts not just competitive drug development, but vast swaths of administrative work where collaborative AI could unshackle human ingenuity, shrink development timelines, and improve healthcare for everyone.
Ed pins his hopes on natural language generation (NLG) as a path to radical efficiency in business processes. Like all biopharma, he says, Celgene produces mammoth written reports practically on a weekly cycle. NLG could unburden the human creators of these volumes. Today, each company is working in isolation with its own technology partners, inventing and reinventing processes in perpetuity. Working as one, we could crack the NLG code, get documentation written, and move on to the business of saving lives. “We have to work together more, share more, and share more smartly,” says Ed.
Trust, in other words, begins right where we stand.
AJ Triano is senior vice president of engagement strategy at GSW, a Syneos Health company.