
All in on AI is Not the Solution in Life Sciences and Health Care

Commentary

Although life science organizations stand to benefit from artificial intelligence, it’s important to integrate it with caution because there is no “one-size-fits-all” approach.

Dan Milczarski, Executive Vice President of Process and Technology, CQfluency.


From enhancing productivity to transforming customer service, the expansion of artificial intelligence (AI) into various industries has been nothing short of revolutionary. As reported by Harvard Business Review, a recent survey found that business leaders plan to allocate an astounding 6.5% of their functional budget to generative AI in 2024. Clearly, many are looking to jump on the AI bandwagon for fear of being left behind.

However, although life science organizations stand to benefit from AI, it’s important to integrate it with caution: there is no “one-size-fits-all” approach to communicating and translating complex health information and scientific jargon.

Unfortunately, studies show that as AI translation improves, cultural bias can worsen, because machine learning models absorb the subconscious biases of their developers and of the public data they consume. Without accurate, relatable translations, there is a risk of errors being introduced by teams managing clinical trials across diverse site locations, or of confusion among patients trying to decipher complicated health information.

For example, a recent study from the University of Manchester (UK) demonstrates how someone being evaluated for a mental health condition could misunderstand a cognitive test because AI didn’t translate it well. The researchers’ proposed automated translation tool was imperfect: translations contained spelling errors, and 25% of questions required cultural adaptation to make them understandable to native speakers in daily use. When questions aren’t fully understood, results can be skewed or, worse, inaccurate.

AI Inherits Bias from Human Programmers

AI is often assumed to be neutral because it isn’t human, but that is not the case. Many AI models are trained on huge amounts of public data that contain stereotypes, biases, and prejudices, and if that data is a few years old, it may not reflect recent societal efforts to be more inclusive. Problems include:

  • Sampling Bias: Some groups are overrepresented or underrepresented in the training data.
  • Labeling Bias: Humans labeling or classifying data can perpetuate their own biases in the labels they apply.
  • Algorithmic Discrimination: Bias embedded in the optimization processes applied during training, meaning the software itself is biased.
  • Contextual Bias: A lack of understanding of broader social and cultural contexts or nuances.
  • Cyclical Bias (or self-reinforcing bias): Occurs when an AI system deployed in a real-world setting makes decisions that shape the data it later collects.
  • Gender Bias: MIT researchers reviewed gender bias in machine learning and found that AI can suffer from gender biases that harm society. For example, words like “anxious” and “depressed” have been identified as feminine. Translating gender-neutral language can also be problematic, as AI often has to guess genders; this can result in stereotyping, with professions like “doctor” translated as male and “nurse” as female.
  • Racial Bias: Many biased algorithms require racial or ethnic minorities to be more ill than white patients to receive the same diagnosis, treatment, or resources. In response, an expert panel convened by the Agency for Healthcare Research and Quality and the National Institute on Minority Health and Health Disparities recently proposed guidelines to mitigate bias in healthcare AI.

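The gender-bias pattern above, in which gendered pronouns get silently attached to professions in translated output, can be caught with even a crude automated check. The sketch below is a minimal Python illustration; the profession and pronoun word lists and the sentence-level heuristic are assumptions for demonstration, not a production bias detector.

```python
import re

# Illustrative word lists; a real bias audit would use far larger,
# linguist-curated resources.
PROFESSIONS = {"doctor", "nurse", "engineer", "teacher"}
GENDERED_PRONOUNS = {"he", "she", "him", "her", "his", "hers"}

def flag_gendered_professions(text: str) -> list[tuple[str, str]]:
    """Return (profession, pronoun) pairs that co-occur in a sentence."""
    flags = []
    for sentence in re.split(r"[.!?]", text):
        tokens = set(re.findall(r"[a-z']+", sentence.lower()))
        for prof in sorted(tokens & PROFESSIONS):
            for pron in sorted(tokens & GENDERED_PRONOUNS):
                flags.append((prof, pron))
    return flags

print(flag_gendered_professions(
    "The doctor said he was busy. The nurse said she would help."))
# → [('doctor', 'he'), ('nurse', 'she')]
```

A linguist would still review each flag; the check only surfaces candidates for human judgment.
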
AI Under the Regulatory Umbrella

Regulatory agencies increasingly recognize the potential benefits of AI in improving health outcomes, which is why life science companies need to stay on top of guidance. The Department of Health and Human Services (HHS) has been at the forefront of most AI regulatory activities, but other government agencies, such as the FDA, are also becoming involved due to AI’s impact in healthcare.

The Bipartisan Policy Center issued a brief detailing the current regulatory landscape for AI. It provides insights into how regulatory agencies are approaching AI regulations, and what guidelines or recommendations they are considering.

According to a recent article from the law firm ArentFox Schiff, there are five main areas in which rigorous AI oversight is needed in the life sciences industry:

  1. Drug discovery and development
  2. Precision medicine and personalized treatments
  3. Medical imaging and diagnostics
  4. Disease prediction and prevention
  5. Clinical trials and research

Given that nearly 1 in 5 Americans speak a language other than English at home, it’s clear that incorporating language considerations into the clinical and technology workflow needs to start before the patient seeks care.

What Is the Fix? Human Oversight

AI isn't going away, which is why it is vital to customize constantly evolving AI applications and innovations into tailored, effective technologies that reflect a life science organization's regulatory and organizational framework. This is essential to promoting health literacy and health equity: accurate, culturally appropriate materials help patients make informed decisions about their healthcare (and insurance) in partnership with a provider. For example, AI is useful for automating repetitive tasks to improve efficiency, but retaining the human touch, in the form of actual translators, is essential to producing culturally sensitive and accurate translated materials.

There are four key areas that businesses can focus on to reduce AI bias:

1. Human Expertise: Trained linguists play a vital role in post-editing AI translations to ensure accuracy and cultural sensitivity. They can also identify and fix biases in the data, retrain data sets with revised translations, and annotate or tag data to help AI adapt its results based on factors such as culture, race, and gender. Over time, this continuous improvement process can lead to fewer fixes and more accurate translations.

2. Automated Quality Checks: These checks can include linguistic analysis tools that compare AI translations to human translations, as well as algorithms that detect patterns of bias in the data. By integrating automated quality checks into the translation process, businesses can ensure that AI translations meet the highest standards of accuracy and cultural sensitivity.

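As a concrete illustration of the kind of check described above, the Python sketch below scores an AI translation against a human reference with a simple similarity ratio and verifies that required glossary terms survived. The glossary and the 0.8 threshold are illustrative assumptions; a production pipeline would use proper translation-quality metrics and a curated, domain-specific termbase.

```python
from difflib import SequenceMatcher

# Illustrative glossary of terms that must survive translation verbatim.
GLOSSARY = {"informed consent", "adverse event"}

def qa_check(ai_translation: str, human_reference: str,
             min_ratio: float = 0.8) -> dict:
    """Score an AI translation against a human reference and check terminology."""
    ratio = SequenceMatcher(None, ai_translation.lower(),
                            human_reference.lower()).ratio()
    missing = sorted(t for t in GLOSSARY if t not in ai_translation.lower())
    return {
        "similarity": round(ratio, 2),
        "missing_terms": missing,
        "passed": ratio >= min_ratio and not missing,
    }
```

A failed check routes the text back to a human linguist rather than shipping it to patients.
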
3. Custom-Designed Tools: By customizing AI tools to incorporate industry-specific terminology and cultural nuances, businesses can reduce bias in AI while improving the delivery of accurate communication. Training LLM datasets with these elements (or prompting a GenAI review tool to take them into consideration) allows you to be more efficient without sacrificing your intended message.

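One hedged sketch of the prompting approach just described: inject a required termbase and cultural guidance into the instructions given to a GenAI review tool. The glossary, notes, and prompt template below are assumptions for demonstration only, and no real API is called.

```python
# Hypothetical termbase and cultural guidance for an English-to-Spanish review.
GLOSSARY = {"adverse event": "evento adverso", "dosage": "dosis"}
CULTURAL_NOTES = ["Use formal address (usted) in patient-facing Spanish text."]

def build_review_prompt(source_text: str, draft_translation: str) -> str:
    """Assemble review instructions that embed terminology and cultural rules."""
    terms = "\n".join(f"- {en} -> {es}" for en, es in sorted(GLOSSARY.items()))
    notes = "\n".join(f"- {note}" for note in CULTURAL_NOTES)
    return (
        "Review the draft translation for accuracy and cultural appropriateness.\n"
        f"Required terminology:\n{terms}\n"
        f"Cultural guidance:\n{notes}\n"
        f"Source: {source_text}\n"
        f"Draft: {draft_translation}\n"
    )
```

The same termbase can feed both this prompt and the automated quality checks, so human reviewers, rules-based checks, and GenAI review all enforce one vocabulary.
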
4. Risk Management Policies: Following the Guidance on the Application of ISO 14971 to Artificial Intelligence and Machine Learning or the Actions to Address Risks and Opportunities elements of ISO 42001, organizations can implement effective AI strategies while taking into consideration the risks of bias.

“Artificial Intelligence (AI) can bring benefits to healthcare - improved clinical outcomes, improved efficiencies, and improvement in the management of healthcare itself. However, the implementation of new technologies such as AI can also present risks that could jeopardize patient health and safety, increase inequalities and inefficiencies, undermine trust in healthcare, and adversely impact the management of healthcare.” - Guidance on the Application of ISO 14971 to Artificial Intelligence and Machine Learning

While AI does hold promise in changing healthcare, it’s critical to approach its implementation thoughtfully and strategically. By combining the efficiency of AI with human oversight, businesses can realize the full potential of AI in improving healthcare outcomes and promoting health equity.

About the Author

Dan Milczarski, Executive Vice President of Process and Technology, CQfluency.
