Four Challenges for AI in the Life Sciences

Article

Pharmaceutical Executive

December 1, 2018, Volume 38, Issue 12

Objective considerations when gauging investment.

Artificial intelligence (AI) has become ubiquitous across industries, including the life sciences, and is often billed as the technology needed to forge ahead with innovation. Yet AI is also attracting some concerns, particularly around job losses, the ethics of AI, and, more broadly, how successful it really is. In a recent survey, we found that 69% of companies are using AI, machine learning, deep learning, and chatbots, yet only a fifth (21%) of those that had adopted AI felt their projects were providing meaningful outcomes.

As the dust settles after the initial rapid adoption of AI, more firms are now viewing their investments objectively, noting that not all results are positive. To ensure AI pays dividends, companies will need to overcome several barriers. 

 

1. Skills shortage hits AI hard

One of the biggest issues is a shortage of adequately qualified workers with the right technical skills. Life sciences companies typically don’t find it easy to attract “digital natives”: there is often a pay discrepancy between the science and technology industries, and pharma has not typically been recognized as leading from the front when it comes to digital innovation. More recently, pharma companies have also garnered a reputation for “hire and fire” within the tech community, as more people unfamiliar with the environment join the industry. Upskilling those already in the industry will be a key factor in improving AI outcomes, as will changing job-seekers’ impressions in order to attract skilled data scientists to roles in the life sciences.

2. Poor data affects outcomes 

Limited access to quality data is also affecting the results AI can currently yield. In AI, the “garbage in, garbage out” principle is critical when building algorithms, and even the most experienced technology companies can get it wrong. For example, in 2016, Microsoft’s AI-driven Twitter chatbot, Tay, went completely rogue when attempting to adopt the language patterns of its 18-24-year-old demographic. Tay was said to have found herself “in the wrong crowd,” and while this example likely didn’t result in physical harm to anyone, it highlights that when AI is making decisions about people’s health, the need for a correct, impartial response is paramount.
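To make the “garbage in, garbage out” point concrete, the sketch below shows the kind of basic data-quality screen that might be run before any model training begins. It is a minimal illustration only, assuming a hypothetical tabular dataset of patient records; the field names, required columns, and example values are all invented.

```python
# Illustrative sketch only: a simple data-quality gate applied before model
# training. The dataset, column names, and required fields are hypothetical.
import pandas as pd

def quality_report(df: pd.DataFrame, required: list[str]) -> dict:
    """Summarize basic 'garbage in' signals: missing fields, nulls, duplicates."""
    return {
        "missing_columns": [c for c in required if c not in df.columns],
        "null_fraction": df.isna().mean().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "row_count": len(df),
    }

if __name__ == "__main__":
    records = pd.DataFrame({
        "patient_id": [1, 2, 2, 4],
        "age": [54, None, None, 47],
        "outcome": ["responder", "non-responder", "non-responder", None],
    })
    report = quality_report(records, required=["patient_id", "age", "sex", "outcome"])
    print(report)  # e.g. flags the missing 'sex' column, null values, and one duplicate row
```

A report like this does not fix poor data, but it makes quality problems visible before they are baked into an algorithm.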

3. Lack of data standards 

In addition to the challenge of accessing patient data, there are currently no industry-wide data standards. Such standards need to cover patient data in the broadest possible sense and from a wide range of sources, including mobile devices, wearables, and more. As a result, significant time and resources are required to integrate data into corporate systems and make it usable. Standardized data formats would tackle this issue but will require much greater collaboration between pharma and biotech organizations and data and technology firms. Guidelines that promote data sharing already exist, such as the FAIR principles (Findable, Accessible, Interoperable, Reusable), but their adoption needs to be further encouraged to help maximize the usability of data.
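As a simple illustration of the integration burden described above, the sketch below harmonizes heart-rate readings from two hypothetical wearable feeds that use different field names and timestamp formats into one common record. The vendor formats are invented for illustration; in practice, a shared, FAIR-aligned schema agreed across organizations would play this role.

```python
# Illustrative sketch only: mapping two invented wearable-data formats onto a
# single common observation record.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Observation:
    patient_id: str
    metric: str
    value: float
    unit: str
    recorded_at: datetime

def from_vendor_a(raw: dict) -> Observation:
    # Hypothetical vendor A payload: {"pid": ..., "hr_bpm": ..., "ts": epoch seconds}
    return Observation(
        patient_id=raw["pid"],
        metric="heart_rate",
        value=float(raw["hr_bpm"]),
        unit="bpm",
        recorded_at=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
    )

def from_vendor_b(raw: dict) -> Observation:
    # Hypothetical vendor B payload: {"subject": ..., "pulse": ..., "time": ISO 8601 string}
    return Observation(
        patient_id=raw["subject"],
        metric="heart_rate",
        value=float(raw["pulse"]),
        unit="bpm",
        recorded_at=datetime.fromisoformat(raw["time"]),
    )

if __name__ == "__main__":
    a = from_vendor_a({"pid": "P001", "hr_bpm": 72, "ts": 1543622400})
    b = from_vendor_b({"subject": "P002", "pulse": 68, "time": "2018-12-01T00:00:00+00:00"})
    print(a, b, sep="\n")  # both records now share one schema and unit
```

Without an agreed standard, every organization ends up writing and maintaining its own set of such mappings, which is exactly the duplicated effort that common formats would remove.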

4. Anxiety limits progress

The progress of AI has also been hindered by anxiety over change, such as concerns about the ethics of AI and employee worries over potential job losses. But fears of robots taking our jobs are misplaced; AI will augment researchers by taking on repetitive, time-consuming work, allowing them to be more creative and to pursue the paths that lead to more fruitful research.

On the other hand, reservations over how “biased” or “unethical” AI might be will need to be addressed. In clinical trials, for example, concerns have been raised that recruitment is not truly representative of the wider population. This matters because age, race, sex, genetic factors, other drugs being taken, and more can play a vital role in a person’s response to a drug or intervention. The diversity of clinical trial recruitment must be improved to ensure we are building AI algorithms that provide the best recommendations for all groups.
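One way to act on this concern is to audit a dataset’s demographic mix before it is used to train an algorithm. The sketch below is a minimal, hypothetical example: it compares the share of each group in a trial cohort against reference population shares and flags groups that fall short by more than a chosen tolerance. All group labels, shares, and thresholds are invented.

```python
# Illustrative sketch only: flagging under-represented groups in a hypothetical
# trial cohort relative to reference population shares.
from collections import Counter

def representation_gaps(cohort: list[str], reference: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose cohort share falls short of the reference share."""
    counts = Counter(cohort)
    total = len(cohort)
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            gaps[group] = round(expected - observed, 3)
    return gaps

if __name__ == "__main__":
    cohort = ["A"] * 72 + ["B"] * 20 + ["C"] * 8      # enrolled participants by group
    reference = {"A": 0.60, "B": 0.25, "C": 0.15}     # reference population shares
    print(representation_gaps(cohort, reference))      # flags group 'C', e.g. {'C': 0.07}
```

A check like this cannot make a cohort more diverse on its own, but it gives sponsors an early, quantified warning that a model trained on the data may serve some groups less well than others.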

A collaborative approach

Overcoming these barriers to progress in AI will require, first and foremost, a shift toward a collaborative mindset within the life sciences industry. Such a shift will be essential in ensuring that AI genuinely helps to boost innovation and delivers accurate, unbiased, and ethically derived results.

 

Steve Arlington, PhD, is President of the Pistoia Alliance

 
