Four Challenges Preventing AI Reaching Its Full Potential


AI has become ubiquitous in all industries, including life sciences. But it is also attracting some concerns – particularly around job losses, ethics, and, more broadly, how successful it really is, writes Steve Arlington.

AI has become ubiquitous in all industries, including life sciences, and is often billed as the technology needed to forge ahead with innovation. Yet AI is also attracting some concerns – particularly around job losses, the ethics of AI and, more broadly, how successful it really is. Although usage of AI is high – in a recent survey we found 69% of companies are using AI, machine learning, deep learning, and chatbots – only a fifth (21%) of those that have adopted AI felt their projects were providing meaningful outcomes.

As the dust settles after the initial rapid adoption of AI, more companies are now viewing their investments objectively and noting that not all results are positive. Take IBM’s Watson, for example: in the eyes of some, it has so far been unable to prove its worth in healthcare, and specifically in cancer treatment, because the data used in the system often came from only a small number of sources – it has even been reported to have suggested ‘unsafe’ treatments. To ensure AI pays dividends, companies will need to overcome several barriers. In this article, we will look at four of the main challenges preventing AI from reaching its full potential in life sciences.

1. Skills shortage hits AI hard

One of the biggest issues with the adoption of AI is a shortage of adequately qualified workers with the right technical skills. A recent survey from Ernst & Young found that 56% of senior AI professionals agreed the lack of talent and qualified AI workers was the single biggest barrier to AI projects. In addition, life sciences companies typically don’t find it easy to attract ‘digital natives’: there is often a pay discrepancy between the science and technology industries, and pharma has not typically been recognized as a sector leading from the front when it comes to digital innovation. More recently, pharma companies have also garnered a reputation for ‘hire and fire’ within the tech community, as more people unfamiliar with the environment join the industry. Upskilling those already in the industry will be a key factor in improving AI, as will changing job-seekers’ impressions to attract skilled data scientists to roles in life sciences.

2. Poor data affects outcomes

Limited access to quality data is also affecting the results AI can currently yield. In a recent survey, we found that 40% of respondents are looking to collaborate with a technology or data provider in the next 10 months – an encouraging finding, given that access to quality data is crucial to successful AI. In AI, the ‘garbage in, garbage out’ principle is critical when building algorithms, and even the most experienced technology companies can get it wrong. In 2016, for example, Microsoft’s AI-driven Twitter chatbot, Tay, went completely rogue and tweeted racist statements after attempting to mimic the language patterns of its 18-24-year-old target demographic. Tay was said to have found herself ‘in the wrong crowd’ – and while this example likely didn’t result in physical harm to anyone during its short run, it highlights that when AI is making decisions about people’s health, the need for correct, impartial responses is paramount.
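To make the ‘garbage in, garbage out’ point concrete, the sketch below shows the kind of basic quality checks a team might run before any model training. It is a minimal illustration only, assuming a tabular dataset held in a pandas DataFrame; the column names and values are hypothetical.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Summarise simple data-quality signals before training a model.

    Reports missing values, duplicate rows, and the class balance of the
    label column - none of which guarantees good data, but all of which
    flag obvious 'garbage in' problems early.
    """
    return {
        "rows": len(df),
        "missing_per_column": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "label_balance": df[label_col].value_counts(normalize=True).round(3).to_dict(),
    }

# Hypothetical toy data, purely for illustration.
df = pd.DataFrame({
    "age": [54, 61, None, 47, 61],
    "biomarker": [1.2, 0.8, 1.1, None, 0.8],
    "response": ["yes", "no", "yes", "yes", "no"],
})

print(basic_quality_report(df, label_col="response"))
```

Checks like these are no substitute for sourcing better data, but they make data problems visible before they are baked into an algorithm.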

3. Lack of data standards

As well as the challenge of accessing patient data, there are currently no industry-wide data standards. Such standards need to cover patient data in the broadest possible sense and from a wide range of sources – including mobile devices, wearables and more – drawn from healthy populations and not just those who see themselves as patients. As a result, significant time and resources are currently required to integrate data into corporate systems and make it usable. Standardized data formats would tackle this issue, but will require much greater collaboration between pharma and biotech organizations and data and technology firms. There are already guidelines that promote data sharing, such as the FAIR principles (Findable, Accessible, Interoperable, Reusable), but these need to be further encouraged to help maximize the usability of data. A survey we carried out found a quarter of respondents are aware of FAIR but haven’t yet implemented FAIR-driven policies. While awareness is important, this finding illustrates the extent of the work needed to ensure the principles are followed across the whole sector.
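As a rough illustration of what standardized, FAIR-aligned data description could look like in practice, here is a minimal sketch of a machine-readable dataset record. The field names and values are illustrative assumptions, not an existing industry standard.

```python
# A hypothetical, FAIR-flavoured metadata record for a dataset.
# Field names and values are illustrative, not a formal standard.
dataset_record = {
    # Findable: a persistent identifier plus descriptive metadata
    "identifier": "doi:10.xxxx/example-dataset",   # placeholder identifier
    "title": "Wearable-derived heart-rate data, healthy volunteers",
    "keywords": ["wearables", "heart rate", "healthy population"],
    # Accessible: how the data can be retrieved, and under what terms
    "access_url": "https://example.org/datasets/hr-wearables",
    "license": "CC-BY-4.0",
    # Interoperable: open formats and shared vocabularies
    "format": "CSV",
    "vocabulary": "illustrative controlled vocabulary",
    # Reusable: provenance and conditions of use
    "provenance": "Collected 2019-2020 via consumer wrist-worn devices",
    "consent": "Broad research consent obtained",
}

REQUIRED_FIELDS = {"identifier", "access_url", "license", "format", "provenance"}

def looks_fair_enough(record: dict) -> bool:
    """Crude check that minimum FAIR-style fields are present."""
    return REQUIRED_FIELDS.issubset(record)

print(looks_fair_enough(dataset_record))  # True for this example
```

Even a simple convention like this, applied consistently across organizations, would cut the integration effort described above; the hard part is agreeing on it collaboratively.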

4. Anxiety over change holds back progress

The progress of AI has also been hindered by anxiety over change – such as the ethics of AI and employee concerns over potential job losses – with a recent survey finding 67% of workers are worried about machines taking work away from people. But these fears over robots taking our jobs are misplaced: AI will augment researchers by taking on repetitive, time-consuming work, allowing them to be more creative and to pursue new avenues of fruitful research.

On the other hand, reservations over how ‘biased’ or ‘unethical’ AI might be will need to be addressed, especially within the life sciences and healthcare industries, where it directly impacts patients. In clinical trials, for example, worries have been expressed that recruitment is not truly representative of demographics. This is a problem given that age, race, sex, genetic factors, other drugs being taken, and more, can play a vital role in a person’s response to a drug or intervention. A report published in Nature found that although the number of countries submitting clinical trial data to the FDA has almost doubled since the 1990s, there has not been an equivalent increase in the diversity of the clinical trial population: in 1997, 92% of participants were white, and in 2014 the figure was still 86%. Adult males also dominate the clinical trial population, representing about two thirds of participants. The diversity of clinical trial recruitment must be improved to ensure we are building AI algorithms that will provide the best recommendations for all groups.
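One simple way to make the representativeness concern tangible is to compare enrolment proportions against a reference population before training on trial data. The sketch below does exactly that; the reference proportions are invented for illustration, and only the trial shares loosely echo the figures cited above.

```python
# Toy comparison of trial enrolment vs. a reference population.
# Reference proportions are illustrative, not real census data.
trial = {"white": 0.86, "non_white": 0.14, "male": 0.67, "female": 0.33}
reference = {"white": 0.60, "non_white": 0.40, "male": 0.49, "female": 0.51}

def representation_ratios(trial: dict, reference: dict) -> dict:
    """Ratio of trial share to reference share for each group.

    A value well below 1.0 flags a group that would be under-represented
    in any algorithm trained on the trial data.
    """
    return {group: round(trial[group] / reference[group], 2) for group in reference}

for group, ratio in representation_ratios(trial, reference).items():
    flag = "under-represented" if ratio < 0.8 else "ok"
    print(f"{group}: {ratio} ({flag})")
```

A check like this does not fix recruitment, but it makes the gap visible and quantifiable at the point where an algorithm is being built.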

A collaborative approach is necessary for the implementation of AI to be successful

Overcoming these barriers to progressing AI will require, first and foremost, a shift toward a collaborative mindset within the life sciences industry. To help tackle these challenges, The Pistoia Alliance has created the Centre of Excellence (CoE) for AI in Life Sciences. It aims to educate researchers on the role of AI in the ‘lab of the future’, while also facilitating collaborations by allowing companies to share their expertise and knowledge. Increased collaboration will allow the industry to find areas where AI can flourish and where it can augment researchers to ultimately improve patient care. AI adoption will continue to rise, algorithms will have growing influence in a wide range of industries – from banking to the legal sector – and AI will become increasingly intertwined with our everyday lives. Collaboration will be essential in ensuring that AI genuinely helps to boost innovation and delivers accurate, unbiased and ethically derived results.

Dr. Steve Arlington is President of The Pistoia Alliance.
