
Pharma AI Requires a New Approach to Governance


Artificial intelligence adoption by pharma companies in the UK demands a new, agile governance layer, write Tim Wright and Antony Bott.


 


The convergence of big data, cloud computing, artificial intelligence (AI) and its subsets, machine learning and deep learning, offers pharmaceutical companies the potential to realise many exciting benefits. You will have heard the well-worn saying: "Half the money I spend on advertising is wasted; the trouble is I don't know which half". With drug research, the wastage is more like 80-90%. Traditional drug discovery employs biochemical screening, scrutinising millions of natural and synthetically derived chemical compounds to identify molecules with drug-like properties. But the space of possibilities is so huge (a trillion trillion trillion trillion trillion possible compounds) that, despite technological advances, drug discovery has only become more expensive and protracted.

Now scientists hope to change the game by combining lessons from previous drug research with the vast amounts of experimental data already produced by the scientific community to drive AI-powered drug design. AI may enable better predictions, resulting in focused sets of compounds for screening during the early stages of drug discovery, or new uses for previously tested compounds in treating disease. This should help speed up drug discovery, reducing wasted research and clinical trial failures. The quicker a new drug can be brought to market (i.e. the shorter the time from initial filing to regulatory approval), the longer it stands to benefit from its patent, which in turn should help to stem the trend of rising drug prices. AI adoption also promises to improve product quality and make manufacturing processes more efficient.
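To make the idea of a focused screening set concrete, the sketch below is a minimal, purely illustrative example (not drawn from any particular company's pipeline) of how a model trained on previously screened compounds might rank an untested library so that only the highest-scoring candidates go forward to physical screening. The fingerprint features, labels and cut-off are all synthetic assumptions.

```python
# Minimal sketch: ranking candidate compounds with a learned activity model.
# All data here is synthetic; a real pipeline would use curated assay results
# and chemically meaningful descriptors (e.g. molecular fingerprints).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Historical screening data: 2,000 compounds, 128 binary fingerprint bits,
# with roughly 10% labelled "active" in past assays (illustrative numbers only).
X_known = rng.integers(0, 2, size=(2000, 128))
y_known = (rng.random(2000) < 0.10).astype(int)

# Train a model on the past results.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Score a library of untested compounds and keep only the top slice for
# wet-lab screening, rather than screening everything.
X_untested = rng.integers(0, 2, size=(10000, 128))
scores = model.predict_proba(X_untested)[:, 1]
top_candidates = np.argsort(scores)[::-1][:500]  # the "focused set"

print(f"Selected {len(top_candidates)} of {len(X_untested)} compounds for screening")
```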

AI – the future of pharma?


A recent report by the House of Lords’ AI Select Committee highlighted significant opportunities, with "the impact of artificial intelligence on … the healthcare system … likely to be profound", through more efficient research and development, the adoption of better methods of healthcare delivery, and more informed clinical decision-making. Patients will also be better able to manage their own health needs.

AI is not new, but we are only now starting to see its benefits. Although AI adoption has been slower in the pharma sector than in some other industries, there is already a wide range of use cases, notably in radiology and oncology diagnostics and in drug discovery and development, where computer models are being used to identify novel targets for cancer therapy. Other successful implementations include platforms and applications that deliver virtual patient coaching, digital surgery platforms, surgical robots, predictive medicine and treatment protocols, adverse event detection, regulatory reporting, and quality assurance. AI is being used in clinical trials to identify possible early-stage failures, and it is also making its way into supply chain operations and helping to optimise manufacturing processes, through new approaches such as continuous manufacturing, process analytics and advanced process control.

Data sharing – risks and benefits

AI and machine learning rely on access to a lot of data. BERG, a pharma start-up that has developed AI software to analyse the transformation of cells from healthy to cancerous, utilised data from the Human Genome Project (completed in 2003) as well as the more than 14 trillion data points available in a single tissue sample. Using this approach, BERG aims to develop new drugs that can return cells to a healthy, pre-cancerous state.

In a crowded market, drug companies are seeking to differentiate their drugs and treatment therapies, leading to a multitude of collaborations between big pharma, CROs, the large tech titans, government institutions, regulators, academia and a wide range of AI-based start-ups. The temptation to over-share datasets, without adequate rules of engagement and protocols in place first, can be strong. The Royal Free London NHS Foundation Trust’s 2015 partnership with DeepMind is a case in point.

DeepMind partnered with the Royal Free to develop an app to assist in the diagnosis of acute kidney injury. To aid its development, the Royal Free provided DeepMind with personal data from around 1.6 million patients. The Information Commissioner’s Office subsequently investigated, ruling that the Royal Free had failed to comply with the Data Protection Act 1998 when it provided the data to DeepMind. Although the app did not use artificial intelligence or deep learning techniques, DeepMind’s involvement highlighted a number of the potential issues involved in the use of patient data to develop AI.

The Data Protection Act 1998 has since been replaced by the General Data Protection Regulation (GDPR), and with it comes the risk of far greater fines and sanctions for getting it wrong. Data sharing requires a cautious approach, not least since government and regulators are struggling to catch up. Obstacles that still need to be addressed include data governance and the privacy of medical records, and the transparency (or lack thereof) of algorithms. In this context, where machine-to-machine communication is set to grow exponentially, the implementation of data exchange standards, such as those published by the International Organization for Standardization and the International Electrotechnical Commission, will become increasingly important. The Select Committee’s report recommends a universal code of ethics for AI, and calls for an appropriate legal and regulatory framework for AI developers and users: "maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data."
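One practical precaution before any dataset leaves the organisation is to strip direct identifiers and share only the fields a collaboration actually needs. The sketch below is a generic, hypothetical illustration of that kind of pseudonymisation and data-minimisation step; the field names, salt handling and retained columns are assumptions, and under the GDPR pseudonymised data remains personal data, so legal and governance sign-off is still required.

```python
# Minimal sketch: pseudonymising and minimising a patient extract before
# sharing it with a research partner. Field names are hypothetical.
import hashlib
import secrets

# In practice the salt must be generated once, held securely by the data
# controller, and never shared alongside the data it protects.
SALT = secrets.token_hex(16)

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()

def minimise(record: dict) -> dict:
    """Keep only the fields the collaboration has agreed it needs."""
    decade = (record["age"] // 10) * 10
    return {
        "subject_ref": pseudonymise(record["patient_number"]),
        "age_band": f"{decade}-{decade + 9}",           # coarsened, not exact age
        "creatinine_umol_l": record["creatinine_umol_l"],
    }

raw_record = {
    "patient_number": "9434765919",
    "name": "Jane Doe",              # dropped entirely before sharing
    "age": 67,
    "creatinine_umol_l": 142,
}

print(minimise(raw_record))
```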

A new approach to governance

Over the past decade or so, responding to the regulatory challenge, big pharma has implemented strong governance controls across the enterprise, with top-down management and decision-making that can make it difficult to reverse course. Because of the need, in many cases, to pool large datasets, or where collaborating with a start-up, success may well depend on an intensely collaborative, agile approach, enabling a much more rapid response to market pivots. Mirroring cross-industry collaborations in other sectors, such as the coming together of consortia of banks and technology providers to pilot, test, develop and implement blockchain use cases for the finance industry, collaborations between pharma and technology firms call for new approaches to the handling of issues such as data sharing, ownership of deliverables and other intellectual property, defining success, and the monetisation of outputs.

Procurement and legal functions will need to collaborate closely with their business teams to develop and adopt new governance layers specifically targeted at AI adoption (focused on areas such as legal, regulatory, information governance, intellectual property, and reputational risk); without such an approach, businesses risk uncovering issues too late in what is often a fast-moving process. Governance approvals will need to be flexed: current regimes are often most closely linked to spend thresholds that may not be triggered by the new approaches. Different nuances may need to be considered when sharing data, for example between R&D (the crown jewels) and manufacturing (operational processes).

Beneath the governance layer, a new contract toolkit is needed. Traditional frameworks such as ownership of intellectual property and risk transfer (i.e. liability and indemnity), tried and tested in binary relationships such as manufacturing services agreements, don’t necessarily lend themselves to commercial collaborations and partnerships. Agreements covering matters such as non-disclosure, collaboration, partnering, and data sharing will need to be adapted to manage the unique challenges that AI poses, and to reflect a new, more agile way of working and partnering with third parties.

Tim Wright is a Partner and Antony Bott is a Sourcing Consultant at law firm Pillsbury Winthrop Shaw Pittman LLP.

   
