Why Machine Learning's Potential in Pharma Relies on Its Transparency

Even as machine learning innovates at a rapid pace, proper protocols to ensure compliance and trustworthiness must not be overlooked.

Michelle Marlborough

Machine learning (ML) has the potential to revolutionize drug development and patient care, from accelerating clinical research to supporting proactive clinical decision-making. Because ML is relatively new, complex, and often misunderstood, it is commonly seen as a mysterious phenomenon—a "black box" spitting out conclusions without visibility into the data used to produce them. In reality, ML is neither magical nor abstract but a highly logical, data-driven technology. Yet that perception of mystery, much as it makes ML fascinating and powerful, can also be its Achilles' heel, breeding distrust among users in pharma who are accustomed to operating in a highly regulated environment where robust evidence is paramount.

In an effort to increase confidence and standardize the safe and ethical development of ML technologies, FDA, Health Canada, and the Medicines and Healthcare products Regulatory Agency (MHRA) recently introduced a set of guiding principles known as Good Machine Learning Practice (GMLP). While this is a promising first step toward advancing ML innovation and improving adoption, we need more than a set of general recommendations about what should be done. We need to pull back the curtain on the inner workings of algorithms to demonstrate that these guidelines were followed at each stage of an algorithm's development.

Holding digital and physical diagnostics to the same standard

In light of the rapid development of these technologies, it is more important than ever to ensure that ML algorithms are developed in a safe and ethical manner, and that the desired benefits and potential risks are clearly understood throughout the product life cycle. Data security and diversity, for example, are among the many factors that influence trust in ML. This includes the way personal data is captured, stored, and compliantly used, as well as whether the data an algorithm is fed is representative of the intended patient population. If clinicians are not confident that the technology is safe or can adequately address their patients' needs, they are highly unlikely to trust it and use it in their practice.
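
As a hedged illustration of the representativeness point, the sketch below compares the demographic makeup of a hypothetical training set against target proportions for the intended population and flags any group whose share deviates beyond a tolerance. The age bands, proportions, and tolerance are all invented for the example.

```python
# Minimal sketch: flag demographic groups whose share of the training data
# deviates from the intended patient population. All categories, proportions,
# and the tolerance are hypothetical, not drawn from any regulatory guidance.
from collections import Counter

def representation_gaps(training_labels, population_proportions, tolerance=0.05):
    """Return groups whose training-data share differs from the intended
    population share by more than `tolerance`."""
    n = len(training_labels)
    counts = Counter(training_labels)
    gaps = {}
    for group, expected in population_proportions.items():
        observed = counts.get(group, 0) / n
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical example: age bands in a trial's target population.
training = ["18-40"] * 700 + ["41-65"] * 250 + ["65+"] * 50
target = {"18-40": 0.30, "41-65": 0.45, "65+": 0.25}
print(representation_gaps(training, target))
# Flags all three bands: younger patients are heavily over-represented.
```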

Just as pharmaceutical companies must provide rigorous proof of a drug's efficacy for its intended patient population, ML developers should be held to a similar standard. How an algorithm is built, what impact it has, and which use cases it was purpose-built for all need to be extensively tracked and documented. Only then can we build confidence that these tools are safe and accurate.
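
To make "track and document" concrete, here is a minimal sketch of a model-card-style record, a documentation pattern from the broader ML community rather than from the GMLP, covering how an algorithm was built, its evaluated impact, and its intended use cases. Every field and value is a hypothetical stand-in, not a standard schema.

```python
# Minimal sketch of structured model documentation. All names and values
# are illustrative assumptions.
import json

model_card = {
    "model_name": "adherence_risk_classifier",       # hypothetical model
    "intended_use": "Flag patients at risk of missed doses in a trial",
    "out_of_scope_uses": ["diagnosis", "treatment selection"],
    "training_data": {
        "source": "de-identified visit records",     # provenance
        "population": "adults 18+, three US sites",  # who the data represents
    },
    "evaluation": {"validation_auroc": 0.87},        # evidence of impact
    "limitations": ["not validated for pediatric patients"],
}
print(json.dumps(model_card, indent=2))
```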

Building trust through standardization

To date, regulation around ML development has largely relied on good faith that developers will follow "good science" and ensure their algorithms are developed using ethical and secure processes. The introduction of the GMLP represents an important first step in the oversight of this growing area. It provides strong recommendations, from developing validation data sets that are independent of the training data sets to employing models that can be monitored in "real world" settings, both of which are critical to an algorithm's accuracy. However, developers are not required to adhere to these guiding principles; they are purely advisory, intended to provide a framework for future development in an effort to increase users' confidence and improve product performance. That said, good intentions alone are not sufficient when the outcome could affect a patient's care or the future of a medical treatment. We need more evidence and vigilant tracking of a model to make it trustworthy in the eyes of pharma sponsors and clinicians.
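
The first of those recommendations can be sketched in code. Assuming per-visit records keyed by patient ID, one way to keep a validation set truly independent of the training data is to split by patient rather than by record, so no patient appears on both sides; the example below uses scikit-learn's GroupShuffleSplit, with all data and field names invented for illustration.

```python
# Minimal sketch (assumptions, not the GMLP text): a patient-level split
# that keeps validation data independent of training data.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_records = 1000
X = rng.normal(size=(n_records, 5))                  # e.g., per-visit measurements
y = rng.integers(0, 2, size=n_records)               # e.g., outcome labels
patient_ids = rng.integers(0, 200, size=n_records)   # ~5 visits per patient

# Split on patient ID so every patient's records land on one side only.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, valid_idx = next(splitter.split(X, y, groups=patient_ids))

# Independence check: no patient appears in both sets.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[valid_idx])
```

Splitting by record instead of by patient would let the same patient's visits leak into both sets, inflating validation accuracy; grouping the split is what makes the two data sets genuinely independent.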

Creating value through traceability

An effective way to establish trust in a model's architecture is to implement actionable regulatory standards that ensure traceability. This requires clearly defining which aspects of ML should be transparent. Rather than divulging a model's proprietary code, the system around it can be more telling of its quality and accuracy. The workflow of this system—including how data is collected, how an algorithm is trained, what generates a specific output, and more—shows how each component fits together in accordance with its purpose, design, and performance. Because ML continues to evolve and learn over time, closely and continuously monitoring a model's performance, and refining it as necessary, is a crucial part of its growth and its safe, ethical development.
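
One possible shape for such traceability, offered as a minimal sketch under invented field names rather than any regulatory schema, is an append-only audit log that ties each training run to a cryptographic hash of its exact training data, the configuration used, and the performance observed.

```python
# Minimal sketch of a traceability record for one training run. Field names
# and the log format are illustrative assumptions.
import datetime
import hashlib
import json

def log_training_run(data_path, config, metrics, log_path="ml_audit_log.jsonl"):
    """Append a timestamped record linking a model version to its data and config."""
    with open(data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "training_data_sha256": data_hash,  # ties the run to the exact data used
        "config": config,                   # hyperparameters, code version, etc.
        "metrics": metrics,                 # validation performance for this run
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only JSON Lines audit trail
    return record

# Hypothetical usage after each retraining cycle:
# log_training_run("train.csv",
#                  {"model": "logistic_regression", "C": 1.0, "git_sha": "abc123"},
#                  {"auroc": 0.87, "sensitivity": 0.81})
```

Because each entry is appended rather than overwritten, the log doubles as the continuous-monitoring trail the paragraph calls for: successive entries show whether performance holds up as the model is retrained and refined.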

While we don't want to slow the pace of this rapid innovation, we do want to ensure that the innovation is meaningful, safe, and lives up to its promise. Encouraging developers to implement traceability protocols and document an algorithm's development will not only offer peace of mind to end users in healthcare but also deepen industry-wide understanding of the best practices that will keep this budding field advancing.

Michelle Marlborough, chief product officer, AiCure, LLC
