Creating Reliable Data Foundations for AI


Steve Gens and Remco Munnik offer five best-practice tips for achieving a definitive, trusted regulatory and product information asset base capable of supporting intelligent innovation.

The transformative potential of smart IT systems and process automation relies on the quality, credibility, and completeness of the underlying data.

Life sciences companies are well aware of the huge potential of emerging technologies, including artificial intelligence/machine learning (AI/ML), for rapid data analysis, complex trend and scenario analysis, and for transforming process delivery through intelligent workflow automation.

Yet, in their keenness to harness these options, many companies try to run before they can walk. Here are five critical data governance elements companies must have in place before they can attempt to become smarter in their use of data.

1. Assigning dedicated roles & responsibilities around data quality


Unless organizations assign clear and precise responsibility for ensuring consistent data quality, the integrity and reliability of the information in their systems will suffer. Having someone whose remit clearly includes maintaining the integrity and value of data is the only way to ensure that future activities drawing on these sources can be relied upon and will stand up to regulatory scrutiny.

A 2018 study of regulatory information management (RIM) by Gens & Associates,[1] which polled respondents from 72 companies internationally about their plans and practices, found that confidence in product registration, submission forecasting, and regulatory intelligence data quality was not high. When confidence is low or moderate, organizations spend considerable time verifying and remediating this information, with a direct negative impact on productivity.

Ongoing oversight of data quality is critical, too, to ensure that human errors do not accumulate over time and erode confidence in system data. Sustaining data quality should be an organization-wide concern, requiring a culture of quality and, where appropriate, clear accountability for it within people's roles.

Allocated responsibilities should ideally include:

Quality control analyst. Someone who regularly reviews the data for errors, for example by sampling registration data to check how accurate and complete it is (see the sketch after this list).

Data scientist. Someone who works with the data, connecting it with other sources or activities (e.g., linking the company's RIM system with clinical or enterprise resource planning (ERP) systems), with the aim of enabling something greater than the sum of its parts, such as "big picture" analytics.

Chief data officer. With a strategic overview across key company data sources, this person is responsible for ensuring that enterprise information assets globally, including ERP, RIM, and safety systems, have the necessary governance, standards, and investment to keep the data they contain reliable, accurate, and complete over time.
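
By way of illustration, here is a minimal Python sketch of what a periodic sampling check might look like. Everything in it is an assumption for the example: the field names (product_name, country, license_number, expiry_date) and the flat-dictionary export format stand in for whatever the company's own RIM schema defines.

```python
import random

# Hypothetical required fields for a registration record; a real RIM
# export would define its own schema.
REQUIRED_FIELDS = ["product_name", "country", "license_number", "expiry_date"]

def sample_registrations(records, sample_size=50, seed=42):
    """Draw a reproducible random sample of registration records to review."""
    rng = random.Random(seed)
    return rng.sample(records, min(sample_size, len(records)))

def completeness_report(records):
    """Return the share of records with all required fields populated,
    plus the records that need follow-up."""
    issues = []
    for rec in records:
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:
            issues.append((rec.get("record_id", "unknown"), missing))
    rate = 1 - len(issues) / len(records) if records else 0.0
    return rate, issues

# Tiny illustrative export; in practice this would come from the RIM system.
registrations = [
    {"record_id": "R-001", "product_name": "Product A", "country": "NL",
     "license_number": "NL-123", "expiry_date": "2021-06-30"},
    {"record_id": "R-002", "product_name": "Product A", "country": "DE",
     "license_number": "", "expiry_date": "2022-01-15"},
]
rate, issues = completeness_report(sample_registrations(registrations))
print(f"Complete: {rate:.0%}")  # Complete: 50%
print(issues)                   # [('R-002', ['license_number'])]
```

The point is not the specific code but the habit it encodes: a small, repeatable check, run on a regular schedule, whose results can be reported consistently.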

2. Quality control routine


To steadily build confidence and trust in data, it is important to set down good habits and build them into everyday processes. By putting the right data hygiene practices into place, companies can avoid the high costs and delays caused by data remediation exercises, which can run into millions of dollars or euros. Spending just a fraction of that amount on embedding good practice and dedicated resources is cost-effective and will pay dividends in the long term.

Operationalizing data quality standards is important: naming conventions, data standards, links between data and related content, and data completeness guidelines all need to be applied consistently on a global basis.
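
To make "operationalized" concrete, the sketch below encodes a naming convention, a data-link requirement, and a completeness guideline as executable rules that can be applied identically in every region. The regular expression and field names are invented for the example, not an actual industry standard.

```python
import re

# Hypothetical naming convention: an uppercase code, a hyphen, and a
# three-digit sequence (e.g., "ASPIRIN-001"). A real convention would
# come from the company's own data standards.
NAME_PATTERN = re.compile(r"^[A-Z][A-Z0-9]*-\d{3}$")

RULES = [
    ("naming convention",
     lambda rec: bool(NAME_PATTERN.match(rec.get("product_code", "")))),
    ("dossier link present",
     lambda rec: bool(rec.get("dossier_id"))),
    ("completeness",
     lambda rec: all(rec.get(f) for f in ("product_code", "country", "status"))),
]

def validate(record):
    """Return the names of all rules the record fails."""
    return [name for name, check in RULES if not check(record)]

record = {"product_code": "aspirin-1", "country": "NL", "status": "approved"}
print(validate(record))  # ['naming convention', 'dossier link present']
```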

Not all data quality errors are equal, so it is important to flag serious issues for urgent action and to track where errors originate, so that additional training or support can be provided. To inspire best practice and drive continuous improvement in data hygiene, making data-quality performance visible can be a useful motivator, drawing attention to where efforts to improve data quality are paying off. This is critical for our next point.
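
One sketch of how this could work in practice: attach a severity level to each type of finding, escalate the critical ones, and tally findings by origin so that training and support can be targeted. The severity labels, finding types, and origins below are all assumptions for the example.

```python
from collections import Counter

# Hypothetical severity classification; a real program would define
# these levels in its data governance policy.
SEVERITY = {"missing license_number": "critical", "naming convention": "minor"}

def triage(findings):
    """Split findings into urgent and routine, and count findings per origin.

    findings: list of (finding_type, origin) tuples, e.g., from validation runs.
    """
    urgent = [(t, o) for t, o in findings if SEVERITY.get(t) == "critical"]
    by_origin = Counter(origin for _, origin in findings)
    return urgent, by_origin

findings = [
    ("missing license_number", "affiliate-DE"),
    ("naming convention", "affiliate-DE"),
    ("naming convention", "HQ-regulatory"),
]
urgent, by_origin = triage(findings)
print(urgent)     # [('missing license_number', 'affiliate-DE')] -> escalate now
print(by_origin)  # Counter({'affiliate-DE': 2, 'HQ-regulatory': 1}) -> target training
```

Publishing the per-origin tallies, for example on a shared dashboard, is one simple way to make data-quality performance visible.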

3. Alignment with recognition & rewards systems

Recognition, via transparency, inspires good performance, accelerates improvements, and beds in best practice, which can then be replicated across the global organization to achieve a state of continuous learning and improvement.

Knowing what good looks like, and establishing KPIs that performance can be measured against, are important too. Where people have had responsibility for data quality assigned to them as part of their roles and remits, it follows that they should be measured on their performance, with reviews forming part of job appraisals, and rewarded for visible improvements.

4. Creating a mature & disciplined continuous improvement program

Gens & Associates' 2018 research found that life sciences companies with a Regulatory continuous improvement program (CIP) have 15 percent higher data confidence levels, are 17 percent more likely to have achieved real-time information reporting, and have 21 percent higher efficiency ratings for key RIM capabilities.

Continuous improvement is both an organizational process and a mindset. It requires progress to be clearly measured and outcomes tied to business benefits. A successful CIP in Regulatory data management combines anecdotal evidence of the value that can be achieved with clear KPIs (cycle time, quality, volume, etc.) that teams can aim towards and be measured against.
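
As an illustration of how such KPIs might be measured, the sketch below computes average submission-to-approval cycle time by quarter; the record structure and the dates are invented for the example.

```python
from collections import defaultdict
from datetime import date

def quarterly_cycle_times(submissions):
    """Average days from submission to approval, grouped by submission quarter.

    submissions: list of dicts with 'submitted' and 'approved' dates
    (a hypothetical structure used for illustration).
    """
    buckets = defaultdict(list)
    for s in submissions:
        quarter = f"{s['submitted'].year}-Q{(s['submitted'].month - 1) // 3 + 1}"
        buckets[quarter].append((s["approved"] - s["submitted"]).days)
    return {q: sum(days) / len(days) for q, days in sorted(buckets.items())}

submissions = [
    {"submitted": date(2019, 1, 15), "approved": date(2019, 4, 1)},
    {"submitted": date(2019, 2, 10), "approved": date(2019, 3, 20)},
    {"submitted": date(2019, 5, 5), "approved": date(2019, 8, 30)},
]
print(quarterly_cycle_times(submissions))
# {'2019-Q1': 57.0, '2019-Q2': 117.0}
```

Tracked quarter over quarter, a simple figure like this gives teams a target to aim towards and a visible record of whether incremental changes are working.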

At its core, continuous improvement is a learning process that requires experimentation with “incremental” improvements. 

Establishing good governance will be important too, as will measuring and reporting on improvements and net gains and how they were achieved: what resources were allocated, what changes were made, and what impact these have had.

5. Data standards management

Today, in many life sciences companies, data is not aligned across the organization, and standards vary or simply do not exist. Ask people in Regulatory, Pharmacovigilance, Supply Chain, and Quality how they define a "product," or how many products their company has, and the answers will probably vary.
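
A small sketch makes the problem concrete: counting "products" by each function's local key gives different answers, while an agreed master-data key (a hypothetical global product code, gpc, in this example) aligns them.

```python
# Hypothetical extracts from two functions, each with its own notion of
# "product": Regulatory counts licenses, Supply Chain counts SKUs.
regulatory = [
    {"gpc": "GPC-001", "license": "NL-123"},
    {"gpc": "GPC-001", "license": "DE-456"},
]
supply_chain = [
    {"gpc": "GPC-001", "sku": "A-10x20"},
    {"gpc": "GPC-001", "sku": "A-30x20"},
    {"gpc": "GPC-002", "sku": "B-5x10"},
]

# Counting by each function's local key gives different answers...
print(len({r["license"] for r in regulatory}))  # 2 "products" (licenses)
print(len({s["sku"] for s in supply_chain}))    # 3 "products" (SKUs)

# ...while the shared master-data key reconciles them.
print(len({r["gpc"] for r in regulatory} | {s["gpc"] for s in supply_chain}))  # 2
```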

The more that companies keep to the same regimes and rules, the easier it will become to trust data, and what it says about companies and their products, as it becomes easier to view, compare, interrogate, and understand who is doing what, and how, at a community level.

Evolving international standards such as ISO IDMP and SPOR mean that companies face having to add to and change the data they capture over time. To stay ahead of the curve, life sciences companies need a sustainable way to keep track of, and adapt to, what is coming.
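
To illustrate what a sustainable tracking mechanism might look like in data terms, the sketch below diffs the fields a company currently captures against those a new standard version will require. The field lists are invented for the example and do not reflect the actual ISO IDMP or SPOR data elements, which must be taken from the published standards and EMA guidance.

```python
# Fields currently captured versus fields a new standard version will
# require. Both sets are placeholders for illustration only.
captured = {"product_name", "marketing_authorisation", "active_substance"}
required_v2 = {"product_name", "marketing_authorisation", "active_substance",
               "pharmaceutical_dose_form", "organisation_id"}

gap = required_v2 - captured
retired = captured - required_v2
print("fields to add:", sorted(gap))        # plan capture changes ahead of deadlines
print("fields to retire:", sorted(retired)) # nothing to retire in this example
```

Re-running such a comparison whenever a standard is updated turns "keeping track of what's coming" into a routine, repeatable task rather than a scramble.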

Delegating monitoring activity to those responsible for Quality is unrealistic, as there is too much detail to keep track of. Meanwhile, Regulatory specialists may understand the broad spectrum of needs, yet not how to optimize data preparation for the wider benefit of the business. It may be worth seeking external help here to strike an optimal balance between regulatory duty and strategic ambition.

Future AI potential depends on investing in data quality sustainability today

The important takeaway from all of this is that companies cannot confidently innovate with AI and process automation on the basis of data that is not properly governed. With emerging technology's potential advancing all the time, it is incumbent on organizations to formalize their data quality governance and improve their ongoing data hygiene practices now, so that they are ready to capitalize on AI-enabled process transformation when everything else is aligned.

Steve Gens is the managing partner of Gens & Associates. Remco Munnik is associate director at Iperion Life Sciences Consultancy.

Note

 

[1] World Class RIM Whitepaper: Connections to Supply Release, Product Change and QMS, Gens & Associates, 2018: https://gens-associates.com/2018/10/10/world-class-regulatory-information-management-whitepaper-connections-to-supply-release-product-change-and-qms/
