© 2023 MJH Life Sciences™ and Pharmaceutical Executive. All rights reserved.
High quality and consistent data are key to unlocking the benefits of regulatory process automation. Duncan van Rijsbergen highlights key considerations for life sciences companies tackling data quality issues.
Basic issues, such as keeping data up to date, consistent, and able to flow between functions and systems, are stymieing ambitions for digital transformation in the pharmaceutical industry. Regulatory systems contain data on products and their licenses. There is also procedural data, recording interactions with the authorities about a license, from the initial application through to post-authorization changes. Elsewhere, expert functions from manufacturing to clinical teams generate the underlying data that feeds the regulatory dossier supporting the license.
Typically, there is no direct communication between regulatory systems and expert functions systems. Manufacturing or clinical teams collate their data into a summary document and send it to the regulatory function. The regulatory team then takes that data and pulls it together in a submission dossier, ready to send to competent authorities for approval.
In manufacturing and the supply chain, the enterprise resource planning (ERP) system typically holds data about products and materials. Meanwhile, in the regulatory function there will be a regulatory information management (RIM) system which also contains data about the same products, but from the perspective of regulatory approval. Those systems are most often in completely separate worlds, with little or no interoperability. Yet, a change made in the manufacturing world must be reflected in the license. Similar issues apply between the systems for management of clinical trials and the regulatory tracking of clinical trials applications. Currently, sharing that information is done through lots of forms and maybe even an intermediate system, which stores those forms. If these processes were automated, changes made post-authorization would not delay product delivery, enabling products to be brought to market and made available for patient treatment rapidly.
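With structured data on both sides, this kind of drift between operational and regulatory records can at least be detected automatically. A minimal sketch in Python, where all field names, product codes, and site names are invented for illustration and do not reflect any real ERP or RIM schema:

```python
# Sketch: flag products whose ERP and RIM records have drifted apart.
# Field names (strength, manufacturing_site) and codes are illustrative
# assumptions, not a real ERP or RIM data model.

erp_products = {
    "AB-100": {"strength": "100 mg", "manufacturing_site": "Leiden"},
    "AB-200": {"strength": "200 mg", "manufacturing_site": "Dublin"},
}

rim_products = {
    "AB-100": {"strength": "100 mg", "manufacturing_site": "Leiden"},
    "AB-200": {"strength": "200 mg", "manufacturing_site": "Cork"},  # drifted
}

def find_mismatches(erp, rim):
    """Return (product, field, erp_value, rim_value) for every disagreement."""
    mismatches = []
    for code in sorted(set(erp) & set(rim)):
        for field in erp[code]:
            if erp[code][field] != rim[code].get(field):
                mismatches.append(
                    (code, field, erp[code][field], rim[code].get(field))
                )
    return mismatches
```

In a document-centric process, finding this discrepancy would mean reading two summary documents side by side; with structured data it is a trivial comparison.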
It is particularly important to get back to basics when it comes to structured chemistry, manufacturing, and controls (CMC) regulatory data. In just one instance, the process of inputting specification testing data into the laboratory information management system (LIMS), extracting it, entering it into regulatory documents, sending it to regulatory bodies, and then reversing the process for implementation can easily take a year or more.
A data-first starting point is key. If companies store clean and consistent data, rather than documents, they will be in a much better position to automate processes and share this data efficiently with regulatory bodies. Yet, companies continue to struggle with basic data quality issues.
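The data-first idea can be illustrated in a few lines: when structured data is the single source of truth, a summary document becomes a rendering of that data rather than a separately maintained copy. The field names and wording below are illustrative assumptions only:

```python
# Sketch: render a submission summary from structured product data.
# The fields and the output format are invented for illustration.

product = {"name": "Examplium", "strength": "100 mg", "shelf_life": "24 months"}

def render_summary(p):
    """Generate the document from the data; update the data, regenerate
    the document, and the two can never fall out of sync."""
    return (
        f"Product: {p['name']}\n"
        f"Strength: {p['strength']}\n"
        f"Shelf life: {p['shelf_life']}"
    )
```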
First, there is the compliance issue, where licenses must accurately reflect activity relating to clinical trials or manufacturing. In a regulated environment, compliance failure could lead to product recall, license suspension, or fines. Datasets in operational settings may not align with datasets shared with the authorities. While the data is essentially the same, the way it is presented may not align exactly across the two systems: the granularity of the data, and how it is worded or linked, might differ slightly.
Second, there are issues tracking changes in data over time. Drugs that are produced over many years will undergo changes in, for example, composition or manufacturing process. These must be reflected both in regulatory systems and in the company's operational systems. There is a need to change the data but also to keep it in sync. That synchronization becomes much more difficult if there is a long-winded, multi-step process in which the data changes form multiple times, going from structured to document and back to structured again, with manual copying along the way.
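One common way to keep change history reconstructible on both sides is an append-only change log that either system can replay. The sketch below is a simplifying assumption, not a real RIM or ERP change model; the dates, fields, and values are invented:

```python
from datetime import date

# Sketch: an append-only change log as a shared record of post-authorization
# changes. Both the regulatory and the operational view can be rebuilt from
# it, so they cannot silently diverge. Entries are illustrative.

change_log = [
    (date(2022, 3, 1), "composition", "lactose 50 mg"),
    (date(2023, 6, 15), "composition", "lactose 45 mg"),
]

def state_as_of(log, when):
    """Replay changes up to `when` to reconstruct the product state."""
    state = {}
    for ts, field, value in sorted(log):
        if ts <= when:
            state[field] = value
    return state
```

Because the log is the single source, "what did the license say in 2022?" and "what does manufacturing use today?" are answered from the same data.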
Ideally, the syncing process should be integrated with the regulatory process. That way, when the company introduces improvements to the product, testing data can be shared with the regulator much more quickly, accelerating the time it takes to get product enhancements to market. Reducing manual processes also cuts the potential for human error and reduces costs.
Commonly, compliance has become a goal in itself. Ideally, though, compliance should be effortless, a by-product of a company's activities, not the focus of them. When data is aligned and kept in sync automatically, through a properly aligned process, compliance becomes secondary: it will happen by itself.
Here are five practical action points to help get companies started on their data quality journey:
1. Communicate with all the stakeholders involved in the process. Together, identify the use cases for data flow continuity and agree on how best to measure the benefits of automating data integration. Getting everyone’s buy-in and developing solutions collaboratively drives transparency and improves trust among functions. This approach enables people within a fairly long process chain to know how their data affects their predecessors and successors. It provides confidence that predecessors have done things correctly and successors get data they can work with.
2. Develop a shared vocabulary to talk about data held in common across functions. Presenting product data across the organization in a way that everybody understands, with commonality of language, builds trust and drives operational excellence and innovation.
3. Standardize data descriptions. Once use cases have been identified and a common vocabulary agreed, consider how best to standardize data relating to complex products. The ISO Identification of Medicinal Products (IDMP) model is a valiant effort to find a common way to describe product data. The quality and consistency of individual data is also key to data standardization initiatives, such as the US FDA's drive to standardize Pharmaceutical Quality/CMC (PQ/CMC) data elements for electronic submission. The more widely accepted a product model is, the easier it is to share with external parties. This includes regulators, as well as partners such as labs, manufacturers, and research organizations.
4. Ensure processes are properly aligned. There needs to be a robust process for capturing and sharing changes over time – and making sure that systems stay in sync with as little time lag as possible. Focus on bottlenecks. There may be one process in an operational setting and another in the regulatory function. Where do they meet? Where does the data get exchanged, and how could that be improved?
5. Identify suitable technological solutions. The initial focus should not be on finding the right software, but on the system architecture and how and where to connect systems. One approach could be to build a bridge between two systems, a point-to-point connection. The issue is maintaining the link and upgrading functionality in two discrete systems that talk to each other. A better option would be to develop a looser coupling, and this is where the common language model comes in. It is important not to take a static approach – how do I solve the problem now – but to also consider maintaining the solution and innovating over time. This is not about individual systems but about a system of systems.
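The "loose coupling" idea in points 2, 3, and 5 can be sketched briefly: each system keeps its own schema, and a small adapter translates each record into one canonical shape, so no system needs a point-to-point link to any other. The canonical structure and field names below are illustrative assumptions, not IDMP or any real system's schema:

```python
from dataclasses import dataclass

# Sketch: a shared product model as the common language between systems.
# CanonicalProduct and all field names are invented for illustration.

@dataclass(frozen=True)
class CanonicalProduct:
    code: str
    strength: str
    site: str

def from_erp(record: dict) -> CanonicalProduct:
    # The ERP happens to store the site under "plant"; the adapter hides that.
    return CanonicalProduct(record["item_no"], record["strength"], record["plant"])

def from_rim(record: dict) -> CanonicalProduct:
    return CanonicalProduct(
        record["product_code"], record["strength"], record["manufacturing_site"]
    )

# The systems agree when their canonical views agree; adding a third system
# means writing one more adapter, not two more point-to-point bridges.
erp_view = from_erp({"item_no": "AB-100", "strength": "100 mg", "plant": "Leiden"})
rim_view = from_rim(
    {"product_code": "AB-100", "strength": "100 mg", "manufacturing_site": "Leiden"}
)
```

With N systems, adapters to a shared model need N translations instead of the N×(N−1) mappings that point-to-point integration implies, which is why the loose coupling is easier to maintain and evolve.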
The goal of data quality and integration activity is to provide a platform on which to innovate and get the best products to patients. Technology has a key role to play but, first, life sciences companies need to spend time getting to grips with their data. Once they have located all their data and standardized it, only then will it be possible to reap the efficiencies of integration.
Duncan van Rijsbergen is Associate Director, Regulatory Affairs at Iperion.