
How the Data Integrity Landscape Will Evolve Over the Next Five Years


In this Pharmaceutical Executive video interview, Daniel Ayala, Chief Security and Trust Officer at Dotmatics, discusses how the data integrity landscape will evolve over the next five years and to what extent bench scientists in the lab are focused on data integrity.

Let's forget about IT for just a minute. To what extent are the bench scientists in the lab focused on data integrity?

Researchers have always cared about the integrity and usefulness of their research. They're very cognizant about good entry of the data they've collected in the lab and good capture of the data from their instruments. But there's a long-standing challenge around the balance of security and usability, and it really gets exacerbated in the lab. In the lab, you often have people moving from system to system, or from an instrument back to a system. Traditional security controls say we lock a system when we walk away, or it locks itself automatically after a certain amount of time. But now you have somebody in the lab with materials on their hands, or with their hands full of vials. So how do we find the right balance of giving people good security while maintaining their ability to do the research they do? That's a really interesting dilemma that scientists and researchers have brought to me over my career: I just want to do research, and you just want to protect the data, so where is the middle ground? Answering that requires a pretty good understanding of how researchers actually do research.
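
To make that trade-off concrete, here is a minimal, hypothetical sketch of a context-aware lock policy. It is not a description of Dotmatics or any specific lab system; the instrument-streaming and badge-proximity signals are illustrative assumptions about inputs such a policy might use.

```python
from dataclasses import dataclass

@dataclass
class SessionContext:
    idle_seconds: float
    instrument_streaming: bool  # assumed signal: an instrument is actively sending data
    badge_in_proximity: bool    # assumed signal: the researcher's badge is at the bench

def should_lock(ctx: SessionContext, base_timeout: float = 120.0) -> bool:
    """Illustrative policy: keep a strict default idle timeout, but relax it
    while an instrument run is in progress or the researcher appears to still
    be at the bench -- the hands-full-of-vials moments where forcing a
    password re-entry would fight the science."""
    timeout = base_timeout
    if ctx.instrument_streaming:
        timeout *= 4  # don't interrupt an active data capture
    if ctx.badge_in_proximity:
        timeout *= 2  # researcher is likely still present
    return ctx.idle_seconds > timeout

# Five idle minutes during an instrument run: stay unlocked (prints False)
print(should_lock(SessionContext(idle_seconds=300,
                                 instrument_streaming=True,
                                 badge_in_proximity=False)))
```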

I think scientists are keenly aware of the importance of getting the data they put in correct, and that the work that comes afterward in the lifecycle depends on it. I don't think scientists always understand the extent to which downstream steps can be affected, but at the same time, I haven't really met researchers who are glib or off the cuff about the importance of data integrity, of inputting data correctly and maintaining it. People are proud of the research they do, and they want to ensure it is successful. In cases where you have to go back and investigate the research to understand its lifecycle, being able to return to the beginning and recreate every step taken along the way is a difficult process. But I think scientists understand it's important to being able to validate and verify the research, and they're willing to participate in it, especially if it ferrets out bad data, bad entry, or bad science that could affect patient safety or product efficacy.
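
One common mechanism behind that "recreate every step" capability is a tamper-evident audit trail. The sketch below is illustrative Python, not Dotmatics functionality or a validated electronic-records system: each entry commits to the hash of the previous one, so an investigation can walk the chain back to the beginning and detect any later edit to history.

```python
import hashlib
import json

def append_entry(log: list, actor: str, action: str, payload: dict) -> list:
    """Append a hash-chained audit entry; each entry includes the previous
    entry's hash, so altering past records breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"actor": actor, "action": action, "payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return log

def verify(log: list) -> bool:
    """Recompute the chain from the start to detect tampering."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("actor", "action", "payload", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "jdoe", "create_sample", {"sample_id": "S-001"})
append_entry(log, "jdoe", "record_result", {"sample_id": "S-001", "value": 10.1})
print(verify(log))  # True until any entry is modified
```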

With the rise of personalized medicine and advanced therapies being pursued through multimodal drug discovery, how do you see the data integrity landscape evolving over the next five years?

This one's really interesting. First of all, with personalized medicine you have a lot of combinatorics, a lot of combinations that can exist, and you really have to account for all of them: something that is specific to me has to be captured along with the data that's collected, so you can understand the personalization that went into it. That also makes finding a common set of comparative metrics very difficult. One of the ways we detect data integrity issues is by looking at norms and at variances from those norms. If there's a lot of individualization, where there are adjustments or changes to protocols and methods based on the individual, it's much harder to detect when issues arise that fall outside an expected standard deviation or expected error bars. So it definitely changes the model. It also raises the question of how you effectively capture those outside influences, those personalized impacts, alongside the data, so you can take them into account later when analyzing the research.
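
As a concrete illustration of that "variance from norms" check, here is a minimal sketch, not anything from Dotmatics: it flags readings whose modified z-score (based on the median and median absolute deviation, which stay robust when the outlier itself skews the statistics) falls outside an expected band. The threshold and data are illustrative.

```python
import statistics

def flag_outliers(readings, threshold=3.5):
    """Flag readings far from the cohort norm using a modified z-score.
    This assumes the readings come from a comparable population; heavy
    per-patient protocol individualization weakens that assumption,
    which is exactly the difficulty described above."""
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:
        return []  # no spread to measure against
    return [(i, x) for i, x in enumerate(readings)
            if 0.6745 * abs(x - med) / mad > threshold]

# One reading sits far outside the expected error bars
print(flag_outliers([10.1, 9.8, 10.3, 9.9, 42.0]))  # -> [(4, 42.0)]
```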

The other element is where AI comes into this, and the fact that it introduces dynamism into the research. When you have ever-changing data inputs, as you do with generative AI or large language models, the research becomes much more difficult to recreate. If I have this group of data, this group of inputs, that changes over time as the AI model learns, then being able to take a snapshot of what it looked like at the time I did the research, producing this output and this result, is increasingly difficult. It's also hard, if you put a generative or otherwise manipulative artificial intelligence against a data set, to document the changes it makes, so that you can reproduce them, prove them, and make sure the model wasn't acting out of bounds. And there's the hallucination effect in AI, where the model goes off the rails and stops reasoning logically, and that can't always be spotted until it has gone heavily in one direction. So at what point does testing done on data that involved a hallucinating model have to be redone? Can you count on it at all? These are all things that factor in.
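
One way to approach the snapshot problem described here is to record a provenance entry for each run: a content hash of the inputs, the model identity, and a hash of the output, so the result can later be matched to exactly what produced it. This is a hedged sketch under assumed field names, not a standard schema or Dotmatics code.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(dataset_bytes: bytes, model_name: str,
                      model_version: str, output: str) -> dict:
    """Capture what the inputs and model looked like at run time so the
    experiment can later be audited or compared against a re-run.
    The field names are illustrative assumptions."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        "model": {"name": model_name, "version": model_version},
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

# Hypothetical usage: snapshot a run against a raw assay export
record = provenance_record(b"raw assay export...", "example-llm",
                           "2024-06-01", "model output text")
print(json.dumps(record, indent=2))
```

If a model is later found to have been hallucinating during a given window, records like these make it possible to identify which results were produced against which inputs and model version, and therefore which testing has to be redone.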
