
Solving the First Mile Problem: Q&A with Mark Melton

Feature Article

Melton, VP of scientific operations and development at Slope, discusses how new technologies can be used to ensure data is accurate.

Mark Melton
VP of scientific operations and development
Slope

Data accuracy is one of the most important aspects of clinical trials, which is why the first mile problem is such a big deal. Mark Melton, VP of scientific operations and development at Slope, spoke with Pharmaceutical Executive about this issue, what causes it, and how it can be fixed.

Pharmaceutical Executive: Can you describe what you mean by “the first mile?”
Mark Melton: The first mile refers to the first leg of the journey samples make, from the patients and physicians at the clinical trial site to wherever the samples, and therefore the data associated with them, must go. The first mile runs from the clinical trial site to the actual destination lab. Usually, they are not local. The data that travels with the sample is all paper: the site fills out a requisition form, usually a three-part carbon copy. Essentially, that's the data that goes with the samples.

Let’s say you had a thousand samples. On average, you probably have about 500 sheets of carbon copy paper that go with those. The labs that receive them must take that and make sense of the information. They’re looking at the shipment to make sure they got what they were supposed to get. Unfortunately, they don’t know what they got, because they didn’t order it; these deliveries just show up. All the information they have is a tracking number from FedEx. When the shipment gets there, they end up with a lot of paper to go through to put the samples in their system.
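To see what an electronic alternative would buy, here is a minimal sketch in Python of a receiving check that paper plus a tracking number cannot support. The barcodes and the idea of a pre-shared manifest are illustrative assumptions, not a description of Slope's product:

# Hypothetical receiving check: compare what arrived against an
# electronic manifest shared before the box shipped.
expected_manifest = {"TUBE-000123", "TUBE-000124", "TUBE-000125"}
received_barcodes = {"TUBE-000123", "TUBE-000125", "TUBE-000199"}

missing = expected_manifest - received_barcodes     # expected, not in the box
unexpected = received_barcodes - expected_manifest  # in the box, never ordered

if missing or unexpected:
    print(f"Discrepancy: missing={sorted(missing)}, unexpected={sorted(unexpected)}")
else:
    print("Shipment matches manifest")

With paper, this comparison only happens after someone keys the carbon copies into the lab's system by hand.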

The first mile problem is: how do you ensure the integrity of the data and the sample? You can imagine what happens if you get that part wrong. If you test samples or do anything with them that impacts the patient, you need to make sure you’re matching the right sample to the right patient. You must have the right data associated with the right sample. And when scientists do modeling or analysis of the samples, they use more than just the results of the test. You need to know the body weight, the sex, sometimes the age, and what day and what time the sample was collected. That’s what’s known as sample metadata.
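As a concrete picture of sample metadata, here is a minimal Python sketch. The schema and field names are invented for illustration, drawn only from the attributes Melton lists, not from any real system:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class SampleMetadata:
    # Data that must stay attached to a physical sample (hypothetical schema).
    sample_id: str          # barcode or accession number on the tube
    patient_id: str         # de-identified subject ID
    body_weight_kg: float   # used in downstream modeling and analysis
    sex: str
    age_years: int
    collected_at: datetime  # both the day and the time of collection matter

sample = SampleMetadata(
    sample_id="TUBE-000123",
    patient_id="SUBJ-045",
    body_weight_kg=72.5,
    sex="F",
    age_years=58,
    collected_at=datetime(2024, 3, 14, 9, 30),
)

In the current process, every one of those fields travels as handwriting on a carbon copy form.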

That first mile journey, right now, in 2024, is all paper-based. That’s where all the problems start. The second and third miles are where all that data from the different labs and sites gets aggregated. If you can solve the first mile issues, then everything in the subsequent miles of the journey becomes much easier.

PE: Clinical trials are becoming more complex. Is this adding to the problem?
Melton: It is. Before, you weren’t collecting as many samples, so you didn’t have as much data (though it was still a lot). Now, because the complexity has grown, not only must you collect more samples, but the sample types are harder. For example, we need things that are not easy to collect from a patient, like tumor biopsies. You have a lot more samples and, therefore, a lot more data. Since no one solved the first mile, that took an already complex problem and multiplied it tenfold.

What was already hard to do has become increasingly hard to do, and the standard practice right now, for both large and small biotechs, is to do it manually. They hire teams to look at different data sources, like pulling inventory files. The data comes in different formats and from different companies. You need to figure out where your sample is, whether it went to the right place, whether they’re reporting the right data, and whether they’re testing the right sample.

If you’re trying to do that for tens of thousands, if not hundreds of thousands, of samples, it becomes overwhelming fast.
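A minimal sketch of what that manual reconciliation amounts to, assuming (hypothetically) that each party exports inventory under its own column names:

# Hypothetical sketch: two sources report the same samples under
# different column names; normalize them, then compare.
lab_a_rows = [{"Barcode": "TUBE-000123", "Subject": "SUBJ-045"},
              {"Barcode": "TUBE-000124", "Subject": "SUBJ-046"}]
lab_b_rows = [{"TUBE_ID": "TUBE-000123", "PT_ID": "SUBJ-045"},
              {"TUBE_ID": "TUBE-000124", "PT_ID": "SUBJ-099"}]  # wrong patient

def normalize(rows, id_col, patient_col):
    # Map one source's columns onto a common {sample_id: patient_id} view.
    return {row[id_col]: row[patient_col] for row in rows}

site_view = normalize(lab_a_rows, "Barcode", "Subject")
lab_view = normalize(lab_b_rows, "TUBE_ID", "PT_ID")

# Flag samples whose reported patient differs between the two sources.
mismatches = [sid for sid, pid in site_view.items()
              if sid in lab_view and lab_view[sid] != pid]
print(mismatches)  # -> ['TUBE-000124']

At tens of thousands of samples and a dozen vendor formats, this is the comparison those hired teams are doing by hand.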

PE: What steps can the industry take to solve this issue?
Melton: The interesting part is that it comes down to communication and a proper recognition of the problem. Everyone involved in this is a business. They have their own objectives and goals. They’re also competitors, especially when we’re talking about the labs that receive the samples downstream from the clinical site. There’s not a big incentive for them to work with one another.

Change must be driven by the FDA and regulatory bodies. They must set a standard that says things must be reported out in a data-friendly language like CDISC. That at least standardizes the databases. Bringing these labs into conformance does impact patient care. The other thing we must do is stop using carbon copy paper to document samples, ship them out, and communicate data between clinical sites and the labs. The prevailing situation is that the clinical sites are running trials for different companies, each of which has its own take on this, so they’re throwing a lot of different solutions at the sites. Some sites say that, due to the different companies they work with, they have up to 13 different technologies they can use per study.
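For a sense of what "reported out in a data-friendly language" could look like, here is a minimal sketch mapping a lab's proprietary export onto CDISC SDTM-style laboratory (LB) variables. The proprietary row and the mapping are invented for illustration, though USUBJID, LBTESTCD, LBORRES, LBORRESU, and LBREFID are genuine SDTM variable names:

# Hypothetical mapping from one lab's proprietary export to
# CDISC SDTM-style LB (laboratory) variables, so every lab's
# output lands in the same shape downstream.
proprietary_row = {"test": "GLUCOSE", "value": "5.4", "units": "mmol/L",
                   "tube": "TUBE-000123", "subject": "SUBJ-045"}

sdtm_row = {
    "USUBJID": proprietary_row["subject"],  # unique subject identifier
    "LBTESTCD": "GLUC",                     # standardized short test code
    "LBORRES": proprietary_row["value"],    # result in original units
    "LBORRESU": proprietary_row["units"],   # original units
    "LBREFID": proprietary_row["tube"],     # specimen reference ID
}
print(sdtm_row)

Once every lab reports in the same shape, the downstream aggregation Melton describes stops being a translation exercise.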

There isn’t a standard consensus. If you’re a research coordinator working for a doctor, on one study you have one portal on your laptop. On other studies you’re working on, you have something on a mobile device like a cell phone. We must standardize our approach and make the sites’ jobs easier, so they’re more inclined to use the solution that pushes the data from what they’re doing to everyone who’s using it downstream.

Doing these two things alone would do wonders.
