Takeaways from the Post-Approval Summit

Pharmaceutical Executive

Dr. Richard Gliklich, president of Quintiles Outcome and a professor at Harvard Medical School, highlights the most prominent post-approval issues and risks coming out of this year’s Post-Approval Summit, held on May 7th-8th.

Brand managers prodding products across the regulatory finish line to commercialization are finding themselves in yet another race, instead of in the winners’ circle. Following the Post-Approval Summit – sponsored by Quintiles and hosted by Harvard Medical School – Richard Gliklich spoke with PharmExec about the increasing number of requirements, regulations and risks associated with a drug’s performance after regulatory approval has been granted.

PharmExec: In general, are post-market requirements increasing significantly?

Richard Gliklich: Yes, definitely. Just to give you an example, the Center for Devices and Radiological Health has more power under recent legislation to require studies under the 522 Postmarket Surveillance Studies Program. The European Medicines Agency now has new pharmacovigilance guidance. And there’s a lot more that [regulators] are looking to do in the post-approval environment.

PE: Why are regulators focusing on post-approval?

RG: They want to understand a lot more about the safety of a product once it’s exposed to populations that weren’t tested in clinical trials: older patients, younger patients, non-Caucasian patients, and patients with multiple comorbidities, for example. Janet Woodcock spoke about the fact that [the Center for Drug Evaluation and Research’s] mandate is now benefit-risk, so they really need to understand more about effectiveness in the real world in order to make decisions about drugs as safety information continues to accrue after approval. We don’t have a progressive licensure regulation now [Health Canada is developing one], but if we did, that would be a big issue. FDA has a lot more interest in the effectiveness side of the equation than it used to.

PE: Did Janet Woodcock suggest that a progressive licensing program might be implemented in the US?

RG: She was careful not to say, but she did say that [CDER] sees evaluating benefit as a significant part of its mandate.

PE: Did the EMA put forward any specific processes or requirements for data collection around special populations?

RG: Yes, there are two aspects. The new pharmacovigilance guidelines do have some specific requirements. Basically, a risk management plan is required for all new products, and certain products will be approved with “additional monitoring” requirements. The guidelines try to clarify signal detection roles and responsibilities, and the EMA has tried to clarify how it wants to hear about and monitor the effectiveness of risk minimization. They’ve clarified the legal basis for post-approval safety studies in Europe, and also introduced something new called post-approval efficacy studies, which is not something we have seen before. Post-approval safety studies have really grown; they’ve been attached to a large number of new approvals over the last couple of years, so that’s going to be important.

PE: Industry has been criticized in the past for having a bad compliance record with respect to completing postmarketing studies on time, or at all. In 2011, FDA held a meeting to address the issue of noncompliance with postmarketing studies in cancer drugs that received accelerated approval. Is the agency stepping up enforcement on these studies, and were they lax on enforcement in the past?

RG: There was a statistic that only 65 percent of these studies were being followed through on, and that was partly because [FDA] wasn’t cracking the whip. But to be fair, it was also partly because the reporting mechanisms and the reporting schedules were not clear enough to manufacturers. When that got cleaned up, I think the numbers went up quite a bit. But [FDA] now has much more ability to levy monetary penalties for not getting the information it requires, whether it’s a postmarketing requirement or a postmarketing commitment under US regulation. With the EMA, when there is a post-approval safety study requirement, it has to be followed and there’s a set reporting frequency, so those are definitely followed through on. My sense is that in Europe, the percentage of NMEs with required post-approval observational studies is very high, higher than in the US at this point.

PE: Are regulators conducting their own studies as well?

RG: They are doing a lot more of their own studies. During the Summit we heard from Rich Platt, who works on the Mini-Sentinel program. While that program is considered a pilot, it’s really percolating along in terms of the number of queries; each query looks at an exposure/outcome pair.

PE: How does the exposure/outcome pairing work?

RG: If there’s a hypothesis that taking drug X causes a heart attack within 30 days, they can look through a distributed network at large numbers of patients very rapidly, to get a sense of whether something’s really there. That’s moving forward very rapidly.

PE: With respect to patient outcomes, aren’t pharma companies to some extent at the mercy of physicians, and how they prescribe the drug? Could that lead to discrepancies between the initial clinical data, and a drug’s real world performance?

RG: We talked about “practical trials” at the Summit. The point is that you want to emulate real-world conditions more closely in the clinic. You want the benefit of randomization for certain types of questions, but you also want to emulate real-world conditions so that you can extrapolate from the data and predict what’s actually going to happen when a drug reaches the real world. It’s great to have clinical trial data on the absolute efficacy or safety of a product under ideal circumstances and in ideal populations, but that’s not how a drug is going to be used. Payers also want to know how a drug will work in the populations they cover, and that is coming through now with Medicare, for example, rejecting certain clinical trial data because it didn’t include enough Medicare patients. There was a lot of discussion during the Summit about how practical trials, or combining trials with registries or database studies, can give a comprehensive view of the evidence and answer multiple questions for multiple stakeholders at the same time.

PE: Is there a clinical trial design that can please everyone?

RG: That’s where the conference headed. Different stakeholders – regulators, payers, providers, patients – have different interests in certain information, whether it’s safety, efficacy, effectiveness, value, quality of life…all of those are important, but they all cost money to produce. How do you get to where you have the timeliest information for these decision-makers by combining modalities and trying to get “flavors” of information, so to speak. If you need a trial to answer X, Y and Z, maybe put that within a registry that’s also answering, observationally, how that drug is actually working in the real world at the same time, while also collecting value information for a payer.

PE: Is it possible that emerging data sets, like electronic health record data, or data from registries, or claims data, might actually save money by preventing a full-scale clinical trial to answer a very narrow question?

RG: Yes. Data sets differ; they have limitations, variability, and missing data. But for safety, it’s quite good to be using claims data and EHR data to actually just look at the signal. You have to know what certain data sets are good for and what they aren’t good for, but it means you don’t need to do everything with a trial, and you may be led down a path that helps rule things out. If somebody’s concerned about gastric bleeding with a new anticoagulant, for example, rather than going to the manufacturer and saying, ‘We’ve seen 50 case reports of this, go do a trial,’ you can instead go to these big data sets with 100 million patients – like Mini-Sentinel – and run that exposure/outcome pair and watch it over time. Then the regulator may be comforted by the fact that no, there really isn’t a problem. That was a specific example Rich Platt presented, where that was actually the case.
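
The “watch it over time” piece can be as simple, in spirit, as recomputing the cumulative event rate as data accrue and comparing it against an expected background rate. Real surveillance programs use formal sequential statistics with proper error control, not a fixed threshold, but a hypothetical monitoring loop might look like:

```python
# Hypothetical sketch of monitoring an exposure/outcome pair over time.
# background_rate is an assumed expected event rate among the exposed;
# the fixed 2x threshold stands in for a formal sequential test.
def monitor_signal(monthly_counts, background_rate, factor=2.0):
    """monthly_counts: iterable of (events, exposed_patients) per month.
    Yields (month, cumulative_rate, flagged) as each month accrues."""
    cum_events = cum_exposed = 0
    for month, (events, exposed) in enumerate(monthly_counts, start=1):
        cum_events += events
        cum_exposed += exposed
        rate = cum_events / cum_exposed if cum_exposed else 0.0
        yield month, rate, rate > factor * background_rate
```

If the cumulative rate never exceeds the expected range as exposure grows, that is the kind of accumulating evidence that can reassure a regulator without a dedicated trial.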

PE: And the data from EHRs, for example, will get better.

RG: It will get better, and for certain questions, if you can get to a quick answer that’s practical, then it makes sense. That is definitely true on the safety side. We also had discussions about how to use [data sets in combination] for effectiveness determinations, and there the methodologies are not quite clear yet because of missing data issues, but the data will get stronger, the methods will get stronger, and there’s a lot of work going on in that direction. We’re combining these modalities into a more comprehensive model. We used to be card-carrying clinical trialists or card-carrying observationalists, and those days are over. Now the question is really: what’s the most appropriate method, or combination of methods, to get to the answer in the least expensive, timeliest way that will answer the question with sufficient validity so that people can accept that answer or make a decision based on it? But a lot of it depends on what decision you’re making. A decision to approve a product has much broader implications than a decision to pay a claim as a Tier 1 product versus a Tier 2 product. Different questions require different levels of evidence.

PE: Where should companies focus their efforts in an evolving post-approval environment?

RG: If you view the world as having three main areas of evidence development – interventional trials, large prospective observational research studies, and big data, or using existing data – we’re moving toward more stakeholders, more stakeholder requirements, and more stakeholders requiring different types of information. You really need to be able to provide all three areas of evidence and to combine them appropriately to get to the right answers. And then it’s a question of developing the right methods to do that.

This discussion has been edited and condensed.
