"Evidence" Trail Elusive


The potential of comparative effectiveness research in transforming healthcare remains untapped, but the opportunities are there, writes Michael Christel.

The practice of comparative effectiveness research (CER), its promise spotlighted in recent years amid the legislative push to measure the value of medicines in the real world, has had little impact on actual healthcare decision-making to date. That was the consensus from experts speaking on the subject at the DIA 50th Annual Meeting in San Diego, though the slow pace of real practice change is not due to a lack of interest or attention to CER. Today, there is more education and guidance available on the conduct of non-interventional approaches, and groups such as the Agency for Healthcare Research and Quality (AHRQ), the Patient-Centered Outcomes Research Institute (PCORI), and NIH are moving closer to agreement on uniform research standards governing this area.

In addition, opportunities exist to use real-world evidence more widely. The European Medicines Agency (EMA) issued guidance last year on post-authorization safety studies, now requiring the collection of risk-benefit data as well. No longer is it enough to simply identify side effects in the observational setting; pharmaceutical companies must now take a more granular approach, examining different subpopulations to determine their respective risk-benefit balance.1 There is also increasing demand from FDA that drug manufacturers conduct observational studies on a new product’s effectiveness, and healthcare payers and clinicians are eager as well to view more detailed health outcomes data to inform prescribing and reimbursement decisions.

An estimated 15% of current postmarketing commitments have non-interventional study designs, and the number of patient registry studies listed on ClinicalTrials.gov reportedly jumped from the hundreds to the thousands between 2013 and 2014.

“CER is going to be a fundamental game-changer in how we practice medicine, and will help us get access to the right medicines at the right time for the right patient,” Nancy Dreyer, chief of scientific affairs and senior VP at Quintiles Outcome, the CRO’s real-world and late-phase research division, told Applied Clinical Trials, leading up to a DIA session she participated in on CER and healthcare decision-making.

Dreyer noted, however, that at the moment, learning healthcare systems still lack the tools and training necessary to interpret real-world data and incorporate it into healthcare delivery. Kimberly Westrich, director of health services research at the National Pharmaceutical Council (NPC), a policy and research organization, also pointed to a shortage of good studies and insufficient dissemination of quality studies as factors contributing to the current gap between promise and reality for CER. Such factors are likely influencing stakeholder perceptions of CER's role in the healthcare landscape. According to findings from an NPC survey presented by Westrich, only 6% of respondents feel there will be substantial improvement in CER's impact on healthcare decision-making in the next year. That figure increases to 21% when applied to the next three years, and to 42% for the next five years.

Another roadblock to CER adoption, according to Brian Mittman, a senior scientist with the Kaiser Permanente Department of Research & Evaluation's Care Improvement Research Team, speaking during the session, is the physician community's generally conservative response to evidence. While that caution is often appropriate for single efficacy-related studies, Mittman believes it is not warranted in the face of stronger forms of evidence, such as systematic reviews and well-designed CER studies. Another factor influencing the implementation of CER, Mittman says, is the uneven level of support and investment from site to site. In some cases, research teams are on-site promoting CER and evidence-based adoption while also measuring findings and providing feedback, but such efforts are not scalable across a study program to achieve practice change on a large scale, Mittman notes.
 
To help improve the translation and dissemination of CER, researchers, through an initiative funded by the NPC, have developed the Good ReseArch for Comparative Effectiveness (GRACE) checklist, a screening tool that helps identify and validate observational research that is sufficient for decision support.2 The 11-item checklist, developed by Dreyer's group at Quintiles in collaboration with NPC, has been applied in hundreds of literature reviews conducted by GRACE volunteers from the academic, government, and private sectors, Dreyer says, with the guiding question being, "Is this adequate for the study purpose?" Another useful tool in assessing good practice for observational studies is the federal handbook, "Registries for Evaluating Patient Outcomes: A User's Guide," commissioned by the AHRQ.3 The guide, now in its third edition, has helped inform pharmacovigilance legislation in the European Union.

There are emerging tools in the medical data space that are attracting notice from healthcare stakeholders interested in CER. For example, with the growing availability of electronic medical record (EMR) data, there are now opportunities to collect clinical detail on health outcomes, along with the traditional data mined from insurance claims (e.g., prescription use, physician/hospital visits).
 
For healthcare payers, specifically, there is great interest in using CER data to aid in formulary and reimbursement decisions. Agencies such as the National Institute for Health and Care Excellence in the UK have made payer determinations based on patient registries. Health plans and pharmacy benefit managers, however, are still limited in their understanding of CER and, thus, any resulting evidence has had little impact on the way payers reimburse for different therapy options, the speakers said.
 
“Payers thrive on CER, they just won’t tell you how to do it,” Dreyer told us. “It’s not as simple as, ‘This drug costs more.’ Payers have a surprisingly long view. They want to know over the long-term is this benefit really going to make a difference. They need a lot of very detailed information, and they want to know about their patients. [Centers for Medicare & Medicaid Services] will say, ‘I want [the data] in the elderly; don’t bring me a clinical trial in younger people.’ People out in the Southwest will say, ‘I have a Spanish population.’ There’s all kinds of specialty issues.”

The challenge, therefore, is providing payers the right evidence and the right comparators. Sponsors have attempted to placate insurers in the past through head-to-head trials, which have garnered significant notice, but have often left payers unconvinced of a drug’s ultimate effectiveness in the specific therapeutic market. That’s largely because the comparators chosen are often older drugs not covered on formularies, or a company’s new product has not been used enough in the real-world environment. Hence, any comparative data is of less importance to payers.      

“If nobody is using your product, they’re very hard to study in non-interventional studies,” says Dreyer. “There’s some wonderful new drugs that looked good in the clinical setting and physicians won’t touch them. The real-life experience is very different from the clinical trial experience. So several new products never made it. They were good but you needed the real-world setting to understand the value.”

To better deliver the quality evidence sought, pharmaceutical companies need to identify which consumers are the new adopters of drugs in their target markets and find patient populations with similar characteristics to conduct CER, Dreyer said. Understanding the context of real-world studies has become increasingly important. For example, patients in a crowded treatment market are less likely to switch to a new product. With evidence demands increasing from regulators and payers alike, sponsors are realizing the need to conduct smaller trials in more sensitive populations.

“You have to start somewhere,” Dreyer told us. “It comes down to having a good enough product that people are willing to give it a try, and then watching who uses it and how they use it, and finding similar people…  It’s the concept of treatment heterogeneity. [Trials typically target] the average patient, the average treatment, the average treatment effect. But no one is average. No one gets an average treatment effect.”

 

References
1. European Medicines Agency, “Guideline on Good Pharmacovigilance Practices,” 2013.  http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2012/06/WC500129137.pdf

2. "GRACE Principles: A Validated Checklist for Evaluating the Quality of Observational Cohort Studies for Decision-Making Support," http://www.graceprinciples.org/doc/GRACE-Checklist-031114-v5.pdf

3. Agency for Healthcare Research and Quality, “Registries for Evaluating Patient Outcomes: A User's Guide: 3rd Edition,” 2014. http://effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&mp=1&productID=1897
