Toolkit: Going Head to Head


Pharmaceutical Executive

February 1, 2007

Comparative drug tests make pharma see red. But market pressure to prove a drug's real-world value is likely to force even the most stubborn firm into the ring. Why be bull-headed? Adaptive trials can cut cost, time, and risk in half.

Show me the value! Increasingly, that's what everyone—from Medicare and private insurers to doctors and patients—is demanding before showing pharma the money for its drugs. What with the jump in generics, the fight for formulary placement, and ever-savvier consumers, companies are coming to a painful realization: The evidence that satisfies FDA at drug-approval time may not satisfy the marketplace. Being better than a placebo is necessary but increasingly not sufficient. More and more, drugs both new and old are having to justify their price tags by proving that they're stronger, more effective, safer, or easier to take than the competition.

Bryan R. Luce

The congressional bill calling on the government to negotiate drug pricing for Medicare Part D—passed by House Democrats as Pharmaceutical Executive went to press—will only accelerate the demand for this information.

Prove It? Not So Fast

The industry's resistance to head-to-head research is easy to understand: The studies are very expensive, time-consuming, and, above all, risky. As Bristol-Myers Squibb famously learned from its PROVE-IT trial in 2003, real-world comparative data, even when designed to maximize odds for success, can turn around and bite the sponsor.

BMS pitted Pfizer's potent statin, Lipitor, against a lower dose of its own Pravachol in 4,000-plus patients previously hospitalized for angina or heart attack. That matchup may appear a no-brainer, but BMS had designed the study to minimize risk—or so it hoped. First, PROVE-IT was a "noninferiority trial," so Pravachol had only to hold its own against Lipitor. Second, follow-up ended at two years—the point at which differences among anti-cholesterol drugs were believed to be just approaching measurability. But BMS got an unpleasant surprise when data after only 30 days showed that Lipitor patients had a 16-percent-lower risk of heart attack than the Pravachol takers.

PROVE-IT, of course, reinforced industry's wariness of post-marketing comparative evidence. But a mere three years later, the questions surrounding drug-versus-drug studies no longer turn on whether but on how—and how soon. The challenge for forward-looking drug manufacturers is to find a risk-limiting framework that renders such research faster, easier, and cheaper—and then to be the first to market the information.

Adaptive Trials, Bayesian Model

An adaptive trial design, particularly if it uses Bayesian statistical methods, can go a long way toward overcoming the key hurdles of comparative late-stage research. Donald Berry, a leading Bayesian advocate, argues that such Phase IIIb or IV trials are 30 to 50 percent more efficient than the traditional blinded, placebo-controlled version.

Bayesian studies are generally smaller, swifter, and more focused than the large trials employing classical frequentist statistics. For instance, they can be modified midstream to capitalize on new data as they come in, allowing companies to optimize time and sample size while limiting costs and risks. If an early analysis indicates that the drug appears to be particularly effective in, say, elderly women, the randomization scheme can be rebalanced to recruit a higher percentage of these participants, producing more meaningful data while improving patient outcomes.
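To make that interim-rebalancing idea concrete, here is a minimal sketch in Python. It is not drawn from any actual trial: the subgroup labels, patient counts, recruitment shares, and the simple Beta-Binomial model are all illustrative assumptions.

```python
# Rough sketch of an interim look in a Bayesian adaptive trial.
# All names and numbers are hypothetical; Beta-Binomial is one common
# modeling choice, not necessarily what any given sponsor would use.
import numpy as np

rng = np.random.default_rng(0)

# Interim results: responders / enrolled, by subgroup (illustrative counts)
interim = {
    "elderly_women": {"responders": 18, "enrolled": 25},
    "everyone_else": {"responders": 30, "enrolled": 75},
}

def posterior_draws(responders, enrolled, draws=20_000):
    """Draws from a Beta posterior for a response rate (flat Beta(1,1) prior)."""
    return rng.beta(1 + responders, 1 + enrolled - responders, draws)

p_sub = posterior_draws(**interim["elderly_women"])
p_rest = posterior_draws(**interim["everyone_else"])

# Posterior probability that the drug works better in the subgroup
prob_subgroup_better = float(np.mean(p_sub > p_rest))

# Tilt future recruitment toward the subgroup, within preset limits,
# so later data say more about the patients most likely to benefit.
base_share, max_share = 0.25, 0.60
new_share = min(max_share, max(base_share, base_share + 0.5 * (prob_subgroup_better - 0.5)))
print(f"P(stronger effect in elderly women) = {prob_subgroup_better:.2f}")
print(f"Recruit elderly women at {new_share:.0%} of enrollment going forward")
```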

Bayesian trials also build on all the relevant evidence known about a drug. Instead of the classical approach of starting a new study from square one—as if many millions of dollars hadn't already been spent developing data about the compound's effects—a Bayesian method answers questions more efficiently. It says, in effect, "We know the drug is safe and effective in a controlled setting, but this is not quite enough evidence for an informed decision about formulary placement. Now we need to find out how well it performs in community practice."
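As a toy illustration of that "start from what you already know" logic, the sketch below folds a controlled-trial effect estimate into the analysis of a smaller real-world sample through a conjugate normal update. The numbers and the model are invented for illustration only.

```python
# Illustrative only: the registration-trial estimate of a treatment effect
# serves as the prior; a smaller community-practice sample is the new data.

# Prior: effect size and standard error from the controlled Phase III program
prior_mean, prior_se = 0.40, 0.10

# New evidence: a modest pragmatic/observational follow-up study
new_mean, new_se = 0.30, 0.20

# Precision-weighted combination (conjugate normal-normal update)
prior_prec, new_prec = prior_se ** -2, new_se ** -2
post_prec = prior_prec + new_prec
post_mean = (prior_prec * prior_mean + new_prec * new_mean) / post_prec
post_se = post_prec ** -0.5

print(f"Posterior effect estimate: {post_mean:.2f} (SE {post_se:.2f})")
# Because the earlier evidence does part of the work, the follow-on study
# can be smaller than a trial that ignores it and starts from scratch.
```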

Finally, Bayesian research yields information in a form ideally suited to healthcare decision-makers, who must adapt findings to real-world situations. Rather than the statistical-significance statements of traditional trials, Bayesian results are expressed in terms of probability, such as: "Drug A is 70 percent more likely to improve health status than drug B."
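One common reading of such a statement is the posterior probability that drug A outperforms drug B. The sketch below shows how a figure of that kind falls out of a Bayesian comparison; the counts are hypothetical and chosen only so the result lands near the 70 percent in the example.

```python
# Hypothetical head-to-head counts, used only to show how a
# probability-of-superiority statement is computed from posterior draws.
import numpy as np

rng = np.random.default_rng(1)

a_improved, a_total = 118, 200   # drug A: patients improved / treated
b_improved, b_total = 112, 200   # drug B: patients improved / treated

# Beta(1, 1) priors updated with the observed counts
draws_a = rng.beta(1 + a_improved, 1 + a_total - a_improved, 50_000)
draws_b = rng.beta(1 + b_improved, 1 + b_total - b_improved, 50_000)

prob_a_better = float(np.mean(draws_a > draws_b))
print(f"P(drug A improves health status more than drug B) ≈ {prob_a_better:.0%}")
```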

Early Interest for Early Stages

The appeal of adaptive trial design is starting to register at attentive drug companies. Eli Lilly has launched three early-stage trials for oncology and diabetes drugs. BMS is designing an adaptive approach to dosing studies for a migraine compound. And last year FDA began beating the drum in a big way—particularly as the adaptive approach aligns with its Critical Path initiative aimed at speeding the development of new treatments. The agency established advisory teams, held workshops, and began drafting a series of adaptive-trials guidance documents—due out this year—on issues such as evaluating interim data and maintaining statistical integrity. Still, when it comes to its pivotal Phase III trials, FDA remains cool to the new approach.

The High Cost of Doing Nothing

But what about Phase IV, when comparative real-world effectiveness data can be developed? At that point, FDA has already deemed the drug safe and effective. A novel, scientifically sound, and cost-effective research design would seem to be just the carrot the agency needs to get drug companies to make good on their commitment to post-marketing studies. Of course, FDA's stamp of approval is essential to green-light drug manufacturers' use of adaptive data in their marketing materials.

Missing Evidence and Opportunity

The Centers for Medicare and Medicaid Services (CMS) has been raising the bar on proving real-world comparative value since 2004, when then-CMS head Mark McClellan announced a new Coverage with Evidence Development (CED) policy. Where the outcomes evidence is insufficient to support a national coverage determination, CED restricts coverage to patients enrolled in clinical studies, pending better data. The message to pharma is clear: To get on the formulary, get on the evidence-based bus.

For example, in January 2005, CMS invoked its CED policy by requiring additional evidence before deciding about coverage of four colon cancer drugs used off-label: Avastin, Erbitux, Eloxatin, and Camptosar. Sponsored by the National Cancer Institute (NCI), this series of classically designed trials will meet CMS' demand for real-world data distinguishing the best from the rest. Although it remains to be seen exactly how much efficiency might have been gained, a Bayesian approach would have reduced the time, the money, and the risk of adverse findings. NCI may be missing an opportunity to pave the way for late-stage adaptive trials and a quicker coverage decision.

The price drug manufacturers pay for not producing their own definitive comparative-effectiveness evidence is living with evaluations made by other sources (see "The High Cost of Doing Nothing"). In these crunch times, companies are not searching for additional ways to increase R&D costs. But in the face of growing market demand for better real-world drug-versus-drug information, might a 30-to-50 percent gain in efficiency "tip" manufacturers into investing in late-stage Bayesian trials? It just might.

Bryan R. Luce is the senior vice president for science policy at United BioSource. He can be reached at bryan.luce@unitedbiosource.com
