It's certainly not headline news that these are tough days for the pharmaceutical industry. More than $60 billion in revenue
from blockbuster drugs will evaporate as these products go generic over the next five years, while the productivity of clinical
development has hit a particularly rough patch. Even in the companies with relatively strong pipelines, many of the new treatments
are biotech products acquired out of house. The expected authorization of biogenerics will squeeze profits even further.
This shift toward biotech products portends a general evolution in healthcare. Such products are often stunningly effective.
They are also typically targeted to narrow patient populations. The movement toward targeted therapies for niche markets intersects
with a second major trend: consumer-directed healthcare. Because consumers are paying for more and more of their healthcare,
patients who need treatment with biologics can face out-of-pocket costs stretching into thousands of dollars. This nexus is
making health economics and data mining increasingly important to drugmakers eager to show the value of a new drug.
A few years ago, pharma's use of health economics was largely confined to modeling cost-effectiveness for payers—differentiating
a product from the competitors'. But database analyses are now being applied much more widely: to gain knowledge about the
size of potential markets, to establish a pricing strategy, to inform protocol design and evaluation, and to model cost-for-value
based on a drug's actual clinical performance.
Payers, consumers, and regulators all need answers to two basic questions: Is the drug safe in real-world populations? And
is the drug effective in real-world populations? The answers, coupled with information about competing treatments and the
product's price, largely determine its market success or failure. And given their higher price, biologics often have to meet
even higher standards to be considered cost-effective from a payer's standpoint.
THE REAL VALUE OF RETROSPECTIVE DATA
At a drug's launch, real-world data on effectiveness and safety are, of course, not available. Consequently, the health-economics
evidence presented to payers generally involves modeling the cost-effectiveness and budget impact compared with other treatments
for the same indication. To get technical for a moment, cost-effectiveness models estimate the incremental cost associated
with each unit of clinical benefit obtained from treating patients with drug A vs. drug B. Budget-impact models estimate total
healthcare costs associated with offering the new therapy. Both models use data from clinical trials.
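To make the arithmetic behind these two models concrete, here is a minimal sketch in Python. All figures, patient counts, and parameter names are invented for illustration; they do not come from any real drug or payer analysis.

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: the extra cost of drug A
    over drug B per extra unit of clinical benefit (for example, per
    quality-adjusted life year, or QALY)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

def budget_impact(n_patients, uptake_share, cost_new, cost_old):
    """Total incremental spend for a payer if a share of the eligible
    population switches from the old therapy to the new one."""
    switching_patients = n_patients * uptake_share
    return switching_patients * (cost_new - cost_old)

# Invented example: the new drug costs $18,000/year and yields 0.75
# QALYs; the comparator costs $6,000/year and yields 0.60 QALYs.
print(icer(18_000, 6_000, 0.75, 0.60))            # cost per extra QALY
print(budget_impact(10_000, 0.25, 18_000, 6_000)) # payer's extra spend
```

The two numbers answer different questions: the ICER tells a payer whether the extra benefit is worth the extra cost per patient, while the budget-impact figure tells it what the switch does to total spending.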
Once the drug hits the market, real-world data on healthcare utilization—such as doctor's-office visits, emergency room visits,
hospitalizations, and concomitant medication use—begin to accumulate in retrospective databases. When a sufficient period
of time has passed (say, a year), statistical comparisons are often made between patients who started treatment on the new
drug and those on one or more competing treatments. These analyses enable drugmakers to track product uptake, medication-refill
persistency, and patient healthcare utilization. Payers are often especially interested in the latter data because they reflect
real-world use patterns—the evidence payers are constantly evaluating to revisit coverage and reimbursement decisions.
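Refill persistency of the sort payers track can be computed directly from claims-style fill dates. The sketch below uses invented records and assumed parameters (a 30-day supply per fill and a 60-day permissible gap); real analyses would take these thresholds from the payer's own definition.

```python
DAYS_SUPPLY = 30  # assumed days of medication dispensed per fill
GRACE_DAYS = 60   # assumed allowable gap before a patient counts as discontinued

def is_persistent(fill_days, horizon=365):
    """True if a patient's refill record shows no coverage gap larger
    than GRACE_DAYS through the one-year horizon."""
    fills = sorted(fill_days)
    covered_until = fills[0] + DAYS_SUPPLY
    for day in fills[1:]:
        if day - covered_until > GRACE_DAYS:
            return False  # gap too long: patient discontinued
        covered_until = day + DAYS_SUPPLY
    # persistent only if coverage reached close to the horizon
    return covered_until + GRACE_DAYS >= horizon

# Invented claims records: days (from first fill) on which each
# patient filled a prescription.
claims = {
    "pt1": [0, 30, 62, 95, 125, 160, 190, 225, 255, 290, 320, 350],
    "pt2": [0, 30, 150],  # 90-day gap, so counted as discontinued
}
rate = sum(is_persistent(days) for days in claims.values()) / len(claims)
print(rate)  # share of patients still persistent at one year
```

Aggregated over a cohort and compared across drugs, this is the kind of real-world use pattern payers revisit when they reconsider coverage decisions.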
Despite their usefulness, however, these analyses of retrospective data are limited because they do not fully control for
all sorts of confounding issues that can lead to incorrect conclusions. One concern is that the results may be affected by
"launch bias"—the tendency of physicians to prescribe new drugs to patients for whom other treatments have stopped working.
These patients, who are likely to be sicker than average, may therefore have poorer outcomes. Consequently, such comparisons
can make the new drug appear less safe or effective than the competition.
Launch bias would not be a problem if it were possible to factor in disease severity, but this information is rarely available.
For example, when comparing antidepressants by way of retrospective data, medical claims generally contain diagnosis codes
but not clinical measures of severity. Statistical methods that reduce such biases exist, but a better (and pricier) solution
is to design studies that collect the relevant data in the first place.
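The statistical fixes alluded to here include stratification and propensity-score methods. The following is a minimal stratified comparison on invented data, using prior hospitalizations as a stand-in severity proxy, since, as noted above, claims rarely carry true severity measures. Outcomes are compared within severity strata and the differences averaged, so the sicker patients funneled to the new drug no longer drag down its overall numbers.

```python
from collections import defaultdict

# Invented records: (drug, prior_hospitalizations_stratum, outcome),
# where a higher outcome score is better.
records = [
    ("new", 0, 0.90), ("new", 0, 0.80), ("new", 2, 0.40), ("new", 2, 0.50),
    ("old", 0, 0.85), ("old", 0, 0.75), ("old", 2, 0.35), ("old", 2, 0.45),
]

# Group outcomes by severity stratum, then by drug.
by_stratum = defaultdict(lambda: defaultdict(list))
for drug, stratum, outcome in records:
    by_stratum[stratum][drug].append(outcome)

def mean(xs):
    return sum(xs) / len(xs)

# Compare new vs. old *within* each stratum, then average the
# within-stratum differences.
diffs = [mean(groups["new"]) - mean(groups["old"])
         for groups in by_stratum.values()]
adjusted_difference = sum(diffs) / len(diffs)
print(adjusted_difference)  # new-vs-old effect after severity adjustment
```

In this toy data the new drug looks modestly better once severity is held constant; a naive pooled comparison would be distorted whenever the new drug's patients are concentrated in the sicker stratum.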