These days, when the conversation turns to the future of the pharmaceutical industry, you hear a lot of talk about innovative,
first-in-class new drugs. It's inspiring, but it raises an important question: Does a strategy based on first-in-class drugs
make sense for companies or for patients?
Accenture and CMR International recently collaborated in a study of 15 pharmaceutical companies, looking at innovation and
how it is linked to commercial success. The results weren't entirely surprising:
First-in-class drugs are riskier. CMR's data indicates that projects based on new targets have significantly lower success rates, in every phase of the discovery process, than those based on established targets. In assay development, for example, established target projects have a success rate of 76 percent, compared with 57 percent for new target projects. Overall, only 3 percent of projects based on new targets are likely to enter preclinical development, compared with 17 percent of projects based on established targets.
First-in-class drugs take longer to develop. CMR's data shows that projects on new targets typically take 16 months longer to deliver a drug candidate into preclinical development than projects on established targets. The biggest lag is in lead optimization, which takes an average of 17 months longer for new target projects.
To see how these figures would play out in terms of R&D productivity, Accenture and CMR modeled a theoretical pipeline. The
model starkly illustrates the implications of investing in new-target research. It shows that to deliver one submission per
year, a company focusing exclusively on new targets would need to initiate almost four times as many new projects per year
as a company working only on established targets—90 compared with 25.
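The model's conclusion follows from simple attrition arithmetic: the number of projects a company must start scales inversely with the cumulative probability that a project survives all the way to submission. A minimal sketch, using the 3 percent and 17 percent discovery-phase figures above; the downstream (preclinical-to-submission) rates are hypothetical, chosen only so this toy calculation reproduces the study's 90-versus-25 result:

```python
# Required project starts scale inversely with the cumulative success probability.

def starts_needed(submissions_per_year: float,
                  p_reach_preclinical: float,
                  p_preclinical_to_submission: float) -> float:
    """Projects to initiate per year to expect the given number of submissions."""
    overall_success = p_reach_preclinical * p_preclinical_to_submission
    return submissions_per_year / overall_success

# Discovery-phase figures from the CMR data above (3% vs. 17% of projects
# reach preclinical development); the downstream rates are assumptions made
# only for illustration, not figures from the study.
new_target = starts_needed(1, 0.03, 0.37)
established = starts_needed(1, 0.17, 0.24)

print(round(new_target), round(established))  # roughly 90 vs. 25
```

The point of the sketch is the inverse relationship: because the new-target path is leakier at every stage, a small difference in per-phase success rates compounds into a several-fold difference in the number of projects that must be initiated.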
The Price of Innovation
That sort of additional burden might make sense if first-in-class drugs were much more successful than additional-in-class
products. But it's not at all clear that they are. There are many examples of drugs that were second, third, or even fourth-to-market
in their class that surpassed the products that came before them. (Think of fourth-in-class Lipitor [atorvastatin].) There
was a time when first-in-class drugs dominated pharma's top 10: in 1990, seven of the top 10 products were
first-in-class. By 2003, though, only one was, and in 2004, just two.
At least in the current snapshot, successor drugs are more successful than first-in-class ones. If you look at today's concerns
with safety, it seems likely that the trend will continue. For a drug to reach the top 10, it generally has to be something
that can be broadly prescribed. (There are exceptions—Procrit [epoetin alfa], for example.) But we seem to be entering a world
in which safety concerns are going to put more pressure on first-in-class products.
That pressure is likely to surface as tightly defined patient populations and a more conservative approval approach, with trials aligned with a narrower set of indications. If that happens, it will be harder for a first-in-class drug to hit the top 10, at least right away.
In essence, additional therapies will have greater room to play as they come to market because fewer of the indications will
have been snatched up by the first-in-class. It will take longer for the first-in-class to exploit the entire class—and that
should encourage additional-in-class therapies to come to market.
From First to Best
None of this is to say that companies should stop pursuing first-in-class drugs. Statistically, however, the strategy of improving upon first-in-class products tends to have good—and perhaps more predictable—success rates.
We're heading into a world that will continue to encourage the development of additional-in-class products. Beyond all the
harsh words we hear these days about "me too" drugs, there is a more complex reality. Today's top 10 lists may not have many
first-in-class drugs, but they are dominated by best-in-class drugs, and that has been a benefit to worldwide human health.