
Managing R&D Risk

Article
Sanjay K. Rao

Pharmaceutical R&D prides itself on developing new products for treating afflictions with no cure, creating therapeutic advances that offer significant new benefits, finding ways to make drugs safer and more effective, and devising radical new approaches to how they are administered.

According to PhRMA, member companies have invested nearly $1 trillion in R&D since 2000, establishing the biopharmaceutical sector as the most R&D-intensive industry in the US economy. Publicly available benchmarks indicate that, on average, firms with in-market assets in the pharmaceutical industry allocate ~20-25% of sales revenue to R&D.1

A commitment to research and development, as evidenced by significant resource allocation, goes hand in hand with business success, while ensuring a supply of much-needed innovation for the benefit of patients, health care professionals, health insurers and other stakeholders in the medical ecosystem. As such, strategies that fortify the workings of an R&D organization, and those that seek to rectify its shortcomings, are essential to a pharmaceutical firm's raison d'être and desirable for the industry as a whole.

But the relationship between R&D resourcing and innovation is murky at best. For example, there is little evidence to suggest that spending more on R&D will lead to more innovation. This article discusses key risks associated with pharmaceutical R&D in practice and makes recommendations on how they could be managed for successful results.

R&D spending and innovation

Based on recent research and analysis by the author, there is substance to the proposition that spending more on pharmaceutical R&D has little to do with generating more new medicines.2 Depending on higher R&D budgets to deliver more innovation is risky at best.

Consistent with the rationale that R&D is vital to sustaining the healthcare ecosystem, investments in R&D have increased significantly over the past twenty years. In the early 2000s, when drug industry revenues were rising sharply, the industry's R&D intensity (its R&D spending as a share of net revenues) averaged about 13 percent each year. Over the decade from 2005 to 2014, it averaged 18 to 20 percent each year. The ratio has been trending upward since 2012, and it exceeded 25 percent in 2018 and 2019, the highest R&D intensities recorded by the pharmaceutical industry as a whole since at least 2000.

This trend of increasing R&D spend does not correlate with rising productivity. Chart 1 below illustrates that spending more on pharmaceutical R&D per se has little to do with generating new medicines.

R&D capital expenditure

R&D budgets are subject to the endemic risks and trends of capital markets. As such, while R&D outlays have increased over time with the cost of available capital, they have remained largely unrelated to changes in productivity as measured by the number of successful drug launches.

The cost of capital typically represents a third of the total cost of developing a pharmaceutical asset. A one percent change in the cost of capital in this context can represent as much as $68M in cost changes. For every asset that is launched after development through human testing in randomized clinical trials (RCTs), ~4 assets fail. Both successful and failed assets carry capitalized costs that accrue throughout the development cycle. The table below shows expected capitalized costs for three different R&D cost scenarios, assuming a 15-year development timeline and a weighted average cost of capital (WACC) of 8.33%.
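As an illustration of how capitalization compounds development spend, the sketch below applies the 15-year timeline and 8.33% WACC mentioned above to a hypothetical flat annual out-of-pocket spend (the $100M/year figure is an assumption for illustration only):

```python
def capitalized_cost(annual_spend, years, wacc):
    """Capitalize a flat annual R&D spend to its value at launch.

    Each year's outlay is compounded forward at the WACC for the
    years remaining until launch (spend assumed at end of each year).
    """
    return sum(annual_spend * (1 + wacc) ** (years - t)
               for t in range(1, years + 1))

# Hypothetical example: $100M/year for 15 years at an 8.33% WACC.
out_of_pocket = 100 * 15                          # $1,500M in nominal terms
capitalized = capitalized_cost(100, 15, 0.0833)   # substantially higher
```

The gap between nominal and capitalized spend is why small movements in the cost of capital translate into large swings in total development cost.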

Scale

An empirical relationship exists between scale and R&D expense. Companies that launch 2-3 new drugs incur relatively lower capitalized R&D costs per drug than those that launch far more new drugs. Maintaining cost efficiency with increasing scale is not easy, especially given the uncertainties in resources required, overheads and predicting success accurately.

Another factor that impacts total R&D investment outlays, while tending to increase risk and inherently reduce chances of market availability, is the number and type of pre-clinical assets in the R&D portfolio. Capitalized cost demands over the full development cycle are highest for pre-clinical assets, which must pass all clinical testing and regulatory phases before launch. Such assets also carry the highest risk of failure, given the multiple testing stages that must be passed through to approval. As such, careful consideration of strategies that better manage costs for such assets is warranted, including active co-development, divestiture, and/or expediting development through efficient resourcing.

Other factors

Pharmaceutical firms vary considerably in how much is spent on R&D overhead. It is a fact of R&D budgeting that a good portion of expense reflects unavoidable overheads such as skilled labor (scientists, researchers) and sunk capital costs for facilities, licensing technologies, and purchasing and maintaining equipment.

Other factors impacting costs (but not productivity) include the molecular complexity of the products in development; the type, size and number of clinical trials necessary to produce convincing evidence for successful filing; the size and structure of the patient segments targeted for obtaining commercially viable asset labels; and the extent to which one or more niches in the range of afflictions likely to be treated is fulfilled by current standards of care. While these factors directly impact R&D budgets for one or more assets, they can have little impact on proving clinical benefit or establishing one or more beneficial outcomes from market deployment of an asset in clinical development.

Clinical trial risk management

Between 54% and 58% of a typical pharmaceutical firm's R&D expense involves costs for the design, execution and analysis of data obtained through clinical trials.3 The prevailing paradigm for generating evidence that serves to establish such benefits has remained the same for over a hundred years, viz. the design, conduct and analysis of results from RCTs.

Rate of return

But the trend of rising R&D investment is not matched by a corresponding increase in clinical trial success. Just about 1 in 5 assets entering human clinical trials can expect to launch. Fewer than one in a thousand discovered assets find their way to market through clinical testing and regulatory approval.

As such, the internal rate of return (IRR) on pharmaceutical R&D investments is estimated to decline with time. Studies have shown pharmaceutical IRR estimates declining from 10.5% in 2010, to 5.5% in 2014, to 0% in 2020. A key hypothesis for this decline is the all too common law of diminishing returns. In other words, increases in R&D spending are estimated to net less marginal return with every additional dollar spent.4,5

Macro-analyses of the relationship between total industry R&D investment and return in terms of total new product launches may be questioned for overgeneralizing across diverse therapy areas and multiple treatment modalities, and for ignoring new and yet-to-be-developed technologies that could drive, and even reverse, future productivity trends. But the fact that clinical trials fail more often than they succeed is unquestionable.

Sources of risk

As can be inferred, Phase II trials, which measure human efficacy endpoints for the first time after safety assessments in Phase I, are likely to result in lower probabilities of success. This is one reason strategies that focus on acquiring de-risked assets (effectively those that have passed Phase II) are preferred over ones that recommend equal consideration of pre-clinical, early-stage assets.

Phase II and earlier clinical trial costs can represent more than two-thirds of the total capitalized cost of developing a drug from the pre-clinical phase through launch. As such, firms need to focus on strategies that advocate a rigorous process to weed out potential failures early, concentrating resources on a smaller set of assets with higher clinical success probabilities.

In asset acquisition decisions, the premium paid to acquire de-risked assets needs to be balanced against the costs required to achieve successful Phase II results. This is especially useful when decisions pertain to growing an internally developed pipeline asset versus acquiring a late-stage or in-market asset.

In principle, one can expect relatively higher returns to investment in R&D by striking a balance between spending on late stage, de-risked assets and those that are pre-clinical, in Phases I or II. Such a balance by definition would be idiosyncratic to a firm, its available capital, propensity to take on risk and track record in achieving successful results from clinical programs.

The duration of a trial presents a key source of risk to successful development. Longer trials usually mean more expense, some of which can be attributed to variable, time-related costs associated with the number of sites involved, trial facilitation, maintaining operations and infrastructure.

Other costs and risks are economic in nature, such as those related to lost opportunity: staying longer in the field does not imply gains in expected efficacy or safety per se. On the contrary, it opens the field to competition, sometimes with the benefit of modified trial designs that use learning accrued from the first-to-field trial. Key factors influencing trial duration that need to be managed proactively include selection of appropriate inclusion/exclusion criteria, sample sizes tailored to power requirements, and site selection and representation.

Managing risks due to global clinical trials

Longer human life spans throughout the world have meant increasing global incidence of chronic and degenerative diseases. Large markets for chronic diseases implicitly present global opportunities for products that would be administered over longer life cycles. To realize such opportunities, however, assets need to undergo clinical trials that are sufficiently long and powered to represent the addressable patient population over a sufficiently representative time in its in-market lifecycle.

Over the years, clinical trials have increased in complexity of design and expectations. As such, trials on average are taking longer to execute. This has invariably added to the risk of achieving successful clinical outcomes over the entire clinical trial cycle through regulatory filings; especially when countries or regions have their own idiosyncratic filing requirements.

Longer timelines increase costs and hold the potential to impinge on revenues if and when the product launches after securing approval. Given that every branded pharmaceutical product has a finite patent life, more time spent in clinical trial testing implies less time to serve patients in the market. Financially, this also implies a lower net present value (NPV) estimate of future earnings.
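The patent-life arithmetic above can be sketched with a simple NPV comparison (all cash flows and the discount rate below are hypothetical): a trial that takes two extra years both delays the same annual revenues and shortens the exclusivity window, lowering NPV on both counts.

```python
def npv(rate, flows):
    """Net present value of a series of year-end cash flows ($M)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows, start=1))

# Hypothetical asset: patent expires at year 12, $200M/year net cash flow
# from launch until expiry, discounted at 10%.
fast = [0] * 5 + [200] * 7   # launches after a 5-year trial: revenue in years 6-12
slow = [0] * 7 + [200] * 5   # launches after a 7-year trial: revenue in years 8-12

# Two extra years in the clinic both delay and truncate the revenue stream,
# so npv(0.10, slow) is materially lower than npv(0.10, fast).
```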

In this context, relying upon strategies that reduce trial design complexity and time with no loss in the ability to detect clinical effects is vital. Examples of designs that differ from the traditional randomized, controlled, prospective, interventional, fixed-duration framework include designs that rely upon retrospective data, use cross-sectional comparisons, conduct meta-analyses of available data, and deploy adaptive methods that enable faster, in-trial decisions to reduce time, complexity and costs.

Specialty product clinical trials

The disparity between rising developmental costs and flat success rates also has roots in the rising importance of products meant to treat highly specialized diseases. Products that serve patients treated by specialists are bright spots in the modern biopharmaceutical landscape. They promise substantial benefits in previously under-treated categories like cancer, arthritis, Alzheimer’s disease and multiple sclerosis. Approximately 40% of drugs in the pipelines of pharmaceutical companies are specialty drugs.6

A high proportion of such drugs are based on large molecules or complex molecular combinations that are typically delivered through injections or infusions. Many specialty drugs are more effective when used in conjunction with molecular testing for biomarkers that improve the quality of treatment selection decisions and the chances of drug success.

Complex specialty drugs require careful, controlled assessments over longer time periods compared to traditional small-molecule drugs available in pill form, with the possibility that real-world adverse events may be detected only years after the specialty drug is launched.

Clinical trials to determine the optimal pharmacodynamics, clinical properties and proof paradigms of specialty drugs are more complex in design, take longer to execute and are more expensive. Safety and efficacy conclusions are less open to generalization and more subject to differences in individual genetics.

The true burden of proof for a novel specialty medication only partially depends on results from company-sponsored clinical trials. Looking at the costs and success rates of specialty drug clinical trials alone may not provide an adequate perspective on their true value, which may be realized fully only long after they are approved and in the market, fulfilling large, unmet patient needs with additional positive impact on societal health care and related costs. Thus the actual cost of establishing the true clinical benefit of some specialty medications is a lot higher than what is incurred during the developmental testing phase. It is no surprise, then, that strategies that proactively factor in this need (by proposing one or more post-launch studies) are increasing in practice and, in parallel, becoming part of regulatory requirements.

Regulatory risks

Studies have pointed out that regulatory changes in how clinical trials are designed and conducted could help close the gap between rising developmental costs and flat trial success rates. It is vital to note that US regulations for clinical trials were written at a time when the clinical trial enterprise was simpler and targeted small molecules primarily.

Since then, the demands on clinical trials have changed, with little or slow regulatory change in parallel. For example, there is no requirement for community-based physicians (who do a large part of the diagnosing and treating with approved products) to get involved in clinical trials, which are often conducted in academic settings that are intrinsically better set up to follow trial execution regulations, albeit over longer timeframes.

Many clinical trials aim for a larger number of highly specific patient subpopulations, in studies with multiple arms set up to achieve more diverse primary and secondary endpoints. The consequence is that there is more competition for the same patients willing to participate in similar trials. Recruiting patients who meet very specific qualifying criteria is harder, takes longer and is costlier.

Further, variations in the type and number of patient segments studied under a trial necessitate more detailed analysis of sub-group characteristics that can affect study conclusions, also adding to time and cost requirements, without improving trial success rates.

An even more serious implication is that rising clinical trial costs have made the industry as a whole more averse to risk and less willing to take chances on novel medicines.7

Trial design and execution

The proposition that a molecule is inherently more efficacious than a comparator does not necessarily guarantee success in a clinical trial setting. Considerable risks associated with designing and executing RCTs need to be managed effectively so successful results are achieved. In addition to affecting anticipated results, such risks have a direct impact on cost and time resources required to establish trial results that can be reported to regulators and the wider pharmaceutical community.

Key sources of risk that influence clinical trial success rates, and thus call for effective management, include:

  • Sample sizes for test and (one or more) control arms that are adequately powered to compute meaningful hazard ratios and establish results with a minimum level of confidence and precision
  • Sample composition that reasonably represents the patient population and subpopulations likely to benefit from the use of the asset, typically constructed through appropriate inclusion/exclusion criteria
  • Selection of unambiguous clinical endpoints that reflect commercial needs and highlight asset differentiators adequately using clinical trial data measurement
  • Lack of adequate reliance on available technologies (such as biomarkers/genomic testing) that can improve measurement and impact result accuracies
  • Selection of one or more comparators that represent competing therapies or treatments that matter to the asset’s customers, including patients, prescribers, payers, IDNs, PBMs, GPOs, institutions and the wholesale/retail/specialty trade that will support distribution, access, availability and utilization
  • Overly ambitious expectations from a single trial as evidenced in the selection of trial parameters such as use of open/closed labels, type of intervention, site selection and regional/global representation
  • Inclusion of metrics that represent outcomes vital to assessing asset viability in the real world—more so than that possible in a controlled clinical trial setting—such as assessments of adherence, medical utilization, hospitalization rates/ER visits, life cycle costs and savings
  • Lack of alignment between R&D objectives and clinical strategy with corporate goals and expectations, which more often than not supersede the former, concerned as they are with the demands of maintaining a viable commercial business
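The first item above, adequately powered sample sizes, can be illustrated with a standard two-proportion sample-size formula (a simplified sketch; the response rates, significance level and power below are assumptions, and real trials would use endpoint-specific methods and specialized software):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect a difference between
    two response rates, via the normal-approximation formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical: detect a 30% vs. 45% response rate at 5% significance, 80% power.
n = n_per_arm(0.30, 0.45)   # roughly 160-165 patients per arm
```

Note how quickly the required sample, and hence cost, grows as the expected effect shrinks: halving the detectable difference roughly quadruples the patients needed per arm.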

Risk mitigation strategies

Studies have suggested strategies that focus on policy, design, execution and process improvements for reducing clinical trial costs. Revamping regulations, for example, that allow for simplified enrollment procedures, more efficient and less risk-averse protocols, and more frequent communications between trial sponsors, investigators and regulators can result in measurable increases in trial timeliness and effectiveness.

Relying more on e-technology (e.g. mobile data collection & monitoring, longitudinal collection and analysis of electronic health records) will undoubtedly make trial design and execution more efficient.

The use of FDA priority review vouchers has increased in the recent past, enabling speedier review of trial data for approval decisions and thereby reducing the extra time costs that would otherwise have been incurred.8

This article’s author has designed and executed strategic research and consulting projects that streamline decisions about costly clinical trial design choices, enabling critical commercial perspectives to inform a prioritization of multiple trial designs. The projects are driven by principles fundamental to strategic marketing, marketing research, decision modeling and finance. They bring a systematic business framework to bear on clinical trial strategies, making them less prone to uncertainties, combining market considerations and resource requirements with measures of risk and return so smart executive decisions about trial selection, design and execution are enabled.

While details of specific projects vary with considerations such as therapeutic area, drug delivery mechanism, type and number of patient segments and the competitive clinical landscape, broad elements of the framework are designed to collect retrospective and prospective data that, upon thorough analysis, enable an informed, rational view of alternative clinical trial options. They also predict, within manageable levels of statistical error, the financial impact of pursuing one trial over another. Developmental costs can then be allocated rationally under realistic and achievable assumptions about trial success rates.

Key results from such initiatives that have helped R&D executives make informed, defensible and pragmatic choices include:

  • An understanding of what would drive clinical trial success for the product under consideration
  • Estimates of the relative importance of trial-defining parameters in shaping trial success, including study design characteristics such as patient subpopulation descriptors, size and power, type of primary and secondary endpoints, the number of target indications and the number of study arms
  • Assessment of the impact of study design parameters on study costs, and the relationship between costs and probability of trial success
  • Identifying key sources of risk, defining a range of values such sources may take and a quantification of net risk under alternate clinical trial descriptions
  • Forecasts of the risk-adjusted net present value (rNPV) of each clinical trial option, based on alternate scenarios as described on the basis of its parameters, costs and risks
  • Estimates of changes in rNPVs as a result of varying key inputs such as asset market characteristics, type and number of competitors, trial design characteristics, costs, assumptions about technical success and commercial receptivity to trial outputs.
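The rNPV forecasts listed above can be sketched as probability-weighted discounted cash flows: each phase's cost is weighted by the chance of reaching that phase, and the launch value by the chance of surviving all phases (every number below is hypothetical):

```python
def rnpv(phases, launch_value, rate):
    """Risk-adjusted NPV ($M) of a development program.

    phases: list of (cost, duration_years, p_success) in order.
    launch_value: present value of post-launch cash flows at launch time.
    Costs are incurred only if all prior phases succeed, so each is
    weighted by the cumulative probability of reaching it.
    """
    value, p_reach, elapsed = 0.0, 1.0, 0
    for cost, years, p_success in phases:
        value -= p_reach * cost / (1 + rate) ** elapsed
        elapsed += years
        p_reach *= p_success
    value += p_reach * launch_value / (1 + rate) ** elapsed
    return value

# Hypothetical program: Phase I-III costs ($M), durations and success probabilities.
program = [(30, 2, 0.60), (60, 3, 0.35), (150, 3, 0.60)]
risky = rnpv(program, launch_value=2000, rate=0.10)
certain = rnpv([(c, y, 1.0) for c, y, _ in program], 2000, 0.10)
# Attrition risk erodes most of the unadjusted value of the program.
```

Varying the inputs (success probabilities, costs, durations, discount rate) reproduces the kind of sensitivity analysis described in the last bullet above.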

Conclusion

Spending more on R&D, or increasing R&D spend as a proportion of sales, cannot be expected to increase R&D productivity or to result in more innovation. Much of R&D spending is concerned with managing risks that are unrelated to establishing the clinical evidence vital to significant innovation. Identifying and better managing the risks inherent in clinical trial design, execution and regulation presents a far better opportunity to control R&D spend, bringing it into alignment with increased chances of asset developmental success.

Pharmaceutical executives in charge of managing R&D budgets would do well to focus more on modern management science approaches, based on relevant data, marketing science and financial modeling techniques, that identify and recommend strategies for making rational clinical trial investments: designing trials that are more in line with commercial expectations, reduce developmental risk, increase chances of clinical success and ultimately improve returns to R&D spend.

Sanjay K. Rao, Director, Corporate Strategy at Emergent

References

  1. Congressional Budget Office (2021); Research and development in the pharmaceutical industry, April.
  2. Rao, Sanjay K. (2017); R&D spending & success: key trends, issues & solutions, Journal of Commercial Biotechnology, vol. 23, No. 1.
  3. Simoens, S. & I. Huys (2014); R&D costs of new medicines: a landscape analysis, review article, Frontiers in Medicine, Regulatory Science, October.
  4. Measuring the return from pharmaceutical innovation: turning a corner? (2014); Deloitte, www.deloitte.co.uk.
  5. Stott, K. (2018); Pharma's broken business model (Part 1), ANOVA, www.anovaevidence.com, blog post.
  6. Rao, Sanjay K. (2011); Strategic priorities for specialty care products, PM360, August.
  7. Collier, R. (2009); Rapidly rising clinical trial costs worry researchers, Canadian Medical Association Journal, February, 180(3), 277-278.
  8. Rao, Sanjay K. (2015); Trends in market access for specialty biologics: challenges & promises, Journal of Commercial Biotechnology, April, vol. 21, No. 2.