Minimizing the Risk of Mistakes in HTA

We all make mistakes — and that includes those involved in Health Technology Assessment (HTA). Following an error made by ICER this year, Leela Barham looks at similar mistakes made in the past and what can be done about them.

In January 2021, the US-based, not-for-profit Institute for Clinical and Economic Review (ICER) issued a press release to correct their estimate of the cost-effectiveness of the treatment Adstiladrin (nadofaragen firadenovec) from FKD Therapies Oy and FerGene.1 (Adstiladrin is licensed to treat non-muscle invasive bladder cancer that is unresponsive to Bacillus Calmette-Guerin [BCG] intravesical therapy.) ICER's original report, released in December 2020, had suggested that nadofaragen firadenovec would be cost-effective at a price in the range of US$121,000 to US$201,000. But ICER had to revise that range after spotting a data entry error. The new price range was US$158,600 to US$262,000.
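
The arithmetic behind such a correction is simple in principle: for a given willingness-to-pay threshold, the value-based price is the highest price at which the incremental cost-effectiveness ratio stays within that threshold, so any slip in an input flows straight through to the quoted range. The sketch below illustrates this with entirely hypothetical numbers (the QALY gain, non-drug costs and thresholds are invented for the example; this is not ICER's model for nadofaragen firadenovec):

```python
def value_based_price(incremental_qalys, other_incremental_costs, threshold):
    """Highest drug price at which the incremental cost-effectiveness ratio
    stays at or below the willingness-to-pay threshold (in $ per QALY)."""
    return incremental_qalys * threshold - other_incremental_costs

OTHER_COSTS = 60_000  # hypothetical non-drug incremental costs vs. comparator

# Hypothetical scenarios: a data-entry error understates the QALY gain,
# and the correction shifts the whole value-based price range upwards.
scenarios = {
    "as submitted, with data-entry error (1.5 QALYs gained)": 1.5,
    "after correction (1.8 QALYs gained)": 1.8,
}

for label, qalys in scenarios.items():
    low = value_based_price(qalys, OTHER_COSTS, threshold=100_000)
    high = value_based_price(qalys, OTHER_COSTS, threshold=150_000)
    print(f"{label}: value-based price US${low:,.0f} to US${high:,.0f}")
```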

It’s not just HTA agencies that make mistakes. It happens in company submissions to high-profile agencies like the UK's National Institute for Health and Care Excellence (NICE) too.

Researchers have looked into technical errors and validation processes in economic models submitted to NICE during 2017.2 Their analysis found that only two of the 41 Single Technology Appraisals (STAs) completed in 2017 had no reported errors. Nineteen (46%) had between one and four errors, 16 (39%) had between five and nine, and four (10%) had more than ten. Maybe 2017 was a bad year, but probably not.

Mistakes matter

With the importance placed on recommendations that come from HTA agencies, it really does matter that their numbers are (roughly) right. A positive NICE recommendation comes with a requirement for the NHS to fund the treatment. ICER decisions don't carry the mandate of other HTA agencies, but according to ICON, who have kept a close eye on the agencies, ICER does influence reimbursement and that influence is steadily growing. In ICON's 2020 survey of 30 US payers, 50% said they use ICER reports as a negotiation point in rebate/pricing discussions.3

It's hard enough to run "good" models, but both ICER and the companies that submit to HTA agencies are under pressure to work quickly. There are downsides to waiting too long; patients may miss out on cost-effective treatments. For NICE, dealing with errors can mean adding meetings and lengthening the time it takes to produce final guidance.

This is not an argument for an unrealistic level of precision — there are good reasons why it is hard to come to a definitive answer for the cost-effectiveness of new treatments — but efforts should be made to minimize the errors and allow the focus to be on the genuine uncertainties and how to address them.

Strategies to reduce mistakes

Strategies can be put in place to help minimize mistakes. Obvious steps include allowing time for internal peer review.4 With careful agencies like ICER, you can expect some internal reflection on how to reduce the chance of errors, but that remains, at least for now, outside of company control.
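
Much of what internal peer review catches is mundane: transition probabilities that no longer sum to one, negative costs, utilities outside a plausible range. Checks of that kind can also be automated and run every time a model changes. The sketch below is a hypothetical example of such black-box sanity checks on a simple cohort model's inputs, written in the spirit of verification checklists such as TECH-VER,4 rather than an implementation of any particular agency's process:

```python
import numpy as np

def sanity_check_model(transition_matrix, state_costs, state_utilities):
    """Basic black-box checks on cohort-model inputs; returns a list of problems."""
    problems = []
    tm = np.asarray(transition_matrix, dtype=float)

    # Each row of a transition matrix should be a probability distribution.
    if (tm < 0).any() or (tm > 1).any():
        problems.append("transition probabilities outside [0, 1]")
    if not np.allclose(tm.sum(axis=1), 1.0):
        problems.append("transition matrix rows do not sum to 1")

    # Costs should be non-negative; utilities typically lie in [-1, 1], with dead = 0.
    if any(c < 0 for c in state_costs):
        problems.append("negative state cost")
    if any(u < -1 or u > 1 for u in state_utilities):
        problems.append("utility outside plausible range [-1, 1]")

    return problems

# Hypothetical three-state model (stable, progressed, dead) with a deliberate
# data-entry error: the first row of the transition matrix sums to 1.05.
issues = sanity_check_model(
    transition_matrix=[[0.85, 0.15, 0.05],
                       [0.00, 0.90, 0.10],
                       [0.00, 0.00, 1.00]],
    state_costs=[1_200, 4_500, 0],
    state_utilities=[0.80, 0.55, 0.0],
)
print(issues or "no problems found")
```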

But there are external options when companies are in control of the models built for their products. Jeanette Kusel, Director of NICE Scientific Advice, has highlighted the NICE PRIMA (Preliminary Independent Model Advice) service.5 This is a fee-based peer-review service, launched in 2017, that can help ensure the quality of model structures, computation, coding, usability and transparency. Sounds good, right? Yet the service was used once in 2019/20 and four times up to October 2020, according to an FOI response from NICE.6 That suggests it's not having a big impact just yet.

Other options are much more strategic and may prompt some reflection at the C-suite level. A more radical suggestion is company collaboration in pharmacoeconomic modeling. This could take longer to achieve but reduces the chances of error. Multiple myeloma has been highlighted as an area where multiple models with very similar structures seek to solve very similar problems from the perspective of disease progression.7

Collaboration can be instigated by HTA agencies too. The Netherlands' National Health Care Institute (Zorginstituut Nederland/ZIN) has been actively exploring what they call "multi-use" models.8 Part of the rationale is efficiency; ZIN spends a lot of time testing the quality of models. This could change the power dynamics around who develops and manages models and how they accommodate new treatments.

Time will tell which strategies work best to minimize mistakes, but those involved in commissioning as well as developing models should keep monitoring the options. When HTAs, and the economic models used within them, matter so much, it's in everyone's interest to reduce errors.

Leela Barham is a researcher writing on health and pharmaceuticals from a health economic and policy perspective.

Notes

  1. ICER (2021). ICER issues correction to final evidence report on new therapies for bladder cancer. https://icer.org/news-insights/press-releases/icer-issues-correction-to-final-evidence-report-on-new-therapies-for-bladder-cancer/
  2. Radeva, D., Hopkin, G., Mossialos, E. et al. (2020), "Assessment of technical errors and validation processes in economic models submitted by the company for NICE technology appraisals," International Journal of Technology Assessment in Health Care, 36(4). https://www.cambridge.org/core/journals/international-journal-of-technology-assessment-in-health-care/article/abs/assessment-of-technical-errors-and-validation-processes-in-economic-models-submitted-by-the-company-for-nice-technology-appraisals/4B2C3FAA1BB1E519F41BA1B1DA2E1AD5#
  3. ICON (2020). ICER’s impact on payer decision making: Results of ICON’s third annual survey. https://iconplc.com/insights/value-based-healthcare/icers-impact-on-payer-decision-making/
  4. Büyükkaramikli, N.C., Rutten-van Mölken, M.P.M.H., Severens, J.L. and Al, M. (2019), "TECH-VER: A verification checklist to reduce errors in models and improve their credibility," Pharmacoeconomics, 37, pp. 1391–1408. https://link.springer.com/article/10.1007/s40273-019-00844-y
  5. Kusel, J., LinkedIn. https://www.linkedin.com/feed/update/urn:li:activity:6686258839785918464/
  6. NICE response to FOI request, October 28, 2020.
  7. Hatswell, A.J. and Chandler, F. (2017), "Sharing is caring: The case for company-level collaboration in pharmacoeconomic modelling," Pharmacoeconomics, 35, pp. 755–757. https://discovery.ucl.ac.uk/id/eprint/1558718/3/Hatswell_Sharing%20is%20caring_v5.pdf
  8. National Institute for Public Health and the Environment: Ministry of Health, Welfare and Sport (2020), Multi-use disease models: A blueprint for application in support of health care insurance policy and a case study in Diabetes Mellitus. https://rivm.openrepository.com/bitstream/handle/10029/624661/2020-0145.pdf?sequence=1&isAllowed=y