
England's Latest Innovation Scorecard: A Patchy Picture
A common gripe from those in the pharma industry in England is that even when their products have been found to be cost effective by the National Institute for Health and Care Excellence (NICE) (not always easy to do these days), uptake in the NHS can be ‘low and slow’. The latest innovation scorecard, published in June 2013, suggests the picture on uptake remains patchy at best.
Measuring access to innovation
The innovation scorecard was one of the tangible outcomes from Innovation, Health and Wealth, the Department of Health’s December 2011 report on accelerating the adoption and diffusion of innovation in the NHS.
The June 2013 innovation scorecard
The June 2013 innovation scorecard covers:
• 2011 data previously presented in the Use of NICE appraised medicines in the NHS in England reports
• 2012 data for Defined Daily Doses (DDDs) for 12 medicines in primary care from NHS Business Services Authority data
• 2012 data for medicines purchased in secondary care for 62 medicines in mg or units per 100,000 Finished Consultant Episodes (FCEs) (roughly speaking FCEs are completed stays in hospital), with data coming from the Commercial Medicines Unit at the Department of Health
• 2012 data for medicines purchased in secondary care for 8 medicines in mg per 100,000 FCE bed days, with data coming direct from the pharma companies (a rough sketch of how these per-100,000 rates are put together follows this list)
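For readers who want to see how these per-100,000 rates fit together, here is a minimal sketch in Python; the quantities, population and FCE counts are invented for illustration and are not taken from the scorecard.

```python
# Illustrative sketch of the scorecard's rate metrics; every figure below is
# made up and none comes from the scorecard itself.

def rate_per_100k(quantity, denominator):
    """Scale a raw quantity (DDDs, mg or units) to a per-100,000 rate."""
    return quantity / denominator * 100_000

# Primary care: Defined Daily Doses per 100,000 population for a hypothetical CCG.
ddds_dispensed = 120_000   # DDDs of a medicine dispensed over the year
population = 250_000       # registered population of the CCG
print(rate_per_100k(ddds_dispensed, population))   # 48000.0 DDD per 100,000 population

# Secondary care: mg purchased per 100,000 Finished Consultant Episodes (FCEs).
mg_purchased = 5_400_000   # mg of a medicine purchased over the year
fces = 180_000             # completed hospital stays
print(rate_per_100k(mg_purchased, fces))           # 3000000.0 mg per 100,000 FCEs
```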
Variation
It’s pretty hard to interpret much from the scorecard, to be honest, but it’s a valiant effort. The most intuitive area is the comparison of observed uptake versus expected uptake; that’s probably also the hardest to get right, because working out what was expected isn’t straightforward, let alone knowing with confidence what was prescribed. Part of the complexity is linked to the multiple indications for some drugs; for others it lies in use being focused on sub-groups of the patient population, or perhaps simply in interpreting the TA recommendations. The scorecard shows (a simple sketch of this kind of comparison follows the list):
• 4 medicines/groups of medicines where use is higher than expected overall at the England level (Varenicline (TA123), Osteoporosis (TA160, 161, 204), Statins (TA94), Ranibizumab (TA155))
• 3 medicines/groups of medicines where use is lower than expected overall at the England level (Acute Coronary Syndrome (TA47), Prucalopride (TA211), Febuxostat (TA164))
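To make the observed-versus-expected comparison concrete, here is a minimal sketch; the medicine names, figures and the 10% tolerance band are all invented assumptions, not the scorecard’s actual method.

```python
# Minimal sketch of an observed-versus-expected comparison.
# Medicine names, figures and the 10% tolerance band are invented assumptions.
uptake = {
    # medicine: (observed use, expected use given the eligible population and TA guidance)
    "medicine_A": (150_000, 120_000),
    "medicine_B": (40_000, 90_000),
    "medicine_C": (70_000, 72_000),
}

for medicine, (observed, expected) in uptake.items():
    ratio = observed / expected
    if ratio > 1.1:
        verdict = "higher than expected"
    elif ratio < 0.9:
        verdict = "lower than expected"
    else:
        verdict = "broadly in line with expectations"
    print(f"{medicine}: {ratio:.2f} of expected use -> {verdict}")
```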
For comparisons of DDDs per 100,000 population, the takeaway is simply that use varies, and it can vary a lot, across England. For example, the biggest difference across Clinical Commissioning Groups (newly formed organizations responsible for spending circa £80 billion of the NHS budget) was close to 370,000 DDD per 100,000 population for Zopiclone (ranging from 58,023 to 427,988 DDD per 100,000 population). That compares to the smallest range, seen for Zaleplon, which ran from 0 to 265 DDD per 100,000 population. What that really means is difficult to determine; in essence, the main source of variation the scorecard adjusts for is population size, and there are myriad other drivers of variation beyond that.
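As a rough illustration of that spread, the short sketch below reproduces the Zopiclone range quoted above; the intermediate CCG values are invented.

```python
# Spread in Zopiclone prescribing across CCGs, in DDD per 100,000 population.
# The minimum and maximum match the figures quoted above; the middle values are invented.
zopiclone_ddd_per_100k = [58_023, 112_450, 230_900, 355_010, 427_988]

spread = max(zopiclone_ddd_per_100k) - min(zopiclone_ddd_per_100k)
print(f"Range across CCGs: {spread:,} DDD per 100,000 population")  # 369,965, close to 370,000
```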
What next?
The HSCIC are keen to get feedback, particularly as the scorecard is classed as ‘experimental’ statistics. That just means that the approach to generating them is still under development and that they should be used with caution. The HSCIC are inviting people to let them know what they think via a feedback form. They’re asking people to think about:
• Suitability of the data and any alternatives (though it’s hard to think what could have been missed with so many groups involved in the work, including pharma companies like GSK and Roche)
• Whether there should be an additional central data collection or if local reporting requirements could work just as well (and that’s a hard one to answer at a time when the ‘new’ NHS is bedding down)
Some might point out that efforts should perhaps be made to look more closely at more recent innovations; some of the drugs included in the scorecard have been in use for years.
Other key issues are more about answering the ‘so what’ question. If a CCG, or hospital, isn’t in line with expectations, just what will happen? Will action be taken when they’re above as well as below expectations? And crucially, what does this all mean for patients and their health, and for future innovation?
That there will be variation is inevitable; perhaps what we need to do is move to a ‘comply or explain’ approach. Given how much energy is expended on NICE TAs on many people’s parts (companies, patient groups, clinicians, NICE itself), there really should be compliance with NICE TA recommendations, with only ‘reasonable’ departures from guidance, supported by a transparent rationale. Especially if that captures the unexpected quirks which might actually need a change in the guidance; real life tends to be somewhat different to the evidence used in NICE appraisals, after all.
Leela Barham is an independent health economist. You can contact her at