Data Disclosure: Sealing the Error Envelope

Article

Pharmaceutical Executive

Pharmaceutical Executive-03-01-2016
Volume 36
Issue 3

With the emerging industry commitment to publicly disclose research and clinical trial results, two pivotal issues come to mind, and serve as a reminder to companies that true transparency depends on the truthfulness of the evidence that binds it.

William Looney

For big Pharma today, the currency of public trust is its most devalued asset, the restoration of which depends on hefty investments in transparency. Progress toward this goal can fairly be described as variable and ad hoc. While drug pricing remains almost entirely non-transparent, we now have an emerging industry commitment to publicly disclose results from research studies and clinical trials, including work that company sponsors abandoned for internal reasons, like failing to secure a desired endpoint. You can call it a bankable addition to that depleted account of public trust.

However, in the business of biopharma, it always pays to be careful what you wish for. Data disclosures designed to promote openness in the public eye must account for the complexity of today’s R&D enterprise. Approximately 800,000 research articles are added to US public databases each year, a raw byproduct of industry’s $60 billion annual investment in developing new medicines for patients.

This is data dumping on a prodigious scale. Disclosure is a worthy goal, but can it by itself deliver the larger aim of raising the bar on both the value and the credibility of the industry’s published research?

Two issues come to mind. First, how do you separate out real insights, the signal, from the background noise induced by volumes of disaggregated data that are hard to place in the proper context? Second, what additional steps, beyond disclosure, are needed to “de-risk” against errors or misinterpretation of publicly disclosed research that could end up leading medical practice and public policy in the wrong direction?

This second question is important if industry’s commitment to transparency is to prevail. It’s good to know that, in addition to the efforts of trade associations like PhRMA and EFPIA, a multi-stakeholder initiative is in place to tackle the practical details that must undergird any commitment to research transparency. The International Society for Medical Publication Professionals (ISMPP), a non-profit group whose 1,400 members are drawn from big Pharma, CROs, communications agencies, and medical journals, focuses on improving the process by which data developed within the R&D industry is compiled and published. ISMPP’s goal is to make this information accurate, analyzable, and accessible through best practices that address challenges such as publication bias, lapses in statistical rigor, and reliance on paid ghostwriters.

A big step in this direction was agreement last year on a “Guideline on Good Publication Practice for Communicating Company-Sponsored Medical Research,” published in the Annals of Internal Medicine. With 70% of funding for clinical trials coming from private industry, parties to the Guideline know that public confidence in the integrity and interpretative value of this research starts with manuscript development, when data is compiled, evaluated, and brought forward to conclusion. The Guideline stresses that failures here “may result in poorly informed decision-making and reduce the efficiency and quality of healthcare.”

It is also encouraging to see some timely moves to address the misuse of statistics that drive the analytics behind the research. Pharm Exec readers might look with interest at a recent commentary in the peer-reviewed journal Clinical Therapeutics. Janet Forrester, an Associate Professor at Tufts University School of Medicine, reviewed recent manuscripts submitted to the journal to track the frequency of common statistical errors made by authors and to identify measures to reduce them, such as recognizing the limitations of statistical software and including more statistical experts in the manuscript review process.

Forrester notes that “many articles in the literature do not explain the statistical analyses in detail, leaving the reviewer to trust that the analyses are valid.” Her review concludes that errors in published works are, in fact, quite prevalent. The most basic flaws are using the mean where the median is appropriate; not measuring variability in summary statistics; not accounting for missing or non-independent data; and reporting P values on the role of chance in a complex table without stating what test was used. Each of these errors can result in a skewed finding, with significant implications for the research as a guide to policy or clinical decision-making. 
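
To make the first two flaws concrete, here is a minimal sketch in Python (illustrative only; the sample values are invented for demonstration and do not come from Forrester’s review). In a right-skewed sample, the mean alone misrepresents the typical case, which is why the median and a measure of variability belong alongside it in any summary.

```python
# Minimal illustration (invented data, not from Forrester's review) of two of
# the flaws named above: using the mean where the median is appropriate, and
# omitting variability from summary statistics.
import statistics

# Hypothetical right-skewed outcome data, e.g., days to symptom resolution;
# a single extreme value pulls the mean away from the typical case.
days = [3, 4, 4, 5, 5, 6, 6, 7, 8, 45]

mean = statistics.mean(days)       # 9.3  -- inflated by the outlier
median = statistics.median(days)   # 5.5  -- robust to the outlier
sd = statistics.stdev(days)        # ~12.6 -- the large spread flags the skew

print(f"mean   = {mean:.1f} days")
print(f"median = {median} days")
print(f"SD     = {sd:.1f} days")

# Reporting "mean = 9.3 days" with no median and no measure of variability
# would suggest most patients took well over a week to recover, when in
# fact nine of the ten values fall between 3 and 8 days.
```

The same logic extends to Forrester’s other flags: a P value means little unless the test that produced it (a t test, a chi-square test, and so on) is stated alongside it.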

All this is a reminder to industry that true transparency, and the reputational benefits that accrue from it, depends on the truthfulness of the evidence that binds it. It’s a work in progress. Let’s call it hopeful.

 

William Looney is Editor-in-Chief of Pharm Exec. He can be reached at wlooney@advanstar.com. Follow Bill on Twitter: @BillPharmExec
