As draft ICH GCP guidelines now require root cause analysis, novel methods for risk analysis and triage must be adopted in drug development.
Decades ago, a study would typically be performed by a seldom-changing team, sometimes working out of a single site. Now, clinical trials have become much more intricate: protocols have skyrocketed in complexity, multiple outsourced providers manage operations, and sites are located across the globe in search of adequate patient populations and lower cost locations.
The rising risk of errors, particularly those with complex sources that may be difficult to ascertain in large organizations, has led the upcoming 2016 International Conference on Harmonization guidelines for Good Clinical Practice (ICH GCP) to include an addendum to its Non-compliance (5.20) section, which now instructs: “When significant noncompliance is discovered, the sponsor should perform root cause analysis and implement appropriate corrective and preventive actions.”
The addendum specifies three distinct needs: risk identification, root cause analysis, and bespoke corrective action. The currently predominant practice of 100% source data verification (SDV) does not support the three requirements of the ICH GCP addendum, nor do many risk-based monitoring (RBM) initiatives. The industry is thus expected to adopt new standards.
This article will examine the deficits in current risk mitigation approaches and then present a new paradigm for risk management that is aligned with the new ICH GCP guidelines. This paradigm, called the human factors analysis and classification system (HFACS), has been employed by organizations in many other industries, from commercial aviation to mining, to resolve otherwise difficult-to-detect causes of errors. To date, eight pharmaceutical and biotechnology companies have adopted the approach in ongoing clinical trials.
How errors hide
To highlight the deficits of current approaches to identifying errors in clinical trials, consider a theoretical study with 16 sites during which traditional data monitoring reveals one site with an abnormally high number of errors (see site P in chart below).
At the problem site, adverse events and concomitant medications were recorded in source or hand-written logs, but not entered in the electronic data capture (EDC) system. The traditional mitigation response is to retrain staff on the EDC system. Will retraining solve the problem? Possibly. The problem is that this method of analysis and response only addresses errors that stem directly from improper training.
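The kind of traditional monitoring signal described above can be sketched as a simple outlier check on per-site error counts. The counts and the two-standard-deviation threshold below are hypothetical, chosen only to illustrate how a single anomalous site surfaces; no real trial data or monitoring tool is implied.

```python
from statistics import mean, stdev

# Hypothetical error counts for a 16-site study (sites A-P);
# site "P" is the abnormal site from the example above.
site_errors = {chr(ord("A") + i): n for i, n in enumerate(
    [4, 3, 5, 2, 4, 3, 6, 4, 2, 5, 3, 4, 2, 3, 5, 21])}

def flag_outlier_sites(counts, z_threshold=2.0):
    """Return sites whose error count lies more than z_threshold
    standard deviations above the study-wide mean."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    return [site for site, n in counts.items()
            if sigma > 0 and (n - mu) / sigma > z_threshold]

print(flag_outlier_sites(site_errors))  # only site "P" is flagged
```

A check like this says only *where* errors cluster, not *why* they occur, which is exactly the gap the following paragraphs describe.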
The underlying fault lines may run deeper, perhaps hidden within discrepant versions or translations of the study protocol. Other problems, particularly for modern multi-site trials, may be within a site’s organizational structure or inadequacies in its resourcing or culture. Such error sources are invisible to traditional analytical methodology and RBM. Furthermore, they cannot be resolved by retraining. In fact, some errors may have multiple causes, which may be overlooked by even the most experienced clinical research associates (CRAs) if they lack a standardized framework to evaluate errors’ root causes.
Lessons from other industries
The proposed ICH GCP requirement for root cause analysis and bespoke corrective responses is not novel. It draws upon a history of reform in other industries that addressed difficult-to-recognize sources of risk in complex organizations.
One example is commercial aviation, which suffered from a poor safety record for much of the twentieth century. In the late 1990s, several systematic estimates suggested that 70-80% of aviation accidents could be attributed at least in part to human errors, not aircraft malfunctions. Identifying why the errors occurred through classical approaches of root cause analysis, in which each incident is analyzed and corrected individually, proved unsuccessful because patterns could not be drawn amid the heterogeneity of the primary analyses.
The response from the Federal Aviation Administration was adoption of HFACS, which allowed the agency to “systematically examine underlying human causal factors and to improve aviation accident investigations.” The ability to aggregate events for standardized analysis elucidated a set of factors, including cultural and communication issues, that contributed to errors and were common to many organizations. Reforms aimed at preventing those issues helped the airline industry to halve the incidence of plane crashes.
Another HFACS success occurred in the high-risk Australian mining industry. An HFACS evaluation of 508 incidents pinpointed specific skill-based operator errors as the primary causative agent. Further, skill-based errors in the mining operation were found to be homogeneous across mining sites, while a smaller number of identified decision-level errors differed significantly between sites. The results were focused, specific, and actionable in terms of corrective actions for retraining or protocols to ensure the presence of required skillsets at critical time points. Furthermore, HFACS stratified root causes so that corrective actions could be prioritized to reduce accidents as quickly as possible.
What is HFACS?
The standardization, aggregation, and granularity of HFACS make it exceptionally appropriate as an analytical framework for clinical trials. In particular, aggregation allows CRAs and project management teams to create a corrective plan that is actionable not only in addressing the error itself, but also in mitigating all structural break points identified in the process.
HFACS stratifies the system into four layers, the topmost of which is the active failure layer, where active errors are identified.
The next three layers (preconditional, supervisional, organizational) go deeper into clinical study design and identify latent (dormant or undetected) errors. The preconditional layer identifies errors resulting from physical or mental states or limitations that predispose the system to failure. Errors at the supervisional layer are leadership errors, which may result from poor training of study operators, poor protocol implementation, improper supervision, and other leadership failures. Organizational errors may result from poor organizational climate, process, or resource management. If left unaddressed during corrective actions, any of these latent failure layers will continue to embody a break point in study design with the potential to reemerge.
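The four-layer stratification above can be modeled as a small classification-and-aggregation step. The layer names follow the article; the data model, field names, and sample reports are a hypothetical sketch for illustration, not ICON's actual implementation.

```python
from collections import Counter
from dataclasses import dataclass

# The four HFACS layers, from active failures down to the three
# latent layers described in the text.
LAYERS = ("active", "preconditional", "supervisional", "organizational")

@dataclass
class ErrorReport:
    site: str
    description: str
    layer: str          # one of LAYERS
    category: str       # e.g. "Processes", "Training", "Supervision"

    def __post_init__(self):
        if self.layer not in LAYERS:
            raise ValueError(f"unknown HFACS layer: {self.layer!r}")

def aggregate(reports):
    """Count reports per (layer, category) pair, so causes that recur
    across sites become visible instead of being handled one by one."""
    return Counter((r.layer, r.category) for r in reports)

# Illustrative reports, echoing the EDC example from earlier.
reports = [
    ErrorReport("P", "AE in source log, not in EDC", "organizational", "Processes"),
    ErrorReport("C", "AE in source log, not in EDC", "organizational", "Processes"),
    ErrorReport("P", "Outdated protocol version used", "supervisional", "Training"),
]
print(aggregate(reports).most_common(1))
```

The dominant (layer, category) pair points corrective action at the structural cause, which is the aggregation benefit the case studies below rely on.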
Applying HFACS to real trials
The combination of real-time data, data visualization, and HFACS is a powerful tool for CRAs to rapidly identify errors in trial execution and address the actual instigating factors. ICON has adapted HFACS for clinical trials through a methodology called Patient Centric Monitoring (PCM), named for its focus on significant errors that affect patient safety and trial integrity. To date, eight sponsors have applied PCM, including HFACS, in their trials’ monitoring protocols.
Two sample case studies follow and draw upon data from ongoing trials that employ HFACS.
Case study #1
After aggregation and analysis of error reports, CRAs at eight investigator sites identified 13 high-impact errors in adverse event reporting, spanning multiple subjects, all classified under the human factor category of “Processes.”
In prior studies, without HFACS, the CRAs would simply have recorded the error, if it was noted at all. Here, having identified the errors and their underlying human factor as “Processes,” the CRAs described each error and its root cause in enough detail to support effective preventive action.
The CRAs’ reports included:
In response, rather than retrain staff on a study protocol that would not be adequately supported across the organization, the CRAs prevented future errors by helping the sites to establish standardized processes for handling adverse events.
Case study #2
In this trial, CRAs identified 33 high impact errors at 15 sites, which were attributed to the human factors classifications of “training” and “supervision.” These error sources spanned all categories of study procedures.
The CRAs’ reports included:
The specificity of the human factors underlying the identified errors, and of their root causes, allowed the CRAs to target responses at the actions and people introducing risk to the trial and its patients. Importantly, the analysis ensured that the immediate circumstances under which an error occurred were not mistakenly conflated with the incident’s true causes.
In each of these two case studies, the causes of the errors were not homogeneous. It follows that the response to errors cannot be homogeneous either. In comparison to training-based remediation approaches, such as those traditionally associated with 100% SDV, the power of HFACS is its ability to elucidate structure in study errors. That structure lets CRAs isolate or consolidate corrective-action resources as appropriate, protecting data quality across a wider swath of the trial and, over the course of the study, saving both money and time.
The results of the first clinical trials to employ HFACS are encouraging, and these initial successes are prompting further groups to adopt the methodology. The new guidance from the ICH GCP is an acknowledgement of a perceived need for the biopharmaceutical industry to evolve better safeguards for study integrity and data quality. Adoption of HFACS not only complies with the new requirements, but also builds a base of knowledge to make future trials, protocols, and training programs better prepared for the complexities of the modern, global clinical trial.
Clara Heering is Vice President, Clinical Risk Management, ICON plc. She can be reached at Clara.Heering@iconplc.com