FDA’s Guidance on Using RWD in Evidence Generation Is a Big Step Forward


Practices to ensure accuracy in selected datasets.

Matthew W. Reynolds, PhD


Real world data (RWD) has been used for more than four decades to expand knowledge about a variety of research questions in healthcare, from disease epidemiology to the performance of treatments and devices in real world settings. This data can play a definitive role in generating evidence, provided the data sources are thoroughly vetted and fit-for-purpose, and the datasets match both the specific research question and the specific regulatory need.

In September and October, FDA issued new guidance about assessing electronic health record (EHR) and claims data and data standards for submissions containing RWD. This action shows the agency’s growing recognition of the value of leveraging RWD to gain insights into the safety, effectiveness, and usage of pharmaceutical products and biologics. It also presents an opportunity for researchers to address historic challenges around the crucial activity of appropriately selecting and vetting RWD sources.

The possible questions that we can answer are limited only by the ability of researchers to use RWD effectively based on its reliability and relevance for the question at hand. By applying the following practices, researchers can better evaluate the fitness-for-purpose of RWD and ensure that they present the proper rationale for approval of the datasets they select.

Set clear criteria for the study

No single “perfect” database exists to support every research need, because databases are not simply good or bad. Some datasets are suitable for many research questions, others for only a few specific ones. For example, a small payer EHR database containing unique and detailed clinical information not typically available in other datasets might still be constrained by its size or setting, resulting in a regional or limited focus, a smaller sample size, and an imperfect fit for purpose.

By involving all relevant stakeholders for input on the dataset’s required deliverables, researchers can set clear criteria in advance. This enables them to make informed decisions on precisely what data must be included vs. attributes they can afford to lose.

FDA’s data standards guidance walks practitioners through the steps for determining why a chosen dataset is fit-for-purpose for their question. The framework identifies the factors a regulator may push back on, how to resolve those concerns ahead of time, and the responses needed to gain acceptance of a real world study.
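To make this concrete, here is a minimal, purely illustrative sketch (the attribute names and candidate datasets are hypothetical, not taken from the guidance) of how criteria agreed with stakeholders might be written down as must-have and nice-to-have attributes and used to screen candidate datasets before deeper vetting:

```python
# Purely illustrative sketch: hypothetical must-have and nice-to-have study
# criteria, used to screen candidate datasets before deeper vetting.

MUST_HAVE = {"inpatient_records", "prescription_fills", "five_years_followup"}
NICE_TO_HAVE = {"lab_results", "mortality_linkage"}

# Hypothetical candidate datasets and the attributes each one offers
candidates = {
    "regional_payer_claims": {"inpatient_records", "prescription_fills",
                              "five_years_followup", "mortality_linkage"},
    "gp_ehr_network": {"prescription_fills", "lab_results"},
}

for name, attributes in candidates.items():
    missing = MUST_HAVE - attributes
    extras = NICE_TO_HAVE & attributes
    if missing:
        print(f"{name}: excluded, missing must-have attributes {sorted(missing)}")
    else:
        print(f"{name}: shortlisted, {len(extras)} nice-to-have attribute(s) matched")
```

Separating hard requirements from desirable extras in this way makes the trade-offs explicit when no candidate checks every box.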

Determine dataset fitness-for-purpose

It takes time, effort, and research to find the best potential data sources to answer a particular question. Most datasets are incomplete in some respect, and how this affects the study depends on whether the incompleteness is meaningful or impairs fitness-for-use. For example, a general practice EHR source without inpatient data won’t be valuable for a study of the risk of serious events that typically result in hospitalization. Yet the same dataset could be perfectly viable for studying the factors that drive general practitioners to prescribe certain medications; in that use case, there is no need for hospital data.

The process can be explained through the analogy of buying a car: you begin by researching and comparing all your options, then create a short list of preferred choices and test-drive them. You talk to stakeholders (in this case, your family, others who have bought a similar car, and so on) and consider which option best suits your specific needs, checking all the boxes so that a final choice can be made. It’s similar with RWD: unless you’ve conclusively assessed a dataset’s strengths, limitations, and applicability to your specific research needs, you can’t confirm that it is truly fit for purpose.

Test the process

It’s important to test your RWD dataset to determine whether it is viable for the data points and assessments that you want and need to incorporate. Researchers can use reliability assessments to evaluate whether the codes or combinations of codes represent the medical concepts they are intended to represent. They can also identify whether the data has been captured with an adequate level of accuracy and completeness, and if the available analytical tools can sufficiently address each question of interest.
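As a simple, hypothetical illustration (the column names and code list below are invented for the example), a reliability check of this kind might combine a field-completeness measure with a test that recorded codes fall within the code list defining the concept of interest:

```python
import pandas as pd

# Hypothetical example: a small profiling pass that checks (1) completeness of
# required fields and (2) whether recorded diagnosis codes fall within the code
# list defining the medical concept of interest. Column names and codes are
# invented for illustration.

CONCEPT_CODES = {"I21.0", "I21.1", "I21.2", "I21.3", "I21.4"}  # stand-in concept definition

records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "diagnosis_code": ["I21.0", "I21.9", None, "I21.3"],
    "event_date": ["2023-01-04", "2023-02-11", "2023-02-20", None],
})

# Completeness: share of non-missing values in each required field
completeness = records[["diagnosis_code", "event_date"]].notna().mean()
print("Field completeness:")
print(completeness)

# Validity: share of recorded codes that map to the intended concept
coded = records["diagnosis_code"].dropna()
print(f"Codes within concept definition: {coded.isin(CONCEPT_CODES).mean():.0%}")
```

In practice such checks run against the full source data and validated code lists, but even a small profiling pass like this can surface gaps before a dataset is committed to a regulatory study.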

Specific elements differ depending on the RWD type and the research for which it is intended. For example, EHR data theoretically provides more detailed information on patients than medical claims data, which consists primarily of diagnosis codes and prescriptions. However, EHR datasets may not have complete longitudinal ascertainment of all medical utilization for the patients of interest.

A big step forward

Until now, the research community hasn’t had a reliable barometer of what FDA expects from real world study solutions. Consequently, selecting RWD sources has relied largely on guesswork and on mimicking what has worked for others in the past. This guidance provides information directly from FDA on what a good process should involve and how to evaluate data for fitness-for-purpose, completeness, and totality.

The FDA framework recognizes that the choice of data sources is highly individual and depends on the research question at hand. In that respect, this guidance represents an important step toward the formal use of evidence generated from RWD sources in regulatory decision-making, which holds the promise of benefitting patients and the medical community alike.

Matthew W. Reynolds, PhD, vice president of real world evidence, IQVIA
