STAT – Real-world evidence is changing the way we study drug safety and effectiveness

Randomized controlled clinical trials are a great way to test the safety and effectiveness of a new drug. But once the trial is over and the drug is approved, it’s used by patients and health care practitioners in settings quite different from the rarefied clinical trial setting.


Interesting, important, and sometimes surprising findings can emerge when the narrow constraints of clinical trial eligibility and intent-to-treat analyses are set aside. That’s why it’s time for biotech and pharmaceutical companies to pay more attention to understanding, and undertaking, real-world studies. The approach was backed by Food and Drug Administration Commissioner Scott Gottlieb in a talk Monday on using new data sources as evidence for both regulatory evaluation and value-based payment programs.


Just what is real-world evidence? The Federal Food, Drug, and Cosmetic Act doesn’t offer much help there. It defines real-world evidence as “data regarding the usage, or the potential benefits or risks, of a drug derived from sources other than traditional clinical trials.”


In practice, real-world evidence comes from sources such as electronic health records, medical claims and billing data, disease registries, and even patient-generated data from home blood pressure machines, mobile devices, or questionnaires.


It’s also collected from more diverse populations including children, the elderly, and those with multiple illnesses or conditions.


Old-school thinking in drug development holds that real-world evidence is hopelessly biased, with critics clinging to the supremacy of the randomized controlled trial (RCT) as the only way to draw inferences about causation. They criticize real-world studies because treatments are not masked, there is little if any verification of data sources, and some information of interest, like test results, will be missing (especially if the tests were never ordered in the first place).

Such blanket criticisms no longer carry the day.


Regulators, patients, and payers are now demanding evidence about how treatments work in real-world settings. The FDA has come to understand that many of the quality concerns about real-world data can be addressed through careful design and analysis, as evident in the Framework for FDA’s Real-World Evidence Program, which was released in December 2018.


Rather than being criticized simply because they lack randomization and rely on data from electronic medical records or health insurance claims, real-world studies are today evaluated in terms of data provenance, integration, and curation, as well as design and analysis. The big aha is that understanding where and why the data were created is the starting point for judging their utility for research.

Take notice

Real-world data are getting new respect due in part to work like a pilot study led by Friends of Cancer Research and released in July 2018. It was a collaboration among six organizations with real-world data and analytics capabilities, including my company, IQVIA. All participants in this voluntary collaboration followed a broad protocol outline to match the inclusion and exclusion criteria used in an RCT of treatment with approved immune checkpoint inhibitors for advanced non-small cell lung cancer.
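
To make that matching step concrete, here is a minimal sketch in Python of how trial-style eligibility criteria might be applied to a real-world cohort. The column names, thresholds, and toy data are illustrative assumptions, not the actual Friends of Cancer Research protocol.

```python
import pandas as pd

# Toy real-world cohort; in practice this would be derived from EHR or
# claims data. All values are invented for illustration.
cohort = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "age": [52, 71, 66, 80],
    "ecog_status": [0, 1, 2, 1],
    "prior_checkpoint_inhibitor": [False, False, False, True],
})

# Emulate RCT-style inclusion/exclusion criteria (hypothetical here):
# good performance status, no prior exposure to the study drug class.
eligible = cohort[
    (cohort["ecog_status"] <= 1)
    & (~cohort["prior_checkpoint_inhibitor"])
]
print(eligible["patient_id"].tolist())  # -> [1, 2]
```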


The results, which showed notable correlations between real-world evidence and trial endpoints such as overall survival, demonstrated the durability of real-world evidence across various ethnic groups, care settings, and health care systems.


Real-world data are also being used to supplement clinical trials as shown, for example, by the recent approval of avelumab for metastatic Merkel cell carcinoma. IQVIA supplied contemporary European patient registry data as context for a Phase 2 trial, while in the U.S. another group provided data from oncology electronic medical records. Both submission packages were evidently compelling: approval for this new molecular entity was granted in both the U.S. and the E.U., with Japan following suit.


Although using real-world data as comparators for rare diseases is hardly new, the ability to employ contemporary rather than historical data eliminates doubt about changes in medical care over time that might explain observed differences.


Pragmatic randomized trials are also gaining regulatory attention. Consider the Paliperidone Palmitate Research in Demonstrating Effectiveness (PRIDE) trial conducted by Janssen. This study compared the benefits of schizophrenia treatment delivered by monthly injection versus daily oral medications. Study participants were chosen because they had recently experienced psychotic breaks that led to attempted suicide, hospitalization, or incarceration.


After randomization to monthly injections of paliperidone or one of several daily oral medications, researchers followed these study subjects using old-fashioned “shoe-leather” epidemiologic methods to track the recurrence of the same events that had qualified patients for participation, and they found better outcomes among those using the monthly injections. In a likely bellwether, the FDA considered this study rigorous enough to grant a label expansion.

How to tell when real-world data are reliable

Many biopharmaceutical and medical device companies are in a state of near paralysis due to the absence of a concrete, agreed-upon framework for the design, execution, and evaluation of real-world studies. What do they need to feel confident enough to invest in real-world research?


Any framework for real-world studies must be applied with an understanding of how and where treatments are administered, and whether and where outcomes are likely to come to clinical attention. For example, outpatient pharmacy prescriptions do not include infusions or injections administered in the clinic, and data on mental health visits are often recorded in health systems separate from ambulatory care. This makes it important to be able to describe the provenance of the real-world data used for a study: where, how, and why the data were collected; the likelihood of follow-up data being available from the source; what data are likely to be systematically missing; and the like.
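
As a sketch of what capturing that provenance might look like in practice, the following Python structure records those questions for a single data source. The field names and the example gaps are assumptions made for illustration, not a published standard.

```python
from dataclasses import dataclass, field

@dataclass
class DataProvenance:
    source_name: str           # e.g., a claims feed or an EHR network
    collected_for: str         # why the data were created (billing, care, registry)
    follow_up_available: bool  # can patients be followed over time in this source?
    known_gaps: list = field(default_factory=list)  # systematically missing data

claims = DataProvenance(
    source_name="outpatient pharmacy claims",
    collected_for="reimbursement",
    follow_up_available=True,
    known_gaps=[
        "infusions and injections administered in the clinic",
        "mental health visits recorded in a separate system",
    ],
)

# Screen a study's requirements against the documented gaps.
def missing_for_study(source, needed):
    return [n for n in needed if any(n in gap for gap in source.known_gaps)]

print(missing_for_study(claims, ["infusions", "mental health visits"]))
# -> ['infusions', 'mental health visits']
```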


Even nonspecific information, like a physician encounter that generates a health insurance claim with only a billing diagnosis, may be imprecise about the nature of the visit yet quite useful for big-picture questions like survival and the duration of treatment benefit.
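
As an illustration of such a big-picture question, the sketch below estimates overall survival from claims-style follow-up data using the open-source lifelines library. The follow-up times are invented, and patients who disenroll are treated as censored.

```python
from lifelines import KaplanMeierFitter

# Months of follow-up derived from claims, and whether death (1) or
# censoring at disenrollment (0) ended each patient's observation.
# All numbers are invented for illustration.
durations = [3, 7, 9, 12, 15, 18, 24, 24, 30, 36]
events = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)
print(kmf.median_survival_time_)  # estimated median survival in months
```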


Each real-world study must be evaluated in terms of its participants, outcomes, settings, design, and execution in order to understand if it is likely to provide reliable guidance about treatment benefits and risks. No matter what data are used, quantitative assessments of bias like sensitivity analyses will shed light on how much bias could explain the observed benefits or risks.
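
One widely used example of such a quantitative bias assessment is the E-value of VanderWeele and Ding, which asks how strong an unmeasured confounder would have to be, on the risk-ratio scale, to fully explain an observed association. A minimal sketch, with an invented effect estimate:

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio; estimates below 1 are inverted first."""
    if rr < 1:
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1))

observed_rr = 1.8  # hypothetical real-world benefit estimate
print(f"E-value: {e_value(observed_rr):.2f}")  # -> E-value: 3.00
# An unmeasured confounder would need associations of at least this
# strength with both treatment and outcome to explain away the result.
```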


Take the GRACE checklist for evaluating the quality of observational research on comparative effectiveness. When several colleagues and I validated this checklist, it became evident that the presence of sensitivity analyses was the single best predictor of study quality, whether quality was judged by the article’s inclusion in a systematic review, the impact factor of the journal where it was published, or the number of citations.


Companies that conduct clinical trials can choose to wait for a framework for real-world evidence to be fully fleshed out. Or they can apply a commonsense approach that uses rigor and reason, shaped by decades of pharmacoepidemiology experience. Those taking the latter approach will find that understanding the potential contributions of real-world evidence for quantifying benefits and risks — and applying them — isn’t so risky after all.


https://www.statnews.com/2019/01/29/real-world-evidence-changing-study-…