# Enhanced Cost-Effectiveness Analysis using EHR Data for Real-World Value


This talk is part of the ResearchX session Opportunities to use EHR-derived RWE to inform HTA decision-making.


## Transcript

**Akshay Swaminathan:**

Hi everyone. My name is Akshay Swaminathan. I'm a data scientist and researcher specializing in real-world clinical and genomic data. Very excited to share the results of this proof of concept project demonstrating how real-world data can be used to enhance traditional approaches to cost-effectiveness analysis.

When evaluating new therapies, initial health technology assessments typically rely on data from clinical trials. But as these therapies become more widely used in clinical practice, new evidence in the form of real-world data emerges, and this evidence can be used to supplement findings from initial HTAs. Real-world data has several advantages compared to data from clinical trials. It's often more relevant because it comes directly from routine clinical practice; it's often more representative of the broader patient population; and it often has larger sample sizes and longer follow-up times than clinical trial data. Given these advantages, we were interested to see: if we use real-world data instead of clinical trial data for cost-effectiveness analysis, how would this change the results?

**Akshay Swaminathan:**

We replicated a cost-effectiveness analysis of non-small cell lung cancer immunotherapies initially developed by the Institute for Clinical and Economic Review. Instead of using a network meta-analysis of clinical trials to derive hazard ratios and survival times (we'll call this the traditional approach), we derived these quantities using a real-world cohort of patients with non-small cell lung cancer. How did we use EHR data to select a cohort of patients taking these therapies of interest?

On the next slide you'll see how we started with patients in Flatiron's non-small cell lung cancer dataset and applied selection criteria to arrive at cohorts of patients who were eligible for the therapies of interest according to their drug label indications. We arrived at three immunotherapy cohorts (atezolizumab, pembrolizumab, and nivolumab) and a chemotherapy cohort (docetaxel), which served as the comparator. I want to point out here that we were able to use biomarker data captured in the EHR to apply criteria such as selecting patients with no EGFR mutations and patients who were PD-L1 positive. After selecting these patients, we were interested to see how different these real-world patients are compared to the patients in the original clinical trials.
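The cohort-selection step described above can be sketched as a simple filter over patient-level data. The column names and criteria below are hypothetical simplifications for illustration, not the actual Flatiron schema or the full label indications:

```python
import pandas as pd

# Hypothetical patient-level table; column names are illustrative only.
patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "egfr_mutation": [False, True, False, False],
    "pdl1_positive": [True, True, False, True],
    "line_of_therapy": [2, 2, 2, 1],
})

# Apply simplified label-indication criteria: no EGFR mutation,
# PD-L1 positive, and second-line therapy.
eligible = patients[
    (~patients["egfr_mutation"])
    & (patients["pdl1_positive"])
    & (patients["line_of_therapy"] == 2)
]
print(eligible["patient_id"].tolist())  # → [1]
```

In practice each therapy cohort would get its own criteria, but the pattern of chaining boolean masks over EHR-derived variables is the same.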

On this next slide, we compare the demographic and clinical characteristics of the patients in our real-world cohorts, shown in the blue bars, with the patients enrolled in the original clinical trials, shown in the green bars. The trials we're referring to here are POPLAR for atezolizumab, CheckMate 017 for nivolumab, and KEYNOTE-010 for pembrolizumab. In terms of demographics, the real-world cohorts were broadly more representative of the overall non-small cell lung cancer patient population, with over 45% female patients and over 20% non-white patients across the board. In terms of follow-up time, which we define as time from diagnosis to last clinical activity date, the real-world cohorts had anywhere from five to 10 months greater follow-up than the clinical trials. Sample size varied: in the atezolizumab and pembrolizumab cohorts, the original trial populations were larger, but in the nivolumab cohort, the real-world cohort had over 1,500 more patients than the original clinical trial.

Now that we understand the makeup of these real-world cohorts, how can we use them to conduct a cost-effectiveness analysis? The figure on this slide shows the results of a probabilistic sensitivity analysis comparing each of the three immunotherapy cohorts to the comparator chemotherapy cohort, docetaxel. The x-axis represents quality-adjusted life years (QALYs), the y-axis represents cost in US dollars, and each point represents a simulated incremental cost-effectiveness ratio (ICER). The green points show the simulated ICERs from the traditional cost-effectiveness approach, and the blue points show the ICERs from the real-world enhanced cost-effectiveness approach. The table below shows the ICER point estimates with 95% credible intervals. What's immediately evident is that the spread of ICERs from the traditional approach is much larger than the spread from the real-world enhanced approach. The 95% credible intervals shrank by 37% for atezolizumab, 69% for nivolumab, and 83% for pembrolizumab.
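A probabilistic sensitivity analysis of this kind amounts to repeatedly drawing model inputs from their distributions, with each draw yielding one simulated ICER (incremental cost divided by incremental QALYs). The distributions and parameter values below are hypothetical, chosen only to illustrate the mechanism by which tighter effectiveness inputs shrink the spread of simulated ICERs:

```python
import random

random.seed(0)

def simulate_icers(n, dq_mean, dq_sd, dc_mean, dc_sd):
    """Draw incremental QALYs (dQ) and incremental costs (dC),
    returning simulated ICERs = dC / dQ. Draws with dQ <= 0 are
    skipped here for simplicity."""
    icers = []
    for _ in range(n):
        dq = random.gauss(dq_mean, dq_sd)
        dc = random.gauss(dc_mean, dc_sd)
        if dq > 0:
            icers.append(dc / dq)
    return icers

def ci95_width(xs):
    """Width of the empirical 2.5%-97.5% interval."""
    xs = sorted(xs)
    return xs[int(0.975 * len(xs)) - 1] - xs[int(0.025 * len(xs))]

# Hypothetical inputs: wide effectiveness uncertainty (trial-based NMA)
# versus narrower uncertainty (real-world estimates); costs held fixed.
trial = simulate_icers(5000, dq_mean=0.5, dq_sd=0.20, dc_mean=50_000, dc_sd=5_000)
rwd = simulate_icers(5000, dq_mean=0.5, dq_sd=0.05, dc_mean=50_000, dc_sd=5_000)

# The tighter effectiveness input yields a narrower empirical ICER interval.
print(ci95_width(trial) > ci95_width(rwd))
```

This mirrors what the scatter plot shows: the blue (real-world enhanced) cloud is compact because its inputs carry less uncertainty, not because the point estimates moved.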

Now, what is driving this drastic decrease in uncertainty? The main driver is that in the traditional cost-effectiveness approach, the hazard ratios, which were estimated using network meta-analysis, had very large confidence intervals, and that is what drives the large spread in simulated ICERs from the traditional approach. Other contributing factors are the larger sample sizes in the real-world cohorts, not just in the nivolumab cohort but also in the docetaxel chemotherapy cohort, which had over 1,300 patients, and the longer follow-up time in the real-world cohorts, which led to lower rates of censoring.
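The effect of wide hazard-ratio confidence intervals can be seen by sampling hazard ratios on the log scale, backing the standard error out of a reported 95% CI. The point estimates and intervals below are hypothetical, chosen only to contrast a wide NMA-style interval with a narrower real-world one:

```python
import math
import random
import statistics

random.seed(1)

def sample_hrs(hr_point, ci_low, ci_high, n=10_000):
    """Sample hazard ratios assuming log-normality, with the standard
    error recovered from the 95% confidence interval bounds."""
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    mu = math.log(hr_point)
    return [math.exp(random.gauss(mu, se)) for _ in range(n)]

# Hypothetical intervals: a wide trial-based NMA CI vs a narrower real-world CI.
nma_draws = sample_hrs(0.70, 0.45, 1.10)
rwd_draws = sample_hrs(0.70, 0.60, 0.82)

# The wider CI produces more dispersed hazard-ratio draws, which then
# propagates into a wider spread of simulated ICERs downstream.
print(statistics.pstdev(nma_draws) > statistics.pstdev(rwd_draws))
```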

What can we take away from these results? The point of this analysis is not to propose new price points for the three immunotherapy drugs. Rather, the point is to show how real-world data can be used to select cohorts of patients taking the therapies of interest, and to estimate hazard ratios and survival times that can then be used as inputs to cost-effectiveness models. It also shows that the resulting cost-effectiveness estimates may actually have greater certainty than those from traditional approaches. I do want to point out some limitations of this analysis. As we saw, the sample sizes in the three immunotherapy cohorts varied substantially, which highlights that this real-world enhanced approach to cost-effectiveness analysis may be best suited for therapies with high uptake in real-world populations. We also did not implement certain clinical trial criteria involving other variables, such as baseline ECOG status or sites of metastasis, nor did we implement population adjustment methods such as matching. Nevertheless, we're very excited about the potential of this new approach, using real-world data for enhanced cost-effectiveness analysis to inform HTA decision-making. Thank you.