ICD-Pieces Highlights Trade-Offs Involved in Using EHR Data for Pragmatic Trials
The promise of pragmatic clinical trials is that they could lead to evidence-based treatment guidelines based on real-world data pulled directly from electronic health records. But what is the reality so far? The NIH Collaboratory recently featured a presentation by Holt Oliver, M.D., Ph.D., vice president of clinical informatics for the Texas-based Parkland Center for Clinical Innovations, on the challenges faced by the ICD-Pieces pragmatic trial.
The goal of the ICD-Pieces trial is to help primary care physicians treat patients with chronic kidney disease (CKD) in more effective ways to reduce heart problems, hospitalizations, and deaths. This study involves a collaborative primary care–nephrology care model at Parkland Health and Hospital System for patients with CKD in a predominantly minority population. This study uses a technology platform (Pieces) that enables the use of EHR data to improve CKD care within primary care practices or medical homes in the community. The main hypothesis is that patients with CKD, hypertension and diabetes who receive care with a collaborative model of primary care–subspecialty care enhanced by Pieces will have fewer hospitalizations, readmissions, cardiovascular events and deaths than patients receiving standard medical care.
The challenge for pragmatic clinical trials is how to accomplish research aims using the data generated in routine care processes, Oliver said. “From my point of view, there is so much clinical care being delivered, with the steps being captured in the EHR, but there is not much traction in turning that into real-world evidence.”
The ICD-Pieces trial was built on the experience at Dallas-based Parkland Hospital, in which they demonstrated that they could use EHR data to standardize the identification of patients with chronic kidney disease. “Based on that work, we proposed the ICD-Pieces trial where we would use a standard set of EHR data to identify patients with CKD, as well as Type 2 diabetes and hypertension — and these multi-comorbid patients are the types of patients who are frequently excluded from clinical trials. We decided to flip that equation around and use the real-world care of those patients to strengthen the level of evidence for guidelines,” Oliver explained.
Oliver said they had to balance the trial aims with the goal of reaching a geographically dispersed population, so that the evidence generated by the trial would be as widely applicable as possible. “Part of our pragmatic clinical trial network was designed to include multiple different healthcare locations,” he said. The sites included a county public safety-net hospital, a VA healthcare system hospital in North Texas, and a private practice network from Texas Health Resources in the Dallas-Ft. Worth area. An outpatient practice network in Connecticut added further demographic variability.
However, that created a host of IT challenges and highlighted some of the design choices that trialists have to make, Oliver said. “In this case we were integrating data from three different EHR vendors: Epic, an Allscripts outpatient environment, and the VA’s VistA. To get a standard set of outcomes, we were getting hospitalization data from hospital claims data, commercial data and Medicare data from ResDAC, as well as from the National Death Index. The challenge of that is we lost the scalability of having several networks all on the same EHR to get that diversity. If our outcomes are successful, they would be widely applicable, but we had to overcome data harmonization challenges.”
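The harmonization work Oliver describes often comes down to mapping each source's local field names into one shared schema before analysis. A minimal sketch of that idea, with all field names and source labels assumed for illustration rather than taken from the trial:

```python
# Illustrative per-source mappings from local EHR export field names to a
# shared schema. All names here are hypothetical examples, not the trial's.
FIELD_MAPS = {
    "epic":       {"pat_id": "patient_id", "adm_date": "admit_date"},
    "allscripts": {"PatientID": "patient_id", "AdmitDt": "admit_date"},
    "vista":      {"DFN": "patient_id", "ADMISSION_DATE": "admit_date"},
}

def harmonize(source, record):
    """Rename one source record's fields into the shared schema,
    passing through any field that has no mapping."""
    mapping = FIELD_MAPS[source]
    return {mapping.get(field, field): value for field, value in record.items()}

row = harmonize("vista", {"DFN": "12345", "ADMISSION_DATE": "2017-03-02"})
print(row)  # {'patient_id': '12345', 'admit_date': '2017-03-02'}
```

Keeping the mappings as data rather than code is one way to absorb the kind of per-site schema drift the trial encountered: adding a site or renaming a field changes a table, not the analysis logic.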
They solved some of those challenges by sticking to standard parts of EHR data, what Oliver calls the “shallow end of the pool,” for the patient selection algorithm. Sticking to areas that are already better harmonized across EHRs, including lab data, ICD-10 codes used for billing, and standard date and encounter information, made it easier to harmonize the data used for patient selection, he added.
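Selection logic built on that “shallow end” can stay quite simple. The sketch below is a hypothetical illustration of the approach, not the trial's actual algorithm: it flags patients whose billing codes include all three comorbidities, using standard ICD-10 code families (N18 for CKD, E11 for type 2 diabetes, I10–I13 for hypertensive diseases).

```python
# Hypothetical cohort-selection sketch over ICD-10 billing codes only.
# The real trial also drew on lab and encounter data.
CKD_PREFIXES = ("N18",)                                # chronic kidney disease
DIABETES_PREFIXES = ("E11",)                           # type 2 diabetes
HYPERTENSION_PREFIXES = ("I10", "I11", "I12", "I13")   # hypertensive diseases

def has_code(codes, prefixes):
    """True if any diagnosis code starts with one of the given prefixes."""
    return any(code.startswith(p) for code in codes for p in prefixes)

def eligible(patient):
    """A patient qualifies only with all three comorbidities on record."""
    codes = patient["icd10_codes"]
    return (has_code(codes, CKD_PREFIXES)
            and has_code(codes, DIABETES_PREFIXES)
            and has_code(codes, HYPERTENSION_PREFIXES))

patients = [
    {"id": 1, "icd10_codes": ["N18.3", "E11.9", "I10"]},  # all three present
    {"id": 2, "icd10_codes": ["N18.3", "I10"]},           # no diabetes code
]
cohort = [p["id"] for p in patients if eligible(p)]
print(cohort)  # [1]
```

Because ICD-10 codes and lab results are captured the same way across vendors, a rule like this ports across sites with little rework, which is exactly the scalability argument Oliver makes.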
Focusing on data generated for the claims process for hospitalizations led to a final data set for outcome assertions from hospital claims data. “We dodged the bullet of workflow intervention standardization, which is probably the most challenging aspect of a trial,” he said. “We did that by allowing local health systems to figure out implementation. That was a compromise on our part, but we think that the principle of beginning with the end in mind allowed us to get the most diversity across diverse EHRs.”
The trial occurred over three years and faced some major organizational and staffing changes, though the team avoided having to cope with the VA’s transition from VistA to Cerner in the middle of the trial. “We did have to deal with challenges of working with the VA,” Oliver said. “They had quite severe PHI restrictions on moving data out of their systems, which made us change some of our strategy around deploying clinical decision support software across all the sites. The trade-off was that they had excellent adherence to standards” such as LOINC. “Not having an embedded VA analyst built into the trial was a big hurdle for us,” he added.
The Connecticut physician group, ProHealth Physicians, injected geographic and demographic variation relative to the other sites in North Texas but also presented several challenges. The group’s organizational structure changed when it became affiliated with Optum Health during the trial. “That led to some changes in the way their data warehouse was laid out,” Oliver explained. “Our initial queries had to be redesigned.”
The ICD-Pieces team is now finishing its trial and moving to the analysis phase.
“Our data structure was originally based on queries designed at the outset of the trial. Several of those had to be revisited over the course of the trial, particularly after EHR upgrade cycles, which turned out to be much more of a headache than we had anticipated,” Oliver said. “Small changes can lead to a field being deprecated or force you to re-implement access points to get data.”
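One common defense against the field-deprecation problem Oliver describes is to validate a query's expected fields against the warehouse schema before running it, so an upgrade fails loudly rather than silently returning empty values. A minimal sketch under assumed field names:

```python
# Hypothetical pre-query guardrail: the field names are illustrative, not
# the trial's. An EHR upgrade that renames or drops a column trips the
# check immediately instead of corrupting downstream extracts.
EXPECTED_FIELDS = {"patient_id", "encounter_date", "egfr_value"}

def validate_schema(available_fields):
    """Raise if any field the extract query depends on is missing."""
    missing = EXPECTED_FIELDS - set(available_fields)
    if missing:
        raise RuntimeError(f"Extract query broken; missing fields: {sorted(missing)}")
    return True

# After an upgrade, suppose the warehouse exposes a renamed column:
post_upgrade = ["patient_id", "encounter_date", "egfr_result"]
try:
    validate_schema(post_upgrade)
except RuntimeError as err:
    print(err)  # Extract query broken; missing fields: ['egfr_value']
```

Running a check like this after every EHR upgrade cycle turns the “much more of a headache than we had anticipated” failures into a quick, localized fix.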
They also saw lots of changes in workflow due to changes in personnel because the trial occurred over many years. “We went for a minimum necessary approach because of the pragmatic aspect of getting provider network buy-in so they didn’t have to learn new tools,” he said. “That meant maintaining testing and validation and setting up guardrails to make sure our integration end points stayed up and active over the course of the trial.”
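The endpoint guardrails Oliver mentions can be as simple as a periodic health check that reports which integration points are down, so breakage surfaces between scheduled data pulls rather than at analysis time. A sketch with hypothetical endpoint URLs and a stubbed probe:

```python
# Minimal uptime-guardrail sketch. The URLs are placeholders and the probe
# is stubbed; a real probe would issue an HTTP request to each endpoint.
def check_endpoints(endpoints, probe):
    """Return the endpoints for which probe(url) reports unhealthy."""
    return [url for url in endpoints if not probe(url)]

# Simulated health statuses standing in for live responses:
statuses = {
    "https://site-a.example/health": True,
    "https://site-b.example/health": False,
}
down = check_endpoints(statuses, lambda url: statuses[url])
print(down)  # ['https://site-b.example/health']
```

Scheduling a check like this, and alerting on its output, is one way to keep a multi-year, multi-site data pipeline "up and active" without asking provider sites to adopt new tools.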
The biggest challenge, Oliver said, was the hand-off of work between people during periods of employee turnover. “The transition between analysts was a big challenge. I think good documentation was a big help, but you don’t have complete knowledge transfer, even though you have complete code transfer. Even at EHR partners, you get different analysts helping you at different times during the trial.”
“We were on the leading edge in trying to do these types of trials with an ad hoc collection of EHRs working together for a common aim,” Oliver said. “The idea of having ready-made collaborations with data harmonization in place would avoid some of these challenges, but I think the benefit of these ad hoc collaborations is that they do allow you to inject a different amount of variability.”