NYU Langone Health Uses Randomization to Study Quality Improvement
NYU Langone Health in New York City has created a rapid-cycle randomized trial group that embeds randomization into health system quality improvement initiatives to promote a learning healthcare system. In a recent presentation, Leora Horwitz, M.D., director of the health system’s Center for Healthcare Innovation and Delivery Science, described some lessons learned from this novel approach.
In a recent talk in the NIH Collaboratory Grand Rounds series, Horwitz began by reminding the audience of the Institute of Medicine’s 2012 definition of a learning health system as “a system in which science, informatics, incentives and culture are aligned for continuous improvement and innovation, with best practices seamlessly embedded in the delivery process, patients and families active participants in all elements, and new knowledge captured as an integral by-product of the delivery experience.”
“I find this completely inspiring. It is why I come to work every day,” Horwitz said. “It is the kind of system I want to work in. When I give talks and read this definition, usually people laugh, because nobody really recognizes their own health system in this description yet. What we have been trying to do at NYU is chip away at this bit by bit. One of the things we have been doing is really trying to capture new knowledge as part of the ordinary way we do care, and we are doing that through randomization. The spirit is closely related to clinical trials, but it differs in that it is embedded in our usual activity.”
So why do this work? Because most operational interventions that seem like good ideas to health systems have never been shown to be effective, despite the cost and effort involved, she said. Randomized quality improvement projects are needed to determine whether such system-level programs or interventions actually work.
Horwitz gave an example of the scale of the problem. In one year alone, NYU Langone:
• Fired millions of best practice alerts to prompt staff to provide evidence-based care or avoid adverse events;
• Made more than 19,000 telephone calls to patients after discharge from the hospital to improve continuity of care and satisfaction;
• Sent thousands of letters to remind patients they are overdue for preventive care testing;
• Hired six community health workers for the emergency department.
“All of these things take resources,” she said. “They have expected outcomes, and we don’t know if those outcomes are being generated or whether we could get better outcomes, or get them more effectively.” The rapid-cycle randomized trial lab is already working with 16 departments in the health system, with 15 trials completed, two ongoing, and nine in planning. And some of the findings are surprising. One example she gave involved whether calling people after hospital discharge helped avoid readmissions and/or improved patients’ perceptions of the hospital. “We do many thousands of these calls and no one has ever done a test to see if they are effective or not,” she said.
In 2018, when these calls were rolled out to a recently acquired hospital, the hospital didn’t initially have the capacity to call all discharged patients. Instead of arbitrarily skipping some people, the team randomly assigned who would receive the phone calls, based on even and odd medical record numbers. From this study they found no difference in readmission rates or impressions of the hospital between the control group and the group that got the call. “These phone calls that are time-consuming and costly and take up a lot of staff time have done nothing for us in terms of readmissions and have done nothing in terms of making people happier about our hospital,” Horwitz said. “So did we fire the phone callers? No!” She added that it is really important to have the trust of front-line staff. “We need them to trust that if, for example, the phone calls aren’t working, they are not all going to get fired.”
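To make the design concrete, here is a minimal sketch of what parity-based assignment might look like in code. The function name, arm labels, and sample medical record numbers are hypothetical illustrations, not NYU Langone’s actual implementation.

```python
# Hypothetical sketch of parity-based randomization: even medical record
# numbers (MRNs) get the post-discharge call, odd MRNs serve as controls.
# Because MRNs are assigned arbitrarily, parity acts as a practical
# randomizer that requires no extra work from front-line staff.

def assign_arm(mrn: int) -> str:
    """Assign a discharged patient to the call or control arm by MRN parity."""
    return "post-discharge call" if mrn % 2 == 0 else "control (no call)"

# Illustrative MRNs, not real patient data
for mrn in [1048572, 1048573, 1048580, 1048591]:
    print(mrn, "->", assign_arm(mrn))
```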
Rather than abandoning the calls, she said, the team treated the null result as an opportunity to do something different. It freed up resources because they stopped calling the low-risk patients and sent them texts instead. Among high-risk patients, they are now randomizing between the existing script and a newer, better one that goes into more detail about some things but leaves out others. Because caller time has been freed up, they can call people twice or call people who missed appointments. “We can afford to be more engaged,” she said, “and we are not losing anything by not calling low-risk patients.”
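The redesigned workflow can be sketched the same way. The risk labels, arm names, and seeded random-number generator below are illustrative assumptions, not the health system’s code.

```python
import random

def assign_followup(risk: str, rng: random.Random) -> str:
    """Low-risk patients get a text; high-risk patients are randomized
    1:1 between the existing call script and the newer, detailed one."""
    if risk == "low":
        return "text message"  # no call, which frees up caller time
    return rng.choice(["standard call script", "new detailed call script"])

rng = random.Random(42)  # fixed seed so the demo is reproducible
for patient, risk in [("A", "low"), ("B", "high"), ("C", "high")]:
    print(patient, risk, "->", assign_followup(risk, rng))
```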
Horwitz stressed that any randomization strategy the team undertakes must be seamless and require no additional work from front-line staff. She also spent some time explaining how her team worked with the health system’s Institutional Review Board (IRB) to clarify the distinction between quality improvement and human subject research.
“We tried to work through how we think about quality improvement vs. human subject research,” she said. The focus of quality improvement is to improve the care an institution is already delivering, she explained, while research aims to answer fundamental questions or test a hypothesis about whether something works at all. Quality improvement is about improving a process or adopting best practices, whereas research seeks to figure out what that best practice is. “We also think if you work here, you have a responsibility to participate in quality improvement efforts. Patients don’t have the option to opt out of our efforts to improve care, whereas they do have the right to opt out of human subject research.”
Her team created a checklist for these projects to make sure each one confidently falls into the category of quality improvement rather than human subject research.
Horwitz said it is important to pick projects that senior-level executives in the health system are interested in. The first few projects they picked were ones they thought would show return on investment or evidence of impact quickly. In the first year they did eight projects. “We worked on care management, which is highly resource-intensive and often ineffective. So we knew those were areas the institution was highly interested in.” After they did those, they opened it up to the institution and asked people to submit ideas of projects they wanted to do. They took 15 to 20 ideas to the C-Suite and asked which ones they wanted to prioritize. “We deliberately picked things where we knew we could do it fast. We thought it was important to show something in the first year.”
Other examples include:
• Community health workers in the NYU Brooklyn emergency department;
• Mailers for preventive care, through a clinically integrated network;
• Posters promoting patient-reported outcomes in an outpatient office;
• A tobacco cessation best practice alert (BPA) in NYU Langone Health ambulatory practices;
• Preventive care phone calls from a Florida outreach center, through a clinically integrated network.
When Horwitz finished her presentation, it made me think back to her comment at the beginning that most people don’t recognize their organization in the IOM definition of a learning health system. How many organizations would be interested in adopting randomized QI projects in order to know whether system-level programs or interventions are effective? What would it take for them to replicate NYU Langone’s approach? Is anyone else doing anything similar?