Leveraging AI-Fueled Analytics to Address Avoidable Inpatient Readmissions

April 26, 2021
A study of the use of artificial intelligence-fueled analytics shows that avoidable hospital readmissions are fertile ground for quality leaders in hospitals, an adviser to the study confirms

Can the power of artificial intelligence-facilitated analytics help to reduce avoidable readmissions? It appears so. In fact, a study by clinician researchers at the Rochester, Minnesota-based Mayo Clinic Health System is shedding some light on the subject.

An article entitled “Implementation of Artificial Intelligence-Based Clinical Decision Support to Reduce Hospital Readmissions at a Regional Hospital,” published in the August 2020 issue of Applied Clinical Informatics, speaks to the issue. Written by Santiago Romero-Brufau, M.D., Ph.D., Kirk D. Wyatt, M.D., Patricia Boyum, Mindy Mickelson, Matthew Moore, and Cheristi Cognetta-Rieke, D.N.P., the article examined how analytics were applied at Mayo Clinic Health System’s hospital in La Crosse, Wisconsin.

The researchers’ abstract states that “Hospital readmissions are a key quality metric, which has been tied to reimbursement. One strategy to reduce readmissions is to direct resources to patients at the highest risk of readmission. This strategy necessitates a robust predictive model coupled with effective, patient-centered interventions.”

Their objective in the study? “The aim of this study was to reduce unplanned hospital readmissions through the use of artificial intelligence-based clinical decision support,” they wrote. In pursuit of that objective, they wrote that “A commercially vended artificial intelligence tool was implemented at a regional hospital in La Crosse, Wisconsin between November 2018 and April 2019. The tool assessed all patients admitted to general care units for risk of readmission and generated recommendations for interventions intended to decrease readmission risk. Similar hospitals were used as controls. Change in readmission rate was assessed by comparing the six-month intervention period to the same months of the previous calendar year in exposure and control hospitals.”

The results? “Among 2,460 hospitalizations assessed using the tool, 611 were designated by the tool as high risk. Sensitivity and specificity for risk assignment were 65 percent and 89 percent, respectively. Over 6 months following implementation, readmission rates decreased from 11.4 percent during the comparison period to 8.1 percent (p < 0.001). After accounting for the 0.5 percent decrease in readmission rates (from 9.3 to 8.8 percent) at control hospitals, the relative reduction in readmission rate was 25 percent (p < 0.001). Among patients designated as high risk, the number needed to treat to avoid one readmission was 11.” As a result, they wrote, “We observed a decrease in hospital readmission after implementing artificial intelligence-based clinical decision support. Our experience suggests that use of artificial intelligence to identify patients at the highest risk for readmission can reduce quality gaps when coupled with patient-centered interventions.”
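For readers who want to trace the arithmetic, the sketch below reconstructs how those headline figures relate to one another using only the rates quoted in the abstract; it illustrates the difference-in-difference logic and is not a restatement of the authors’ actual statistical analysis.

```python
# Back-of-the-envelope reconstruction of the reported figures (illustrative only;
# the study's actual analysis is described in the paper).

# Readmission rates quoted in the abstract, expressed as proportions
intervention_before, intervention_after = 0.114, 0.081
control_before, control_after = 0.093, 0.088

# Absolute reductions at the intervention hospital and at the control hospitals
arr_intervention = intervention_before - intervention_after   # ~3.3 percentage points
arr_control = control_before - control_after                  # ~0.5 percentage points

# Difference-in-difference: the reduction beyond the trend seen at controls
adjusted_reduction = arr_intervention - arr_control            # ~2.8 percentage points

# Relative reduction against the baseline rate -- roughly 25 percent, as reported
relative_reduction = adjusted_reduction / intervention_before

print(f"Adjusted reduction: {adjusted_reduction:.3f}")
print(f"Relative reduction: {relative_reduction:.0%}")
```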

Looking at the issue in a bit more detail, the researchers wrote in their article that, “Although artificial intelligence (AI) is widely utilized in many disciplines outside of medicine, its role in routine clinical practice remains limited. Even where AI has been implemented in a clinical setting, published studies primarily focus on reporting of the algorithm’s performance characteristics (e.g., area under the curve) rather than measuring the effect of AI implementation on measures of human health. In a sense, AI in health care is in its infancy. Several quality gaps have emerged as targets for AI-based predictive analytics. These include early recognition and management of sepsis and hospital readmission.” In fact, they noted that “Hospital readmission is often used as a surrogate outcome to assess the quality of initial hospitalization care and care transitions. The underlying rationale is that readmissions are often preventable because they may result as complications from the index hospitalization, foreseeable consequences of the initial illness, or failures in transitions of care. Hospital readmission is also seen as a significant contributor to the cost of health care. The annual cost of unplanned hospital readmissions for Medicare alone has been estimated at $17.4 billion. The validity of hospital readmission as a surrogate quality outcome has been criticized, and many consider this binary outcome to be an oversimplification that fails to account for the complexity of patient care and individual patient context.”

That’s where the AI tool comes in, a solution developed by the Suwanee, Georgia-based Jvion, a clinical AI solutions provider. As the researchers explained it, “The AI tool combined clinical and nonclinical data to predict a patient’s risk of hospital readmission within 30 days using machine learning. The tool was developed by a third-party commercial developer (Jvion; Johns Creek, Georgia) and licensed for use by Mayo Clinic. Details on the data used by the vendor as well as their decision tree-based modeling approach have been described in detail elsewhere. Clinical data incorporated into risk prediction included diagnostic codes, vital signs, laboratory orders and results, medications, problem lists, and claims. Based on the vendor’s description, nonclinical data included sociodemographic information available from public and private third-party databases. Data that were not available at the patient level were matched at the ZIP + 4 code level.”
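The paper describes the vendor’s model only at a high level: decision-tree-based machine learning trained on clinical inputs plus nonclinical data matched at the ZIP + 4 level. As a rough illustration of what that general approach looks like, here is a minimal sketch using scikit-learn’s gradient-boosted trees on synthetic data; every feature name, value, and threshold is hypothetical, and this is in no way Jvion’s actual implementation.

```python
# Illustrative sketch of a decision-tree-based 30-day readmission risk model.
# All features, data, and thresholds here are synthetic and hypothetical;
# this is NOT the vendor's model, only a demonstration of the general approach.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Clinical features (the kind extracted from an EHR) alongside nonclinical,
# area-level features (the kind that might be matched at the ZIP + 4 level).
X = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "prior_admissions_12mo": rng.poisson(1.0, n),
    "comorbidity_score": rng.integers(0, 10, n),
    "abnormal_lab_count": rng.poisson(2.0, n),
    "discharge_med_count": rng.integers(0, 25, n),
    "area_deprivation_index": rng.uniform(0, 100, n),  # hypothetical ZIP + 4-matched input
    "lives_alone": rng.integers(0, 2, n),
})

# Synthetic outcome loosely tied to a few features, purely for demonstration
logit = -3.0 + 0.5 * X["prior_admissions_12mo"] + 0.01 * X["area_deprivation_index"]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Score held-out patients and flag the top quartile as "high risk" (arbitrary cutoff)
risk = model.predict_proba(X_test)[:, 1]
high_risk = risk >= np.quantile(risk, 0.75)
print("AUC:", round(roc_auc_score(y_test, risk), 3))
print("Flagged high risk:", int(high_risk.sum()))
```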

At the operational level, what happened was that “The tool extracted inputs from the electronic health record (EHR), combined them with inputs from the nonclinical sources, processed them, and generated a report for each high-risk patient daily in the early morning prior to the start of the clinical work day. Each patient’s individual report identified a risk category, risk factors, and targeted recommendations intended to address the identified risk factors. The risk factors contributing to a patient’s high risk were displayed to provide model interpretability to the frontline clinical staff. There were 26 possible recommendations that could be generated by the tool. Examples included recommendations to arrange specific post-discharge referrals (e.g., physical therapy, dietician, and cardiac rehabilitation), implementation of disease-specific medical management plans (e.g., weight monitoring and action plan, pain management plan), enhanced coordination with the patient’s primary care provider, and enhanced education (e.g., predischarge education).”
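To make that daily workflow more concrete, here is a hedged sketch of what generating the morning report could look like in code. The risk factors and the factor-to-recommendation mapping below are invented for illustration (the real tool drew on a catalog of 26 possible recommendations); none of it is taken from the vendor’s software.

```python
# Illustrative sketch of a morning report for patients flagged as high risk.
# The risk factors and recommendations below are hypothetical examples, not
# the tool's actual 26-recommendation catalog.
from dataclasses import dataclass, field

# Hypothetical mapping from surfaced risk factors to targeted interventions
RECOMMENDATIONS = {
    "heart_failure": "Weight monitoring and action plan; cardiac rehabilitation referral",
    "polypharmacy": "Pharmacist-led medication reconciliation before discharge",
    "lives_alone": "Enhanced coordination with the primary care provider; home health referral",
    "low_health_literacy": "Enhanced pre-discharge education using teach-back",
}

@dataclass
class PatientRisk:
    name: str
    risk_category: str                                 # e.g., "high" vs. "standard"
    risk_factors: list = field(default_factory=list)   # factors shown for interpretability

def morning_report(patients):
    """Print a daily report for high-risk patients with targeted recommendations."""
    for p in patients:
        if p.risk_category != "high":
            continue
        print(f"\n{p.name} -- readmission risk: HIGH")
        for factor in p.risk_factors:
            advice = RECOMMENDATIONS.get(factor, "Discuss at the daily huddle")
            print(f"  Risk factor: {factor}")
            print(f"    Recommendation: {advice}")

# Example run with two made-up patients
morning_report([
    PatientRisk("Patient A", "high", ["heart_failure", "lives_alone"]),
    PatientRisk("Patient B", "standard"),
])
```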

One key element in all this was that, prior to initiating the pilot, “[T]he project team collaborated with local practice partners to map local workflows and identify how to best integrate the tool. Patients identified as high risk were identified with a purple dot placed next to their name on the unit whiteboard. Recommendations generated by the tool were discussed at the daily huddle, and the care team determined how to best implement them. Additionally, discharge planners contacted physicians daily to review recommendations, and the tool was further used during a transition of care huddle in collaboration with outpatient care coordinators.”

As a result, the researchers wrote, “We observed a significant reduction in hospital readmission following implementation of AI-based CDS. This difference persisted when two different pre-implementation time periods were considered and when a difference-in-difference sensitivity analysis was performed to account for reduction in hospital readmission observed at control hospitals. The overall reduction was due to the reduction in readmission rate among patients identified as ‘high-risk.’ The overall number needed to treat was 30 admitted patients to prevent one readmission. Focusing only on patients who received an intervention (patients designated as ‘high-risk’) results in a number needed to treat of 11.”
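The number-needed-to-treat figures follow directly from the absolute risk reductions; the quick calculation below shows the relationship (an explanatory sketch based on the published numbers, not part of the authors’ analysis).

```python
# Number needed to treat (NNT) = 1 / absolute risk reduction (ARR); illustrative only.

arr_all_admissions = 0.114 - 0.081        # ~3.3-point drop across all admissions
nnt_all = 1 / arr_all_admissions          # ~30, matching the overall NNT reported

nnt_high_risk = 11                        # reported NNT among patients flagged high risk
implied_arr_high_risk = 1 / nnt_high_risk # ~0.09, i.e., roughly a 9-point drop in that subgroup

print(f"Overall NNT: {nnt_all:.0f}")
print(f"Implied absolute reduction among high-risk patients: {implied_arr_high_risk:.1%}")
```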

The affiliations of the researchers are as follows: Santiago Romero-Brufau: Mayo Clinic Kern Center for the Science of Health Care Delivery, and the Department of Biostatistics, Harvard T.H. Chan School of Public Health, Harvard University; Kirk D. Wyatt, Division of Pediatric Hematology/Oncology, Mayo Clinic, Rochester; Patricia Boyum, Mindy Mickelson and Matthew Moore, Mayo Clinic Kern Center; and Cheristi Cognetta-Rieke, Department of Nursing, Mayo Clinic Health System La Crosse Hospital.

Recently, Healthcare Innovation Editor-in-Chief Mark Hagland interviewed John Showalter, M.D., chief product officer at Jvion, and a physician who practiced as a hospitalist for ten years before entering the analytics area, to discuss the study’s results and its implications. Dr. Showalter was an adviser on the implementation and adoption of the Jvion technology during the study. Below are excerpts from their interview.

Tell me about Jvion’s participation in the study.

We’re very excited that the Mayo research group was able to do this study, and where they were able to do it. It was at a community hospital where they had done readmissions work but hadn’t made that much progress. The folks at that hospital had been working on the problem, and the question was whether AI would really be helpful. So this was a really good demonstration that AI is genuinely helpful in improving outcomes.

I’m a primary care physician and a hospitalist by background. As hospitalists, we’re focused on what’s in front of us and are dealing with the current condition. One of the things that the Jvion solution brings is a much more holistic view of the patient, a comprehensive view of the person. An understanding of digital fluency, of social support, of the social determinants of health, is necessary to truly understand risk. And that understanding accomplishes two things: one, it brings to the surface people who you had thought would be OK; and two, it focuses you on what to solve.

Incorporating the social determinants of health into clinical analytics is challenging, isn’t it?

Yes, the challenges of incorporating the social determinants of health into analytics, from a clinician’s perspective, are very imposing. There are dozens of different variables to consider, and trying to figure out how to deal with the social determinants around any individual can be overwhelming. So bringing focus to the top three or five areas makes this imposing set of challenges manageable; you can tackle a few key elements during the discharge process.

So I think a lot of our success is about a focus on the right people, the right drivers of risk, and the right recommendations. As much as we would like to focus entirely on every patient all the time, most physicians are running a little bit overloaded, especially during the pandemic.

Where does that take us, in terms of applying AI to actual patient care situations?

Some of the discussion in the study focused on the multidisciplinary rounds and the enhancement of the planning discussions that came from the use of the tool. So where we are in AI development in healthcare is definitely augmentation: helping clinicians make better decisions, bringing more information and insights to their planning and decision-making. And this is further evidence that this information is valuable to clinicians. We’ll always need clinicians in the loop, of course. AI will never replace doctors or nurses.

As you know, there was initial fear of physician replacement in radiology, for example; and some physicians in other disciplines have been concerned as well.

Yes, and the truth about all those studies in which the algorithm performed better than a group of physicians is that once the algorithm was paired with living physicians, that combination outperformed everyone. So AI does move the needle significantly. We talk about the clinician in the loop; we are a company that believes in that concept.

How should clinicians think about and participate in the development of AI over the next several years?

I think clinicians need to think about how they can most effectively be in the loop. What information most benefits them, pushes them to new and better decision-making and discoveries? Participation in clinical trials? Pilots? Clinicians should get in early and understand the workflows and how to incorporate AI. I think the best parallel really is, what would you do with a new lab test? How would you participate with your organization in a new lab test process? Or with a new portable MRI? How would you make sure it was used in the most beneficial ways? We have a lot of examples when things are physical; if it’s a new portable MRI, physicians will give you a lot of very concrete suggestions. We need to do that with AI, which might seem more abstract. For example: we’re going to want to get transportation set up for five percent of your patients post-discharge, so what’s the best way to do that? And what’s most effective is when we’ve made it most concrete and said, these are the 10 or 20 things we need to make more effective, and how do we get them done?

So we’re absolutely of the belief that we need clinicians partnering with us to make sure we’re bringing the right issues to the table, and focusing on the operations side. We have a whole implementation team, do a lot of workflow redesign, and work with our end-users to incorporate the tool. That’s by far the biggest challenge for AI: getting it incorporated into workflows and used. And a lot of that has to do with the history of AI in healthcare as having been abstract. If we bring in a whole bunch of stuff that doesn’t make sense but is solely focused on accuracy, that won’t be helpful to clinicians. If you go in and tell a doctor that a patient is at greater risk because they were admitted on a Wednesday, that won’t help.

So you’re saying that physicians need to be in on the ground floor in the architecting of solutions, before they’re implemented?

Yes, absolutely—physicians and multidisciplinary teams. If they’re not on the ground floor, there’s a big disconnect between what we’re trying to accomplish with AI and what the physicians think we’re doing, and any mismatch can be very detrimental to implementations. We strongly believe in bringing in the right stakeholders at the outset.

Do you see physicians becoming more open to these possibilities now?

Overall, physicians are still very skeptical, and are still very focused on classic diagnostic statistics, even though they don’t completely apply to a technology that’s being used to augment them. They’re used to lab tests or rapid strep tests, rather than efforts that seem more abstract. But they’re far more open than they were four years ago, when I first got involved in this.

How do you see things evolving over the next two or three years?

I actually see the next few years involving a slow uptake and then a rapid acceptance of using AI to augment clinicians. It’s going to start in radiology, where physicians will be reading mammograms and x-rays with the support of AI, with results that are clearly better. And once they start accepting the better-together concept, there will be rapid adoption. That’s typical of physicians adopting technology: they’re really slow until a technology begins to be accepted, and then there’s rapid acceptance. We still hear doctors and nurses fearing replacement, but that’s going to go away. The attitude of “yes, I want that information, and I’ll make better decisions with it” will increase. Once physicians use our solution, within six months they don’t want to give it up. Internally, we refer to it as the mindshift.

After early adopters accept a new technology or approach, suddenly there’s a tipping point among the physicians, correct?

Yes, and it’s because of the evangelists on a medical staff. We do see that tipping point. And we have amazing stories from physicians—in our oncology practice, a physician got an alert while they were on an airplane and scheduled a patient to come in because of the flag, and it turned out to be a crucial event; now we’ve got a fan for life. Healthcare AI and augmentation is going to be a very different kind of AI than what people live with in their daily lives; it’s not like Amazon.
