At UW Health, Steady Progress in Advancing AI to Support Patient Care Delivery

April 8, 2022
Brian Patterson, M.D., the physician informatics director for predictive analytics at UW Health in Madison, Wisconsin, shares his perspectives on what’s been learned so far in leveraging AI

Across the U.S. healthcare delivery system, physician informaticists are helping to lead initiatives that leverage artificial intelligence (AI) and machine learning (ML) to support medical practice and patient care delivery. Among the many physician informaticists doing so is Brian Patterson, M.D., a practicing emergency physician and an assistant professor in the Department of Emergency Medicine at the University of Wisconsin Medical School, and the physician informatics director for predictive analytics at UW Health, the academic medical center-based integrated health system in Madison, Wisconsin.

Dr. Patterson was one of four healthcare leaders participating in a panel entitled “The Name of the Game Is Implementation,” part of the Machine Learning & AI for Healthcare Forum, one of several specialty symposia held on Monday, March 14, at the outset of the HIMSS22 Conference in Orlando, Florida, sponsored by the Chicago-based Healthcare Information & Management Systems Society (HIMSS). The discussants on that panel reflected on the numerous complexities involved in developing AI-based algorithms and actually implementing them to support patient care delivery, including the complexities around both process and information technology.

Later last month, Patterson spoke with Healthcare Innovation Editor-in-Chief Mark Hagland regarding the subject. Below are excerpts from that interview.

You continue to practice clinically, correct? How do you partition your time?

Yes, I’m still a practicing emergency physician. I practice in our emergency department at UW, and spend time doing grant-funded research, and am also the director of predictive analytics here; that role consumes 20 percent of my time. I completed my medical residency in 2013, so I’ve been practicing 10 years as an attending physician, and have been here at UW since 2019.

Based on everything you’re aware of, would you say that the healthcare industry is still early on in the leveraging of AI in a clinical setting?

The technology for deriving models with the potential to transform healthcare is there; it’s outpaced the development of the models themselves. But yes, we’re still early on in terms of using the technology.

There’s been a lot of hype around all of this, as we all know. I think that some people thought that there would soon be the equivalent of being able to go to Target and pick an algorithm off the shelf, but obviously, that hasn’t happened. Algorithms are being developed by individual teams in individual organizations. Can you speak to the reality of algorithm development, versus what might have been imagined by some a few years ago?

With regard to the Gartner Hype Cycle, a couple of years ago there was so much hype around AI. Meanwhile, I have to say that I don’t think the door has been closed on generalizable models; we might be able to derive models across vast numbers of use cases. But in many cases, it’s very fair to say that just because your EHR [electronic health record] model works in one place, it won’t necessarily work in another. It varies, first, based on the patient populations on which you’re creating the models; and second, in terms of the ways in which data is entered into the EHR. For example, in one institution, only boluses of fluid will be recorded, while in another institution, every medication ordered would have an accompanying order for the fluid that pushed it in. And just based on that difference, one patient may appear to have gotten more or less fluid. That’s one of the big threats to generalizability. And what I’d definitely agree with is that models need to be developed by specific teams at specific organizations, in specific work environments; in other words, where the models are given to the physicians within their workflow, and how they’re used, have to be individualized. And a couple of recent publications have highlighted that.

Could you ever set a national standard for any of these items?

That’s a good philosophical question. There’s a tradeoff: do we mandate standardization for EHRs? A lot of institutions are proud of how they’ve customized their EHRs, and you’d have to get way more standardized. So I think models are likely to be standardized, along with local tweaks, if you can make sure the model works in your organization rather than just assuming it works. It’s a new concept for clinicians, because in emergency medicine especially, we think of risk scores, such as the HEART score, which calculates a patient’s risk of a heart attack after being seen for chest pain in an ED; or pneumonia severity scores, which estimate how likely people are to do poorly after pneumonia; or the likelihood of a stroke after a neurological event.

The danger is in viewing these AI models in the same way. But it’s not the science of how they’re derived, but the science of how they’re implemented, that matters. And you want to be able to predict things with models. But if the model was derived from a machine learning/AI approach as opposed to a traditional epidemiological approach, you have some variables to consider.

How many algorithms have you developed so far at UW Health?

Several. We have at least three or four in actual use, and several more in the pipeline. We have an algorithm for the likelihood of falls after ED visits. Others being implemented include algorithms on the risk of developing sepsis; a clinical deterioration algorithm, which we’re implementing from a vendor, though it was developed by UW faculty; and ones that look at patients’ risk of dangerous…

Suchi Saria said in her closing presentation on March 14 that the gains from using algorithms for sepsis have turned out to be far less than originally anticipated. Why do you think that is?

Sepsis is particularly difficult to predict, because there’s a very narrow window in which to intervene. It’s a difficult data science problem. It also goes back to our earlier conversation; a lot of the theoretical work has been done in sepsis prediction. I wouldn’t say it’s that much more difficult; rather, we’ve learned a lot of lessons because it’s an area where so much work has been done, and that’s taught us about not over-generalizing.

What have been the biggest lessons you and your colleagues have learned so far at UW Health?

First, the message I try to get out is that, no matter how well you think a model will work from outside your system, you need to validate it based on your own data. The other lesson is that many, many models can provide information, but if we want to change patient care, we need to find actionable information and present it to clinicians at the right time. We have to have good, accurate predictions, and then put them into well-designed, well-implemented decision support within the workflow.

So, one critical success factor is that an alert has to be generated at the right moment in the workflow, correct?

Yes, where they’re at the right place to receive the information, and when they’re thinking about the situation. If you give the message too late, the reaction is, well, I’ve already done that; if too early, it means I’ll have to go get more information. Just giving people excellent information isn’t enough; it’s about the timing.

How do you see all of this evolving at UW Health in the next few years, and how do you see it evolving across the U.S. healthcare system?

At UW Health, what we’ve done is to build a team to provide governance around these models that consists of content experts in data science, relevant clinical stakeholders, people who think about clinical decision support, and experts in the potential ethical implications of those models. So we’ve spent a lot of time making sure we have the right people around the table. And speaking nationally, there’s the increasing recognition that we can’t have people just turning on externally obtained models; you need good governance, and then you can create an ecosystem of models that are well-governed. That’s how you do right by the patients.

It's complicated! There are no magic bullets, right? And really, one can’t expect to be able to do the equivalent of going to Target and picking algorithms off the shelf, correct?

I think you’re hearing variations of that idea in more places. As we discussed on the panel, someone brought up the analogy of moving away from a crystal-ball heuristic, the notion that the information will just tell you what to do, which is a damaging notion, toward sort of a co-pilot mentality: this won’t do my work for me, but it’s going to provide valuable insights, and if I use those insights, it will make me a better doctor. AI is a tool, something that’s going to help us do our work, not do our work for us. And, per Target, there are some use cases where it might be possible to do the equivalent of going to Target, such as looking at retinal images to evaluate the risk of macular degeneration. There are use cases where I could imagine off-the-shelf tools, but not so much in terms of EHR-derived information.

How would you describe the mentality of practicing physicians right now, healthcare system-wide, with regard to the acceptance of working with AI algorithms?

I think it’s like any potentially transformative technology: you’ve got early adopters, natural skeptics, and everyone else in the middle. The general tone has followed that Gartner Hype Cycle pretty well. Seven or eight years ago, my own mindset was a little bit like, oh wow, this will totally transform things; then I and others went into the trough of disillusionment; and now I think we’re coming out of it, moving toward the slope of enlightenment.

Could you offer a few pieces of explicit advice for fellow colleagues thinking of moving forward in this area?

One, as I’ve said, validate things locally to make sure they work in your shop. Two, a predictive model by itself is not useful unless you’ve placed it within a workflow where it can improve patient care by providing actionable information. Three, it’s important to have people who are experts in informatics, but it’s also important to involve the end-user clinicians early in the design, to know that it’s actually going to help them in their work. I initially thought that deriving the models would be the hard part, but a lot of the work involves good design work and good implementation work… And then last, there’s the importance of good governance, to create a framework so that people will trust these models. Putting out a bad model is a lot worse than putting out no model, right? Because people won’t like the next one. It’s important to get it right.

Have physicians practicing in all the major medical specialties been involved in this work at UW Health?

What we do, and what I’ve liked, is that we don’t say the point of our governance is to include everybody who might be a stakeholder; that would get unwieldy fast. Instead, the governance committee makes sure we get the right people at the table for specific models. For a hypertension model, bringing in electrophysiologists won’t help, even within cardiology. The actual end-users have to be engaged in the design and implementation. And we use the term physician champion a lot; but a lot of these interventions focus on nurses as much as physicians, so here at UW, nurses are very involved in the process as well.

And there will be some instances in which pharmacists should be involved?

Yes, absolutely; a lot of these models involve medications.
