Some of the angst around the rapid pace of change in artificial intelligence stems from concern that robots are going to replace radiologists or other clinicians. In a March 7 online discussion sponsored by Harvard Business Review, two Accenture executives stressed that in clinical settings AI will be introduced gradually to assist clinicians, and that initially it will have the biggest impact on back-office operations.
In 2019, the most common uses are in cybersecurity and authentication, payments and customer service, in part because those applications do not have to overcome questions about the validity of algorithms or the complexity of clinical decision making, said Kaveh Safavi, M.D., Accenture’s global health industry lead. “Initially, larger-scale uses will not be in clinical domains. There is a long, uphill climb around clinical decision making. That is how it is playing out right now.”
The U.S. has already seen the benefits of automation in shifting some routine tasks to machines, Safavi said. “With AI, we begin to shift non-routine tasks to machines,” he explained. What has captured our imagination in healthcare is the idea that a machine can substitute for a doctor, but that is not what anyone studying this area thinks is going to happen, he said. Clinical judgment remains central: technology can take over some tasks, but it cannot replace a physician or nurse.
Safavi explained that healthcare faces the challenge of improving access, affordability and effectiveness at a time when considerable shortages of healthcare workers are expected. “AI has given us an opportunity to address those issues,” he said.
“We see this as humans plus machines,” Safavi said. “We see an opportunity to scale the work force in ways we have not before. We estimate that almost 30 percent of physician capacity today can be moved to patients [in self-service mode] or smart tools or a combination of the two. That gives us some hope of closing the gap between demand and resources available.”
So the good news is that machine learning algorithms are not going to replace your clinical judgment anytime soon. That doesn’t mean, however, that you can simply ignore AI’s impact. In many business sectors, including healthcare, the work force will have to adjust to having robots as co-workers and rethink business processes to include them.
Safavi said the introduction of AI into healthcare settings is not a technology problem. Health systems that have tried to deploy even basic automation without AI often have not realized benefits unless they also rethought the underlying business processes.
“You have an interaction between humans and machines,” he said. The future of AI at work isn’t going to be instead of, but in addition to. The machine makes the human better at what they do, and the machine, in turn, benefits from its human counterpart. Employees need to learn to work differently with the machine as a co-worker. They need to practice what he called “intelligent interrogation”: getting better at asking the machine the right questions.
Brian Kalis, Accenture’s digital health lead, suggested that health systems need to begin developing a strategy and roadmap for applying AI across the enterprise, which involves re-engineering business processes, retraining the work force and focusing on data management. Like Safavi, Kalis said the focus should be on the business problems you are trying to solve, not the technology itself. “You need a key executive leader as a sponsor to push adoption,” he said, because behavior change will be required. You will also need a cross-organizational approach to maintaining the data ecosystem and cleaning up the data.
Kalis said early promising applications include predicting patient no-shows so that resources can be reallocated. Another he mentioned is Aetna’s use of fingerprinting powered by AI and machine learning to authenticate and authorize members, which has reduced the number of calls related to password resets. “These are the types of low-hanging fruit that tend to be the first place to start,” he said.
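As a rough illustration of the kind of no-show prediction Kalis describes, a health system might train a simple classifier on scheduling history and use its risk scores to decide where to send reminders or over-book. The sketch below is hypothetical: the features, synthetic data and thresholds are invented for illustration, and Python with scikit-learn is just one plausible choice of tooling, not anything Accenture or Aetna has described.

```python
# Hypothetical sketch: scoring appointments for no-show risk so schedulers
# can target reminders or over-booking. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Illustrative features: booking lead time, prior no-shows, reminder sent.
lead_time_days = rng.integers(0, 60, n)
prior_no_shows = rng.poisson(0.5, n)
reminder_sent = rng.integers(0, 2, n)

# Made-up relationship: long lead times and prior no-shows raise the risk,
# reminders lower it. A real model would learn this from historical data.
logit = 0.04 * lead_time_days + 0.8 * prior_no_shows - 0.7 * reminder_sent - 2.0
no_show = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([lead_time_days, prior_no_shows, reminder_sent])
X_train, X_test, y_train, y_test = train_test_split(X, no_show, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print(f"Holdout AUC: {roc_auc_score(y_test, risk):.2f}")

# Operationally, staff might review the highest-risk upcoming appointments.
highest_risk = np.argsort(risk)[-10:]
```

The point of such an exercise is less the model than the process change around it: someone has to act on the scores, and someone has to keep the underlying scheduling data clean.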
Kalis used the introduction of virtual assistants, or “chatbots,” as an example of the complexity of introducing AI into work processes. “Where you run into problems is when you just put a chatbot in an existing service center,” he said. You may actually see a worse customer experience and lower productivity. You have to rethink the human-machine relationship. The virtual agent is good for basic transactions, but you need to connect it to the rest of the operation and be able to hand off to a human chat or a phone call. “That requires a different type of worker who knows how to train a virtual agent to make it better and learn over time. The work force has to work alongside the technology and be able to explain and build different models. That means allowing machines to do what they do well, and having humans apply judgment and empathy.”
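The hand-off Kalis describes can be thought of as a routing decision: the virtual agent keeps the conversation only when it recognizes a routine intent with high confidence, and everything else goes to a person. The sketch below is a hypothetical illustration of that logic; the intent names, threshold and toy classifier are invented, and a real deployment would sit on top of an actual language-understanding model and contact-center platform.

```python
# Hypothetical bot-to-human hand-off logic. Intents, threshold and the toy
# classifier are illustrative only.
from dataclasses import dataclass
from typing import Callable

CONFIDENCE_THRESHOLD = 0.75
BOT_HANDLED_INTENTS = {"password_reset", "appointment_status", "billing_balance"}

@dataclass
class IntentPrediction:
    intent: str
    confidence: float

def route(message: str, classify: Callable[[str], IntentPrediction]) -> str:
    """Return 'bot' or 'human' for an incoming member message."""
    prediction = classify(message)
    if (prediction.intent in BOT_HANDLED_INTENTS
            and prediction.confidence >= CONFIDENCE_THRESHOLD):
        return "bot"
    # Low confidence or an out-of-scope request: hand off to a person, and
    # keep the transcript so staff can retrain the virtual agent later.
    return "human"

# Stand-in for a real intent classifier.
def toy_classifier(message: str) -> IntentPrediction:
    if "password" in message.lower():
        return IntentPrediction("password_reset", 0.92)
    return IntentPrediction("unknown", 0.30)

print(route("I forgot my password", toy_classifier))       # bot
print(route("I need to dispute a claim", toy_classifier))  # human
```

The threshold and the list of bot-handled intents are exactly the things the “different type of worker” Kalis mentions would tune over time, based on where hand-offs succeed or fail.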
In the clinical setting, AI can play a role in activities such as voice-to-text transcription, freeing up the time it takes clinicians to chart notes and generate prescriptions. What will doctors and nurses do with the time they get back? See more patients? Safavi said we will have to rethink our work. “In a way the machines are taking the easy work off our plates,” he said. Actually, he added, research has shown that people want to do some easy tasks because they give their brains a rest. “People can’t run at peak cognitive capabilities for hours without breaks,” he said. “These are things we are going to have to learn as a society and as businesses.”