Researchers: AI Development Should Focus on Top Clinician Needs
A team of healthcare researchers is insisting that the development of artificial intelligence (AI) and machine learning (ML) must be guided first and foremost by the needs of clinicians delivering patient care; they also believe that AI technologies should be evaluated in the way that “all clinical practice guidelines” are evaluated.
Writing in the Journal of the American Medical Association (JAMA) online on Oct. 15 in a Viewpoint article entitled “Translating AI for the Clinician,” Manesh R. Patel, M.D., Suresh Balu, M.S., and Michael J. Pencina, Ph.D., write that “We believe that the progress and adoption of ML and AI tools in medicine will be accelerated by a clinical framework for AI development and testing that links evidence generation to indication and benefit and risk and allows clinicians to immediately understand in the context of existing practice guidelines.” They are variously affiliated with the Duke Clinical Research Institute at the Duke University School of Medicine, Duke University Medical Center, the Duke Institute for Health Innovation at Duke Health, and Duke AI Health at the Duke University School of Medicine, all in Durham, N.C.
The article’s authors write that “To realize their full potential, current development of health AI technologies needs to focus on the clinical use case or indication that the technologies aim to improve. Specifically, developers should prioritize aligning the technologies with clinical indication and use cases to maximize impact. We believe this first step is a conceptual sea change from the current development pathway, which focuses on the advanced computational techniques and available health data sources being used, with emphasis on variety, amount, and breadth. Although this is necessary for AI algorithm and model formation, it is not sufficient. For successful adoption of AI technologies in the clinic, we must first articulate the specific problems or use cases that would benefit from the incorporation of AI.”
In line with that, they share a figure in their article laying out the following correspondences between “clinical indications” and “examples of AI”:

• Interaction with patient: ambient voice dictation, scheduling, electronic health record inbox tools
• Risk stratification (precision medicine): patient risk assessment tools
• Diagnosis: analysis of clinical data or imaging (e.g., echocardiography)
• Interpretation of laboratory results: analysis and description of test results
• Eliciting patient preferences or behavioral change: conversational chatbot
• Procedures: surgical assistance
• Prescribing medication: drug interaction assessment
• Patient or population monitoring: glucose monitoring, population-at-risk monitoring
• Research and learning: research participant identification and engagement
• Continuing education and training: virtual reality case simulation
And, they write, “[W]e believe that the evidence for health AI technologies should be evaluated and reviewed like all clinical practice guidelines (Figure, C). If AI technologies were tested as traditional therapies in medicine, our current standards would typically evaluate factors such as the specific condition being treated (i.e., the indication), the patient population represented by available datasets, the study design of AI integration into clinical practice, and the demonstrated treatment effect compared with existing care to make a guideline recommendation.”