CHAI Seeks Feedback on Responsible Health AI Framework

June 26, 2024
Framework includes criteria to evaluate standards across the AI lifecycle — from identifying a use case and developing a product to deployment and monitoring

The Coalition for Health AI (CHAI) is seeking public comment on a draft framework for the responsible use of artificial intelligence in healthcare.

The nonprofit CHAI includes representatives from over 1,500 member organizations, including hospital systems, tech companies, government agencies and advocacy groups. It aims to contribute to best practices for the testing, deployment and evaluation of AI systems. This work will engage many stakeholders, promoting discovery and experimentation and sharing AI innovations in healthcare, including methods that leverage traditional machine learning as well as more recent developments in generative AI.

The framework, an Assurance Standards Guide, lays out considerations for ensuring that standards are met when AI is deployed in healthcare. The draft was developed through a consensus-based approach, drawing on the expertise of diverse stakeholders from across the healthcare ecosystem.

The Guide describes six use cases to demonstrate how considerations and best practices vary in real-world settings:
1. Predictive EHR Risk Use Case (Pediatric Asthma Exacerbation)
2. Imaging Diagnostic Use Case (Mammography)
3. Generative AI Use Case (EHR Query and Extraction)
4. Claims-Based Outpatient Use Case (Care Management)
5. Clinical Ops & Administration Use Case (Prior Authorization with Medical Coding)
6. Genomics Use Case (Precision Oncology with Genomic Markers)

A set of draft companion documents, called the Assurance Reporting Checklists, provides criteria to evaluate standards across the AI lifecycle — from identifying a use case and developing a product to deployment and monitoring.

The principles underlying the design of these documents align with the National Academy of Medicine’s AI Code of Conduct, the White House Blueprint for an AI Bill of Rights, several frameworks from the National Institute of Standards and Technology, and the Cybersecurity Framework from the Department of Health and Human Services’ Administration for Strategic Preparedness & Response.

CHAI eventually expects a federated network of approximately 30 “assurance labs” to be established, CHAI’s first CEO, Brian S. Anderson, M.D., said during an NIH Collaboratory Grand Rounds presentation on March 8, 2024. Anderson was previously chief digital health physician at MITRE.

 “We reached an important milestone today with the open and public release of our draft assurance standards guide and reporting tools,” said Anderson, in a statement. “This step will demonstrate that a consensus-based approach across the health ecosystem can both support innovation in healthcare and build trust that AI can serve all of us.”
 
Multiple, diverse stakeholders are involved in the selection, development, deployment, and use of AI solutions intended for patient care and related health system processes. This includes clinicians, nurses, AI technology developers, data scientists, bioethicists, and regulators, as well as those impacted by the technologies, such as patients and their caregivers. 

The Guide aims to help build consensus among stakeholders from different backgrounds, providing a common language and understanding of the life cycle of health AI solutions, and highlighting best practices when designing, developing and deploying AI within healthcare workflows. This will help ensure effective, useful, safe, secure, fair, and equitable care, CHAI said. The organization will use the input from the public to finalize the Guide and update it as needed in the future. 

The Checklists translate the consensus considerations into actionable evaluation criteria to support independent review of health AI solutions throughout their lifecycle, ensuring they are effective, valid and secure and that they minimize bias. The Checklists are intended for independent reviewers and organizations evaluating AI solutions, as well as for individuals involved in the AI lifecycle to review their own work.

Public reporting of the results of applying the Checklists ensures transparency of the risks and benefits of an AI solution, which will help organizations and their leadership make decisions about the development and deployment of these technologies, CHAI said.
 
“Shared ways to quantify the usefulness of AI algorithms will help ensure we can realize the full potential of AI for patients and health systems,” said Nigam H. Shah, M.B.B.S., a CHAI co-founder and board member, and chief data scientist for Stanford Health Care, in a statement. “The Guide represents the collective consensus of our 2,500-strong CHAI community, including patient advocates, clinicians and technologists.”
  
The public comment period will run for 60 days. 
