University of California Develops AI Governance Framework
The University of California recently became one of the first universities in the nation to publish a set of recommendations to guide the safe and responsible deployment of artificial intelligence in its operations, including University of California Health (UCH), which has six academic medical centers.
Cora Han, chief health data officer at UCH, described the new framework at a Jan. 14 meeting on AI convened by the Office of the National Coordinator for Health IT. In August 2020, an interdisciplinary group of 32 faculty and staff drawn from all 10 campuses and reflecting a wide range of disciplines, including computer science and engineering, law and policy, medicine and the social sciences, was charged with developing the recommendations.
The recommendations grew out of a systemwide recognition that the deployment of AI offers many benefits, including greater efficiency and better decision-making across certain functions. But university officials also recognized that the use of AI raises equity, privacy, security, safety and ethical concerns. “There was a need to focus on developing a governance process that really prioritizes transparency and accountability in decisions about how and when the use of AI is deployed,” said Han, a former senior attorney in the Division of Privacy and Identity Protection of the Federal Trade Commission.
The work was guided by an overarching set of principles. The first is appropriateness: the potential benefits and risks, along with the needs and priorities of those affected, should be carefully evaluated to determine whether AI should be applied at all, or in certain cases prohibited. Another principle is transparency: individuals should be informed when AI tools are being used, the methods should be explainable to them to the extent possible, and individuals should be able to understand the outcomes as well as challenge them, Han explained.
Another principle involves accuracy, reliability and safety: tools should be accurate and reliable for their intended use, and verifiably safe and secure throughout their operational lifetime. Still another is fairness and nondiscrimination. It is critical, Han added, to have procedures in place that proactively identify and mitigate bias in AI tools. The tools should also be designed in ways that maximize privacy and support human values such as human agency and dignity, as well as respect for civil and human rights, shared benefit and prosperity. “Tools should be inclusive and bring equitable benefits, including social and economic benefits to all,” she said. The final principle is accountability: the UC system should be held accountable for its development and use of AI systems.
The interdisciplinary work group examined use cases across three common areas: business administration; clinical diagnosis and treatment; and quality, population health improvement and chronic disease management. The group developed a set of recommendations to operationalize these principles in the healthcare area, including a risk and impact assessment framework that recognizes that the scale and scope of risks will vary, and that stronger accountability mechanisms need to be in place for processes that carry higher risk, Han explained.
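The article does not describe how such a framework would be implemented, but a minimal sketch may help make the tiering idea concrete. Everything below, from the tier names to the oversight requirements, is a hypothetical illustration, not part of the UC framework:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk tiers; the UC framework does not define these names.
class RiskTier(Enum):
    LOW = "low"            # e.g., back-office automation
    MODERATE = "moderate"  # e.g., operational decision support
    HIGH = "high"          # e.g., clinical diagnosis and treatment

@dataclass
class AIUseCase:
    name: str
    domain: str                  # "business administration", "clinical", ...
    affects_patient_care: bool
    uses_protected_data: bool

def assess_tier(use_case: AIUseCase) -> RiskTier:
    """Illustrative tiering: processes that carry higher risk get a higher tier."""
    if use_case.affects_patient_care:
        return RiskTier.HIGH
    if use_case.uses_protected_data:
        return RiskTier.MODERATE
    return RiskTier.LOW

# Accountability mechanisms scale with the assessed tier.
OVERSIGHT = {
    RiskTier.LOW: ["inventory entry"],
    RiskTier.MODERATE: ["inventory entry", "privacy review"],
    RiskTier.HIGH: ["inventory entry", "privacy review",
                    "clinical validation", "ongoing monitoring"],
}
```

The point of a structure like this is the one Han makes: the assessment itself is lightweight, but the oversight attached to each tier grows with the potential for harm.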
Another recommendation involves documentation, which Han explained is important for transparency, auditing and oversight. “That includes not only recording what AI technologies are being used, where and for what purposes, but also information about the design of these systems,” she said. “Of course, data that contains bias can produce bias, so it's critical to understand what training data is being used and why. It’s critical to understand what is being used in the model as a proxy for what is trying to be assessed and to ensure that bias is not introduced in that manner.”
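To make the documentation recommendation concrete, here is a sketch of the kind of record such an inventory might hold, covering the elements Han lists: what is in use, where, for what purpose, what training data was used, and which proxy variables the model relies on. All field names and example values are hypothetical, not drawn from the UC recommendations:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """Hypothetical documentation entry for one AI tool in use."""
    tool_name: str
    vendor_or_developer: str           # purchased vs. developed in house
    deployment_sites: list[str]        # where the tool is used
    intended_purpose: str              # what it is used for
    training_data_description: str     # what training data was used and why
    proxy_variables: dict[str, str]    # model input -> what it stands in for
    known_bias_assessments: list[str] = field(default_factory=list)

# Illustrative entry: recording the proxy makes it auditable, since a proxy
# that correlates poorly with the true target can introduce bias.
record = AIToolRecord(
    tool_name="readmission-risk-model",
    vendor_or_developer="in house",
    deployment_sites=["example medical center"],
    intended_purpose="flag patients for follow-up outreach",
    training_data_description="historical discharge records, 2015-2019",
    proxy_variables={"prior utilization": "underlying health need"},
)
```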
Another recommendation from the working group is the need to develop and incorporate standardized, reproducible and meaningful processes for human review at appropriate checkpoints. The group also recommended encouraging representation, engagement and feedback from the UC community as well as from relevant external stakeholders. That requires work on making AI systems genuinely understandable to diverse patient communities, providing a mechanism to challenge outcomes and address potential harms, and offering training and educational programs.
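One way to make human review "standardized and reproducible" is to record each review in a uniform structure and gate deployment stages on it. The sketch below is a hypothetical illustration of that idea; the checkpoint names, roles and outcomes are assumptions, not part of the working group's recommendations:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReviewCheckpoint:
    """Hypothetical record of one human review step."""
    tool_name: str
    checkpoint: str     # e.g., "pre-deployment", "annual audit"
    reviewer_role: str  # e.g., "clinician", "privacy officer"
    outcome: str        # "approved", "approved with conditions", "rejected"
    notes: str
    reviewed_at: datetime

def may_proceed(checkpoints: list[ReviewCheckpoint], stage: str) -> bool:
    """A stage proceeds only if a matching review exists and was approved."""
    return any(
        c.checkpoint == stage and c.outcome.startswith("approved")
        for c in checkpoints
    )
```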
For UCH, it will be important to translate the principles into guidelines for the procurement process, Han said. “This was a common theme across all of the domains, not just health,” she added. “Many tools are actually purchased and not developed in house, and may even be in some cases an ancillary part of a primary product that is being purchased. For that reason, how to identify and mitigate risks is critical.”
The group's final recommendation was for further research, including work on effective methods of monitoring AI to ensure accountability and on ways of incorporating patient feedback into the models that are developed.
An October 2021 story on the UC website says that the university system will now take steps to operationalize the working group’s key recommendations:
- Institutionalize the principles in procurement and oversight practices;
- Establish campus-level councils and systemwide coordination to further the principles and guidance from the working group;
- Develop a risk and impact assessment strategy to evaluate AI-enabled technologies during procurement and throughout a tool’s operational lifetime; and
- Document AI-enabled technologies in a public database to promote transparency and accountability.