Philips’ Roy Jakobs on the National Academy of Medicine’s AI Code of Conduct Push

April 11, 2024
The National Academy of Medicine and its partner organizations are working on a code of conduct for AI development; Philips’ CEO elaborates

On April 8, the Washington, D.C.-based National Academy of Medicine published a press release regarding a new initiative to create a code of conduct for the development of artificial intelligence (AI) in healthcare. The press release began thus: “The use of artificial intelligence (AI) in health, medicine, and research is rapidly expanding, creating new opportunities and avenues for research and care. While AI holds immense promise for revolutionizing health care and improving health outcomes, it is not without significant risk. Using these technologies requires careful assessment and stewardship to prevent harm. A new NAM Perspectives Commentary outlines a draft framework for achieving accurate, safe, reliable, and ethical AI advancements that can transform health, health care, and biomedical science.”

The press release continued, “In ‘Artificial Intelligence in Health, Health Care, and Biomedical Science: An AI Code of Conduct Principles and Commitments Discussion Draft,’ the paper’s editors explore strategies to responsibly implement AI advancements to achieve profound benefits in health and health care throughout the United States. Based on an extensive review of existing literature surrounding AI guidelines, frameworks, and principles, the editors identify a series of ten Code Principles and six Code Commitments to ensure that best practices maximize AI’s benefits to human health and well-being while minimizing potential risks. The Code Principles promote responsible behavior in AI development, use, and ongoing assessment; they are based on the Leadership Consortium’s Learning Health System Core Principles. The Code Commitments support the careful application of these Principles in practice, serving as guidelines when dealing with complex systems.”

And it quoted NAM president Victor J. Dzau as stating, “The promise of AI technologies to transform health and health care is tremendous, but there is concern that their improper use can have harmful effects. There is a pressing need for establishing principles, guidelines, and safeguards for the use of AI in health care. The new draft code of conduct framework is an important step toward creating a path forward to safely reap the benefits of improved health outcomes and medical breakthroughs possible through responsible use of AI.”

The press release noted that “The new commentary was developed by the NAM’s AI Code of Conduct initiative, under the auspices of an expert stakeholder Steering Committee. This publication represents a milestone toward the initiative’s goal to provide a guiding framework for ensuring that AI algorithms and their applications in health perform responsibly in the service of better health for all.”

“This new framework puts us on the path to safe, effective, and ethical use of AI, as its transformational potential is put to use in health and medicine,” said NAM executive officer Michael McGinnis, in a statement contained in the press release. “It serves as a foundational reference for continuous learning, alignment, and improvement in our health system.”

Meanwhile, a press release issued late last year by the Amsterdam-based Philips (also known as Royal Philips) noted that, “To ensure equitable, ethical, safe and responsible use of AI in health, the National Academy of Medicine invited a group of leading individuals from health, technology, research and bioethics to participate in a Steering Committee to develop an AI Code of Conduct (AICC). Philips CEO Roy Jakobs serves as co-chair with Gianrico Farrugia, M.D., President and CEO of Mayo Clinic, and Bakul Patel, MSc, MBA, Senior Director of Global Digital Health Strategy & Regulatory at Google. The AICC is a signature initiative of the National Academy’s Digital Health Action Collaborative (DHAC), which is co-chaired by Peter Lee, PhD, Corporate Vice President, Research and Incubations at Microsoft, and Kenneth Mandl, MD, MPH, Director, Computational Health Informatics Program at Boston Children’s Hospital.”

Against that backdrop, Jakobs spoke on April 11 with Healthcare Innovation Editor-in-Chief Mark Hagland regarding his participation in the initiative to develop an AI Code of Conduct (AICC), and his expectations for it. Below are excerpts from that interview.

Why is this initiative happening now, and why are you participating in it?

The next generation of AI has huge potential to help improve the delivery of care. It’s getting a lot of attention, as it rightfully should, as companies apply it to help doctors, nurses, and technicians in their daily work; it can help the healthcare system deliver both better care and more care. So there is great opportunity, which is needed, because healthcare is under pressure. But we also realize that to apply the technology, you have to do so in a responsible manner; you’re talking about people’s lives.

So we felt as a group that we needed to take responsibility to create a framework. Why this group? For regulatory frameworks and other governing bodies, it’s hard to keep up with the pace of AI development. We felt the need for self-governance responsibility. And it’s not only something we want to develop and prescribe; we also want to develop it collaboratively. The ultimate goal is mass adoption of AI, but in a responsible and right way, with the impact we want to have. It should be actionable, validated, and benefit everyone. It should not have biases; it should be safe to use. It needs to be transparent; and we need to shape those parameters in the codes being developed.

Will it be possible for your committee to keep up with the pace of AI development globally?

That’s a very fair question. There’s a lot already happening; that’s why we’ve put it out into the media. The pace is something we’re focused on. We see this as complementary to other efforts. Of course, we’re working with regulatory bodies like the FDA, and we do the same in Europe and are looking at global applications. We understand the pace, and we are living this in our own organizations already.

How does this play out with nation-states and supranational bodies like the European Union trying to regulate this, while AI is exploding globally?

Ultimately, healthcare is local and is delivered locally. But in terms of the framework around it, those elements will apply everywhere. It should be safe, actionable, accessible, and transparent; those are common traits. But if you do it in the Netherlands, Bulgaria, or Norway, you have to fit the local rules and regulations, and it has to be adapted to local clinical processes. Some innovations get adopted faster in the US, because there are not as many differences as in Europe. But even in the US, you have governmental versus privately based healthcare, urban versus rural, and so on. So the core of the framework is universal; that’s how we tried to develop it. It should be usable everywhere.

In the next year or two, what do you anticipate will happen?

The real value in this is in living it. We should be able to embed this into daily practice, so that when my coders, my developers, build the next generation of systems, they develop the algorithms so that they remove biases, are well built, and achieve the intended outcomes. The benefit of this is when the people who work with AI can see that this has been applied, and therefore they trust AI more. We know that AI has great opportunity but also risk, so we need to address it responsibly. And we want people not to be fearful of it. And of course, there are different use cases with higher and lower risks. Anyone can share a comment on the process through May 1; individuals and organizations can comment online [here is the link].

And more broadly, how do you view this moment in global healthcare?

Starting with the challenge: what is universal is that in every country we visit, the gap between demand and supply is widening. The number of patients and the amount of care they require are growing exponentially all over the world, even as the supply of doctors and nurses is going down and care is becoming less affordable. But that’s why it’s an exciting moment to talk about AI, because I truly believe we will be able to innovate: we need to get more productive in healthcare, and AI is a huge productivity driver. So this is a huge moment for us. AI can really make a difference. It’s not about talking about AI, but making sure it helps in daily life: writing fewer reports so you can spend more time at the bedside as a nurse, or preparing the first steps toward diagnosis for doctors. That’s where the power of this initiative comes in: we need to unlock the application of AI, and we need to do so in a trustworthy, actionable manner.
