One Legal Expert’s Perspective on AI-Related Challenges
Even as the leaders of patient care organizations forge ahead with all types of artificial intelligence (AI) and machine learning (ML), developing algorithms for clinical and operational use and plunging into generative AI for a variety of purposes, legal experts are raising a range of concerns.
One of those legal experts is Jill Lashay, an attorney and shareholder at the Pittsburgh-based Buchanan Ingersoll law firm, where she specializes in labor and employment law. Lashay spoke recently with Healthcare Innovation Editor-in-Chief Mark Hagland about both the potential and pitfalls involved in leveraging AI in healthcare. Below are excerpts from that interview.
What is the potential legal and liability landscape like right now around the use of AI in healthcare?
It’s wide-open. And in healthcare, it’s more of the Wild West than it might be in other industries; that’s because other industries have been using AI in many areas for decades, in data analytics and elsewhere. As a result, AI is not as regulated in healthcare as it is in other industries.
So where are we, overall, right now, in healthcare?
One’s perspective will depend on the discipline, the job, the position an employee holds. And it depends on the iteration. AI is being used to help in diagnostics, such as in breast cancer diagnosis. There will continue to be productive iterations of clinical note-taking systems; Amazon and 3M are now partnering around some clinical documentation tools. It depends on where you’re looking for advances in technology.
Obviously, legal-liability concerns are arising. How are you framing those concerns and potential concerns to clients?
I’m advising clients on matters related to the hiring, recruitment, and retention of employees. One advanced use of AI is in the recruiting, hiring, and onboarding of employees. There are helpful AI features that can orient employees more quickly, to make sure everybody’s on the same page; to meet an employee’s regulatory requirements; to make sure employees are fully in-serviced, and so on. On the hiring side, AI is being used to determine the best-fitting candidate for a particular position, and there are a number of state and city regulations already in place; the EEOC [Equal Employment Opportunity Commission] has also issued guidelines on recruitment and hiring.
[Here is the first section of a May 18, 2023 press release from the EEOC:
“Today the Equal Employment Opportunity Commission (EEOC) released a technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” which is focused on preventing discrimination against job seekers and workers. The document explains the application of key established aspects of Title VII of the Civil Rights Act (Title VII) to an employer’s use of automated systems, including those that incorporate artificial intelligence (AI). The EEOC is the primary federal agency responsible for enforcing Title VII, which prohibits discrimination based on race, color, national origin, religion, or sex (including pregnancy, sexual orientation, and gender identity).
Employers increasingly use automated systems, including those with AI, to help them with a wide range of employment matters, such as selecting new employees, monitoring performance, and determining pay or promotions. Without proper safeguards, their use may run the risk of violating existing civil rights laws.
‘As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice and equality,’ said EEOC Chair Charlotte A. Burrows. ‘This new technical assistance document will aid employers and tech developers as they design and adopt new technologies.’
The EEOC’s new technical assistance document discusses adverse impact, a key civil rights concept, to help employers prevent the use of AI from leading to discrimination in the workplace. This document builds on previous EEOC releases of technical assistance on AI and the Americans with Disabilities Act and a joint agency pledge. It also answers questions employers and tech developers may have about how Title VII applies to use of automated systems in employment decisions and assists employers in evaluating whether such systems may have an adverse or disparate impact on a basis prohibited by Title VII.
‘I encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination,’ said Burrows. ‘This technical assistance resource is another step in helping employers and vendors understand how civil rights laws apply to automated systems used in employment.’”]
What has the EEOC put forward as guidelines?
They’ve issued an initial outline of how AI should be used in hiring, to make sure that whatever AI system you’re using is free of bias. So they’re suggesting that employers do a bias audit or look at possible implicit bias. New York City has put ordinances in place requiring employers to affirm that whatever AI tools they’re using do not involve bias, and to disclose what tools they’re using.
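[For readers wondering what a “bias audit” of a hiring tool might actually compute: below is a minimal illustrative sketch, in Python, of the kind of adverse-impact check discussed in the EEOC’s technical assistance document, using the agency’s longstanding four-fifths rule of thumb. The group names, applicant counts, and 0.8 threshold here are hypothetical illustrations, not a legal or compliance standard.

```python
# Illustrative adverse-impact ("four-fifths rule") check on hypothetical
# hiring data. Each group's selection rate is compared to the rate of the
# most-selected group; an impact ratio below 0.8 is a common rule-of-thumb
# flag for possible adverse impact, not a legal determination.

# Hypothetical counts: applicants and hires per demographic group.
outcomes = {
    "group_a": {"applicants": 200, "hired": 50},  # selection rate 25%
    "group_b": {"applicants": 180, "hired": 27},  # selection rate 15%
}

# Selection rate = hires / applicants, per group.
rates = {g: d["hired"] / d["applicants"] for g, d in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "POSSIBLE ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f} -> {flag}")
```

In this toy example, group_b’s impact ratio of 0.60 would be flagged; real audits, such as those mandated under New York City’s ordinance, specify their own metrics and independent-auditor requirements.]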
From a healthcare standpoint, healthcare employers may be bigger or more sophisticated than some other employers, so they might be more inclined to use AI; AI could actually conduct interviews. It’s positive in that it ensures that the same questions presented to one candidate are asked of a second candidate. Or it might be used without a chatbot, maybe just with a questionnaire; that’s simpler, but certainly many employers are going the route of a chatbot, so you’d have a virtual HR employee asking the requisite questions, and it could be used to filter out certain candidates.
And it’s out there in the literature: some employers are beginning to use AI tools that, through digital facial recognition, purport to tell whether someone is being truthful or not; so there are myriad potential uses there.
A candidate might say, “I was discriminated against because of implicit bias.” If so, what might the defense to such an accusation be?
Well, it depends. If you’re in a city that requires the employer to inform the candidate that they’re using the tool, there could be a violation there. Certain cities and states have already started to roll out requirements not only for notice, but even for consent. But perhaps the candidate gave consent and now believes there’s bias. It would be like any other bias claim, if someone said they weren’t hired because of their protected class, whether race or disability or something else. So the candidate would have to prove that no bias audit was in place; and again, that’s a requirement that’s just being rolled out in the form of ordinances. All of this is brand-spanking new; the EEOC is applying historical precedents to AI questions in hiring. There’s so much going on out there; there are employers (not hospital systems) that are using AI to monitor employees who are working remotely, based on keystrokes or mouse movements, or through facial-recognition tools that track ocular movements. All of that could involve privacy claims, or claims of explicit or implicit bias. So all of it is brand-new.
Where is this headed in the next few years?
There’s going to be a patchwork quilt of regulation for employers, in healthcare and outside healthcare, around how you manage the hiring, the control and direction of employees in the workplace, and what’s permissible and not. What might be permissible inside a hospital might not be permissible for employees working at home. On the healthcare delivery side, there are going to be a lot of tools. And we have an infrastructure in place, around privacy, per HIPAA, that could apply.
Where in the federal government will this set of issues be tackled?
I think there will be a multi-agency, multi-department approach across the federal government; nothing is siloed. I don’t know where it’s going to go. It partly depends on which agency comes out most quickly, and whether perhaps the agency is challenged in some way.
What would your advice be?
It would be to retain counsel: someone they’re comfortable with, who can look over the horizon. A full-service firm like ours, with a labor and employment group, a life sciences group, an intellectual property group, a healthcare group, and a corporate law group as well, can help them address patient care, facility construction, research if they’re in a research hospital, and, of course, employees.
All of this could impact unskilled labor, correct?
Yes, and we’ve already seen unskilled workers displaced by technology. And you’ll see more unskilled and low-skilled labor being replaced by AI technologies as well.