Someone says artificial intelligence, and some people instinctively move toward the light while others run for the exits. Both reactions reflect the dual perception of this new technology. A growing number of articles extol the virtues and possibilities of AI, while others forewarn of its darker side. There is no doubt about it: in at least one sense, AI is like every technology that has come before it, in that the duality is real. AI promises great things while presenting some frightening risks. But AI is likely here to stay, and like other emerging technologies it will be embraced faster than cybersecurity experts can learn how to secure it and mitigate its risks, leaving organizations and business leaders with the responsibility to determine appropriate adoption and use. Again, nothing new, but doing so without at least an awareness of the risks is irresponsible.
Probably the biggest challenge with AI is that adopters don’t fully understand the technology or how it can be compromised, and therefore the risk it presents to itself and the environment around it. That creates a situation in which implementing it without awareness is tantamount to flying blind. If the process, function, or system that AI is applied to is critical to operations or to people, that could spell trouble. AI, like so many other technologies, is only as good as those who created it. It relies on algorithms, or rules, defined by humans. If those rules are flawed or encode bias, the AI can reach inaccurate conclusions and be biased itself. In a healthcare setting, depending on where it is used, this could lead to real harm. AI deliberately fed bad rules, or whose rules are altered or corrupted, could also be weaponized to do harm. And then there is the biggest fear of all: losing control of the AI and having it make its own decisions. The good news is that AI is still very much in development, so, one, we understand that we don’t know what we don’t know yet, and, two, we still have time to learn what risks exist.
One of the best places to start learning about AI from a security perspective is Bruce Schneier’s “The Coming AI Hackers,” written for Harvard University’s Kennedy School of Government in 2021 and presented as a keynote at that year’s RSA Conference. Schneier is one of the best-known voices in cybersecurity, renowned for his ability to take complicated technical topics and make them understandable to just about anyone, and this work is no exception. Schneier lays out an explanation of hacking that makes the concept easy to grasp and then shows just how ubiquitous hacking is. AI is, in fact, just as capable of reaching a poor result, or a falsehood, as it is of reaching the truth. An AI’s reliability is a product of its rules and the information it is given access to, proving once again the age-old truth of computing: “garbage in, garbage out.” But what happens when AI is deliberately fed the wrong information, the wrong reasoning, or the wrong set of principles? Schneier describes this and more in his explanation of the AI risks to consider. It is well worth reading.
You might think from reading this that I’m not keen on AI, but nothing could be further from the truth. I am just as fascinated by AI and its possibilities as the next person, but my concern is this: throughout the history of IT, security has lagged behind the innovator, but it was man chasing man. AI is a new reality. With AI, it is man chasing machine, a machine capable of moving exponentially faster and in ways that are unpredictable to the human mind, and from a cybersecurity perspective that is truly frightening. There is a reason some pioneers in the development or use of AI are now cautioning others to slow down. It is not likely because they have abandoned their desire to develop AI or to capitalize on its use; it appears to be the dawning of a new realization. I think Schneier sums it up perfectly when he says, “We don’t know enough to make accurate predictions” concerning the balance between offensive and defensive AI, but “this is all something we need to figure out now, before these AIs come online and start hacking our world.” We need what I call “perceptive adoption”: being aware of and considering the security risks when adopting new or emerging technologies. The cybersecurity community needs to study AI further before we turn it loose.
Mac McMillan has spent several decades as a leading voice in healthcare cybersecurity. He was most recently chairman and CEO of CynergisTek, now part of Clearwater Compliance, LLC.