HC3 Analyst Note: AI’s Potential to Support Malware Development
On Jan. 17, the Health Sector Cybersecurity Coordination Center (HC3) published an analyst note entitled “Artificial Intelligence and Its Current Potential to Aid in Malware Development.” The note says that artificial intelligence (AI) has reached a point in its evolution where threat actors can now use it to develop malware and phishing lures.
According to the note, “While the use of AI is still very limited and requires a sophisticated user to make it effective, once this technology becomes more user-friendly, there will be a major paradigm shift in the development of malware. One of the key factors making AI particularly dangerous for the healthcare sector is the ability of a threat actor to use AI to easily and quickly customize attacks against the healthcare sector.”
The note continues: “Artificial Intelligence (AI) has most notably been applied to the defensive side of cybersecurity. It has been used to detect threats, vulnerabilities and active attacks, and to automate security tasks. However, because of its known defensive use and because threat actors in cyberspace are known to be highly creative and well-resourced, concerns have been raised in the cybersecurity community about the potential for artificial intelligence to be used for the development of malware.”
One example of an AI-powered attack tool produced to study the potential of such attacks, according to the note, is DeepLocker, a proof-of-concept class of highly targeted and evasive attack tools driven by AI. It was developed by IBM Research to better understand how existing AI models could be combined with current malware techniques to create more powerful attacks.
“In November, the artificial intelligence research laboratory OpenAI publicly released a chatbot known as ChatGPT, which is based on its GPT-3.5 language model and was trained on Microsoft Azure,” the note adds. “As a chatbot, it’s designed to interact with humans and respond to their conversation and requests. Among other things, it can be used to answer questions, write essays, poetry or music, and compose e-mails and computer code. Its significant capabilities have raised serious concerns among top IT companies that it could disrupt existing markets. The platform enjoyed an immediate spike in popularity, garnering a million users in the first six days of its launch. The accessibility and capabilities of the tool, as well as its popularity, have prompted some prominent members of the cybersecurity community to further investigate how artificial intelligence might be used to develop malware. Upon testing, it has been noted as capable of crafting credible phishing e-mails.”
The note explains that, according to experts, AI technologies are only at the beginning of the capabilities they will eventually bring to industries and to people’s private lives. The cybersecurity community has not yet developed mitigations or defenses against this kind of malicious code, and there may never be a way to prevent AI-generated malware from being used in attacks.