HC3 Analyst Note: AI’s Potential to Support Malware Development

Jan. 23, 2023
A recent analyst note from the Health Sector Cybersecurity Coordination Center warned the healthcare sector about artificial intelligence and its potential to assist with malware development, warning that AI could be used to customize attacks against the healthcare sector.

On Jan. 17, the Health Sector Cybersecurity Coordination Center (HC3) published an analyst note titled “Artificial Intelligence and Its Current Potential to Aid in Malware Development.” The note says that artificial intelligence (AI) has reached an evolutionary point at which it can now be used by threat actors to develop malware and phishing lures.

The note says that “While the use of AI is still very limited and requires a sophisticated user to make it effective, once this technology becomes more user-friendly, there will be a major paradigm shift in the development of malware. One of the key factors making AI particularly dangerous for the healthcare sector is the ability of a threat actor to use AI to easily and quickly customize attacks against the healthcare sector.”

The note continues: “Artificial Intelligence (AI) has most notably been applied to the defensive side of cybersecurity. It has been used to detect threats, vulnerabilities and active attacks, and to automate security tasks. However, because of its known defensive use and because threat actors in cyberspace are known to be highly creative and well-resourced, concerns have been raised in the cybersecurity community about the potential for artificial intelligence to be used for the development of malware.”

One example of an AI-powered attack tool cited in the note is DeepLocker, a proof-of-concept developed to study the potential of such technologies. DeepLocker is a set of highly targeted and evasive attack tools driven by AI, built to better understand how AI models could be combined with existing malware techniques to create more powerful attacks.

“In November, the artificial intelligence research laboratory OpenAI publicly released a chatbot known as ChatGPT, which is based on its GPT-3.5 language model which was trained on Microsoft Azure,” the note adds. “As a chatbot, it’s designed to interact with humans and respond to their conversation and requests. Among other things, it can be used to answer questions, write essays, poetry or music, and compose e-mails and computer code. Its significant capabilities have raised serious concerns among the top IT companies as being potentially disruptive of existing markets. The platform enjoyed an immediate spike in popularity, garnering a million users in the first six days of its launch. The accessibility and capabilities of the tool as well as its popularity has prompted some prominent members of the cybersecurity community to further investigate how artificial intelligence might be used to develop malware. After testing it, it has been noted for being capable of crafting credible phishing e-mails.”

The note explains that experts believe AI technologies are only at the beginning of the capabilities they will eventually bring to industries and to people’s private lives. The cybersecurity community has not yet developed mitigations or defenses against this kind of malicious code, and there may never be a way to fully prevent AI-generated malware from being used in attacks.
