The growing role of artificial intelligence in cyber attacks

March 19, 2024 · Generative AI / Incident Response


The large language models (LLMs) that power artificial intelligence (AI) tools today could be exploited to develop self-enhancing malware that can bypass YARA rules.

“Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively reducing detection rates,” Recorded Future said in a new report shared with The Hacker News.
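To make the claim concrete, here is a toy illustration, assuming the yara-python bindings: a purely string-based rule fires on literal markers, so a variant whose identifiers and literals have been rewritten slips past it. The rule name, strings, and byte samples below are invented for illustration and are not taken from the report.

```python
import yara  # pip install yara-python

# A toy string-based rule of the kind the report says can be evaded.
RULE = r"""
rule toy_stealer_strings
{
    strings:
        $a = "GetBrowserPasswords"
        $b = "http://exfil.example/upload"
    condition:
        any of them
}
"""
rules = yara.compile(source=RULE)

original = b"... GetBrowserPasswords() ... http://exfil.example/upload ..."
# The same logic after an LLM renamed the function and split the URL:
variant = b"... CollectCreds() ... 'http://' 'exfil.example' '/upload' ..."

print(rules.match(data=original))  # rule fires: the sample is detected
print(rules.match(data=variant))   # no match: the literal strings are gone
```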

The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being tested by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets.


The cybersecurity firm said it presented an LLM with a known piece of malware called STEELHOOK, associated with the APT28 hacking group, along with its YARA rules, asking it to modify the source code to evade detection in such a way that the original functionality remained intact and the generated source code was syntactically free of errors.

Armed with this feedback mechanism, the LLM-altered malware was able to evade detection by simple string-based YARA rules.
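The loop itself is simple to picture. The sketch below is a minimal reconstruction of such a feedback mechanism, not Recorded Future's tooling: llm_rewrite is a hypothetical stand-in for the model call, and the syntax check assumes a Python sample purely for illustration.

```python
import yara  # pip install yara-python

def rewrite_until_undetected(source: str, rules: yara.Rules,
                             llm_rewrite, max_rounds: int = 5):
    candidate = source
    for _ in range(max_rounds):
        matches = rules.match(data=candidate.encode())
        if not matches:
            return candidate  # no rule fires any more: evasion achieved
        # Feed the names of the rules that still fire back to the model.
        revised = llm_rewrite(candidate, [m.rule for m in matches])
        try:
            compile(revised, "<variant>", "exec")  # must stay syntactically valid
            candidate = revised
        except SyntaxError:
            pass  # discard the broken rewrite and query the model again
    return None  # still detected after the round budget
```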

There are limitations to this approach, the most important of which is the amount of text a model can process as input at one time, which makes it difficult to operate on larger code bases.
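That limit is easy to check up front by counting tokens before submitting code for rewriting. The sketch below assumes OpenAI's tiktoken tokenizer and a 128,000-token window purely as examples; the report does not say which model was used.

```python
import pathlib
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
CONTEXT_WINDOW = 128_000  # tokens; varies by model, illustrative figure

source = pathlib.Path("sample_source.py").read_text()  # placeholder path
n_tokens = len(enc.encode(source))
print(f"{n_tokens} tokens; fits in one request: {n_tokens < CONTEXT_WINDOW}")
```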

In addition to modifying malware to go undetected, such AI tools could be used to create deepfakes that impersonate executives and senior leaders and conduct influence operations that mimic legitimate websites on a large scale.

Additionally, generative AI is expected to accelerate threat actors’ ability to conduct reconnaissance of critical infrastructure and gather intelligence that could be of strategic use in subsequent attacks.

“By leveraging multimodal models, public images and videos of ICS and production equipment, in addition to aerial imagery, can be analyzed and enriched to find additional metadata such as geolocations, equipment manufacturers, models and software versions,” the company said.

Indeed, Microsoft and OpenAI warned last month that APT28 used LLMs to “understand satellite communications protocols, radar imaging technologies, and specific technical parameters,” pointing to efforts to “gain in-depth knowledge of satellite capabilities.”


Organizations are advised to carefully review publicly available images and videos depicting sensitive equipment and, if necessary, remove them to mitigate the risks posed by such threats.
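One concrete review step can be automated. The sketch below, assuming Pillow and a placeholder directory name, flags images that still carry GPS EXIF tags before they are published:

```python
import pathlib
from PIL import Image           # pip install Pillow
from PIL.ExifTags import TAGS

for path in pathlib.Path("to_publish").glob("*.jpg"):  # placeholder directory
    exif = Image.open(path).getexif()
    tag_names = {TAGS.get(tag_id, tag_id) for tag_id in exif}
    if "GPSInfo" in tag_names:
        print(f"{path}: still carries GPS metadata -- scrub before publishing")
```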

The development comes as a group of academics discovered that it is possible to jailbreak LLM-based tools and produce malicious content by passing input in the form of ASCII art (for example, “how to build a bomb”, where the word BOMB is written using “*” characters and spaces).

The practical attack, called ArtPrompt, weaponizes “LLMs’ poor performance in recognizing ASCII art to bypass security measures and elicit unwanted behavior from LLMs.”
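The masking step is easy to picture. The snippet below renders a harmless placeholder word as ASCII art using pyfiglet; the paper's actual fonts and prompt templates are not reproduced here.

```python
import pyfiglet  # pip install pyfiglet

# Render a placeholder word as block letters; ArtPrompt substitutes art
# like this for a masked word inside an otherwise ordinary prompt.
art = pyfiglet.figlet_format("MASK", font="banner")
print(art)
```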
