Nation-states are weaponizing artificial intelligence in cyberattacks

Advanced persistent threats (APTs) aligned with China, Iran, North Korea, and Russia all use large language models (LLMs) to enhance their operations.

New blog posts from OpenAI and Microsoft reveal that five prominent threat actors have used OpenAI's software for research, fraud, and other malicious purposes. After identifying them, OpenAI closed all of their accounts.

While the prospect of AI-enhanced nation-state cyber operations may seem daunting at first glance, there is some good news: none of the LLM abuses observed so far have been particularly devastating.

“Threat actors’ current use of LLM technology has revealed behaviors consistent with attackers using AI as an additional productivity tool,” Microsoft noted in its report. “Microsoft and OpenAI have not yet observed any particularly new or unique AI-enabled attack or abuse techniques resulting from threat actors’ use of AI.”

Nation-state APTs using OpenAI

The nation-state APTs using OpenAI today are among the most notorious in the world.

Consider the group that Microsoft tracks as Forest Blizzard, better known as Fancy Bear. A military unit of Russia's Main Directorate of the General Staff of the Armed Forces (GRU), notorious for the Democratic National Committee hack and for operations supporting the war in Ukraine, the group used LLMs for basic scripting tasks (file manipulation, data selection, multiprocessing and so on) as well as for intelligence gathering and research into satellite communications protocols and radar imaging technologies, likely as they pertain to the ongoing war in Ukraine.
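
To give a sense of how mundane this kind of LLM-assisted work is, the "basic scripting" described above might look something like the following minimal Python sketch, which filters records from a set of JSON files in parallel. This is purely illustrative; the file layout and filter condition are hypothetical, not the group's actual tooling.

    # Illustrative only: routine scripting of the sort described above.
    # The directory name and filter condition are hypothetical.
    import json
    from multiprocessing import Pool
    from pathlib import Path

    def select_records(path):
        """Load one JSON file and keep only records matching a simple condition."""
        with path.open(encoding="utf-8") as f:
            records = json.load(f)
        return [r for r in records if r.get("status") == "active"]

    if __name__ == "__main__":
        files = list(Path("collected").glob("*.json"))
        with Pool(processes=4) as pool:  # fan the per-file work out across processes
            per_file = pool.map(select_records, files)
        print(sum(len(r) for r in per_file), "records selected from", len(files), "files")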

Two Chinese state actors have recently used ChatGPT: Charcoal Typhoon (aka Aquatic Panda, ControlX, RedHotel, BRONZE UNIVERSITY) and Salmon Typhoon (also known as APT4, Maverick Panda).

The former put AI to work on both pre-compromise activity (gathering information on specific technologies, platforms and vulnerabilities, generating and refining scripts, and producing social engineering text in translated languages) and post-compromise activity (executing advanced commands, gaining deeper access to systems and taking control of them).

Salmon Typhoon primarily used LLMs as an intelligence tool, sourcing publicly available information on high-profile individuals, intelligence agencies, domestic and international politics, and more. It also attempted, largely without success, to abuse OpenAI's tools to help develop malicious code and research stealth tactics.

Iran's Crimson Sandstorm (Tortoiseshell, Imperial Kitten, Yellow Liderc) has used OpenAI to develop phishing material (emails pretending to come from an international development agency, for example, or from a feminist group) as well as code snippets to facilitate web scraping, perform tasks when users access an app, and so on.
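
The bar here is low: the kind of web-scraping snippet an LLM will readily produce is on the order of the generic sketch below. The URL and tag selector are placeholders, not anything taken from Crimson Sandstorm's tooling.

    # Illustrative only: a generic scraping snippet of the kind LLMs readily generate.
    # The URL and the <h2> selector are placeholders.
    import requests
    from bs4 import BeautifulSoup

    def scrape_headings(url):
        """Fetch a page and return the text of its <h2> headings."""
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        return [h.get_text(strip=True) for h in soup.find_all("h2")]

    if __name__ == "__main__":
        print(scrape_headings("https://example.com"))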

Finally, there is North Korea's Emerald Sleet (Kimsuky, Velvet Chollima), which, like the other APTs, turned to OpenAI for basic scripting tasks, phishing content generation, and research into publicly available information on vulnerabilities, as well as on experts, think tanks and government organizations concerned with defense issues and the country's nuclear weapons program.

AI isn’t changing the rules of the game (yet)

If these malicious uses of AI seem handy but not exactly the stuff of science fiction, there's a reason why.

“Threat actors who are effective enough to be tracked by Microsoft are likely already skilled at writing software,” explains Joseph Thacker, principal AI engineer and security researcher at AppOmni. “Generative AI is great, but it mostly helps humans be more efficient rather than make discoveries. I believe these threat actors are using LLMs to write code (like malware) faster, but it doesn’t have a noticeable impact because they already had malware. They still have malware. They may be able to be more efficient, but at the end of the day they’re not doing anything new yet.”

While careful not to overstate its impact, Thacker warns that AI still offers advantages to attackers. “Bad actors will likely be able to deploy malware at a larger scale or on systems they previously had no support for. LLMs are pretty good at translating code from one language or architecture to another. So I can see them converting their malicious code into new languages they weren’t previously proficient in,” he says.

Furthermore, “if a threat actor has found a new use case, it could still be hidden and not yet detected by these companies, so it’s not impossible. I’ve seen fully autonomous AI agents that can ‘hack’ and find real vulnerabilities, so if some bad actor developed something like that, it would be dangerous.”

For these reasons, he adds simply that companies should “remain vigilant. Continue to do the basic things well.”


