Microsoft, OpenAI warn of nation-state hackers using AI as weapon for cyberattacks

February 14, 2024 | Pressroom | Artificial Intelligence / Cyber Attack


State actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyberattack operations.

The findings come from a report published by Microsoft in collaboration with OpenAI, both of which say they disrupted the efforts of five state-affiliated actors that used OpenAI's services for malicious cyber activity by terminating the actors' assets and accounts.

“Language support is a natural feature of LLMs and is attractive to threat actors with a continued focus on social engineering and other techniques that rely on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships,” Microsoft said in a report shared with The Hacker News.

While no significant or novel attacks employing LLMs have been detected to date, adversarial exploration of AI technologies has spanned various phases of the attack chain, such as reconnaissance, coding assistance, and malware development.

“These actors generally sought to use OpenAI services to query open source information, translate, find coding errors, and perform basic coding tasks,” the AI firm said.


For example, the Russian nation-state group tracked as Forest Blizzard (aka APT28) is said to have used OpenAI's offerings to conduct open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

Some of the other notable hacker groups are listed below:

  • Emerald Sleet (aka Kimsuky), a North Korean threat actor that used LLMs to identify experts, think tanks, and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
  • Crimson Sandstorm (aka Imperial Kitten), an Iranian threat actor that used LLMs to create code snippets related to app and web development, generate phishing emails, and research common ways malware can evade detection.
  • Charcoal Typhoon (aka Aquatic Panda), a Chinese threat actor that used LLMs to research various companies and vulnerabilities, generate scripts, create content that could be used in phishing campaigns, and identify techniques for post-compromise behavior.
  • Salmon Typhoon (aka Maverick Panda), a Chinese threat actor that used LLMs to translate technical documents, retrieve publicly available information on multiple intelligence agencies and regional threat actors, fix coding errors, and find stealth tactics to evade detection.
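
Neither company details how this kind of misuse is spotted. Purely as an illustration of the idea, the Python sketch below triages prompts against the behaviors listed above using naive keyword patterns; the ABUSE_PATTERNS table and flag_prompt helper are hypothetical and bear no relation to either company's actual tooling.

```python
# Hypothetical sketch of a prompt-triage heuristic. Microsoft's and OpenAI's
# real detection systems are not public; the categories below simply mirror
# the behaviors reported above, and every name here is made up.
import re

# Illustrative patterns keyed to the reported behaviors: open-source
# reconnaissance, phishing-lure drafting, and detection-evasion research.
ABUSE_PATTERNS = {
    "reconnaissance": re.compile(
        r"satellite communications?|radar imaging|defense (experts|think tanks)", re.I),
    "phishing": re.compile(
        r"(draft|write).{0,40}(spear.?phish|phishing|lure) email", re.I),
    "evasion": re.compile(
        r"(evade|bypass).{0,30}(antivirus|edr|detection)", re.I),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the abuse categories, if any, that a prompt appears to match."""
    return [name for name, pattern in ABUSE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    sample = "Write a phishing email that looks like a conference invitation"
    # A production system would weigh far more signals than keyword matches,
    # such as account history and cross-prompt behavior.
    print(flag_prompt(sample))  # -> ['phishing']
```

In practice, signature-style matching like this is only a first-pass filter; the report's emphasis on account-level disruption suggests decisions are made from broader behavioral context rather than individual prompts.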

Microsoft said it is also formulating a set of principles to mitigate the risks posed by the malicious use of AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates, and to design effective protections and safety mechanisms around its models.

“These principles include identifying and taking action against malicious threat actors’ use, notifying other AI service providers, collaborating with other stakeholders, and transparency,” Redmond said.
