TA547 uses an LLM-generated dropper to infect German organizations

Proofpoint researchers recently observed a malicious campaign targeting dozens of organizations in various sectors in Germany. One part of the attack chain in particular stood out: an otherwise ordinary dropper malware, whose code was clearly generated by artificial intelligence (AI).

What researchers found: Initial access broker (IAB) TA547 uses AI-generated dropper in phishing attacks.

While it may be a harbinger of things to come, there's no reason to panic. The defense against malware is the same no matter who or what writes it, and AI-powered malware is unlikely to take over the world anytime soon.

“For the next few years, I don’t think malware coming from LLMs will be any more sophisticated than something a human will be able to write,” says Daniel Blackford, senior manager of threat research at Proofpoint. After all, AI aside, “We have very talented software engineers working against us.”

TA547 AI Dropper

TA547 has a long history of financially motivated cyberattacks. It came to prominence distributing Trickbot, but has since cycled through a handful of other popular cybercrime tools, including Gozi/Ursnif, Lumma stealer, NetSupport RAT, StealC, ZLoader, and others.

“We’re seeing, not just with TA547, but also with other groups, much faster iteration of development cycles, adoption of other malware, and experimenting with new techniques to see what will stick,” Blackford explains. And the latest evolution of TA547’s tooling seems to have come via artificial intelligence.

Its attacks began with short impersonation emails, for example masquerading as the German retail company Metro AG. The emails carried password-protected ZIP files that contained LNK shortcut files. The LNK files, when executed, triggered a PowerShell script that deployed the Rhadamanthys infostealer.
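The chain described above — an archive hiding a shortcut file that in turn launches a script — can be flagged early with simple triage. The sketch below is illustrative, not Proofpoint's tooling: it lists a ZIP archive's members and flags any LNK entries, using only Python's standard library. (A ZIP's central directory is not encrypted, so listing member names works even on password-protected archives.)

```python
import zipfile

def flag_lnk_members(zip_path: str) -> list[str]:
    """Return the names of .lnk entries inside a ZIP archive.

    Shortcut (.lnk) files inside email attachments are a common
    dropper vehicle, so their mere presence is worth flagging.
    Listing member names does not require the archive password.
    """
    with zipfile.ZipFile(zip_path) as zf:
        return [name for name in zf.namelist()
                if name.lower().endswith(".lnk")]

if __name__ == "__main__":
    # Build a small sample archive, then scan it.
    with zipfile.ZipFile("sample.zip", "w") as zf:
        zf.writestr("invoice.lnk", b"not a real shortcut")
        zf.writestr("readme.txt", b"hello")
    print(flag_lnk_members("sample.zip"))  # ['invoice.lnk']
```

In practice a mail gateway would combine this with other signals (password-protected archives, double extensions, sender reputation) rather than act on the extension alone.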

It seems simple enough, but the PowerShell script that dropped Rhadamanthys had a strange feature: above each individual component of the code sat a pound sign (#) followed by a hyperspecific comment about what that component did.

As Proofpoint noted, this is characteristic of code generated by large language models (LLMs), indicating that the group – or whoever originally wrote the dropper – used some sort of chatbot to write it.
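That tell — a comment above nearly every statement — lends itself to a crude heuristic. The function below is a hypothetical sketch, not Proofpoint's detection logic: it measures the ratio of comment lines to non-blank lines in a script. Both PowerShell and Python use `#` for line comments, so the same metric applies to either; hand-written droppers are rarely commented at all, while a ratio near 0.5 matches the comment-per-statement pattern described above.

```python
def comment_density(script: str, comment_char: str = "#") -> float:
    """Ratio of comment lines to all non-blank lines in a script.

    A value near 0.5 means roughly one comment per statement,
    the pattern Proofpoint observed in the LLM-generated dropper.
    """
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for ln in lines if ln.startswith(comment_char))
    return comments / len(lines)

# A made-up PowerShell-style snippet in the over-commented style:
suspicious = """\
# Download the payload from the remote server
$data = Invoke-WebRequest $url
# Decode the base64-encoded content
$bytes = [Convert]::FromBase64String($data.Content)
# Write the decoded bytes to a temporary file
Set-Content -Path $tmp -Value $bytes
"""

print(f"{comment_density(suspicious):.2f}")  # 0.50
```

On its own this would produce false positives on well-documented legitimate scripts, so it only makes sense as one weak signal among many.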

Is the worst AI malware on the way?

Like the rest of us, cyberattackers have discovered that AI-powered chatbots can help them achieve their goals more easily, quickly, and effectively.

Some have found small ways to use AI to improve their daily operations, for example by supporting research into targets and vulnerabilities. But aside from proofs of concept and the odd novelty tool, there isn’t much evidence that hackers are writing useful malware with the help of artificial intelligence.

This, Blackford explains, is because humans are still much better than robots at writing malicious code. Additionally, AI developers have taken steps to prevent misuse of their software.

At least for now, he says, “how these groups will exploit AI to scale their operations is a more interesting problem than the idea of creating some new super malware with it.”

And even if super malware is one day automatically generated, the task of defending against it will remain the same. As Proofpoint concluded in its post, “Just as LLM-generated phishing emails used to conduct business email compromise (BEC) exhibit the same characteristics as human-generated content and are caught by automated detection, malware or scripts that incorporate machine-generated code will still run the same way in a sandbox (or on a host), triggering the same automated defenses.”


