A new cyber-attack method called “Conversation Overflow” has emerged, attempting to slip credential-harvesting phishing emails past artificial intelligence (AI)- and machine learning (ML)-enabled security platforms.
The emails slip past AI/ML-based threat detection through the use of hidden text designed to mimic legitimate communication, according to threat researchers at SlashNext, who published an analysis of the tactic today. They noted that it is being used in a series of attacks in what appears to be a testing exercise: bad actors probing for ways to bypass advanced cyber defenses.
Unlike traditional security checks, which rely on detecting “known bad” signatures, AI/ML algorithms identify deviations from “known good” communication.
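The contrast can be illustrated with a toy sketch. All phrase lists, function names, and scoring here are illustrative assumptions for the sake of the example, not any vendor's actual detection logic:

```python
# Toy contrast: "known bad" signature matching vs. a naive "known good"
# similarity score. Illustrative only -- real AI/ML filters use trained
# models, not phrase lists.

KNOWN_BAD_SIGNATURES = [
    "reauthenticate your password",
    "verify your login now",
]

KNOWN_GOOD_PHRASES = [
    "thanks for the update",
    "see you at the meeting",
    "attached is the report",
]

def signature_check(body: str) -> bool:
    """Traditional check: flag if any known-bad string appears."""
    lowered = body.lower()
    return any(sig in lowered for sig in KNOWN_BAD_SIGNATURES)

def known_good_score(body: str) -> float:
    """Naive 'known good' score: fraction of benign phrases present.
    A high score makes an anomaly-style filter treat the mail as normal."""
    lowered = body.lower()
    hits = sum(phrase in lowered for phrase in KNOWN_GOOD_PHRASES)
    return hits / len(KNOWN_GOOD_PHRASES)

phish = "Please reauthenticate your password here: http://example.test"
# Conversation Overflow layout: lure, then blank padding, then benign filler.
padded = phish + "\n" * 80 + "thanks for the update. attached is the report."

print(signature_check(phish))    # True -- the signature filter catches it
print(known_good_score(padded))  # 2 of 3 benign phrases present (~0.67)
```

The point of the attack is the second number: by padding the message with “known good” filler, the attackers raise its resemblance to normal traffic even though the malicious lure is unchanged.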
The attack works like this: Cybercriminals craft emails with two distinct parts: a visible section that asks the recipient to click on a link or send information, and a hidden portion containing innocuous text intended to fool AI/ML algorithms by mimicking “known good” communication.
The goal is to convince security controls that the message is a normal exchange, with attackers betting that humans won’t scroll four blank pages down to find the unrelated fake conversation intended only for AI/ML eyes.
This allows attackers to trick systems into classifying the entire email and any subsequent replies as safe, thus allowing the attack to reach users’ inboxes.
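A defensive heuristic for this specific layout might simply flag messages where text resumes after an unusually long run of blank lines. The following is a minimal sketch, with a function name and threshold chosen arbitrarily for illustration:

```python
def suspicious_padding(body: str, max_blank_run: int = 20) -> bool:
    """Flag email bodies where visible text is separated from trailing
    content by a long run of blank lines -- the 'Conversation Overflow'
    layout of a lure followed by pages of hidden benign filler.
    The threshold is an illustrative assumption, not a tuned value."""
    run = 0          # current run of consecutive blank lines
    seen_text = False
    for line in body.splitlines():
        if line.strip():
            # Text resuming after a long blank gap is the telltale sign.
            if seen_text and run > max_blank_run:
                return True
            seen_text = True
            run = 0
        else:
            run += 1
    return False

overflow = "Click here to verify your account\n" + "\n" * 200 + \
           "thanks, see you at the meeting on Monday"
normal = "Hi team,\n\nMeeting moved to 3pm.\n\nThanks"

print(suspicious_padding(overflow))  # True
print(suspicious_padding(normal))    # False
```

A real filter would combine a signal like this with content analysis of both parts, since legitimate emails (long signatures, reply chains) can also contain whitespace gaps.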
Once these attacks bypass security measures, cybercriminals can use the same email conversation to send authentic-looking messages asking executives to reauthenticate passwords and logins, making it easier to steal credentials.
Exploiting “known good” anomaly detection in ML
Stephen Kowski, SlashNext’s field CTO, says the emergence of Conversation Overflow attacks highlights the adaptability of cybercriminals to evade advanced security measures, particularly in the age of AI security.
“I only saw this attack style once in early 2023, but now I see it more often and in different environments,” he explains. “When I find them, they are targeting senior management and executives.”
He points out that phishing is a business, so attackers want to be efficient with their time and resources, targeting accounts with the most access or implied authority possible.
Kowski says this attack vector should be considered more dangerous than the average phishing attempt because it exploits weaknesses in new, highly effective technologies that companies may not fully understand. That leaves a gap cybercriminals can rush to exploit before IT departments address it.
“In fact, these attackers continually perform penetration tests on organizations for their own purposes, to see what will and will not work reliably,” he says. “Look at the huge increase in QR code phishing six months ago – they found a weakness in many tools and quickly tried to exploit it everywhere.”
And indeed, the use of QR codes to deliver malicious payloads increased in the fourth quarter of 2023, especially among executives, who experienced 42 times more QR code phishing than the average employee.
The emergence of such tactics suggests that constant vigilance is required – and Kowski emphasizes that no technology is perfect and there is no finish line.
“Once this threat is well understood and consistently mitigated, malicious actors will shift their focus to a different method,” he says.
Using AI to combat AI threats
Kowski advises security teams to respond by actively performing their own assessments and testing with tools to find “unknown unknowns” in their environments.
“They cannot assume that their preferred vendor or tool, while effective when they acquired it, will remain effective over time,” he warns. “We expect attackers to continue to be attackers, to innovate, pivot and change their tactics.”
He adds that attack techniques are likely to get more creative; as email becomes more secure, attackers are already shifting their strategies to target new environments, including SMS and Teams chat.
Kowski says investing in cybersecurity solutions that leverage ML and AI will be necessary to combat AI-based threats, because the volume of attacks is already too high, and constantly growing, to handle any other way.
“Security economics necessarily require investments in platforms that enable relatively expensive [human] resources to do more with less,” he says. “We rarely hear from security teams that they are hiring a bunch of new people to address these growing concerns.”