More than one in four companies now ban their employees from using generative AI. But this does little to protect against criminals who use it to trick employees into sharing sensitive information or paying fraudulent invoices.
Armed with ChatGPT or its dark web equivalent, FraudGPT, criminals can easily create realistic profit and loss statements, fake IDs and false identities, and even convincing deepfakes of a business executive built from their voice and image.
The statistics are sobering. In a recent survey by the Association for Financial Professionals, 65% of respondents said their organizations were victims of attempted or actual payment fraud in 2022. Of those that lost money, 71% were compromised via email. Larger organizations with annual revenue of $1 billion were the most susceptible to email scams, according to the survey.
Among the most common email scams are phishing emails. These fraudulent messages appear to come from a trusted source, such as Chase or eBay, and ask people to click on a link that leads to a fake but convincing-looking site. The site asks the potential victim to log in and provide personal information. Once criminals have that information, they can access bank accounts or even commit identity theft.
Spear phishing is similar but more targeted. Instead of generic messages, the emails are addressed to a specific individual or organization. The criminals may have researched the target’s job title, the names of co-workers, and even the name of a supervisor or manager.
The old scams are getting bigger and better
These scams are nothing new, of course, but generative AI makes it harder to tell what’s real and what’s not. Until recently, wonky fonts, odd writing, or grammatical errors made them easy to spot. Now, criminals anywhere in the world can use ChatGPT or FraudGPT to craft convincing phishing and spear phishing emails. They can even impersonate a CEO or other company manager, hijacking their voice for a fake phone call or their image in a video call.
That is what happened recently in Hong Kong, when a finance employee received what he thought was a message from the company’s UK-based chief financial officer requesting a transfer of $25.6 million. Although he initially suspected a phishing email, his fears were allayed after a video call with the CFO and other colleagues he recognized. As it turned out, everyone on the call was a deepfake. Only after checking with headquarters did he discover the deception. By then, the money had been transferred.
“The work that has gone into making them credible is actually quite impressive,” said Christopher Budd, director at cybersecurity firm Sophos.
Recent high-profile deepfakes involving public figures show how quickly the technology has evolved. Last summer, a fake investment scheme circulated videos of a deepfaked Elon Musk promoting a nonexistent platform. There were also deepfake videos of CBS News anchor Gayle King, former Fox News host Tucker Carlson, and talk show host Bill Maher, all purportedly discussing Musk’s new investment platform. These videos circulated on social platforms such as TikTok, Facebook and YouTube.
“It’s increasingly easy for people to create synthetic identities, using either stolen information or information invented with generative artificial intelligence,” said Andrew Davies, global head of regulatory affairs at ComplyAdvantage, a regulatory technology company.
“There is so much information available online that criminals can use to create very realistic phishing emails. Large language models are trained on the internet; they know about the company, the CEO and the CFO,” said Cyril Noel-Tagoe, principal security researcher at Netacea, a cybersecurity company focused on automated threats.
Larger companies at risk in the world of APIs and payment apps
While generative AI makes threats more credible, the scale of the problem is growing larger thanks to automation and the growing number of websites and apps that handle financial transactions.
“One of the real catalysts for the evolution of fraud and financial crime in general is the transformation of financial services,” Davies said. Just ten years ago, there were few ways to move money electronically, and most involved traditional banks. The explosion of payment solutions – PayPal, Zelle, Venmo, Wise and others – has broadened the playing field, giving criminals more places to attack. Traditional banks also increasingly rely on APIs, or application programming interfaces, to connect apps and platforms, creating yet another potential point of attack.
Criminals use generative AI to quickly create credible messages, then use automation to scale up. “It’s a numbers game. If I’m going to run 1,000 spear phishing emails or CEO fraud attacks and one in 10 of them works, it could be worth millions of dollars,” Davies said.
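The arithmetic behind that “numbers game” is easy to sketch. Below is a minimal, illustrative calculation; the average payout per successful scam is a hypothetical assumption, not a figure from the article or the survey.

```python
# Illustrative back-of-the-envelope math for a spear phishing campaign.
# All figures are assumptions for illustration, not reported data.
emails_sent = 1_000      # messages in one automated campaign
success_rate = 0.10      # "one in 10 works"
avg_payout = 50_000      # assumed average loss per successful scam, in USD

expected_take = emails_sent * success_rate * avg_payout
print(f"Expected criminal proceeds: ${expected_take:,.0f}")  # $5,000,000
```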
According to Netacea, 22% of companies surveyed said they had been attacked by bots that create fake accounts. In the financial services sector, the figure rose to 27%. Of the companies that detected an automated bot attack, 99% said they saw the number of attacks increase in 2022. Larger companies were most likely to see a significant rise, with 66% of companies with $5 billion or more in revenue reporting a “significant” or “moderate” increase. And while all industries said they were affected by fake accounts, financial services was hit hardest: 30% of the financial services firms attacked said 6% to 10% of new accounts are fake.
The financial industry is fighting AI-powered fraud with its own AI models. Mastercard recently said it has created a new artificial intelligence model to help detect fraudulent transactions by identifying “mule accounts” used by criminals to move stolen funds.
Criminals are increasingly using impersonation tactics to convince victims that a transfer is legitimate and going to a real person or company. “Banks have found these scams incredibly difficult to detect,” Ajay Bhalla, president of cyber and intelligence at Mastercard, said in a statement in July. “Their customers pass all the required checks and send the money themselves; criminals haven’t needed to break any security measures,” he said. Mastercard estimates that its algorithm can help banks save money by reducing the costs they typically incur rooting out fraudulent transactions.
More detailed identity analysis is needed
Some particularly motivated attackers may have inside information. Criminals have become “very, very sophisticated,” Noel-Tagoe said, but added, “they’re not going to know exactly the inner workings of your company.”
It may be impossible to know right away whether a money transfer request from the CEO or CFO is legitimate, but employees can find ways to verify it. Companies should have specific procedures for transferring money, Noel-Tagoe said. So if money transfer requests normally come through an invoicing platform rather than email or Slack, treat a request that arrives another way with suspicion and verify it through a separate channel.
Another way companies are trying to distinguish real identities from deepfake ones is through a more detailed authentication process. At the moment, digital identity companies often require ID and perhaps a real-time selfie as part of the process. Soon, companies may ask people to blink, say their name, or take some other action to distinguish between real-time video and something prerecorded.
It will take time for companies to adapt, but for now, cybersecurity experts say generative AI is driving a wave of highly convincing financial scams. “I’ve been working in technology for 25 years at this point, and this rise of AI is like putting jet fuel on the fire,” said Sophos’ Budd. “It’s something I’ve never seen before.”