Why criminals like artificial intelligence for synthetic identity fraud

As generative AI technology becomes more widely available, cybercriminals are likely to take advantage of it to improve their synthetic identity fraud capabilities. Unfortunately, experts say, current fraud detection tools are unlikely to be sufficient against this growing threat, which could drive significant financial losses in the coming years.

Synthetic identity fraud combines stolen or fabricated personal information to create an individual who exists only digitally. The information can include attributes belonging to real people, such as birth dates and Social Security numbers, alongside spoofed details such as email addresses and phone numbers.
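To make that definition concrete, here is a minimal, purely illustrative Python sketch (not drawn from the article) of how a fraud team might model such a composite record and flag it; the field names, verification checks, and thresholds are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class IdentityRecord:
    """The attributes an applicant presents; field names are illustrative."""
    full_name: str
    date_of_birth: str   # may belong to a real person
    ssn: str             # often stolen from a real person
    email: str           # frequently newly created or spoofed
    phone: str           # frequently a VoIP or just-provisioned number

def looks_synthetic(record: IdentityRecord, checks: dict[str, bool]) -> bool:
    """Rough heuristic: real attributes that do not cohere with one another,
    plus freshly created contact points, is a classic synthetic-identity
    signature. `checks` holds results of external verifications (hypothetical names)."""
    signals = [
        not checks.get("ssn_matches_name", True),   # SSN issued to someone else
        not checks.get("email_established", True),  # brand-new mailbox
        checks.get("phone_is_voip", False),         # disposable number
    ]
    return sum(signals) >= 2

# Example: a real SSN that does not match the name plus a week-old mailbox.
applicant = IdentityRecord("Jane Doe", "1990-04-12", "123-45-6789",
                           "jdoe.new@example.com", "+1-555-0100")
print(looks_synthetic(applicant, {"ssn_matches_name": False,
                                  "email_established": False}))  # True
```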

This type of fraud has grown so quickly that many cybersecurity professionals wonder how soon technology will be available to counter the threat. A Wakefield Research survey of 500 fraud and risk professionals conducted last fall found that 88% of respondents believe AI-generated fraud will get worse before new technology emerges to prevent it.

Simple technology, low barrier to entry

Cybercriminals have turned to generative artificial intelligence to create deepfake videos and voiceprints of real people to defraud companies, says Matt Miller, head of cybersecurity services at KPMG US. The rise of large language models (LLMs) and similar AI technologies has made it easier and cheaper for cybercriminals to generate convincing fake images.

Cybercriminals’ use of generative AI varies with their level of sophistication, says Ari Jacoby, founder and CEO of Deduce. In the past, attackers had to write their own scripts or hire a software developer to write attack scripts for them. With the advent of generative AI, they can turn to these tools to produce a malicious script quickly and cheaply.

An attacker can instruct a generative AI application to “create an accurate New York driver’s license,” and the tool can fabricate the document using photos of real people that are readily available online, Jacoby says, noting that existing defenses intended to prevent ID counterfeiting will be “crushed” by generative artificial intelligence.

“If you want to use data that already exists for almost everyone to create a selfie, it’s not difficult,” he says. “There’s a huge group of bad guys, bad people out there, who are now using this kind of AI as a weapon to accelerate the rate at which they can commit crimes. This is the low end of the spectrum. Imagine what’s happening at the high end of the spectrum, with organized crime and enormous financial resources.”

There are also copycat versions of AI tools like ChatGPT available on the Dark Web, says Nathan Richter, senior partner at Wakefield Research.

Getting worse before it gets better

The Wakefield Research survey data show that organizations are already being affected by the rise of synthetic identity fraud. According to the report, 76% of respondents believe their organization has customers using synthetic identities who have been approved for accounts. The fraud and risk professionals surveyed also estimate that synthetic identity fraud has increased by an average of 17% over the past 24 months.

Nearly a quarter (23%) of respondents put the average cost of a synthetic fraud incident at between $10,000 and $25,000. Another fifth estimate that such incidents cost between $50,000 and $100,000. For financial companies, the cumulative impact of synthetic identity fraud could be substantial.

Many cybersecurity professionals believe the problem of synthetic identity fraud will get worse before it gets better. The Deloitte Center for Financial Services predicts that synthetic identity fraud could lead to $23 billion in losses by 2030.

The willingness among respondents to discuss the issue suggests that synthetic identity fraud is becoming more pervasive, Richter says.

“Typically, when you research an audience of highly trained professionals, there is a certain amount of professional pride that makes it difficult to admit any kind of fault or problem,” Richter says. “Here, for the most part, we don’t have this problem. We have interviewees who readily admit that it is a huge problem. It results in significant losses per incident and is expected to get worse before it gets better. I can tell you, as a researcher, this is extremely rare.”

Fighting back against cyber fraud

Addressing this problem requires companies to take a multi-layered approach, says Mark Nicholson, head of cyber and strategic risk at Deloitte. Part of the solution involves using artificial intelligence and behavioral analytics to distinguish between real customers and scammers.

In addition to verifying a customer’s identity at a specific point in time, companies, particularly in the financial services industry, need to understand customer behaviors over a longer period and continue to authenticate them during those interactions, Nicholson says. In addition to behavioral analytics, companies are evaluating other options, such as leveraging biometrics, third-party data, fraud data sources, risk assessors, and session monitoring tools.
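As a rough illustration of the layered approach Nicholson describes, the following Python sketch combines a point-in-time document check with ongoing behavioral and session signals into a single risk score and steps up authentication when the score crosses a threshold. The signal names, weights, and thresholds are invented for illustration and do not represent any vendor’s actual scoring model.

```python
# Illustrative weights for fraud signals; real deployments tune these from data.
SIGNAL_WEIGHTS = {
    "document_check_failed": 0.40,   # point-in-time identity verification
    "behavior_deviates":     0.25,   # typing cadence or navigation differs from history
    "device_unrecognized":   0.15,   # device / session fingerprint mismatch
    "data_consortium_hit":   0.20,   # match in a third-party fraud data source
}

def session_risk(signals: dict[str, bool]) -> float:
    """Sum the weights of the signals that fired; returns a score in [0, 1]."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name, False))

def next_action(signals: dict[str, bool]) -> str:
    """Map the combined score to a decision; thresholds are assumptions."""
    score = session_risk(signals)
    if score >= 0.6:
        return "block_and_review"        # route to a fraud analyst
    if score >= 0.3:
        return "step_up_authentication"  # e.g. biometric or out-of-band challenge
    return "allow"

# Example: odd in-session behavior plus a consortium hit triggers re-authentication.
print(next_action({"behavior_deviates": True, "data_consortium_hit": True}))
```

The point of the sketch is the design choice Nicholson describes: no single check decides the outcome, and the score keeps being recomputed as the session unfolds rather than only at onboarding.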

“Just as we face zero-days and patch our applications, we will need to understand how generative AI is being used on an ongoing basis and adapt as quickly as possible in response,” says Nicholson. “There’s no silver bullet, I don’t think. And it’s going to take a concerted effort from everyone involved.”

In addition to their cybersecurity tools, companies also need to evaluate the human risk factors that have emerged with the rise of generative artificial intelligence and synthetic identity fraud and begin training employees to spot those risks, Miller says. Companies need to understand where their processes are susceptible to human error.

“Can your leadership call your Treasury Department and move money with just one phone call? If your CEO or CFO were deepfaked, could that result in a financial loss?” Miller says. “Look at some of these process controls and put counterbalances in place where necessary.”
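As a purely hypothetical example of the kind of counterbalance Miller suggests, the sketch below encodes a policy in which no single phone call, however convincing the voice, can move funds above a threshold. The threshold, field names, and callback step are illustrative assumptions rather than an established standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferRequest:
    amount: float
    requested_via: str              # "phone", "email", or "portal"
    callback_verified: bool         # treasury called back on a known-good number
    second_approver: Optional[str]  # independent approver, never the requester

def approve_transfer(req: TransferRequest, threshold: float = 50_000) -> bool:
    """A voice that sounds like the CEO is never sufficient on its own above the
    threshold: require a verified callback plus a second, independent approval."""
    if req.requested_via == "portal" and req.amount < threshold:
        return True
    return req.callback_verified and req.second_approver is not None

# A convincing deepfake phone call alone gets nothing moved.
urgent_call = TransferRequest(250_000, "phone",
                              callback_verified=False, second_approver=None)
print(approve_transfer(urgent_call))  # False
```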

The Biden administration’s executive order introducing new standards for the safety and security of AI is a good first step, but more regulation is needed to safeguard the public. While tech companies are pushing for self-regulation, that may not be enough to address the growing threat of artificial intelligence, Jacoby says, adding that self-regulation has not benefited consumers in the past.

“I don’t think the talking heads on Capitol Hill understand all the ramifications, nor should we expect them to in the first innings of this game,” Jacoby says. “It’s very difficult to regulate these things.”

In addition to regulatory and policy controls, Miller says he envisions technological controls that ensure AI can only be used in ways stakeholders deem appropriate. While those guardrails are worked out, however, companies must remain vigilant, because digital adversaries can build their own models and infrastructure to carry out fraud.

Ultimately, AI companies will have to play a role in mitigating the risks associated with the technology they have created.

“It’s up to the institutions that provide this technology not just to understand it, but to really understand the risks associated with it, be able to educate on proper use, and be able to police their own platforms,” Miller says. “Historically we’ve always talked about it in cyberspace as spy versus spy, but in many cases now we’re seeing AI versus AI.”


