Countering voice fraud in the age of artificial intelligence

COMMENT
Three seconds of audio is enough to clone a voice. Vishing, or voice fraud, has quickly become a problem that many of us know all too well, affecting 15% of the population. Over three-quarters of victims end up losing money, making this the most profitable type of scam on a per-victim basis, according to the United States Federal Trade Commission (FTC).

When caller ID spoofing is combined with AI-powered deepfake technology, scammers can, at very low cost and at scale, mask their real numbers and locations and convincingly impersonate trusted organizations, such as a bank or local council, or even friends and family.

While artificial intelligence (AI) presents all sorts of new threats, the ability to spoof caller IDs remains the main entry point for sophisticated fraud. It also poses serious challenges for authenticating genuine calls. Let’s dive into the criminal world of caller ID spoofing.

What’s behind the rise in voice fraud?

The democratization of spoofing technology, such as spoofing apps, has made it easier for malicious actors to impersonate legitimate caller IDs, leading to an increase in fraudulent activity conducted via voice calls. One journalist, who describes herself as rational and meticulous by nature, fell victim to a sophisticated scam that exploited her fear for her family’s safety. Initially contacted via a fictitious call appearing to come from Amazon, she was transferred to someone posing as an FTC investigator, who convincingly presented her with a fabricated story involving identity theft, money laundering, and threats to her safety.

These stories are becoming more and more common. People are quick to be skeptical of a hidden, international, or unknown number, but if they see a legitimate company’s name appear on their phone, they’re far more likely to answer the call and engage.

In addition to spoofing, we are also seeing an increase in AI-generated audio deepfakes. Last year in Canada, criminals scammed elderly victims out of more than 200,000 dollars by using artificial intelligence to imitate the voices of loved ones in distress. A mother in the US state of Arizona also received a desperate call from her 15-year-old daughter claiming she had been kidnapped; the voice turned out to be generated by artificial intelligence. When combined with caller ID spoofing, these deepfakes can be nearly impossible for the average person to catch.

As generative AI tools become more accessible, this type of fraud is becoming more common. Cybercriminals don’t necessarily need to make direct contact to replicate a person’s voice because, according to McAfee, more than half of people willingly share their voice in some form at least once a week on social media. Nor do they need exceptional digital skills, since apps do the hard work of cloning a voice from a short audio clip, as highlighted recently by high-profile deepfakes of US President Joe Biden and singer Taylor Swift.

Entire organizations can fall prey to voice fraud, not just individuals. All it takes is for a threat actor to convince an employee to share some seemingly insignificant details about their business over the phone; a cybercriminal can then connect the dots and gain access to sensitive data. This is a particularly concerning trend in industries where voice communication is a key component of customer interaction, such as banking, healthcare, and government services. Many businesses rely on voice calls to verify identity and authorize transactions, which makes them particularly vulnerable to AI-generated voice fraud.

What can we do about it?

Regulators, industry bodies and businesses are increasingly recognizing the need for collective action against voice fraud. This could include sharing intelligence to better understand scam patterns across regions and sectors, developing industry-wide standards to improve the security of voice calls, and introducing stricter reporting rules for network operators.

Regulators around the world are now tightening rules related to AI-based voice fraud. For example, the United States Federal Communications Commission (FCC) has made robocalls that use AI-generated or pre-recorded voices illegal. In Finland, the government has imposed new obligations on telecom operators to protect against caller ID spoofing and to stop scam calls from reaching recipients. The EU is studying similar measures, pushed mainly by banks and other financial institutions that want to keep their customers safe. In all cases, efforts are underway to close the door on caller ID spoofing and smishing (fake text messages), which often serve as an entry point for more sophisticated AI-based tactics.

Many promising detection tools in development could, in theory, dramatically reduce voice fraud: voice biometrics, deepfake detectors, AI-based anomaly detection, blockchain, signaling firewalls, and more. However, cybercriminals are adept at outmaneuvering new defenses, so only time will tell which will work best.
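To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn’s IsolationForest. The per-call features and values below are invented for illustration; real carrier-grade systems operate on far richer signaling data than this.

```python
# Minimal sketch: flag anomalous call patterns with an unsupervised model.
# The features are hypothetical, chosen only to illustrate the approach.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-call features:
# [duration (s), calls from this caller ID in the last hour,
#  mismatch score between claimed caller ID and observed network route]
normal_calls = np.array([
    [120, 1, 0.00],
    [300, 2, 0.10],
    [45,  1, 0.00],
    [600, 1, 0.05],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_calls)

# A burst of short calls whose claimed caller ID doesn't match its route --
# the typical signature of spoofed robocalling.
suspect = np.array([[15, 40, 0.9]])
print(model.predict(suspect))  # -1 means the call is flagged as anomalous
```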

For businesses of all sizes and industries, cybersecurity capabilities will become an increasingly important criterion when choosing telecommunications services. Beyond the network layer, companies should establish clear policies and processes, such as multi-factor authentication that combines several independent verification methods.
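As one illustration of such a verification method, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) check using only Python’s standard library. A call-centre workflow could require this code in addition to the caller’s voice, so a cloned voice alone proves nothing; the shared secret and the surrounding enrolment flow are assumptions, not drawn from the article.

```python
# Minimal TOTP (RFC 6238) sketch: a second factor that a cloned voice
# cannot supply. Secret enrolment and storage are out of scope here.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32, submitted):
    # Constant-time comparison avoids leaking digits via timing.
    return hmac.compare_digest(totp(secret_b32), submitted)

secret = "JBSWY3DPEHPK3PXP"  # hypothetical shared secret, for illustration
print(verify(secret, totp(secret)))  # True
```

Because the code changes every 30 seconds, a recording of a previous call is useless on its own; that is the property that makes it a useful complement to voice-based checks.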

Companies should also raise awareness of common fraud tactics. Regular employee training should focus on recognizing and responding to scams, while customers should be encouraged to report suspicious calls.

At the consumer level, the UK communications regulator, Ofcom, revealed that more than 41 million people were targeted by suspicious calls or texts in a three-month period in 2022. In other words, although brands and governments have reiterated the message that legitimate companies will never ask for money or sensitive information over the phone, continuous vigilance is required.

The easy availability of cloning tools and soaring levels of crime have prompted experts such as the Electronic Frontier Foundation to suggest that people should agree on a family password to combat AI-based fraud attempts. It’s a surprisingly low-fi solution to a high-tech challenge.
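For a sense of how the same shared-secret principle looks in software, here is a purely hypothetical sketch of how an app might store and check an agreed passphrase without keeping it in plain text. The EFF’s advice, of course, concerns a spoken password between family members; every name below is invented for illustration.

```python
# Hypothetical sketch: store a salted hash of the agreed passphrase and
# check submissions in constant time. Illustrative only.
import hashlib, hmac, os

def enroll(passphrase):
    """Derive a salted hash of the agreed passphrase for storage."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return salt, digest

def check(passphrase, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = enroll("blue giraffe umbrella")
print(check("blue giraffe umbrella", salt, digest))  # True
print(check("guessed phrase", salt, digest))         # False
```

The design point is the same as the verbal version: an attacker who can clone a voice still cannot produce a secret they have never heard.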


