Deepfakes in the 2024 global election year: a weapon of mass deception?

As fictional images, videos and audio clips of real people become mainstream, the prospect of an AI-fueled fire hose of misinformation is a growing concern.

Fake news has dominated election headlines ever since it became big news during the race for the White House in 2016. But eight years later there’s an arguably bigger threat: a combination of misinformation and deepfakes that could fool even the experts. Recent examples of election-themed AI-generated content – including a series of images and videos circulating in the run-up to Argentina’s presidential election and manipulated audio of US President Joe Biden – were most likely harbingers of what is to come on a much larger scale.

With around a quarter of the world’s population set to go to the polls in 2024, concerns are growing that malicious actors could use disinformation and AI-based deception to influence the outcomes, with many experts fearing the consequences of deepfakes spreading widely.

The threat of deepfake misinformation

As noted above, no fewer than two billion people are set to head to their local polling stations this year to vote for their preferred representatives and heads of state. With major elections scheduled in countries including the US, the UK and India (as well as for the European Parliament), the results have the potential to shape the political landscape and the direction of geopolitics for years to come.

At the same time, however, misinformation and disinformation were recently ranked by the World Economic Forum (WEF) as the number one global risk over the next two years.

The challenge with deepfakes is that AI-based technology is becoming cheap, accessible and powerful enough to cause harm at scale. It democratizes the ability of cybercriminals, state actors and hacktivists to launch convincing disinformation campaigns, as well as more ad hoc, one-off scams. The WEF’s ranking, based on the views of 1,490 experts from academia, business, government, the international community and civil society, also lists disinformation as the second-biggest current risk, behind only extreme weather.

The report warns: “Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years… there is a risk that some governments will act too slowly, facing a trade-off between preventing disinformation and protecting freedom of speech.”


(Deep)faking it

The challenge is that tools such as ChatGPT and freely accessible generative AI (GenAI) systems have enabled a far wider range of individuals to create disinformation campaigns driven by deepfake technology. With all the hard work done for them, malicious actors have more time to work on their messaging and on amplification efforts to make sure their fake content is seen and heard.

In an electoral context, deepfakes could obviously be used to erode voter confidence in a particular candidate. After all, it is easier to convince someone not to do something than the other way around. If supporters of a political party or candidate can be suitably swayed by doctored audio or video, that would be a clear win for rival groups. In some situations, rogue states may seek to undermine faith in the entire democratic process, so that whoever wins will struggle to govern with legitimacy.

At the heart of the challenge is a simple truth: when humans process information, they tend to value quantity and ease of understanding. The more often we see content carrying a similar message, and the easier that message is to understand, the more likely we are to believe it. That is why marketing campaigns tend to consist of short, continuously repeated messages. Add the fact that deepfakes are becoming increasingly difficult to distinguish from real content, and you have a potential recipe for democratic disaster.

From theory to practice

What is worrying is that deepfakes are likely to have an impact on voter sentiment. Take this recent example: in January 2024, an audio deepfake of US President Joe Biden was disseminated via robocall to an unknown number of primary voters in New Hampshire. The message apparently told them not to turn out, and instead to “save your vote for the November elections.” The caller ID displayed was also spoofed to make it appear as though the automated message had been sent from the personal number of Kathy Sullivan, a former state Democratic Party chair who now runs a pro-Biden super PAC.

It is not difficult to see how calls like these could be used to dissuade voters from turning out for their preferred candidate ahead of November’s presidential election. The risk is particularly acute in closely contested races, where shifting a small number of voters from one side to the other can decide the outcome. With perhaps only tens of thousands of voters in a handful of swing states likely to determine the result, a targeted campaign of this kind could do untold damage. To make matters worse, because the message in the case above spread via robocalls rather than social media, it is even harder to track or measure its impact.

What are tech companies doing about it?

Both YouTube and Facebook are said to have been slow to respond to some deepfakes designed to influence recent elections. That is despite a new EU law (the Digital Services Act) that requires social media companies to crack down on attempts at election manipulation.

For its part, OpenAI has said it will implement the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials for images generated by DALL-E 3. The cryptographic provenance technology – also championed by Meta and Google – is designed to make it harder for fake images to pass as authentic.
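To make the idea concrete, below is a minimal sketch of how a platform or fact-checker might inspect an image for C2PA credentials before trusting it. It is a sketch only: it assumes the open-source c2patool command-line utility from the Content Authenticity Initiative is installed and on the PATH, and the JSON field names shown (“active_manifest”, “claim_generator”) are illustrative, as they may vary between tool versions.

```python
import json
import subprocess
import sys


def read_c2pa_manifest(image_path: str):
    """Dump the C2PA manifest store embedded in an image, if any.

    Assumes the c2patool CLI is installed and on the PATH; it prints
    the manifest store as JSON and exits non-zero when the file
    carries no provenance data.
    """
    try:
        result = subprocess.run(
            ["c2patool", image_path],
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        sys.exit("c2patool not found; install it from the Content Authenticity Initiative.")
    if result.returncode != 0:
        return None  # no C2PA manifest, or an unsupported file type
    return json.loads(result.stdout)


if __name__ == "__main__":
    store = read_c2pa_manifest(sys.argv[1])
    if store is None:
        print("No C2PA credentials found: treat the image's origin as unverified.")
    else:
        # The active manifest records which tool generated the image and
        # who signed the claim; field names are illustrative and may
        # differ between c2patool versions.
        active = store.get("active_manifest", "")
        manifest = store.get("manifests", {}).get(active, {})
        print("Claim generator:", manifest.get("claim_generator", "unknown"))
```

Note that such credentials only help when they survive: re-encoding or screenshotting an image typically strips the embedded metadata, which is one reason such measures are, as noted below, still only small steps.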

However, these are still only small steps, and there are justified concerns that the technological response to the threat will be too little, too late as election fever grips the world. Fake audio or video will be difficult to track and debunk quickly, especially when it spreads in relatively closed networks such as WhatsApp groups, or via robocalls.

The “anchoring bias” theory suggests that the first information humans hear is what sticks in our minds, even if it turns out to be false. If deepfakers manage to reach voters first, all bets are off as to who the eventual winner will be. In the age of social media and AI-fueled misinformation, Jonathan Swift’s dictum that “falsehood flies, and the truth comes limping after it” takes on a whole new meaning.
