Disinformation is expected to be among the top cyber risks for the 2024 elections.
Britain is expected to face a barrage of state-backed cyberattacks and disinformation campaigns as it heads to the polls in 2024 — and artificial intelligence is a key risk, according to cyber experts who spoke to CNBC.
Britons will vote in local elections on May 2, while a general election is expected in the second half of this year, although British Prime Minister Rishi Sunak has not yet set a date.
The votes come as the country faces a host of problems, including a cost-of-living crisis and stark divisions over immigration and asylum.
“With the majority of UK citizens voting at polling stations on election day, I expect the majority of cybersecurity risks to emerge in the months leading up to the day itself,” Todd McKinnon, CEO of identity security company Okta, told CNBC via email.
It would not be the first time.
In 2016, the US presidential election and the UK Brexit vote were found to have been disrupted by disinformation shared on social media platforms, allegedly by groups affiliated with the Russian state, although Moscow denies these claims.
According to cyber experts, state actors have since carried out routine attacks in various countries to manipulate the outcome of elections.
Meanwhile, last week, the UK said the Chinese state-affiliated hacking group APT 31 had attempted to access the email accounts of British lawmakers, but that those attempts were unsuccessful. In response, London imposed sanctions on Chinese individuals and a Wuhan technology company believed to be a front for APT 31.
The United States, Australia and New Zealand followed with their own sanctions. China has denied the allegations of state-sponsored hacking, calling them “baseless.”
Cybercriminals using artificial intelligence
Cybersecurity experts expect malicious actors to interfere in the upcoming elections in several ways, not least through disinformation, which is expected to be even worse this year due to the widespread use of artificial intelligence.
Synthetic images, videos and audio generated using computer graphics, simulation methods and artificial intelligence – commonly referred to as “deepfakes” – will become more common as it gets easier for people to create them, experts say.
“Nation-state actors and cybercriminals will likely use AI-powered identity-based attacks such as phishing, social engineering, ransomware, and supply chain compromises to target politicians, campaign staff, and election-related institutions,” Okta’s McKinnon added.
“We are also sure to see an influx of AI-driven content and bots generated by threat actors to spread disinformation on an even greater scale than we have seen in previous election cycles.”
The cybersecurity community has called for greater awareness of this type of AI-generated disinformation, as well as international cooperation to mitigate the risk of such malicious activities.
The top election risk
Adam Meyers, head of counter adversary operations at cybersecurity firm CrowdStrike, said AI-powered disinformation poses a major risk to elections in 2024.
“Right now, generative AI can be used for harm or for good and so we see both applications being adopted more and more every day,” Meyers told CNBC.
According to CrowdStrike’s latest annual threat report, China, Russia and Iran are highly likely to conduct disinformation operations against various global elections with the help of tools such as generative AI.
“This democratic process is extremely fragile,” Meyers told CNBC. “When you start looking at how hostile nation states like Russia, China or Iran can leverage generative AI and some of the newer technologies to craft messages and use deepfakes to create a story or narrative that is compelling for people to accept, especially when people already have this kind of confirmation bias, it’s extremely dangerous.”
A key issue is that AI is reducing the barriers to entry for criminals seeking to exploit people online. This has already happened in the form of scam emails created using easily accessible AI tools like ChatGPT.
According to Dan Holmes, a fraud prevention specialist at regulatory technology firm Feedzai, hackers are also developing more advanced – and more personal – attacks by training artificial intelligence models on publicly available social media data.
“You can train these voice AI models very easily … through exposure to social [media],” Holmes told CNBC in an interview. “It’s [about] getting that level of emotional engagement and really coming up with something creative.”
In one election-related example, a fake AI-generated audio clip of Keir Starmer, leader of the opposition Labour Party, verbally abusing party staffers was published on the social media platform X in October 2023. The post garnered as many as 1.5 million views, according to fact-checking charity Full Fact.
It is just one of many deepfakes that have cybersecurity experts worried about what could happen as the UK approaches its election later this year.
The elections are a test for the tech giants
However, deepfake technology is becoming much more advanced. And for many tech companies, the race to beat it is now about fighting fire with fire.
“Deepfakes have gone from being a theoretical thing to being an actual live production today,” Mike Tuchen, CEO of Onfido, told CNBC in an interview last year.
“Now it’s a cat-and-mouse game where it’s ‘AI versus AI’: using AI to detect deepfakes and mitigate the impact for our customers is the big fight right now.”
Cyber experts say it is becoming increasingly difficult to tell what is real, but there may be some signs that content is being digitally manipulated.
AI responds to prompts to generate text, images and video, but it doesn’t always get things right. For example, if you’re watching an AI-generated video of a dinner and a spoon suddenly disappears, that’s an example of an AI flaw.
“We will definitely see more deepfakes during the election process, but one easy step we can all take is to verify the authenticity of something before sharing it,” Okta’s McKinnon added.