COMMENT
In recent global election cycles, the Internet and social media have enabled the widespread distribution of fake news, misleading memes, and deepfake content, overwhelming voters. Because it is difficult to directly compromise the systems used to cast and count votes, adversaries turn to the age-old technique of psychological manipulation to achieve their desired results: no hacking required. With the emergence of generative artificial intelligence (AI) tools, the impact of disinformation campaigns is expected to grow further. The result is greater uncertainty and ambiguity about what is real, with personal biases often shaping perceptions of truth.
In some ways, misinformation is like a cyber threat: as security leaders, we accept that malware, phishing attempts, and other attacks are a fact of life. But we put controls in place to minimize their impact, if not prevent them altogether, and we develop defense strategies built on decades of knowledge and historical data to gain the best possible advantage.
Today’s disinformation campaigns, however, are essentially a product of the past decade, and we have not yet developed a mature set of controls to counter them. But we must. With 83 national elections in 78 countries scheduled for 2024 – a volume not expected to be matched until 2048 – the stakes have never been higher. A recent spate of troubling incidents and developments illustrates the many ways adversaries are attempting to sway the hearts and minds of voters around the world:
- In Europe, the French Foreign Minister accused Russia of creating a network of more than 190 websites, codenamed “Portal Kombat,” intended to spread disinformation to “destroy the unity of Europe” and undermine its democracies by discouraging support for Ukraine. The network also sought to confuse voters, discredit certain candidates, and disrupt major sporting events such as the Paris Olympics.
- In Pakistan, voters have been exposed to false anti-vaccination and COVID-19 propaganda, online hate speech against religious groups, and attacks on women’s movements.
- The World Economic Forum ranks the use of misinformation and disinformation by foreign and domestic entities as the “most severe global risk” over the next two years – ahead of extreme weather events, cyberattacks, armed conflict, and economic downturns.
Let’s clarify the difference between misinformation and disinformation here: the former is incorrect information spread without malicious intent. The person passing along “fake news” may not even be aware of its inaccuracies.
Disinformation, on the other hand, occurs when an entity (such as an adversarial nation-state) knowingly creates or amplifies false information with the intent of viral distribution.
Psychological manipulation undermines the stability of democratic institutions. Think of a disinformation farm as a big office where hundreds or even thousands of people do nothing but create authentic-looking blogs, articles, and videos targeting candidates and positions that run counter to their agendas. Once posted to social media, these falsehoods spread rapidly, reaching millions of people while masquerading as real news.
How can citizens best protect themselves from these campaigns to maintain a firm understanding of what is real and what is not? How can cybersecurity leaders help?
Here are four best practices.
1. DYOV: Do your own verification
A meme or GIF alone is not a credible source of information. Not every professional-looking publication is credible or accurate. Not every statement attributed to a reliable source actually came from that source: it is all too easy to create fake videos using AI-generated imagery. There are few arbiters of truth on the Internet, so buyer beware. Nor can we depend on social media platforms to monitor and remove misinformation, however much we might wish they would: Section 230 of the U.S. Communications Decency Act established immunity for online companies that serve as publishers of third-party content.
It is crucial to consult multiple platforms and reconcile what they report against government websites, established news outlets, and respected organizations such as the National Conference of State Legislatures (NCSL). Inconsistencies should serve as a warning sign. When assessing a source for bias, always ask: “Why should I believe this? Who is the author? What is their interest in this position?”
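For readers who want to partially automate this kind of cross-checking, published fact-checks can be queried programmatically. The minimal Python sketch below uses Google’s Fact Check Tools API (the claims:search endpoint); the API key placeholder and the sample claim text are illustrative assumptions, and the results are a starting point for verification, not a final arbiter of truth.

```python
import requests  # third-party HTTP library (pip install requests)

# Placeholder: obtain a real key from the Google Cloud console.
API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"


def search_fact_checks(claim_text: str, language: str = "en") -> list[dict]:
    """Return published fact-check claims that match the given text."""
    resp = requests.get(
        ENDPOINT,
        params={"query": claim_text, "languageCode": language, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("claims", [])


if __name__ == "__main__":
    # Hypothetical claim text, used purely for illustration.
    for claim in search_fact_checks("ballots were counted twice"):
        for review in claim.get("claimReview", []):
            # Each review lists the fact-checker, its verdict, and a link.
            print(review.get("publisher", {}).get("name"),
                  "|", review.get("textualRating"),
                  "|", review.get("url"))
```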
2. Avoid becoming part of the problem
Social media makes it all too easy to engage with a post or video that presents a version of the “truth” that is anything but. The architects of disinformation campaigns depend on individual users to spread their messages for them: “It came from my brother/boss/neighbor, so it must be true.” Again, DYOV before passing anything along. Be judicious about clicking the “share” and “like” buttons to avoid becoming a vehicle for these campaigns.
3. Follow the watchdogs
Organizations such as the Netherlands-based Defend Democracy, the University of Pennsylvania-based FactCheck.org, and the Santa Monica, California-based RAND Corp. offer resources to help distinguish fact from fiction. In the academic community, the San Diego State University Library and Stetson University’s duPont-Ball Library maintain lists of fact-checking groups, databases, and other resources.
4. Take a leadership position
As cybersecurity professionals, we recognize that threats such as brand impersonation and phishing occur outside our controlled technology environments. We cannot block every email, and our controls will not block or detect impersonations on platforms we do not control. Instead, we must actively promote cyber education and awareness so employees can learn about the latest phishing techniques and the dangers of clicking unfamiliar links.
We should take a similar, education-focused approach to disinformation campaigns. We can build employee awareness programs so people understand what to look for, even when the attempts do not involve our technology. We can also share this knowledge through the platforms (internal company communications, public-facing blogs, articles) where we have a prominent voice, offering credible, contextual resources against which to check information.
Unfortunately, misinformation – especially in times of political contest – cannot be avoided, so we must filter all purported “facts” through due diligence. As cybersecurity leaders, we have the tools and the platform to help employees and the public do exactly that. If we do, 2024 may be remembered as the year the global community decided that truth matters.