Mind Games: Your Role in Preventing the Spread of Manipulated Content

The impact of generative AI and deep fake content has emerged as a major threat this election year.

Some consider fake content a form of psychological warfare. The word psychology derives from the Greek “psyche,” meaning the mind, soul, or spirit. In disseminating inauthentic and manipulative content (the terms I prefer), bad actors aim to capture our minds, souls, and spirits.

The media, government officials, and voters have sounded the alarm. In a recent Yubico/DDC survey of registered voters, 78% of respondents expressed concern about AI-generated content being used to impersonate a political candidate or create inauthentic content. More than 40% believe AI will have a negative effect on the outcome of the election.

Recently, after the release of a manipulated video clip of President Biden, the term “cheap fake” entered the vernacular. Unlike a deep fake, which is 100% synthetic or AI-generated content, a cheap fake manipulates authentic content in a way that is misleading or false. Concern about inauthentic content is heightened because this is the first election cycle in which the tools to create and manipulate content are readily available.

The general public should not get too wrapped up in parsing the difference between deep and cheap fakes. The focus should be on understanding the goal of those who create and disseminate inauthentic content: to influence, manipulate, and divide us.

The purveyors of inauthentic content use the same playbook cybercriminals use for phishing: get our eyeballs on content that drives us to act, whether by spreading misinformation further or changing our behavior.

The Last Line of Defense 

Mitigating the impact of inauthentic content is a shared responsibility. Industry efforts, including The Tech Accord to Combat Deceptive Use of AI in 2024 Elections and The Coalition for Content Provenance and Authenticity (C2PA), are important collaborations. Agencies including the Cybersecurity and Infrastructure Security Agency (CISA) and the FBI track bad actor behavior and educate the public. The media combats inauthentic content by fact-checking and focusing public attention on specific incidents.

These robust efforts won’t eliminate the problem; some manipulated content will find its way to all of us. We, the people, are the last line of defense in mitigating its impact.

Be Diligent 

How do we identify and respond to manipulated content? We start by paying close attention to our emotional responses to the content we see. 

Inauthentic content may:

  • Be inflammatory, attempting to pit you against others;

  • Cause you to feel angry and compel you to share and/or respond emotionally;

  • Cause you to feel defeated, hopeless, and apathetic.

Not all content that elicits a strong emotional response is manipulative. However, a strong reaction is a signal to pay attention: check the source for legitimacy, search images to verify whether they are real, and confirm that news reports are authentic. Content isn’t real just because it came from a friend or family member.

You can also report content you think is false or inauthentic. See tips below on how to help prevent the spread of misinformation.

A Flood of Content

Bad actors leverage newsworthy events. For example, phishing usually increases around natural disasters, as cybercriminals attempt to take advantage of people’s goodwill to donate and help others. We can expect the same around inauthentic content this election season. American politics creates a never-ending river of content in traditional and social media. Specific events such as debates, primaries, and campaign rallies provide moments of public focus and backdrops for generating and disseminating inauthentic content. Breaking news, including geopolitical events, also gives bad actors an opportunity to insert themselves in front of information seekers.

The risk for voters is higher in swing states for the Presidential election and in balance-of-power races, because outcomes hinge on swaying a small number of votes. Be on alert for the sneaky ways fake content appears, for example through a community listserv or a fake identity posing as a community member. Microsoft’s Threat Analysis Center released a report in April 2024 highlighting these tactics and specific examples of U.S.-focused influence operations ahead of the U.S. Presidential election.

Risk rises as the election draws closer, when the impact can be greater, and it doesn’t end on Election Day. In any election, from town council to the Presidency, if outcomes are slow to be determined or other “issues” arise, bad actors will be quick to exploit the uncertainty.

Remaining vigilant, tuning in to our emotional responses, and staying alert to the presence of manipulative content can help us better protect against it, and ultimately better protect our democracy.

How to Prevent the Spread of Inauthentic or False Content

Reporting false information that you see on social media helps slow its spread. Follow the links below for instructions on how to report misinformation and fake news on social media platforms.

Eligible political campaigns can also prevent the spread of misinformation by protecting candidates’ and campaigns’ social media handles and accounts with DDC’s free tools. Doppel for Campaigns facilitates and automates takedowns across social media, and Valimail for Campaigns authenticates the emails campaigns send and prevents impersonation. For more information about how to access DDC offerings and quickly enable these tools, contact info@defendcampaigns.org.

Michael Kaiser
President and CEO
Defending Digital Campaigns