Best Practices for Maintaining Control Over Your Authentic Content and Combating Deepfakes: A Q&A with Microsoft’s Campaign Success Team

If you’re involved in a political campaign, whether directly, through a digital firm, or with a traditional political organization, you understand the critical role of accurate information. Maintaining voter confidence and trust hinges on reliable content and information about your campaign, candidate, and key issues.

However, the proliferation of inauthentic content can easily take on a life of its own and sway opinions. Misinformation, deepfakes, and abusive content pose significant challenges for campaigns. To counter these negative impacts, it’s essential to prioritize campaign security and have a solid communications strategy. Ensuring that your authentic content remains unaltered by bad actors is crucial for maintaining transparency and trust.

We sat down with Ashley O’Rourke and Seth Reznik of Microsoft’s Campaign Success Team, which is dedicated to helping political campaigns navigate cybersecurity challenges and the new world of AI, to discuss this topic further.

Q: 2024 is a big year for elections, not just in America but globally. This campaign cycle will also be the first in which AI is readily available. Microsoft continues to invest substantially in technologies that help political campaigns verify the authenticity of their media. Why is this issue so important to Microsoft?

MSFT: Microsoft is committed to protecting the electoral process, which includes taking proactive measures to help safeguard elections from disinformation and AI-driven deepfakes. We were proud to join more than 20 tech companies in signing the Tech Accord to Combat Deceptive Use of AI in the 2024 Elections, which outlines our collective commitments to address this issue. While we wait to see how meaningful an impact deepfakes have on the upcoming elections, we are dedicated to ensuring that political parties and campaigns have the tools and resources they need to navigate the risks of deceptive AI use and protect their media online.

Q: Best practices in cybersecurity emphasize taking proactive steps to protect assets like websites, data, and key accounts. The idea of protecting media and content is a newer topic. How should campaigns view their content in this world of AI, and what steps can campaigns take to prevent abusive or deceptive media?

MSFT: The reality is that there isn’t a single silver bullet for combating deepfakes. Like most security issues, it requires a layered defensive strategy. A key step in building your organization’s strategy to mitigate the risks of deepfakes is to take proactive measures that protect the authenticity of your media online. Leveraging new technologies such as digital watermarking and Content Credentials is a great way to do just that. Content Credentials (the basic details about a piece of content) can be added to media as cryptographically encoded metadata, ensuring they can’t be altered without detection. A good way to think of this is as a tamper-proof seal on the content: if the seal is broken, we know the content can no longer be trusted.
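To make the "tamper-proof seal" idea concrete, here is a minimal Python sketch of how signed metadata can bind provenance details to a file so that neither can be changed without detection. It is an illustration only, not the C2PA implementation: real Content Credentials are embedded in the media as a C2PA manifest and signed with certificate-based signatures, whereas this sketch uses a hypothetical shared key and a stand-alone record.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only. Real Content Credentials use
# certificate-based signatures (per the C2PA standard), not a shared secret.
SIGNING_KEY = b"campaign-demo-secret"

def seal(media_bytes: bytes, credentials: dict) -> dict:
    """Bind provenance details to a piece of media with a tamper-evident seal."""
    payload = hashlib.sha256(media_bytes).hexdigest() + json.dumps(credentials, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"credentials": credentials, "signature": signature}

def verify(media_bytes: bytes, sealed: dict) -> bool:
    """Return True only if neither the media nor its credentials were altered."""
    payload = hashlib.sha256(media_bytes).hexdigest() + json.dumps(sealed["credentials"], sort_keys=True)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["signature"])

photo = b"...raw image bytes..."  # stand-in for an actual media file
record = seal(photo, {"issuer": "Example Campaign", "captured": "2024-05-01"})
print(verify(photo, record))            # True: seal intact
print(verify(photo + b"edit", record))  # False: media changed after sealing
```

Because the seal covers both the media and its credentials, editing the file or rewriting the metadata breaks verification, which is the property the tamper-proof-seal analogy is describing.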

In spring 2024, Microsoft released Content Integrity, a Content Credentials-as-a-service tool based on the C2PA technical standard, in a free private preview for organizations in the political elections space. This tool and technology not only help candidates and political organizations maintain greater control over their content and likeness by attesting to its origin, they also help voters discern whether digital content comes from a trusted source, is AI generated, or has been manipulated.

Q: C2PA is a newly established standard enabling campaigns to protect their authentic media from bad actors and maintain a repository of authentic content. Can you talk about how C2PA started and what you hope to accomplish?

MSFT: The idea for what became C2PA originated at the World Economic Forum in Davos in early 2019, where Microsoft’s Defending Democracy Program showcased deepfake videos, highlighting the urgent need for a solution to protect journalism and democracy from synthetic and manipulated media. Following the conference, we began working on potential solutions and standards that could meet the challenge. Then, in 2021, the Coalition for Content Provenance and Authenticity (C2PA) was created to unify concepts that Microsoft and other organizations, including the BBC, Adobe, Arm, Intel, and Truepic, had been working on independently. C2PA, which has grown to include more than 60 organizations spanning tech, PR, and the media, now develops open standards and technical specifications for content provenance and authentication, ensuring the integrity of online media. With C2PA, Microsoft and its partners aim to help verify the origin of digital content. By doing so, we hope to empower consumers, journalists, and campaigns to discern trustworthy information from disinformation. In an era where truth is often elusive, we feel this work represents a crucial step toward re-establishing trust in digital media.

Q: What does C2PA look like in practice, and do you have any examples of how campaigns and organizations are protecting the authenticity of their content?

MSFT: As we’ve started working to educate political organizations on Content Credentials, we’ve seen them focus on applying the technology at three key stages of the content creation and development process:

  1. At Capture & Creation: Content Credentials are applied as media is created (e.g., by a digital camera, AI tools, or, coming soon, Microsoft's Content Integrity mobile application)

  2. During the Editing Process: Content Credentials are applied as media is edited or altered (e.g., using Adobe Creative Suite)

  3. Prior to Final Publication: A Content Credentials certification tool (e.g., Microsoft’s Content Integrity) is used to apply credentials before publication.

Since this is such a new technology that political organizations are just getting started with, initial uses have included certifying all raw photos from campaign events and creating a repository of certified media for the campaign (including raw images, video, and paid media). Another interesting and important use case is the certification of PDFs: C2PA is also a good fit for organizations that want to certify their official communications, perhaps around the details of an election or for important research.
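The "repository of certified media" idea can be sketched in a few lines. The Python below is a simplified, hypothetical workflow (the folder and file names are made up for illustration): the campaign fingerprints each certified asset once, and staff can later check whether a circulating copy matches a certified original. Real Content Credentials go further by embedding signed C2PA manifests in the files themselves and verifying them with C2PA-aware tooling such as Microsoft's Content Integrity, rather than relying on an external hash list like this one.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(media_dir: str, manifest_path: str) -> None:
    """Record a SHA-256 fingerprint for every file in the certified-media folder."""
    manifest = {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(media_dir).iterdir()
        if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def matches_certified_original(file_path: str, manifest_path: str) -> bool:
    """Check whether a circulating copy is byte-identical to a certified asset."""
    manifest = json.loads(Path(manifest_path).read_text())
    digest = hashlib.sha256(Path(file_path).read_bytes()).hexdigest()
    return manifest.get(Path(file_path).name) == digest

# Hypothetical usage: certify the repository once, then screen incoming copies.
# build_manifest("certified_media/", "certified_manifest.json")
# print(matches_certified_original("downloads/final_ad.mp4", "certified_manifest.json"))
```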

Q: There is a lot of concern about deepfakes and how campaigns can and should respond to impersonation and inauthentic content. What steps can a campaign take to detect and respond to abusive content? Should campaigns respond to everything?

MSFT: The threat of this type of content is something campaigns need to think about as both a technology and a communications issue. In that vein, it's important to be prepared. Establishing a point person on the team, understanding the policies of the various content distribution networks, and protecting your content with Content Credentials as mentioned above are all great steps. That second question is an important one: not everything requires a response, and in some cases a response can draw more attention to the content than it would have received otherwise. It’s a fine line, but content with low visibility or impact, or content that leans more toward humor and memes than toward misleading, harmful material, is often better left ignored.

Q: What additional resources can you share to help campaigns craft a media integrity or deepfake mitigation strategy?

MSFT: We’ve developed a guide for campaigns that outlines how to build a framework for a deepfake response plan in greater detail. It emphasizes the importance of educating staff, monitoring online discourse and content, and documenting incidents. The document's goal is to outline a response plan that includes how to assess threats and align your team to respond. Overall, campaigns need vigilance, preparedness, and proactive measures to contribute to informed public discourse and protect the democratic process, and this plan is a step toward that. Lastly, if campaigns are interested in learning more or getting access to the Content Integrity tool, don't hesitate to reach out! Our team can be reached at CampaignSuccess@microsoft.com, and you can visit aka.ms/CampaignSuccess for more information.