A Tool That Actually Simplifies Privacy and Security on Social: A Q&A with Block Party’s Founder, Tracy Chou

Candidates and campaign staff face the complex challenge of managing both public and private online lives. As high-risk technology users, they encounter threats across both work and personal accounts.

While social media platforms offer a wide range of privacy and security settings, navigating them can be confusing and overwhelming. Additionally, these settings change over time, and many users don’t regularly review their configurations, leaving them uncertain about what information they’re sharing and how well their accounts are secured.

Recently, DDC welcomed a new technology partner: Block Party. Block Party helps users systematically review and update their privacy and security settings across multiple platforms. We’re excited to offer Block Party’s services for free to eligible campaigns because it simplifies the process of safeguarding your digital presence.

We sat down with Block Party’s founder, Tracy Chou, to discuss her journey in creating Block Party and how it serves a critical need for high-risk users.

Q: What inspired you to create Block Party?

A: The internet is deeply woven into our daily lives, enabling incredible opportunities—such as building movements, reaching voters, and connecting communities—but it also brings significant digital risks that can extend into the real world.

As an activist, I’ve personally experienced online harassment and threats of physical stalking. It was emotionally overwhelming and frightening, but it was also infuriating to see these tactics used to silence and intimidate people out of public life. I started Block Party because I needed better tools to manage my own online safety, and I realized many others did, too. When technology enables attacks at an unprecedented scale, we need technological solutions to protect ourselves, our families, our colleagues, and the broader communities we serve.

Q: How does it work?

A: Block Party integrates with your browser to scan and clean up your accounts across 11+ platforms, including Facebook, LinkedIn, and X (formerly Twitter). It identifies potential security risks and data overexposures, then prompts you to take action. If you accept the recommendations, Block Party automates the process of updating your settings or removing outdated content, making it quick and effortless.

As a browser extension, Block Party does not store your login credentials or access your accounts without your explicit direction. Think of it as a security-savvy friend sitting beside you, helping you update your settings—once you close your laptop, they no longer have access.

Q: What sets Block Party apart from other online protection tools?

A: Many organizations and campaigns already use privacy tools to remove personal information from third-party data brokers and people search sites. Block Party is complementary—and arguably even more essential—because it tackles first-party data, the information individuals voluntarily share online. Since this data often feeds into third-party sources, our approach helps reduce exposure at the root.

Block Party isn’t just for those at elevated risk; it provides a foundational security layer for everyone on a campaign, reducing the chances of harassment, impersonation, and digital threats. Just as every team member should use a password manager, every campaign member—whether a candidate, staffer, or volunteer—should proactively secure their online presence.

Attackers don’t need to target candidates directly. Compromising even one team member can expose the entire campaign. From impersonation and phishing to account takeovers, a single weak link can create a serious security threat. Block Party ensures that every team member minimizes their risk, keeping the whole campaign safer.

Q: What excites you most about partnering with DDC?

A: This collaboration amplifies our shared mission: ensuring that campaigns at every level have access to proactive safety measures without adding to their workload. Harassment, doxxing, and other digital attacks are often used to intimidate, silence, or coerce candidates, staff, and volunteers, ultimately threatening our democratic process. Partnering with DDC makes it easier for campaigns to access the tools they need to stay protected and allows us to put our automation technology into the hands of those who need it most.

We want campaigns to operate without the constant fear of digital attacks so they can stay focused on what truly matters.

Q: Campaigns often have limited time and resources. How does Block Party help?

A: Cybersecurity and cybersafety are too important to be ignored—campaigns must carve out time and resources for them, or they risk serious consequences. Traditionally, campaigns have relied on security consultants to conduct workshops or manually review online account risks, but these approaches are slow, tedious, and often ineffective due to lack of follow-through.

Block Party automates essential privacy and security tasks that would otherwise be time-consuming or overlooked. Not only do we help address vulnerabilities immediately, but we also continuously monitor for new risks, platform changes, and emerging threats, keeping campaign members safe with minimal effort on their part.

Q: What Block Party features are particularly useful for political campaigns?

A: We offer several key features that support campaigns:

  • Privacy & Security Checklists: These expert-backed recommendations cover all essential settings across social media accounts. After a scan, users receive a checklist of flagged risks, along with automated fixes to quickly secure their accounts.

  • Content Cleanup Tools: These tools enable batch deletion of photos and posts across social platforms, preventing old content from being misused or taken out of context—especially important for public figures looking to manage their digital footprint.

  • Connection Cleanup Tools: Users can review and remove unwanted connections from their accounts. For example, our Facebook unfriending tool lets users bulk-delete acquaintances or inactive connections, limiting access to personal information.

  • Experience Cleanup Tools: These enhance existing platform features like blocking, making it easier to filter out harmful interactions and maintain a safer online space.

Q: Any final thoughts?

A: Online safety and privacy can’t be an afterthought—especially in political campaigns, where digital threats often translate into real-world dangers. Every campaign, from grassroots efforts to national races, should take proactive steps to protect their team. We’re proud to support the movement to safeguard democracy by keeping campaigns secure, allowing them to focus on what truly matters.

Block Party is available for free through DDC for eligible campaigns. If you have questions or want to get started, email info@defendcampaigns.org.

Five Easy Steps to A Cyber-Secure New Year

The Times Square ball has dropped, the parties are over, and people are back to their daily lives. As we move from 2024 to 2025, many people have made New Year’s resolutions to eat healthier, exercise more, or spend more time with family and friends — all admirable goals.

At Defending Digital Campaigns (DDC), we want you to add making yourself more cyber-secure in 2025 and beyond to your list of resolutions. It’s easy to get started, and you won’t have to count calories or hit the gym four times a week! Some measures of protection are just set it and forget it.

Q and A with Brandon Amacher of Utah Valley University

In October, Utah Valley University conducted a study on how deepfakes impact viewers, whether viewers can identify deepfakes, and how viewers engage with deepfake content. 

The study used a combination of online and in-person participants viewing videos or listening to real and AI-created content. Overall, 240 subjects participated in the study, including forty subjects on-site. 

We had a chance to pose some questions to Brandon Amacher, the director of the Emerging Tech Policy Lab for the I3SC and an instructor at the UVU Center for National Security Studies, who was one of the leads on the study.

DDC: Tell us a little bit about the National Security program you run at UVU.

BA: Established in January 2016, the UVU Center for National Security Studies is one of the premier national security programs in the country. The CNSS employs a multi-disciplinary academic approach to examine both the theoretical and practical aspects of national security policy and practice, with areas of focus in intelligence, emerging technology, cybersecurity, and homeland security.

DDC: You along with some colleagues, and most importantly students, did a research study on how people respond to inauthentic content.  What was the impetus behind the research? 

BA: Several of us here at UVU including the Center for National Security Studies and the Gary R. Herbert Institute for Public Policy were deeply concerned about the potential impact of deepfake media on election security and public trust. We decided to take action and to leverage the expertise of UVU’s Neuromarketing SMARTLab, which has extensive experience conducting research on subjects’ non-conscious responses to digital content, in order to determine just how impactful deepfake media actually is. This combination of expertise and experience allowed us to design and execute a study that could effectively quantify the severity of the problem for policymakers. 

DDC: What were the research questions you hoped to answer? 

BA: We designed this study to address four key questions:

  1. Is there a measurable difference in the credibility of legitimate media versus deepfake media?

  2. Do participants exhibit different unconscious responses to real versus deepfake content?

  3. How accurately can subjects identify deepfake media after viewing or listening to it?

  4. Is there a difference in the ability to distinguish deepfakes in audio versus video content?

DDC: We know you developed a methodology that used both in person testing as well as online participants. Can you describe the approach?

BA: A total of 244 subjects participated in the study, with 40 of them tested on-site to collect biometric data, including eye-tracking and facial coding. The participants were divided into four equal groups and exposed to either a video or audio sample.

At the beginning of the test, participants were unaware that some content was AI-generated. After viewing or listening to the media, participants evaluated the message and speaker on factors such as credibility, knowledge, and trustworthiness. Participants rated the content they viewed on a Likert scale (1-7), with 1 being the least favorable rating, 4 being neutral, and 7 being the most favorable. They were then given the opportunity to explain their rating in a short-answer response. Questions in this section of the study were as follows:

  1. What was your impression of the speaker? (Short Answer)

  2. How knowledgeable do you think the speaker is about the topic? (Likert Score & Short Answer)

  3. How trustworthy do you think the speaker is about the topic? (Likert Score & Short Answer)

  4. How persuasive do you find the speaker? (Likert Score & Short Answer)

  5. How reliable did you find the information in the sample? (Likert Score & Short Answer)

  6. How would you rate the overall quality of the content? (Likert Score & Short Answer)

  7. This content seemed authentic. (Likert Score & Short Answer)

Following this section, subjects were informed that the study aimed to measure the impact of deepfakes and that some content may have been AI-generated. Participants were then asked to assess whether they believed the media was real or AI-generated and to rate their confidence in their judgment. 

DDC: What were the top takeaways from the study?

BA:

  • Impact on Viewer: Deepfake and genuine media were rated by participants across several categories, including the speaker's knowledgeability, trustworthiness, persuasiveness, the reliability of the information, and the quality of the content. The average ratings across each of the categories showed that deepfakes had effectively the same impact on viewers as real content. Ratings were based on a Likert scale, with no statistically significant differences observed between deepfake and real media.

  • Difficulty Identifying Deepfakes in Retrospect: Even after being informed that they might have encountered a deepfake, participants struggled to consistently identify AI-generated content. Across all media types—real video, deepfake video, real audio, and deepfake audio—at least 50% of participants believed the media was "probably real." Furthermore, 57% or more were confident in their assessment, suggesting a roughly 50/50 chance of detecting a deepfake, with most people standing by their initial judgments.

  • Non-conscious Engagement with Deepfakes: Participants showed higher levels of engagement and confusion when exposed to deepfake content, as evidenced by micro-expressions, though they did not report these feelings during post-test interviews. This suggests that deepfakes may trigger a non-conscious response associated with the "uncanny valley" effect. In contrast, real media prompted more traditional emotional responses, which were also expressed more strongly than emotions elicited by deepfakes.
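As a rough illustration of the comparison described in the takeaways above, a two-sample test on Likert ratings can show whether deepfake and real media differ significantly. This is a minimal sketch using invented placeholder ratings, not the UVU study's data:

```python
# Illustrative sketch only: compares hypothetical Likert ratings (1-7)
# for real vs. deepfake media using Welch's t statistic. The numbers
# below are made up for demonstration and are NOT the study's data.
from math import sqrt
from statistics import mean, stdev

real_ratings = [5, 4, 6, 5, 4, 5, 6, 4, 5, 5]
deepfake_ratings = [5, 5, 4, 6, 4, 5, 5, 4, 6, 5]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(real_ratings, deepfake_ratings)
print(f"mean real={mean(real_ratings):.2f}, "
      f"mean deepfake={mean(deepfake_ratings):.2f}, t={t:.3f}")
```

A t statistic near zero, as with these placeholder samples, is consistent with the study's finding of no statistically significant difference between the two media types.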

DDC:  We know that this research was just phase one, and you have a bigger vision of where this research could go, can you share some ideas of other research questions you have about inauthentic content that you hope to explore in the future?

BA: We are currently exploring options for follow-up studies which could tackle a range of issues including:

  • How could deepfake media affect down-ballot elections?

  • Are people more prone to being deceived by deepfake media if it reinforces their previously held beliefs?

  • How could deepfake content be utilized in cybercrime and information warfare? 

DDC: What’s something fun or fantastic about UVU that you think everyone should know?

BA: UVU is intensely focused on providing engaged learning opportunities to students so that they can enter the workforce not only with an academic credential, but with high-impact experience. This project is a perfect example of diverse departments collaborating in order to afford students the opportunity to have an impact on a critical issue set.

DDC: Where can people read more about the findings?

BA: https://www.uvu.edu/news/2024/10/ai-deepfake-2024-elections-discussion.html

Cyber Attacks During the 2024 Campaign Were Widespread

In the lead-up to the 2024 campaign, concerns about threats to campaigns and the election process were significant. While much attention focused on potential AI and deepfake-driven misinformation, these threats didn't materialize as severely as worst-case scenarios suggested. Though some deepfakes and inauthentic content did circulate, their impact on the election requires further evaluation. Instead, the most significant attacks used traditional techniques like phishing and Distributed Denial of Service (DDoS) attacks on websites.

Three major incidents captured broad media attention and public interest: the Biden robocall incident during the New Hampshire Primary, the successful attack on the Trump campaign, and the parallel attack on the then-Biden campaign in the summer.

Most other cyber incidents received limited attention, even within the political sector. While some were reported by select media outlets, they didn't become national stories.

The most comprehensive insights into the cyber threat environment came from companies like Google and Microsoft, whose threat analysis work provided a deeper understanding of foreign adversaries' actions and methods. These reports, along with federal government warnings, proved invaluable (see links below).

In the infographic below, DDC presents a timeline of these attacks. 

Below are the details of the incidents in the infographic along with some additional other cyber incidents of note.

Biden Robocall - January 2024

A widespread robocall mimicking the voice of President Joe Biden advised New Hampshire residents against voting in the presidential primary and to instead save their vote for the November general election.

The call stated: “Republicans have been trying to push nonpartisan and Democratic voters to participate in their primary. What a bunch of malarkey. We know the value of voting Democratic when our votes count. It’s important that you save your vote for the November election.”

NY State Deepfake - January 2024

In an audio clip allegedly capturing Keith Wright, a fixture in New York politics, he could be heard saying “I dug her grave and she rolled into it.” Laced with other profanities, he described a rival as “lazy, incompetent — if it wasn’t for her, I’d be in Congress.”

The 10-second clip spread quickly among Harlem political players — a seemingly stunning hot mic moment for the influential leader. But there was a problem: It was faked.

The audio was generated by artificial intelligence to sound like Wright and shared anonymously to cause political chaos. Wright quickly denounced it.  

Texas Deepfake Mailer - April 2024

A mailer, paid for by the Jeff Yass-bankrolled Club for Growth Action PAC, depicted Texas House Speaker Dade Phelan in an intimate hug with former U.S. House Speaker Nancy Pelosi, apparently a remake of a photo of Pelosi hugging new House Democratic Leader Hakeem Jeffries.

Less publicized was the flip side of the mailer, which falsely depicted Phelan at a lectern speaking at a Texas House Democratic Caucus news conference.

Trump Deepfakes - June 2024

A video from Republican presidential candidate Ron DeSantis included apparently fake images of former President Donald Trump hugging Anthony Fauci.

In a collage of six pictures of the two men, three appear to be AI-generated fakes depicting Trump and Fauci embracing. The other three are real photos of the two men together in March 2020, according to AFP, which first identified the fakes.

Cheapfakes - June 2024

Selectively edited clips of President Biden circulated online to paint the picture of a physically and mentally challenged commander-in-chief as he was attending the D-Day commemoration in Normandy.

Utah Governor Spencer Cox Deepfake - June 2024

A video circulated appearing to show Gov. Spencer Cox admitting to fraudulently gathering signatures in the gubernatorial race. A local elections officer warned her followers on Twitter/X that the video should serve as a “huge warning” moving forward.

Harris Deepfake - July 2024

A video that used artificial intelligence voice-cloning to mimic the voice of Vice President Kamala Harris, making her appear to say things she did not say, raised concerns about the power of AI to mislead with Election Day about three months away.

The video, which was developed as a parody, used many of the same visuals as a real Harris ad. Elon Musk shared it on the platform X without explicitly noting it was originally released as a parody. Musk later clarified the video was intended as satire, pinning the original creator’s post to his profile.

Attack on Trump Campaign - August 2024

Former President Donald Trump’s campaign was attacked and information was stolen and distributed to the media.

The campaign blamed “foreign sources hostile to the United States,” citing a Microsoft report that Iranian hackers “sent a spear phishing email in June to a high-ranking official on a presidential campaign.” Spear phishing was the attack method: a third party close to the campaign had their account compromised, and phishing emails from that “legitimate” source were sent to campaign officials, who then had their accounts compromised.

Attempted Harris Campaign Attack - August 2024

At the same time the Trump campaign attack was happening, there was an attempted similar attack on the Harris campaign. The FBI reported that the attempted attack targeted three Biden-Harris campaign staffers.

The attack was unsuccessful.

Iran Sends Trump Data to Biden Campaign - September 2024

Iranian hackers sent unsolicited information they stole from Donald Trump’s presidential campaign to people who were affiliated with Joe Biden’s campaign.

The Office of the Director of National Intelligence, the FBI, and the Cybersecurity and Infrastructure Security Agency said in a joint statement that in late June and early July, Iranian malicious cyber actors “sent unsolicited emails to individuals then associated with President Biden’s campaign that contained an excerpt taken from stolen, non-public material from former President Trump’s campaign as text in the emails.”

There is no indication that Biden’s staff ever replied.

Harris Deepfake -  September 2024

Using a fictitious San Francisco news outlet, Russian surrogates disseminated “fabricated videos designed to sow discord and spread disinformation” about the Kamala Harris presidential campaign, according to Microsoft.

One video, which “used an on-screen actor to fabricate false claims about Vice President Harris’s involvement in a hit-and-run accident,” was purportedly published by a San Francisco news outlet created days before the video was posted.

The video generated millions of views, according to Microsoft, and was produced by a troll farm with ties to the Kremlin.

Walz Deepfake - October 2024

Russia was behind social media posts making baseless and salacious claims about Minnesota Governor Tim Walz. The false claim that the Democratic vice-presidential nominee abused a student as a teacher went viral after an anonymous X account posted what it said were screenshots of correspondence with an alleged victim.

The documents were debunked, and the account soon disappeared from the site.

Multiple experts tracking disinformation attributed the source to a disinformation network with ties to Russia called Storm-1516.

Georgia Secretary of State Reports DDoS Attack - October 2024

The Georgia Secretary of State reported that attackers attempted to knock the absentee ballot website offline. Hundreds of thousands of IP addresses from numerous countries flooded the Georgia website with bogus traffic, a classic Distributed Denial of Service (DDoS) attack.

China Verizon/Trump Phone Hack - October 2024

Chinese hackers targeted data from phones used by former President Donald J. Trump and his running mate, Senator JD Vance of Ohio, as part of what appears to be a wider intelligence-collection effort.

This was a sophisticated penetration of telecom systems, and it is still ongoing.

The type of information on phones used by a presidential candidate and his running mate could be a gold mine for an intelligence agency or other bad actors. A successful attack could reveal who they called and texted, how often they communicated with certain people, and how long they talked to those people. This is high-value information for an adversary like China.

Georgia Election Deepfake - October 2024

Georgia’s Secretary of State Brad Raffensperger reported the state had been targeted by election disinformation, pointing to a viral video of alleged voter fraud that he suggested could be the result of foreign meddling.

The original video, which emerged on the social media platform X, had well over half a million views and purportedly showed a Haitian immigrant claiming he voted several times for Vice President Kamala Harris in the presidential election. Even though the original post was deleted, the video continued to circulate on social media as proof of supposed voter fraud.

DDoS Campaign Website Attacks - November 2024

DDoS attacks targeting US political or elections-related Internet properties in particular picked up starting in September, with the more than 6 billion HTTP DDoS requests seen during the first six days of November exceeding the volume seen during all of September and October.

Cloudflare blocked a series of DDoS attacks targeting a high-profile campaign website. The attacks began on October 29, with a four-minute spike reaching 345,000 requests per second. On October 31, more intense attacks followed, with the first lasting over an hour, peaking at 213,000 requests per second. Hours later, on November 1, a larger attack reached 700,000 requests per second, followed by two more waves at 311,000 and 205,000 requests per second.

Over 16 hours, Cloudflare blocked more than 6 billion malicious HTTP requests between October 31 and November 1. Additional attacks continued on November 3, with peaks at 200,000 requests per second; on November 4, at 352,000; on Election Day, November 5, at 271,000 around 14:33 ET (11:33 PT); and on November 6, at 108,000.
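For readers tracking these numbers, the attack waves above can be tabulated in a few lines. The peak figures below are taken directly from the text (dates are 2024); the script simply reports the largest spike.

```python
# Peak request rates (requests per second) for the DDoS waves described
# above, as reported by Cloudflare; dates are 2024.
ddos_peaks = {
    "Oct 29": 345_000,
    "Oct 31": 213_000,
    "Nov 1 (wave 1)": 700_000,
    "Nov 1 (wave 2)": 311_000,
    "Nov 1 (wave 3)": 205_000,
    "Nov 3": 200_000,
    "Nov 4": 352_000,
    "Nov 5 (Election Day)": 271_000,
    "Nov 6": 108_000,
}

# Identify the largest spike across all reported waves.
largest = max(ddos_peaks, key=ddos_peaks.get)
print(f"Largest spike: {largest} at {ddos_peaks[largest]:,} req/s")
```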

Threat reports from Microsoft and Google

Microsoft Threat Intelligence Report: Iran steps into US election 2024 with cyber-enabled influence operations - Aug 2024
https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/5bc57431-a7a9-49ad-944d-b93b7d35d0fc.pdf 

Google Threat Analysis Group: Iranian-backed group steps up phishing campaigns against Israel, U.S. - Aug 2024
https://blog.google/threat-analysis-group/iranian-backed-group-steps-up-phishing-campaigns-against-israel-us/ 

Cybersecurity: It Ain’t Over ‘Till It’s Over and Not Then Either

In most people’s minds, Election Day is the end of a long hard campaign. With the votes cast and the counting underway, it's time to roll up the sidewalks and move on. However, the time after Election Day can be rife with cybersecurity risk.

Election results may take days or even weeks to finalize. This is normal and can happen for several common reasons: ballot counting procedures take time, races are too close to call, or automatic recounts are required by law.

The time between Election Day and final results is a high-risk period for campaigns. Bad actors may see opportunities to stir the pot of unknown results even if there is no way to impact the outcome. False information about a candidate or the integrity of the process or results can be used to further create division, anger, and distrust in the electoral process. All big wins for nation-states and hacktivists. 

Cybercriminals may view campaigns that are winding down as a target of opportunity. As staff clean up last-minute details and obligations, cybercriminals might attempt to fool staff into paying fake invoices.

Don’t let your guard down. 

If you see false or inauthentic content or impersonations, follow the links below for instructions on how to report misinformation and fake news on social media platforms.

Remind your team to be on alert for efforts by cybercriminals. Cybercriminals impersonate campaign staff or compromise the email accounts of vendors to get legitimate-looking invoices in front of people. Double-checking can prevent a lot of incidents. Have staff:

  • Double-check routing numbers to be sure they haven’t changed.

  • Double-check directly with vendors if they have even the faintest hint of suspicion like things requiring immediate attention or a special discount to act now.

  • Double-check the actual email with the real person for any invoices that come with new instructions or require immediate attention.
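The double-checks above are human processes, but a campaign could add a simple automated first pass. This hypothetical sketch flags invoice emails whose sender domain is not on an approved vendor list; the domains, field names, and helper functions are illustrative examples, not part of any DDC tool. Note how a lookalike domain (here, a zero replacing an "o") passes casual inspection but fails the check.

```python
# Hypothetical sketch: flag invoice emails from unapproved sender domains.
# The vendor domains and inbox data below are illustrative examples only.
APPROVED_VENDOR_DOMAINS = {"printshop-example.com", "adbuys-example.net"}

def sender_domain(address: str) -> str:
    """Return the domain portion of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def flag_suspicious_invoices(emails):
    """Return invoice-related emails whose sender domain is unapproved."""
    flagged = []
    for msg in emails:
        if ("invoice" in msg["subject"].lower()
                and sender_domain(msg["from"]) not in APPROVED_VENDOR_DOMAINS):
            flagged.append(msg)
    return flagged

inbox = [
    {"from": "billing@printshop-example.com", "subject": "Invoice #104"},
    {"from": "billing@printsh0p-example.com", "subject": "URGENT invoice - pay now"},
]
for msg in flag_suspicious_invoices(inbox):
    print("Verify by phone before paying:", msg["from"])
```

A check like this only narrows the list; any flagged (or urgent-sounding) invoice should still be verified directly with the vendor, as the steps above advise.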

There are other ways to securely close down your campaign. Read our blog on the steps you should take, or visit DDC’s Knowledge Base articles on why post-election cybersecurity is important and easy steps to implement it.

How to Securely Close Down Your Campaign

In less than one week, the campaign will be over. Your months of hard work have hopefully led to a victory for your candidate. Win or lose, your staff will soon be departing, and the campaign will be on hiatus or closing down. 

Cybersecurity doesn’t end on Election Day. You want to be sure your campaign’s digital assets are secure and not left open to abuse. Regardless of whether your candidate is running again, there are valuable data and assets to protect. The following will help keep the campaign secure:

  • Secure and store credentials (logins and passwords) to key accounts and services: Many campaigns have staff quickly leave once Election Day passes. They may have created accounts on behalf of the campaign during their tenure. Someone like the campaign manager might be the admin on the Workspace or M365 platforms. These credentials will be critical to new staff when the campaign reboots for the next election. Make sure someone who will remain around the candidate, such as a permanent staffer, counsel, or family member, has access to those credentials and changes account ownership as needed. Some password managers have vault options for storing passwords that need to be shared later.

  • Manage the digital departure of people: Deleting accounts no longer in use is a critical cybersecurity function. Dormant accounts are a common attack vector for bad actors. It’s easy for them to fly under the radar inside your system if they have accessed an account that exists but is no longer used. Depending on the platforms you use, there may be ways to share files staff have created, automatically move them to an existing user, or store them in a shared drive.

  • Secure the website between campaigns: Out of sight shouldn’t be out of mind when it comes to your website. Periodic monitoring for content changes should be conducted. Additionally, be sure that campaign domains are renewed so they can’t be taken over by someone else, and ensure security certificates (https) are up-to-date. Any people who no longer need access to the content management system should be removed from the site. If your website is not protected from DDoS attacks, you should implement Cloudflare or Project Shield from Google. Finally, if your website lists staffers who are no longer with the campaign, delete their names and contact information as well.

  • Remove access to social media: Throughout the campaign, you may have granted access to a candidate's or campaign's social media presence. This could include posting and responding privileges for staff as well as ad-buying privileges. Revoke all privileges that are no longer needed, and change passwords on the accounts as needed as well.

  • Remove campaign data from personal devices: Most campaigns are “bring your own device” (BYOD). As staffers or key volunteers leave your campaign, they may have valuable and sensitive information on those devices. Purging them and any app access they may have will prevent any data leakage post-election.

  • Purge: Campaigns amass reams of data, some of which can be highly personal in nature or considered personally identifying information, including personnel information. Delete all unneeded files containing sensitive information, or give them to someone else, like an attorney for the campaign or another trusted entity, for safekeeping until the next campaign.
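One of the website items above, keeping the security certificate (https) current, can be spot-checked with a short script. This is a generic sketch using Python's standard library, not a DDC tool; the sample timestamp and any domain you pass in are placeholders.

```python
# Sketch: report how many days remain on a site's TLS certificate.
import socket
import ssl
from datetime import datetime, timezone

def days_until(not_after: str) -> float:
    """Days from now until an OpenSSL-style 'notAfter' timestamp,
    e.g. 'Dec 31 23:59:59 2030 GMT' (the format getpeercert() returns)."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    delta = expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return delta.total_seconds() / 86400

def cert_days_remaining(host: str, port: int = 443) -> float:
    """Connect to the host over TLS and return days until its cert expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return days_until(tls.getpeercert()["notAfter"])

# Offline demo with a sample timestamp; for a live check, call
# cert_days_remaining("yourcampaign-example.org") instead.
print(f"{days_until('Dec 31 23:59:59 2030 GMT'):.0f} days left on sample cert")
```

Running a check like this periodically (for example, from a scheduled job) gives early warning before a dormant campaign site's certificate lapses.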

DDC has a section in our Knowledge Base with additional information about closing down your campaign securely.

If you want to stay up to speed on cybersecurity, subscribe to our newsletter. During the off-season, our newsletter frequency is about once per month with possibly an announcement or two in between.

If you have taken advantage of any free services from DDC, such as Cloudflare, be on the lookout for an email about how those services are maintained between election cycles.

New Report Finds 27,000 Personal Accounts and Passwords Related to Political Campaigns Readily Available

With just under three weeks to go until Election Day, a new report from VoterGuard, released in partnership with Defending Digital Campaigns, found over 27,000 accounts associated with political campaigns readily available online, including account passwords and other sensitive personal information.

Political campaign staff members and anyone associated with a campaign are considered high-risk users. This publicly findable information substantially increases the vulnerability of individuals and campaigns to attacks like phishing and account takeover.

Spear phishing was used to attack the Trump campaign in August, as well as in attempted attacks on the then-Biden campaign. This is a reminder that protecting accounts is step one for anyone associated with a political campaign. New industry initiatives like enabling passkeys on accounts make them virtually unphishable - even if passwords are publicly available - and are fast and easy to implement.

We sat down with Andrew Schoka, founder of VoterGuard and a former US Army Cyber Warfare Officer, to talk about the key takeaways from the report.

Q 1. What is the VoterGuard 2024 Election Threat Report and what is the greatest insight the report reveals?

VoterGuard was launched because we were passionate about empowering political organizations to better defend themselves against cyber threats. The 2024 Election Threat Report is our effort to share the most pressing risks we’ve identified for political parties and campaigns at all levels as we approach the 2024 elections.

The biggest takeaway from our report is the alarming amount of personal information exposure—over 66,000 accounts linked to political organizations were publicly discoverable through vectors like misconfigured web pages or unsecured file-sharing tools. Of those 66,000 exposed accounts, 27,000 also had account passwords and other highly sensitive personal information available in recent data breaches. This exposure makes phishing and cyberattacks much more likely, especially for local campaigns where cybersecurity resources are often stretched thin. Importantly, these threats don’t care about party lines—both sides of the political aisle are being targeted by malign actors this election cycle.


Q 2. The report highlights personal information exposure as a significant issue. Why is this a risk for campaigns and political party staff?

In our report, we differentiate between accounts that are exposed and those that are breached.

  • Exposed accounts are those we were able to find through publicly accessible sources like insecure file-sharing services or misconfigured web pages. These accounts may not have been part of a data breach, but they’re still vulnerable because their existence is publicly visible—sometimes along with details like email addresses or usernames.

  • Breached accounts, on the other hand, are those whose information—such as passwords or sensitive personal details—has already been compromised in a known data breach. Once an account is breached, attackers can easily use the stolen information to gain access to other accounts if the same credentials are reused.

Both exposed and breached accounts present significant risks, but breached accounts are especially dangerous because attackers can immediately use stolen passwords or personal details to access campaign systems. Even if a password hasn’t been exposed, attackers can still use the publicly available details from exposed accounts to craft convincing social engineering or phishing attacks. For local campaigns, where volunteers and staff often use personal emails and repeat passwords, this significantly increases the chances of account takeover. 
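One way campaign staff can check whether a password has appeared in a known breach, without ever sending the password anywhere, is the k-anonymity scheme behind the Have I Been Pwned "Pwned Passwords" range API: only the first five characters of the password's SHA-1 hash are transmitted, and the match is done locally. The sketch below shows the idea in Python (the function names are our own; this is an illustration, not a VoterGuard or DDC tool):

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple:
    """Split the uppercase SHA-1 of a password into the 5-char prefix sent to
    the API and the 35-char suffix that never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(password: str) -> int:
    """Query the Pwned Passwords range API; return how many known breaches
    contain this password (0 means not found). Only the prefix is transmitted."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

A nonzero result means the password has been seen in breach data and should be retired everywhere it was reused, which is exactly the credential-reuse risk described above.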

Q 3. VoterGuard's report mentions a significant amount of publicly available campaign and party data. Why is this a concern and how could bad actors exploit it?

Publicly available campaign data is a goldmine for attackers. For smaller campaigns without the resources for comprehensive cybersecurity, it’s even more concerning. Attackers can use this information to craft highly targeted “spear-phishing” attacks, where they impersonate trusted people within the campaign to steal sensitive information.

Studies show that accounts exposed in data breaches are around 5x more likely to be targeted by phishing. Spear-phishing attacks that use personal information are even more dangerous, with success rates over 50%. As we highlighted in the report, this risk is bipartisan, and both major political parties are equally vulnerable to the threats posed by exposed and breached accounts.

Q 4. Your findings on website vulnerabilities, particularly DMARC adoption, were eye-opening. Can you elaborate on how these vulnerabilities impact campaigns and what steps can be taken to mitigate them?

DMARC is a crucial tool to help organizations protect against the risks of email spoofing. In our analysis, we found that over two-thirds of campaigns and parties had not yet implemented a secure DMARC configuration for their email domains. Without DMARC, bad actors can spoof campaign email addresses and send fraudulent messages to donors, voters, or campaign staff, tricking them into sharing sensitive information or downloading malware.

To mitigate this risk, campaigns should implement DMARC enforcement through tools like Valimail, which can help organizations automate the process of implementing a secure DMARC configuration. This is a great “set-it-and-forget-it” solution that can dramatically reduce the risk of phishing attacks that target campaigns and their supporters. This offering is available to eligible campaigns for free through DDC.
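For illustration, a DMARC policy is just a DNS TXT record published on the `_dmarc` subdomain. The sketch below uses a placeholder domain and report address; tools like Valimail generate and manage this record for you, but it helps to know what the end state looks like:

```text
; Published as a DNS TXT record on the _dmarc subdomain.
; "p" is the policy applied to mail that fails authentication: start with
; p=none to monitor reports, then move to quarantine or reject once clean.
_dmarc.yourcampaign.org.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@yourcampaign.org"
```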

Q 5. What practical steps can campaigns and political organizations take right now to address the security gaps highlighted in your report, especially as the 2024 elections approach? 

The good news here is that we’re not defenseless, and the political tech community has a ton of tools at its disposal to stop the bad guys. Every campaign should take the time to set up multi-factor authentication, use secure communication tools, and double-check security settings for key accounts like web administrators and social media pages. 

There are also a lot of excellent free resources for campaigns, like Google's Project Shield for protecting campaign websites and CISA's Election Security Toolkit for election-related organizations of all sizes. 

Q 6. How did you gather the data for this report? Could you walk us through the research process and explain how you identified the key security risks for campaigns and political parties?

The data for this report comes from VoterGuard’s Election Threat Monitoring Platform, which is the foundation of all of our work in political party cybersecurity. We start by mapping out an organization’s entire digital footprint, from its domain to any associated services, exposed accounts, or vulnerabilities in its infrastructure. Then, we layer in threat intelligence feeds and data sources that look across the deep and open Web to find at-risk accounts, look for data breaches that involve the organization, and spot any signs of potential compromise.

We fuse all of this data into a comprehensive digital risk assessment that highlights the most pressing security concerns for an organization. Because our platform runs continuously in the cloud, we’re also constantly assessing for any changes to an organization’s risk posture and can identify potential cyber threats in real-time. The 2024 Election Threat Report is a “snapshot” of these risks across the political landscape, highlighting the biggest security concerns facing campaigns today.

Q 7. Your report focuses on the 2024 elections. How do the security threats you've identified compare to those seen in previous election cycles?

Compared to previous election cycles, the 2024 cycle has seen a significant increase in both the sophistication and scale of cyberattacks targeting political organizations. We’ve observed more targeted efforts by nation-state actors and cybercriminals alike, with tactics ranging from phishing and disinformation campaigns to more advanced AI-driven attacks.

The 2024 election has also seen a surge in attacks against down-ballot races and candidates in state-level races, especially in swing states. What’s clear from our research is that no campaign is too small to be targeted—local elections are facing global threats, and campaigns need to take proactive steps to defend themselves.

Countdown to Election Day: Truths and Myths About Using Security Keys

The concept of strong user authentication can seem technical and tricky, but it doesn’t have to be. DDC is teaming up with the FIDO Alliance, an open standards industry association with a mission to reduce the world’s reliance on passwords. We are setting the record straight to help everyone understand how simple it is to use a physical security key—the strongest form of authentication—to prevent malicious actors from accessing your accounts.

If you are a candidate or work with or for a political campaign, you are at a higher risk of being targeted by a nation-state, hacktivists, or cybercriminals. Protecting your accounts—the primary target of bad actors—between now and Election Day is essential. Enabling a physical security key certified by the FIDO Alliance takes only minutes. DDC-eligible campaigns can get these keys for FREE, shipped to you quickly. We can even walk you through how to set them up. Still not convinced? Read further.

60 Seconds to Better Cybersecurity for Political Campaigns Using Microsoft

As Election Day draws near, cyber threats will only increase for political campaigns (of all sizes) and the digital firms and vendors that serve them. Already this cycle, we’ve seen numerous instances of bad actors leveraging different tactics, including generative AI, in their election interference operations, targeting political organizations up and down the ballot. Microsoft’s Threat Analysis Center has been actively tracking the various tactics employed by nation-state actors as they look to influence the US elections, including recent Russian interference and elections-specific cyber attacks coming from Iran.

Countdown to Election Day: Cybersecurity Risks Every Campaign Faces

The final sprint of the campaign is underway! Campaign managers, staff, and volunteers are bracing for the intense days ahead, with all eyes on the countdown clock as Election Day approaches. The singular focus now is on securing a victory.

As we approach the final stretch of the election season, it's not just campaigns pushing hard towards the finish line. Bad actors seeking to interfere with or disrupt our elections are also ramping up their efforts. While campaign staff are understandably focused on the goal ahead, it's crucial they don't lose sight of potential cybersecurity risks. Campaign managers, staff, and anyone associated with the campaign need to stay vigilant and take basic protective measures now to safeguard against these threats. 

Risk #1 Bad Actors Compromising Accounts via Phishing and Spear Phishing

The recent spear phishing attack on Trump's campaign and the attempted attacks on Biden's campaign (before Harris became the nominee) serve as stark reminders that nation-states are actively on the prowl this election season. Every campaign faces the significant risk of infiltration aimed at stealing information for potential release or conducting espionage. While most computer users have become savvy about broad-based phishing attempts, like emails claiming "your package is delayed," spear phishing poses a more insidious threat. Spear phishing is a targeted approach that exploits compromised accounts of individuals known to the email recipient. These could be vendors, family members, donors, close advisors, or associates of the candidate – people the recipient trusts as legitimate sources. Such emails might contain malicious links or attachments, or request sensitive information. The familiarity of the sender often lowers the recipient's guard, making these attacks particularly dangerous.

Prevention: Implement the strongest available multifactor authentication methods. This includes using security keys (available free to DDC-eligible campaigns), enrolling in Google's Advanced Protection Program (which can also be activated with a Passkey), or utilizing Microsoft Account Guard. These robust security measures are offered at no cost to campaigns and high-risk users, providing an essential defense against sophisticated phishing attempts.

Risk #2 Campaign Funds Stolen 

American political campaigns are lucrative targets for cybercriminals, who are well aware of the substantial funds involved. While some incidents like the $2.3 million theft from the Wisconsin GOP in 2020 have been publicized, many others go unreported. 

These criminals exploit the fast-paced nature of campaign environments, often using spear phishing tactics to execute their schemes. They typically compromise a third party (usually a vendor) or create a convincing fake email that closely resembles one from a campaign leader, sometimes even spoofing a personal account. Their approach often involves a fraudulent invoice demanding immediate action. 

If the email appears to come from a vendor, it might claim that payment is overdue and threaten to halt essential services like radio ads or mailers, or offer an enticing discount for quick payment. When masquerading as campaign leadership, the message might urgently request payment, stating something like, "I promised we'd pay this today, please pay ASAP!" These tactics capitalize on the pressure and quick decision-making inherent in campaign operations.

Prevention: Ensure campaign staff are trained to be vigilant about any communications requesting payments. Establish a protocol to always verify directly with the source of the email through a separate, known email address or via phone call. Instruct staff processing payments to consistently double-check routing numbers to prevent misdirection of funds. As emphasized previously, implementing strong authentication measures across all systems is crucial. These practices create a robust defense against financial fraud attempts.

Risk #3 Website Attacks

Websites are easy targets for hacktivists and nation-states. There is clear evidence that candidates and committees are highly targeted around elections. A recent blog published by one of DDC's partners, Cloudflare, shows the increase in attacks around elections in France and the Netherlands just this past July. Similar increases in attacks have happened around US elections as well. Bad actors look to deny access to websites (Distributed Denial of Service attacks, or DDoS), make content changes, or deface websites with objectionable content. If you get complaints from supporters about your site being down or showing content that doesn’t make sense, you may have been compromised.

Prevention: Cloudflare offers Cloudflare for Campaigns for DDC-eligible campaigns and free protection from DDoS attacks for any website. Google offers Project Shield, similar DDoS protection for high-risk organizations and campaigns. Use the strongest multifactor authentication available on all content management systems.

Risk #4 Social Media Hijacked

Social media presents multiple risks for campaigns in the 2024 election cycle. Key concerns include the spread of inauthentic content about candidates or their stances, and impersonation of candidates or campaigns to phish supporters, potentially leading to financial or personal data theft. Recent reports have also highlighted the circulation of fake celebrity endorsements. Additionally, there's a significant risk of compromising social media accounts belonging to campaign staff, vendors, or others who manage the campaign's online presence, potentially leading to account hijacking.

Prevention: DDC-eligible campaigns can implement Doppel to safeguard their social media presence. Meta offers Facebook Protect at no cost, providing advanced security measures for high-risk users in the political sector (contact DDC for assistance). It's crucial to employ the most robust forms of multifactor authentication across all accounts to prevent unauthorized access by malicious actors.

It’s never too late to address your concerns about risk and strengthen your cybersecurity posture. Learn more about DDC eligibility and information on free tools for every campaign and organization.

Michael Kaiser

President and CEO, Defending Digital Campaigns

What You Can Expect After the Attack on Donald Trump’s Campaign

Whenever there is a major cyber incident, like the attack on Donald Trump’s campaign, there are many questions on everyone’s mind including:

  • Was this preventable?

  • Will it happen again and could it happen to my campaign?

  • What might happen next?

  • What should my campaign do right now?

Was this preventable?

It’s likely, but we can’t be sure without more details.

We don't know all the specifics, so it's difficult to determine if this attack was entirely preventable. The hackers employed a classic approach: compromising someone close to and trusted by the campaign, then using their email to send seemingly legitimate messages to the intended targets. While the full extent of the attack's sophistication is still unclear, we do know that sensitive documents were stolen as a result.

The majority of spear phishing attacks can be prevented with the use of the strongest form of multifactor authentication available: a security key.

Security keys protect accounts even when login credentials are compromised or stolen in a phishing attack. Suppose someone tries logging into an account with only a password, from a new machine, or from a different country. In that case, the physical security key will be required before account access is granted. Eligible campaigns can get FREE security keys from DDC.

Will it happen again and could it happen to my campaign?

You can count on it. 

In the world of cybersecurity, it's a fact that when one incident comes to light, it's rarely an isolated case. This principle holds true for the attack on the Trump campaign. Major tech companies like Google and Microsoft, which monitor and combat cyber threats, have reported a surge in phishing and spear phishing attempts targeting campaigns since June. We must assume that nation-states, hacktivists, and cybercriminals are constantly on the lookout for vulnerabilities, ready to exploit them at every opportunity.

What might happen next?

An environment of misinformation and phishing.

In addition to the real threat of similar attacks, bad actors may look to exploit this attack in other ways. Nefarious cyber actors have a long history of inserting themselves into news events to lure people into clicking and downloading things they shouldn’t. Everyone should be on the lookout for these social engineering efforts. Each hack is one step in their playbook.

For example, since the public is primed to be interested, it’s possible we could begin seeing fake news articles and news sites about alleged newly released confidential information from the Trump campaign or other campaigns. Such claims could appear in emails and news feeds on social media. 

Your campaign could be swept in as well with impersonators claiming to be your candidate or campaign, or there could be inauthentic content allegedly stolen from your campaign released with the intention to harm you. Cybercriminals might attempt to contact your supporters claiming their personal information was lost in a hack of your campaign and urge them to click on a link to remedy the situation.

DDC can help and offers eligible campaigns two powerful tools: Doppel, which scans social media for impersonations and fake content, initiating takedown requests when necessary; and Valimail, which authenticates your outbound emails to prevent spoofing and impersonation of campaign communications.  

What should a campaign do now?

Act now to protect your campaign. The risk will only increase between now and Election Day.

Don’t think your campaign is too small or unimportant to be a target. Even if you are running unopposed, bad actors might try to compromise your campaign or staff to reach other campaigns or steal personal information about donors, staff, or candidates.

Campaigns should ensure their core platforms (like Workspace or Microsoft 365) are configured correctly. Staff should be armed with security keys and remain vigilant about what’s coming into email boxes and newsfeeds. Campaign websites, a frequent target of cyber attacks, should be protected.

DDC can help campaigns be more cyber-secure with FREE products including:

  • Account Security Fundamentals for Google Workspace and Microsoft Account Guard to better secure your platforms.

  • Cloudflare for Campaigns to protect websites

  • Doppel for protecting social media

  • Valimail for protecting outbound email

  • Security keys from Yubico and Google are the #1 protection every campaign needs!

Get your core protections and common sense security measures in place to protect campaign and personal accounts, websites, social media, and email communications before Labor Day so you can finish off the campaign season with a little peace of mind. 

DDC can help with all of this. Just reach out to info@defendcampaigns.org to get started.

The Trump Campaign Hack is a Wake-Up Call for All Campaigns

Earlier today, Politico reported that Donald Trump’s presidential campaign had been hacked and internal documents were leaked to the media. The campaign believes that foreign actors stole and released the information. This compromise and theft of confidential information should serve as a wake-up call for every campaign to review its cybersecurity posture immediately.

At DDC we can't speak to the cybersecurity of the Trump campaign and don't know how the documents were obtained or who was behind the attack. However, this kind of attack and release of information is exactly the kind of threat faced by all campaigns large and small. 

Because campaigns are targets of nation-states, hacktivists, and cybercriminals, Defending Digital Campaigns (DDC) and others consider everyone working on campaigns to be high-risk computer users. Campaigns must implement core cybersecurity practices ASAP to protect the campaign, staffers, and others associated with the campaign as well as digital assets like websites and social media.

DDC can help you rapidly shore up your cybersecurity protections with free cybersecurity products available to federal campaigns. We can also serve state and local campaigns in Georgia, Michigan, Ohio, and Virginia (see eligibility here: https://defendcampaigns.org/offerings-for-eligible-campaigns). If you are a digital firm or other vendor to campaigns, we can work with you to help your clients as well. 

To access any of the products below or ask questions, respond to this email or contact us at info@defendcampaigns.org.

Every campaign should immediately get the two following products from DDC:

  • Security Keys from Google or Yubico to implement the strongest, phish-resistant multifactor authentication on core accounts like email, cloud, and social, and turn on Google’s Advanced Protection or get Account Guard from Microsoft

  • Cloudflare for Campaigns to protect websites from attacks.

Every campaign should consider getting the following products in place as well:

  • Doppel to scan social media and take down inauthentic content and impersonations of the campaign

  • Valimail to protect email sending domains from spoofing and impersonation

  • iVerify to protect mobile devices if you are concerned about mobile security

If you are concerned that your platform may not have been set up with cybersecurity in mind, both Google and Microsoft have programs for campaigns to configure your environment:

  • Account Security Fundamentals for Google Workspace, a one-click feature to immediately configure 26 core security settings for your entire team.

  • Microsoft 365 for Campaigns, which configures the security for your Microsoft environment

DDC can help campaigns enhance protections on Facebook by facilitating invites to Facebook Protect, Meta’s strongest protections for high-risk users.

DDC urges every campaign to take advantage of all the free cybersecurity built into the platforms you already use, like password managers in Edge and Chrome and passkeys where available. Cloudflare has a free version to protect any website, and Google has Project Shield, also free, for high-risk organizations like campaigns.

Cybersecurity is very much about creating a culture of security within your campaign. Staff should know what security measures to use, who to report problems to, and where to get answers to their security questions. This video can help; also read this article from our Knowledge Base.

Risk rises as Election Day draws nearer. This should also serve as a reminder that you need to have a response plan if a cyber incident happens on your campaign. Learn more about responding to a cyber incident here.

Remember to contact us at info@defendcampaigns.org to get started on cybersecurity ASAP.

Mind Games: Your Role in Preventing the Spread of Manipulated Content

The concern about the impact of generative AI and deep fake content has emerged as a major threat this election year. 

Some consider fake content a form of psychological warfare. The word psychology derives from the Greek “psyche,” meaning the mind, soul, or spirit. In disseminating inauthentic and manipulative content (the terms I prefer), bad actors aim to captivate our minds, souls, and spirits.

The media, government officials, and voters have sounded the alarm. In a recent Yubico/DDC survey of registered voters, 78% of respondents expressed concern about AI-generated content being used to impersonate a political candidate or create inauthentic content. More than 40% believe AI will have a negative effect on the outcome of the election.

Recently, after the release of a manipulated video clip of President Biden, the term “cheap fake” entered the vernacular. Unlike a deep fake, which is 100% synthetic or AI-generated content, a cheap fake manipulates authentic content in a way that is misleading or false. The concern about inauthentic content is exacerbated because this is the first election cycle where the tools to create and manipulate content are readily available.

The general public should not get too wrapped up in parsing the difference between deep and cheap fakes. The focus should be on understanding that the goal of those who create and disseminate inauthentic content is to influence, manipulate, and divide us.

The purveyors of inauthentic content use the same playbook cybercriminals use for phishing: get our eyeballs on content that drives us to act, whether by spreading misinformation further or changing our behavior.

The Last Line of Defense 

Mitigating the impact of inauthentic content is a shared responsibility. Industry efforts, including The Tech Accord to Combat Deceptive Use of AI in 2024 Elections and The Coalition for Content Provenance and Authenticity (C2PA), are important collaborations. Agencies including the Cybersecurity and Infrastructure Security Agency (CISA) and the FBI track bad actor behavior and educate the public. The media combats inauthentic content by fact-checking and focusing public attention on specific incidents.

These robust efforts won’t eliminate the problem; manipulative content will still find its way to all of us. We, the people, are the last line of defense in mitigating its impact.

Be Diligent 

How do we identify and respond to manipulated content? We start by paying close attention to our emotional responses to the content we see. 

Inauthentic content may:

  • Be inflammatory, attempting to divide you against others;

  • Cause you to feel angry and compel you to share and/or respond emotionally;

  • Cause you to feel defeated, hopeless, and apathetic.

Not all content that elicits a strong emotional response is manipulative. However, a strong reaction is a warning to pay attention: check the source for legitimacy, search images to verify whether they are real, and confirm that news reports are authentic. Content coming from a friend or family member doesn’t make it real.

You can also report content you think is false or inauthentic. See tips below on how to help prevent the spread of misinformation.

A Flood of Content

Bad actors leverage newsworthy events. For example, phishing usually increases around natural disasters, as cyber criminals attempt to take advantage of people’s goodwill to donate and help others. We can expect the same around inauthentic content this election season. American politics creates a never-ending river of content in traditional and social media. Specific events such as debates, primaries, and campaign rallies provide moments of public focus and backdrops for generating and disseminating inauthentic content. Breaking news, including geopolitical events, is also an opportunity for bad actors to insert themselves in front of information seekers.

The risk for voters is higher in swing states for the Presidential election or balance of power races because outcomes hinge on swaying a small number of votes. Be on alert for sneaky ways fake content appears, for example through a community listserv or a fake identity posing as a community member. Microsoft’s Threat Analysis Center released a report in April 2024 highlighting these tactics and specific examples of U.S.-focused influence operations ahead of the U.S. Presidential elections. 

Risk rises as the election draws closer when the impact can be greater, and it doesn’t end on Election Day. In any election from the town council to the Presidency, if outcomes are slow to be determined or any other “issues” arise, bad actors will be quick to exploit any uncertainty. 

Remaining vigilant, dialing into our emotional responses, and alerting to the presence of manipulative content can help us better protect against it — and ultimately better protect our democracy. 

How to prevent the spread of inauthentic or false content 

Reporting false information that you see on social media helps slow its spread. Follow the links below for instructions on how to report misinformation and fake news on social media platforms.

Eligible political campaigns can also prevent the spread of misinformation by protecting candidates’ and campaigns’ social media handles and accounts with access to DDC’s free tools. Doppel for Campaigns facilitates and automates takedowns across social media, and Valimail for Campaigns authenticates the emails campaigns send and prevents impersonation. For more information about how to access DDC offerings and quickly enable these tools, contact info@defendcampaigns.org.

Michael Kaiser
President and CEO
Defending Digital Campaigns

Best Practices for Maintaining Control Over Your Authentic Content and Combating Deepfakes: A Q&A with Microsoft’s Campaign Success Team

If you’re involved in a political campaign, whether directly or through a digital firm or traditional political organization, you understand the critical role of accurate information. Maintaining voter confidence and trust hinges on reliable content and information about your campaign, candidate, and key issues.

However, the proliferation of inauthentic content can easily take on a life of its own and sway opinions. Misinformation, deepfakes, and abusive content pose significant challenges for campaigns. To counter these negative impacts, it’s essential to prioritize campaign security and have a solid communications strategy. Ensuring that your authentic content remains unaltered by bad actors is crucial for maintaining transparency and trust.

We sat down with Microsoft's Ashley O’Rourke and Seth Reznik, who are part of Microsoft’s Campaign Success Team dedicated to helping political campaigns navigate cybersecurity challenges and the new world of AI, to discuss this topic further.

Q: 2024 is a big year for elections, not just in America but globally. This campaign cycle will also be the first one where AI is readily available. Microsoft continues to invest a substantial amount in technologies that help political campaigns verify the authenticity of their media. Why is this issue so important to Microsoft?

MSFT: Microsoft is committed to protecting the electoral process, which includes taking proactive measures to help safeguard elections from disinformation and AI-driven deepfakes. We were proud to join 20+ tech companies in signing the Tech Accord to Combat Deceptive Use of AI in the 2024 Elections, where we outlined our collective commitments to address this issue. While we wait to see how meaningful an impact deepfakes have on the upcoming elections, we are dedicated to ensuring that political parties and campaigns have the tools and resources needed to navigate the risks of deceptive AI use and protect their media online.

Q: Best practices in cybersecurity emphasize taking proactive steps to protect assets like websites, data, and key accounts. The idea of protecting media and content is a newer topic. How should campaigns view their content in this world of AI, and what steps can campaigns take to prevent abusive or deceptive media?

MSFT: The reality is that there isn’t a single silver bullet for combating deepfakes. Like most security issues, it requires a layered defensive strategy. A key step in building your organization’s strategy to mitigate the risks of deepfakes is to take proactive measures that protect the authenticity of your media online. Leveraging new technologies such as digital watermarking and Content Credentials is a great way to do just that. Content Credentials (the basic details about a piece of content) can be added to media as cryptographically encoded metadata, ensuring they can’t be altered without detection. A good way to think of this is as a tamper-proof seal on the content: if the seal is broken, we know the content can no longer be trusted.

In the spring of 2024, Microsoft released its Content-Credentials-as-a-service tool, called Content Integrity, based on the C2PA technical standard, in a free private preview for organizations in the political elections space. This tool and technology not only help candidates and political organizations maintain greater control over their content and likeness by attesting to its origin, but also help voters discern whether digital content comes from a trusted source, is AI-generated, or has been manipulated.
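The "tamper-proof seal" idea above can be illustrated with a small sketch. This is not the actual C2PA format — real Content Credentials use certificate-backed digital signatures per the C2PA specification, not a shared-secret HMAC, and the key and field names here are invented for illustration — but it shows the core mechanism: binding metadata to a hash of the content so that any alteration breaks the seal.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; C2PA uses
# certificate-backed signatures rather than a shared secret.
SIGNING_KEY = b"demo-key"

def seal(content: bytes, credentials: dict) -> dict:
    """Bind credentials to the content's hash and sign the combination."""
    record = {"sha256": hashlib.sha256(content).hexdigest(), **credentials}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, record: dict) -> bool:
    """Return False if the content or its credentials were altered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if claimed.get("sha256") != hashlib.sha256(content).hexdigest():
        return False  # the content itself was changed
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

photo = b"raw bytes of a campaign photo"
cred = seal(photo, {"creator": "Example Campaign", "tool": "camera"})
assert verify(photo, cred)                  # seal intact
assert not verify(photo + b"edited", cred)  # any edit breaks the seal
```

The same detection works in the other direction, too: editing the credentials (say, the claimed creator) without re-signing also invalidates the seal.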

Q: C2PA is a newly established standard enabling campaigns to protect their authentic media from bad actors and maintain a repository of authentic content. Can you talk about how C2PA started and what you hope to accomplish?

MSFT: The original idea for what became C2PA was started at the World Economic Forum in Davos in early 2019. Microsoft’s Defending Democracy Program showcased deepfake videos, highlighting the urgent need for a solution to protect journalism and democracy from synthetic and manipulated media. Following the conference, we began working on potential solutions and standards that could fit the challenge. Then in 2021, the Coalition for Content Provenance and Authenticity (C2PA) was created to work together to unify concepts that Microsoft and other organizations including the BBC, Adobe, Arm, Intel and Truepic were working on independently. C2PA, which has come to include more than 60 organizations from tech to PR to the media, now works to develop open standards and technical specifications for content provenance and authentication, ensuring the integrity of online media. With C2PA, Microsoft and its partners aim to help verify the origin of digital content. By doing so, we hope to empower consumers, journalists, and campaigns to discern trustworthy information from disinformation. In an era where truth is often elusive, we feel that this work represents a crucial step toward re-establishing trust in digital media.

Q: What does C2PA look like in practice and do you have any examples of how campaigns and organizations are protecting the authenticity of their content?

MSFT: As we’ve started working to educate political organizations on Content Credentials, we’ve seen them really focusing on leveraging it in three key stages of the content creation and development process: 

  1. At Capture & Creation: Content Credentials applied as media is created (e.g., a digital camera, AI tools, or, coming soon, Microsoft's Content Integrity mobile application)  

  2. During Editing: Content Credentials applied as media is edited or altered (e.g., using Adobe Creative Suite)  

  3. Prior to Final Publication: A Content Credentials certification tool (e.g., Microsoft’s Content Integrity) is used to apply credentials before publication.  

Since this is such a new technology that political organizations are just getting started with, initial uses have included certifying all raw photos from campaign events and creating a repository of certified media for the campaign (including raw images, video, and paid media). Another interesting and important use case is the certification of PDFs. C2PA is also a good fit for organizations that want to certify their official communications, perhaps around the details of an election or for important research.

Q: There is a lot of concern about deepfakes and how campaigns can and should respond to impersonation and inauthentic content. What steps can a campaign take to detect and respond to abusive content? Should campaigns respond to everything?

MSFT: The threat of this type of content is something that campaigns need to treat as both a technology and a communications issue. In that vein, it's important to be prepared. Steps like establishing a point person on the team, understanding the policies of the various content distribution networks, and protecting your content with Content Credentials, as mentioned above, are great places to start. The second point is a really important one: not everything requires a response, and in some cases a response can bring attention above and beyond what the content would have received on its own. It’s a fine line, but content with low visibility or impact, or content that is more humor or meme than misleading and harmful, is often better left ignored.

Q: What additional resources can you share to help campaigns craft a media integrity or deepfake mitigation strategy?

MSFT: We’ve developed a guide for campaigns that outlines in greater detail how to build a framework for a deepfake response plan. It emphasizes the importance of educating staff, monitoring online discourse and content, and documenting incidents. The document's goal is to outline a response plan that includes how to assess threats and align your team to respond. Overall, campaigns need vigilance, preparedness, and proactive measures to contribute to informed public discourse and protect the democratic process, and this plan is a step to help folks in those directions. Lastly, if campaigns are interested in learning more or getting access to the Content Integrity tool, don't hesitate to reach out! Our team can be reached at CampaignSuccess@microsoft.com, and you can visit aka.ms/CampaignSuccess for more information.

Q&A with Doppel’s Founder, Kevin Tian

Recently, DDC signed a new vendor partner: Doppel. We were excited to add Doppel to our catalog because it addresses a significant security concern that is top of mind for lots of campaigns this election cycle: Impersonations of campaigns and the use of inauthentic content about candidates.

We had a chance to pose these questions to Doppel’s founder, Kevin Tian, about the risks campaigns face and how Doppel addresses them.

DDC: Tell us about how Doppel was started and what led you to launch the company.

Rahul and I met as engineers at Uber working on AI, ML, and distributed systems powering the core ridesharing platform. We wanted to start a company together and made it happen two years ago.

Doppel started off as an AI tool for detecting scams in crypto, the most fast-paced adversarial threat environment. From there, we realized the technology we built to scale for volume and velocity was very valuable for serving multinational enterprises, so we expanded to cover all digital attack surfaces, including social media, domains, and telco.

DDC: There is a lot of noise about deepfakes, inauthentic content, and impersonators. What is your view of the threat landscape, and is the situation getting worse? If so, why?

While there certainly is a lot of hype, the threat actors and the tactics they deploy pose very real threats to businesses, organizations, and campaigns. AI, deepfakes, and disinformation accelerate existing impersonation threats by making attacks cheaper and more personalized. Just take a look at the news: every day we see new examples of hyper-personalized and targeted social engineering campaigns that manipulate public opinion, create physical threats, and drive rampant financial losses for victims.

Year over year, the number of deepfakes detected has grown exponentially—we've seen estimates of 90,000 deepfake videos shared online in 2023; when including synthetic audio, that number jumps to ~500,000, according to sources like DeepMedia.

DDC: In our discussions leading up to you offering Doppel to DDC-eligible entities, we sensed a lot of passion for helping campaigns. Why was it so important for you and the company to provide this protection for free?

At Doppel, we’re on a mission to make the internet a safer place. With pivotal elections this year, we have the tools to help the electorate and want to do our part to not only raise awareness of the threat but also play a role in protecting society from serious risk. 

DDC: Campaigns have little time for scouring the internet and keeping on top of bad actors' activities targeting them on social media. Can you give a little insight into how your product works? What does it do, and how does it do it?

Doppel leverages AI technology and security experts to automate the collection, validation, and takedown of social media threats. We crawl the web for potential threats, capture evidence, and use AI to categorize them. From there, our security experts validate and take action on the threats to protect clients from these bad actors. It’s turnkey and only requires a brief onboarding call to get started.

DDC: Campaigns don’t have a lot of time to track and request takedowns of false content. How does your takedown process work? What are the limitations around takedowns, and how do social media sites evaluate requests?

Our takedown process involves working directly with the platforms to take down content based on their policies. When we validate that a threat violates the platform policy, we have an extremely high success rate, over 95%. However, if the threat does not violate a policy, we would not be able to take it down. For example, parody accounts are often protected by free speech principles on the platform.

DDC: Just because you request a takedown doesn’t mean the social media companies are obligated to act on it. How does that work, and what should campaigns' expectations be vis-à-vis takedowns?

Our efficacy rate is high when we can collect evidence that there is a clear violation of their terms. Typically, platforms have policies against direct impersonation, and these pose the largest threats from a cyber risk posture.

DDC: Do you have any success stories you can share?

Absolutely. Our commercial customers range from major Fortune 500 companies like Meta and Coinbase to financial services firms like Ark Invest to Hollywood talent agencies. We help them prevent their executives from being impersonated and reduce social engineering risks to their businesses.

DDC:  Any final thoughts?

We’re excited to work with DDC and offer our technology to protect election integrity. The threat landscape is rapidly evolving with AI, and we believe cutting-edge problems require cutting-edge solutions. Looking forward to partnering with campaigns and fighting the good fight together!

If your campaign is eligible to receive DDC products, contact us at info@defendcampaigns.org.

Q&A with Microsoft’s Tech for Social Impact Team


As the 2024 campaign cycle gears up, there has been a lot of chatter about artificial intelligence and other cybersecurity risks. 

DDC partners with Microsoft on campaign security issues, including artificial intelligence. We recently talked with Microsoft's Ashley O’Rourke and Seth Reznik who are part of Microsoft’s newly formed Campaign Success Team dedicated to helping political campaigns navigate cybersecurity challenges and the new world of AI.

DDC: Microsoft has a long history and made significant commitments to defending democracy around the world. We have worked with you since we started operations in 2019. Can you talk about the mission of Microsoft in the campaigns and election space and your latest efforts related to campaign security and the impact you hope to achieve?

MSFT: [Ashley] In November 2023, Microsoft announced a set of Election Protection Commitments to help safeguard voters, candidates and campaigns, and election authorities. These commitments are focused on supporting political campaigns, promoting a healthy information ecosystem, safeguarding electoral processes, and driving responsible AI innovation. As we approach a historic moment of global elections, we are excited about the potential of AI to empower campaigns with time-saving innovation and are committed to help organizations build the skills needed to maximize the technology available to them—all while safeguarding the integrity of sensitive data. We are also working to help protect the political ecosystem from malicious actors who wage cyberattacks and misuse AI to influence and interfere with the democratic process. While no individual, institution, or company can guarantee that the ecosystem is secure, by working together we can make meaningful progress to safeguard elections and earn public trust, while harnessing responsible AI to help campaigns find new ways to engage and reach voters, increase productivity, and accelerate their impact.

DDC: AI is being talked about as a game changer across a variety of industries and everyday life. Microsoft is an acknowledged leader, making AI tools available across many of its products and platforms. Campaigns are expected to be big beneficiaries of AI. How can they use it responsibly in their operations?

MSFT: [Seth] I think that generative AI represents an important opportunity for campaigns, for both innovation and increased productivity. We all know that campaign staff work extremely hard, and the most valuable resource on a campaign is time.
Routine tasks can be automated using AI, allowing campaign staff to focus on strategic planning and creativity. Getting over hurdles like the tyranny of the blank page, or using AI as an editor, helps with processes we are all already doing but in many cases getting stuck on.
As you said, though, responsibility and ethical practices are crucial. Transparency is key: campaigns should have policies around AI use that maintain needed disclosures and, most importantly, prioritize human oversight. Always remember that ultimately you are responsible for the output, not the AI. With thoughtful and transparent use, campaigns can benefit from AI while maintaining public trust.

DDC: As we move into 2024, there is a lot of concern about the use of generative AI to influence outcomes via mis- and disinformation. What are your concerns?

MSFT: [Ashley] This is one of the areas that our team was created to focus on. AI offers a new world of creative and productivity opportunities, but there are also associated risks. Recently, Microsoft, in a group of 20+ companies, announced an accord that aims to address the abuse of AI-generated audio, video, and images that fake or alter the appearance, voice, or actions of political candidates and other key stakeholders in a democratic election. The accord outlines eight specific commitments that cover the areas of safety by design, content provenance and watermarking, detection and response, transparency, engagement, public awareness, and resilience. We view this as a first and important step, but one that needs to go in concert with work from governments, civil society, and the public to make happen. There’s much more to read here: Meeting the moment: combating AI deepfakes in elections through today’s new tech accord - Microsoft On the Issues


DDC: Proving authenticity of a campaign’s digital content is a core challenge to combating misinformation. Can you talk about Microsoft’s role both internally and how you work with industry in helping campaigns thwart misinformation and proactive steps they can take?


MSFT: [Seth] To combat the impact that deceptive AI-generated content can have, Microsoft has worked with a cross-industry group (C2PA) including Adobe, Google, Sony, Truepic, the BBC, The New York Times, and others to develop a widely adopted standard for content provenance. Content provenance is the ability to trace the origin, history, and authenticity of digital media, such as images and videos. It seeks to answer important questions about media, like: “Where and how did the content come from? Is it real or artificial? What are its creation and modification dates?”

This is done by embedding metadata into the media files that discloses those details. This metadata can then be verified and reviewed by users. Organizations in C2PA have already begun implementing Content Credentials in their products, from the ability to automatically add credentials to images or videos created in Adobe products to Truepic adding credentials to images automatically at the time of capture.

Microsoft has launched a Content Integrity tool built for campaigns that will allow them to credential their media via a website, an app that automatically credentials media captured on a phone, and a site that lets the public check media they find online for credential details. Why is this important for campaigns? Not only will this help campaigns maintain greater control over their content and their candidate’s likeness, but it also sends a trust signal to voters that they are engaging with content from a verified source.


DDC: We are all about people and campaigns getting the most security out of the platforms they use. If I am a candidate, campaign manager, staff or volunteer on a campaign that runs on Microsoft, what are simple actions I can take to improve my account security quickly?

MSFT: [Ashley] The quickest and most important step to take is signing up with us for AccountGuard, our free cybersecurity program that not only offers an additional layer of protection and monitoring to your Microsoft hosted email domain, but also allows you to add the personal Microsoft email accounts (Outlook, Hotmail) for additional monitoring as well. Since personal accounts of high-profile users are a known attack vector, this adds another layer of security for your campaign. As part of this initiative, campaigns have access to enhanced identity protection by taking advantage of free security keys based on our partnership with Yubico and DDC. You can learn more about AccountGuard here:  https://accountguard.microsoft.com/



DDC: What Microsoft products and services can eligible campaigns access through DDC? 

MSFT: [Seth] In addition to AccountGuard and the content credentials tool, we are going to be rolling out a series of additional tools and resources for campaigns throughout this election cycle. These include:

  • M365 for Campaigns: Affordable, simplified security for political campaigns and parties, using tailored security settings. 

  • Election Security Advisors: Proactive and reactive security review services offered through our partnership with Defending Digital Campaigns (DDC).

  • AI + Cybersecurity workshops: A series of training sessions on cybersecurity, deepfakes, and responsibly innovating with AI for elections.

DDC: What's one fun application of AI that you’ve seen Microsoft users taking advantage of in the political campaign space?

Seth: One fun use case I’ve found for AI is using Copilot in Teams. Effective communication, collaboration, and time management are essential to a winning campaign, particularly since the pandemic, when campaigns began taking advantage of virtual meeting tools. To get the most out of these engagements, AI enables you to quickly catch up on what you might have missed by summarizing key highlights of the meeting and outlining next steps based on the call. You can even ask Copilot questions (e.g., “What were the action items assigned to me as we prep for this campaign rally?”).

Ashley: Using Copilot in Excel! Almost every campaign or committee staffer has needed to rush through an analysis of historical election results or turnout statistics at some point in their career. Using Copilot to answer questions about the results (e.g., “Which county had the highest winning margin?”), generate new calculated columns, and create charts and graphs that you can quickly drop into a campaign plan – all through basic prompts – makes this process so much faster.


Q&A with Marc Howard of Google’s Project Shield

We recently had the chance to pose some questions to Marc Howard, the founding engineer of Google’s Project Shield. Project Shield is Google’s free Distributed Denial of Service (DDoS) protection for the websites of high-risk organizations, including political campaigns, operated by Google Cloud and Jigsaw.

DDC considers DDoS protection of a campaign’s web assets as an essential cybersecurity component that every campaign should have in place.

DDC: Marc, let's start with some background. You are the founding engineer for Project Shield. What was the impetus for creating it? 

Marc: Project Shield was founded in 2013, when independent news sites were struggling to stay online in the face of large attacks. At the time, there were no free DDoS defense options available. We identified an opportunity to support journalists, free expression, and Google's core mission to organize the world's information and make it universally accessible and useful. The initial launch was effective and well received, and demonstrated a need for DDoS protection. Shortly after, we expanded the eligible categories to also support human rights and elections.

Over the years, we've continued to grow the product, adding machine-learning-based defenses, easier onboarding for non-technical customers, and advanced features for power users. Through it all, our core focus has been protecting vulnerable information and helping keep the public informed.

DDC: There is a consensus that campaigns, and the people who work on them, are at higher risk than many other technology users. This goes for campaign websites as well. Can you talk about the vulnerabilities of websites, what motivates bad actors to go after them, and any particular insights you have about websites in the political space?

Marc: Information is the lifeblood of the election process. Voters use the internet to access critical information like where, when, and how to vote. Voters also become informed on candidate stances, and much or all of this information comes from websites run by candidates, political organizations, nonprofits, and community groups.

DDoS attacks allow anyone in the world to take these sites offline very cheaply, often with no repercussions, at the moment when their information is most important. These attacks use infected machines around the world to send a huge surge of traffic that takes web servers offline, and are often timed to interfere with timely events like elections. In the past decade, DDoS attacks have grown significantly in both size and frequency, and Google has recently defended against some of the largest attacks seen on the internet (blog).

Even in the absence of malicious attacks, many servers that normally work fine may crash during critical moments (such as election day) when they suddenly get significantly more traffic. Whether the traffic surge is legitimate or malicious, Project Shield can help keep the website online.

This is a graph of DDoS attacks Project Shield defended during the US Midterm Elections in 2022 (full case study here). Note the long period of attacks both before and after the election. We strongly encourage sites to apply for DDoS protection well in advance of any election events. 

DDC: One area of friction we often run into when trying to get campaigns to adopt cybersecurity is that implementing cybersecurity with a team that has little or no IT or cybersecurity experience can seem complicated. You have designed Project Shield around ease of use. Can you describe the process of adopting Project Shield?

Marc: Project Shield is built from the ground up for easy adoption, without requiring technical knowledge. Prospective organizations apply by filling out a quick form telling us their website URL and the name of their organization, and we get back to eligible applicants in less than 48 hours (usually much quicker).

The user then clicks the link in their welcome email, which signs them into our dashboard, and starts creating defenses for the URL they applied with. Our system automatically gathers information about the website, including which hostnames need protection, and the user is given a chance to confirm that or make changes. We then ask the user to change the DNS settings for their website to point to Project Shield to receive protection.

Project Shield defenses do not require any input from the user. Our system uses machine learning (ML) and other advanced algorithms to learn about your traffic and put defenses in place that can help mitigate attacks and allow legitimate readers to access your site. As the website and its traffic volumes and patterns evolve, we update our defense models to track those changes and best protect your site.

It's important to note that users retain control over their DNS settings, which lets them turn on or off Project Shield at any time. We can only protect traffic that is pointing to our system, so that last step of onboarding is very important! 
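Since protection only applies to traffic whose DNS points at Project Shield, it can be useful to verify the cutover programmatically. A minimal sketch, assuming a hypothetical protected IP range (the real addresses come from your onboarding instructions, not from this example):

```python
import ipaddress
import socket

# Hypothetical range for illustration only (TEST-NET-3); use the
# addresses from your onboarding instructions in practice.
PROTECTED_RANGE = ipaddress.ip_network("203.0.113.0/24")

def is_protected(ip: str) -> bool:
    """True if an address falls inside the protection service's range."""
    return ipaddress.ip_address(ip) in PROTECTED_RANGE

def site_is_protected(hostname: str) -> bool:
    """Resolve the hostname and check every returned address."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return all(is_protected(info[4][0]) for info in infos)

assert is_protected("203.0.113.10")       # inside the example range
assert not is_protected("198.51.100.7")   # outside: still on old hosting
```

A check like this can catch a half-finished cutover, where some DNS records (say, a `www` subdomain) still point at the unprotected origin server.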

DDC: While the core of Project Shield is the DDoS Protection are there other benefits or optional tools that come with signing up for the product?  What are the limits?

Marc: Project Shield is built on Google Cloud, and offers users access to the global Google network, which allows high-speed delivery of their website to readers anywhere in the world. By allowing our system to cache a copy of the website, it can be served to readers much quicker than by a standard hosting provider. This also allows us to keep serving the website even if your hosting provider goes down. 

In addition to our automatic defenses, we also allow users to manually enter IP allow and deny lists, blocking some clients entirely or allowing trusted clients to bypass our defenses. We also offer seamless integration with reCAPTCHA Enterprise, allowing users to engage reCAPTCHA defenses for their whole site with a simple switch or API call.

We provide traffic analysis graphs to allow users to see all of their traffic data in one place, and examine trends over time. Since all the traffic that is allowed, served from cache, or denied goes through Project Shield, we provide these graphs to give users the most complete picture of their site traffic.

Project Shield is specifically designed for ease of use, and some customers might need more stringent guardrails for their protections. For those customers, we encourage use of the Google Cloud networking products that Project Shield is built on: Cloud Armor, Cloud CDN, and Google Cloud Load Balancing.

DDC:  When a campaign signs up for Project Shield, will there be any changes to their website or user experience for their visitors?  What kind of data, if any, about their website will be shared with Google?

Marc: Project Shield site administrators and their readers should not see any change in their website content.

We encourage Project Shield site administrators to make their websites as cacheable as possible, to get the most out of Project Shield's capabilities. When using caching, site administrators may see small delays in new content rolling out to readers. We offer an easy way to push new content live immediately, either with a dashboard button or with an API. We also encourage site administrators to consider setting a low TTL (time-to-live) on their cache entries, which lets them still utilize our caching system while reducing the need for manual content refreshes.

Project Shield analyzes traffic to identify attackers and attack patterns and improve defenses for all our products. Project Shield does not share any data about website traffic with Google for any other purpose (including marketing). 

DDC: If a campaign website protected by Project Shield comes under attack, what kind of support is available?

Marc: Project Shield offers a wealth of help articles and FAQs to help site administrators get the most out of the product. We also offer email support through our support portal, where our support specialists and engineers can help you with any questions or concerns.

If your website is under attack and struggling to stay online, our first advice is to turn on the reCAPTCHA defense on the Project Shield dashboard, and then let us know. Our engineers can analyze your automated defenses, and potentially make suggestions that will allow you to turn reCAPTCHA off in the future.

DDC: You mentioned some of the optional tools available with Project Shield, what’s the one you think everyone should try out?

Marc: Definitely caching! Hosting servers set a special header called a "cache-control" header that tells Project Shield and other services whether they can store a copy of the site. For most resources on most websites, you want this on. But it's not always set properly by hosting providers, so that's worth checking. We offer a graph to show your cache hit rate, which represents the percentage of requests that we could successfully serve from cache. You want that number as high as possible.
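The cache-control header Marc describes is an ordinary HTTP response header, so checking what your hosting provider actually sends is straightforward. A minimal sketch of inspecting it (the example header values are illustrative, not what any particular host emits):

```python
def parse_cache_control(header: str) -> dict:
    """Parse a Cache-Control header into a directive -> value map."""
    directives = {}
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        # Valueless directives (e.g. "public") map to True.
        directives[name.lower()] = value if value else True
    return directives

def is_cacheable(header: str) -> bool:
    """Rough check: may a shared cache store this response?"""
    d = parse_cache_control(header)
    return "no-store" not in d and "private" not in d

assert parse_cache_control("public, max-age=3600")["max-age"] == "3600"
assert is_cacheable("public, max-age=3600")
assert not is_cacheable("private, no-store")
```

In practice you would fetch a page from your site (e.g. with `curl -I`) and run its `Cache-Control` value through a check like this; `no-store` or `private` on static assets is usually the misconfiguration worth fixing.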

I also advise power users to try out our APIs, especially for cache invalidation and reCAPTCHA. You can set up a simple script on your hosting server to invalidate the Project Shield cache every time you post new content. You can also instruct your server to turn on reCAPTCHA if load on the server grows too high (such as during an attack), and turn it off when things have returned to normal. 
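The invalidation script Marc suggests could be sketched as follows. The endpoint URL, token, and payload shape below are placeholders invented for illustration; Project Shield's actual API details come from its documentation and your dashboard, so treat this only as the general pattern of an authenticated POST fired after publishing new content.

```python
import json
import urllib.request

# Placeholder endpoint and token: consult Project Shield's own API
# documentation for the real URL, auth scheme, and payload format.
API_URL = "https://example.com/project-shield/cache/invalidate"
API_TOKEN = "replace-with-your-token"

def build_invalidation_request(paths: list) -> urllib.request.Request:
    """Construct (but do not send) a cache-invalidation request."""
    body = json.dumps({"paths": paths}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )

# After publishing, send the request with urllib.request.urlopen(req).
req = build_invalidation_request(["/news/latest.html"])
assert req.get_method() == "POST"
assert json.loads(req.data)["paths"] == ["/news/latest.html"]
```

Hooking a call like this into your CMS's "publish" action keeps the cache fresh without anyone having to press the dashboard button.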

DDC: What’s the best link to learn more about Project Shield?

Marc: Check out our help center at: https://support.projectshield.withgoogle.com/

To get started with Project Shield, and for testimonials from other users, go to: g.co/shield

New Research: Voters Concerned about AI and the Cybersecurity of Campaigns

DDC and Yubico released the findings of a survey investigating voter attitudes toward cybersecurity and AI in the upcoming elections. Conducted by OnePoll with 2,000 registered voters in the US, the results underscore significant concerns regarding AI-generated content and cybersecurity practices within political campaigns. Key insights reveal that 78% of respondents are apprehensive about AI-generated content impersonating political candidates, while 85% lack confidence in campaign data protection. Notably, 42% of donors indicated their likelihood of donating would change if a campaign were hacked. DDC offers free cybersecurity tools to eligible campaigns, emphasizing the importance of responsible AI usage and user vigilance in combating misinformation. This study highlights the urgent need for enhanced cybersecurity measures and voter awareness to safeguard the integrity of the electoral process.

Announcement: Valimail has Partnered with DDC to Offer Free Email Security to Campaigns

DDC recently announced a new vendor partnership with Valimail to help campaigns secure their outbound email and comply with security requirements for sending email.

Bad actors can spoof or impersonate a campaign’s email traffic to phish supporters, steal money, or otherwise influence them. DDC is excited to offer Valimail for campaigns because it blocks spoofing and impersonation attacks by ensuring that mailbox providers like Google and Yahoo know email from your campaign is legitimate.

We had an opportunity to pose some questions to Seth Blank, Valimail’s Chief Technology Officer, about how the product can secure email, some additional benefits of the product, and their commitment to protecting democracy.

DDC: First off, many thanks for your support of DDC and the campaigns we serve. What is the company’s motivation to work with DDC?

Seth: Protecting elections, campaigns, and officials has been a passion of Valimail’s since our inception, and we’ve been offering pro bono services since 2018. Unfortunately, this has been a difficult offer for campaigns to accept. Partnering with DDC, under their FEC guidance, allows us to make the difference we’ve always wanted to make. We couldn’t be more excited to work with DDC to protect campaigns from email impersonation and to do so in a simple, automated, and rapid way.

DDC:  The drumbeat of information about the candidate, their policy positions, and fundraising via email is critical for campaigns to maintain. What are the security risks and concerns that campaigns face when it comes to email systems? 

Seth: When a threat actor can use a candidate’s or campaign’s exact email address to send mail, they can wreak havoc. It doesn’t take much to impersonate the candidate, spread disinformation, steal donations, or get fraudulent access to campaigns' systems or databases. 

DDC: How does Valimail address those cybersecurity concerns and risks? 

Seth: Impersonating email is easy, inexpensive, and legal. Thankfully, there are open standards (SPF, DKIM, and DMARC) that prevent such email impersonation. Unfortunately, these standards can be incredibly difficult to manage and maintain for organizations of all sizes. Success rates for implementation are awful (less than 15%). Valimail provides an automation suite that makes implementing these standards as easy as pressing a button. Campaigns using Valimail get continuous protection without the hassle or failures.
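As a rough sketch of what these standards look like under the hood (the domain and policy values below are illustrative placeholders, not Valimail's configuration), SPF and DMARC are published as DNS TXT records on the sending domain:

```
; SPF: which servers may send mail for this domain
yourcampaign.example.        TXT  "v=spf1 include:_spf.google.com ~all"

; DMARC: what receivers should do with mail that fails SPF/DKIM checks
_dmarc.yourcampaign.example. TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@yourcampaign.example"
```

DKIM adds a further TXT record holding a public signing key. A domain is at Enforcement when its DMARC policy is `p=quarantine` or `p=reject`, which is what actually blocks impersonated mail; you can look up any domain's current policy with `dig +short TXT _dmarc.yourdomain.example`.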

DDC: Let’s say an organization already has DMARC in place for their promotional emails. What benefit does Valimail provide them?

Seth: Just passing DMARC is no longer enough. Valimail ensures campaigns get continuous protection at scale. Especially under Google and Yahoo’s new rules, which continue to evolve, Valimail keeps campaigns ahead of the curve, ensuring they have no gaps against the new rules and that delivery of critical campaign messages is not at risk.

DDC: One of the things that impressed us about Valimail was its ease of use. We had explored other potential ways for campaigns to get compliant and protect their domains, but they were all too complicated for organizations without IT or cybersecurity support. Can you talk about Valimail’s process for getting the product up and running and your philosophy about ease of use?

Seth: Valimail was built on the premise that DMARC did not need to be a manual, error-prone, and time-consuming project. We invented hosted email authentication and automation (and have the patents to prove it) so that customers could use a product and press buttons instead of engaging in painful consulting engagements. We are constantly raising the bar, creating and improving automation to deliver outsized results for our customers every day, including DMARC-as-a-Service and guided workflows that make onboarding and getting to Enforcement painless and fast. And if you need additional help beyond the automation, we have the best onboarding support team in the world. DMARC is our business; it doesn’t have to be yours. You have a campaign to run.

DDC: How do I find out if we already have DMARC?

Seth: Use Valimail’s domain checker! We’ll instantly tell you if your domains have DMARC and if there’s immediate work to do. By sending reports to our free product, Monitor, you’ll be able to assess if you have any gaps against the new Google and Yahoo requirements, so you can take appropriate action.

To get started with Valimail for your campaign, email info@defendcampaigns.org.

Read more about email security in DDC’s Knowledge Base.

A New Year’s Resolution You Can Keep: Cyber-Secure Your Campaign

Many of us watched the ball drop to usher in the New Year. Now is the time to get the ball rolling toward better cybersecurity in 2024. 

As we move from 2023 to 2024, many people will make New Year’s resolutions to eat healthier, exercise more, or spend more time with family and friends — all admirable goals.

At Defending Digital Campaigns (DDC), we want you to add making your campaign more cyber-secure in 2024 to your list of resolutions. It’s easy to get started, and you won’t have to count calories or hit the gym four times a week! 

Implementing cybersecurity early and maintaining it through Election Day is your best defense.

Many factors make the 2024 campaign cycle higher risk. Across the landscape of cyber risk, we see emerging technology like AI, geopolitical events, and high-stakes elections in the U.S., including a Presidential contest, U.S. House and Senate campaigns that will determine the balance of power, and thousands of state legislative and down-ballot races. This confluence of events provides an expansive attack surface and motivation for nation-states, hacktivists, and cybercriminals to go after campaigns and political organizations.

When you make personal New Year’s resolutions, you usually achieve them on your own. When you resolve to make your campaign more cyber-secure, you have a partner in DDC to achieve success. 

We are ready to help you get the free products you need and ensure they are working correctly with our onboarding and support services. With our help and just a little time on your part, we can help you implement the following resolutions.

To be more cyber-secure in 2024, your campaign will:

  • Lock down logins: by ordering free security keys from DDC and using them on key campaign and personal accounts like Google, Microsoft, Facebook, X, and others to implement the strongest authentication possible and protect against account compromise.

  • Protect your website: by taking advantage of free Cloudflare for Campaigns, protecting your valuable public presence from being taken down or defaced.

  • Sign up campaign staff and volunteers for Facebook Protect: by applying for this advanced protection through DDC, you can better protect your campaign’s Facebook accounts.

  • Fast-track your way to stronger Google Workspace security: by participating in a new beta effort in which enhanced security features are configured automatically.

  • Get help from DDC: by setting up a call with DDC’s onboarding team to review your current practices, learn how to order free products, and see how DDC can help get your cybersecurity up and running.

If you are a federal campaign, just email info@defendcampaigns.org with the subject: help make my campaign more cyber-secure.