As winter gives way to spring, primaries are underway, campaigns are ramping up activities, and unfortunately, bad actors are starting to set their sights on U.S. campaigns with the power of AI.
Phishing remains a primary attack vector for bad actors seeking to compromise people and accounts. Of particular concern heading into election season is how rapidly AI is accelerating the targeting, personalization, and effectiveness of phishing attacks.
Furthermore, campaigns are fast-paced, winning-focused, well-funded, and understaffed—an ideal environment for phishers to ply their trade.
According to Microsoft’s 2025 Digital Defense Report, AI-automated phishing emails have achieved an alarming 54% click-through rate, 4.5 times the 12% rate of traditional phishing attempts.
This election season, campaigns need to understand how AI is being used in phishing and take simple cybersecurity precautions to prevent harm.
How has AI aided and abetted phishers, and what should everyone be on the lookout for?
A few decades into the internet, most computer users have developed a healthy skepticism around inbound communications and are fairly well defended against common phishing practices. They know that bad actors gain access by pressuring people into immediate action: sharing personal information, making a payment, clicking a link, or opening an attachment. They also understand that phishing can happen via email, text, voicemail, and social media.
The good news is that anti-phishing technology has also improved. The major platforms, Google and Microsoft, screen out a high percentage of phishing attempts, and many spammy or phishing texts get flagged by mobile carriers as well.
The bad news is that AI greatly enhances the ability of bad actors to create targeted attacks—also known as spear phishing—at scale. Here are some of the ways AI can facilitate phishing attempts:
Mimicking connection and creating familiarity: Using publicly available data from professional networks, social media, and other sources, phishers can create approaches that might include specific connections to you so you let down your guard, such as “we are both alumni of the same college,” “I grew up in the same town as you,” or “I read your blog and totally agree with your policy on…”
Accuracy in communication style: By training a model on publicly available information from websites, videos, or other sources, phishers can create authentic-sounding communications that mimic a vendor, a person, or an organization. In the world of audio, they can recreate the voice of a person and deliver a message via voicemail that sounds like the real person.
Using data to accurately target: Campaigns leave a very public paper trail. For example, FEC filings are publicly available and contain information that bad actors can analyze to make their attacks seem more legitimate. Using expenditure reports, they can see the vendors you use and how much you tend to spend. Then they can create a fake invoice request from a vendor you know in an amount that falls within a range you normally spend.
As time goes on and AI becomes even more robust, there will likely be other creative and nefarious ways that bad actors develop to phish people.
How do we defend against phishing in the world of AI?
Since the goal of phishers is to get you to share personal information, with a focus on compromising accounts, step number one is using the highest form of account protection available. Fortunately, this is easy and free: use a passkey.
Passkeys are digital, encrypted credentials that you enable on critical accounts like Gmail and Microsoft 365. A passkey resides on your device and is used during the login process as a factor in multi-factor authentication to protect your accounts. You can further strengthen your protections by using programs from Microsoft (AccountGuard) and Google (Advanced Protection Program), specifically designed to protect campaigns and other high-risk users and organizations.
Unlike passwords, passkeys can’t be copied or stolen, making them virtually unphishable.
Passkeys work in concert with the other factors you use. For example, suppose you have added a passkey on your phone for your email account. To access your email, you unlock your phone with a fingerprint or face scan (factor one), open your email account, which recognizes the request as coming from a device it knows (factor two, since phones have unique identifiers), and the account confirms access via the passkey (factor three).
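For readers curious about the mechanics, here is a toy sketch (not a real WebAuthn implementation) of the property that makes passkeys phishing-resistant: the credential never leaves the device, and every login response is bound to the exact website the browser is talking to, so a lookalike phishing domain cannot produce a valid response. The class and domain names below are illustrative assumptions, and an HMAC stands in for the real public-key signature.

```python
# Toy illustration of passkey origin binding (not real WebAuthn).
# The device-bound secret stands in for a passkey's private key;
# HMAC stands in for the public-key signature a real passkey produces.
import hashlib
import hmac
import secrets

class PasskeyDevice:
    """Models a credential that never leaves the user's device."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)  # stays on the device

    def sign_challenge(self, origin: str, challenge: bytes) -> bytes:
        # The response covers the origin the browser is actually on,
        # so it is only valid for that exact site.
        return hmac.new(self._secret, origin.encode() + challenge,
                        hashlib.sha256).digest()

class RelyingParty:
    """Models the legitimate website verifying a login attempt.
    (In this toy model it reuses the device to compute the expected
    response; a real site would verify with the stored public key.)"""
    def __init__(self, origin: str, device: PasskeyDevice):
        self.origin = origin
        self._device = device

    def verify(self, origin_seen_by_browser: str) -> bool:
        challenge = secrets.token_bytes(16)
        response = self._device.sign_challenge(origin_seen_by_browser, challenge)
        expected = self._device.sign_challenge(self.origin, challenge)
        return hmac.compare_digest(response, expected)

device = PasskeyDevice()
mail = RelyingParty("accounts.example.com", device)

print(mail.verify("accounts.example.com"))   # True: the real site
print(mail.verify("accounts.examp1e.com"))   # False: lookalike phishing site
```

The point of the sketch: even if a user is fully fooled by a convincing AI-written lure, the passkey response computed for the phishing domain simply will not validate on the real site, which is why there is no secret for the user to accidentally hand over.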
Of course, awareness matters as well. Your team should understand that phishing is becoming more sophisticated and harder to recognize, and that people associated with campaigns are at high risk. People are the last firewall in protecting your campaign from phishing. Encourage your team to maintain a healthy dose of suspicion, double-check any requests involving money, and stay alert when providing personal or critical information.
You can protect your accounts with a passkey using these links on Gmail and Microsoft within minutes.
Still not sure how to do it? Check out our how-to videos:
How to Enable Passkey on Gmail: https://www.youtube.com/watch?v=Fgf9bt6xoIc
How to Enable Passkey on Microsoft: https://www.youtube.com/watch?v=iAaTn4CFM8U
You can learn more at defendcampaigns.org.
