Deepfakes Are Everywhere, and Many Campaigns Aren’t Prepared

Audio and video manipulated by AI could mislead voters and hurt candidates. Many lawmakers’ campaigns have no plans in case they’re targeted.

“It’s not only a deep concern in our politics but at large,” Rep. Sarah McBride said of manipulated video, audio and images. Aaron Schwartz/Sipa USA via AP

Deepfakes are getting easier to make and harder to identify. And while lawmakers are worried about the election implications of AI impersonations, many campaigns have no plans in place for how to deal with them.

“It’s a deep concern, no pun intended,” Rep. Sarah McBride told NOTUS about deepfakes, which range from digital impersonations that can convincingly imitate someone’s voice to videos depicting someone doing or saying something they did not. “And it’s not only a deep concern in our politics but at large — the technology is incredibly deceiving.”

In a fragmented media ecosystem that makes it harder for many voters to access verified information, political candidates are particularly vulnerable to manipulated media — but few campaigns seem to be taking specific steps to mitigate the risks of digital impersonation. In conversations with over a dozen lawmakers from both parties, members told NOTUS that they don’t have emergency plans or protocols to deal with election-related AI deepfakes.

In the last year, Congress has moved to address harmful uses of deepfakes in nonconsensual pornography, but lawmakers have yet to regulate the use of synthetic media — content made or altered with AI — in political campaigns.

Rep. Jay Obernolte, a California Republican who has been involved in AI discussions in Congress, told NOTUS that a surge in AI deepfakes around the 2026 midterm elections is “a very real possibility.”

“It’s been a concern for a lot of us,” Obernolte said. “Americans are still not nearly educated enough about the power of generative AI and about the need to question anything that they see on the internet before they determine whether or not this is true.”

McBride and Obernolte said that they’re not aware of their campaigns developing a protocol to prevent or mitigate deepfake attacks. The same was true of other lawmakers NOTUS spoke with.

But nearly all of them said they’re worried about the implications.

“We know how real it can look. And pretty devastating things could be said with your face and your voice,” Democratic Rep. Ilhan Omar told NOTUS. “It is a conversation that members are having on both sides of the aisle.”

Republican Rep. Tim Burchett told NOTUS that his office has already alerted him to some deepfakes of him on the internet, although none of them had malicious intent. When asked by NOTUS if his office has developed a protocol for that scenario, Burchett said, “None whatsoever. So I suspect we’ll get popped on something.”

Earlier this year, Sen. Amy Klobuchar was targeted by a deepfake video that used vulgar language to criticize a contentious ad featuring actress Sydney Sweeney. After the digitally manipulated video went viral on social media, Klobuchar wrote an op-ed piece for The New York Times warning about the technology.

“We need rules of the road for the use of AI-generated content in elections and political campaigns so that people know if what they are seeing is real or has been created by AI,” Klobuchar said in a statement.

The statement, which called for banning deepfakes of political candidates, did not address a direct question from NOTUS asking if her office had developed any emergency protocols for viral deepfakes since Klobuchar was targeted.

Other high-profile politicians have been similarly targeted by malicious actors using AI. In July, congressional lawmakers were the target of phishing calls that used deepfake technology to impersonate Secretary of State Marco Rubio. Last year, voters in New Hampshire received deepfake robocalls impersonating former President Joe Biden, intended to discourage voting ahead of the primary election.

“AI technology is making these images and videos much better, so it’s definitely not as easy for private citizens to be able to detect deepfakes on their own,” said Lauryn Williams, a researcher at the Center for Strategic and International Studies.

Williams said that earlier deepfake technology tended to produce inconsistencies in hand movement and skin tone that could tip voters off to digital manipulation. But now “it’s getting much, much harder” to detect deepfakes, particularly in the fast-paced environment of algorithmic social media, Williams said.

Deepfakes and AI-manipulated media are increasingly becoming a communications challenge for political campaigns rather than a technical one, said Matt Hodges, a strategist at DigiDems, a political action committee that helps Democratic campaigns find tech-specialized workers.

As generative AI tools get easier to use, the likelihood of a bad actor using them for deceptive ends goes up, Hodges said. Campaigns need to be proactive in trying to “inoculate voters against manipulated media,” he said.

“From the communications angle, the best defense is a strong offense,” Hodges said. “If a candidate has a deep, authentic and consistent social presence in their own voice, we believe that voters develop an ear for what’s real.”

Highly deceptive deepfakes can spread fast on social media, making it a challenge to contain them. Campaigns can mitigate the damage of manipulated media attacks by creating emergency response channels that can quickly and effectively reach their voters with validated media, Hodges said.

“The worst thing that a campaign can have is almost no digital presence, which would allow any new fake media that appears to claim that voice for them,” he said.

He also said campaigns should establish relationships with social media platforms, almost all of which have policies against the deceptive use of AI in a political context. Most platforms have recently deemphasized content moderation as a priority, but candidates may have better success at raising complaints if they already have relationships with social media companies and employ personnel familiar with the industry, Hodges said.

Legislating against deepfakes at a federal level has proven difficult, given free speech concerns. In the absence of federal regulations, lawmakers in all 50 state legislatures have introduced bills trying to regulate the use of deepfakes and synthetic media in political campaigns.

These laws have had various levels of success in the courts. But as of this year, 26 states have active laws regulating the use of AI deepfakes and AI-generated media in political campaigns, according to the American Association of Political Consultants.

Some political consultants worry that state-level solutions might be doing more harm than good when it comes to responding to malicious uses of synthetic media in politics.

“The truth is that they don’t know if these laws work as the lawmakers who wrote them intend them to work,” Julie Sweet, director of advocacy for the American Association of Political Consultants, told NOTUS. She said this legislative patchwork amounts to a “system of confusion” that allows malicious actors to experiment with deceptive uses of AI while discouraging more “legitimate” uses.

Each state defines what amounts to an illegal use of synthetic media differently and has different disclosure requirements for political material, according to the American Association of Political Consultants. Some of these laws require campaigns to add relatively vague disclaimers, using words like “manipulated” to flag content made with AI, even when the use of synthetic media does not explicitly misrepresent candidates or political opponents.

“We still don’t know what kind of effect these disclaimers are having on the audiences,” Sweet said. “The responsible actors that are governed by our code of ethics are labeling their content, and the bad and nefarious actors just aren’t.”

The persuasiveness of political deepfakes and synthetic media is still up for debate, said Tim Harper, senior policy analyst at the Center for Democracy and Technology, a nonprofit policy advocacy group supporting digital rights.

He said a lot of factors would have to coincide for a single piece of synthetic media to disrupt the outcome of an election. Still, the increasing quality and accessibility of generative AI software allow deceptive political actors who already engage in misinformation and disinformation to put out more content much faster, Harper said.

“Traditional misinformation and disinformation campaigns that use much less sophisticated measures have also historically been able to be persuasive if they are effectively deployed at the right moment,” Harper said. “I don’t think there’s a reason for us to believe that AI tools will be any less persuasive if used correctly.”