
It’s ‘deepfake’ season. Here’s how to spot AI in political ads.

A Georgia Tech professor explains how to discern deceptive advertising.
(Photo Illustration: Philip Robibero / AJC | Source: Getty)

It’s always tricky to understand the full context of quotes and sound bites used in political advertisements.

But with the onslaught of artificial intelligence — when campaigns can make opponents appear to say things they never said or do things they never did — experts say it’s even more important for voters to stay vigilant.

Review sources with skepticism, and ask whether a particular image or video confirms preexisting — potentially controversial — beliefs, they advise.

Earlier this month, the senatorial campaign of Republican U.S. Rep. Mike Collins created a manipulated video of U.S. Sen. Jon Ossoff that appeared to show the Democrat voicing disdain for farmers and recipients of federal food assistance.

Ossoff and state Democrats condemned the video, but the Collins campaign defended it as an opportunity for technological innovation.

However, Republican elected officials, including President Donald Trump, have also been victims of deepfakes. Just recently, Trump was falsely depicted in a suggestive video with former President Bill Clinton.

Brian Magerko, a professor of digital media at Georgia Tech, has studied AI for decades. He spoke with The Atlanta Journal-Constitution about how to spot a fake ad and protect yourself against political deception.

This interview has been edited for length and clarity.

AJC: What are signs that a political advertisement may be fake?

Magerko: If something seems a little too good to be true, and especially if it confirms your beliefs — and this goes across the political spectrum — be skeptical of it.

What clues are there that a video was made using AI?

Stop and ask, does this look normal? (In the Ossoff deepfake,) he’s just sort of standing there, his head barely moving side to side.

We can’t quite simulate or generate naturalistic, believable human behavior yet. There are just little things that we pick up, from micro-expressions in the face and the eyeline to postural movements and how your body moves when you breathe. A lot of these details get washed out in these algorithms.

What was your impression of the video from a technical standpoint?

It looks like it was generated by AI if you’re paying attention in the slightest to what’s in front of you. That doesn’t look like how a human being stands and talks.

What is the first instance you recall seeing AI in politics?

It was before Trump won his second presidency, and there were a lot of court cases against him. There was a story about how he was getting his mugshot taken, but there’s also this sort of counternarrative, generative AI thing that showed him bravely fighting off FBI agents. It was the first time I could vividly remember seeing something out there in the zeitgeist that was clearly generated by AI to serve some narrative.

How do deepfakes differ from spoofs or parodies, like those done by “Saturday Night Live”?

Parody requires us to be in on it. With “Saturday Night Live,” they do that. Even if they (show) something that looks like a Trump ad, it’s within the context of a TV show.

With parody, we get the greater social context, because you’re presenting in a way where there’s a little wink. We know this isn’t Donald Trump. We know this isn’t Jon Ossoff. But there is no wink in (Collins’) ad. We’re not in on it. It’s presented as authentic. And that’s a problem.

Which states are taking smart approaches to regulating AI?

There are a couple of states that tend to be a little bit more proactive. California basically has a law that says this Jon Ossoff video would be illegal — specifically, using AI to generate political advertisements. They’ve talked about, legally, what it means to be a digital replica, probably driven by the entertainment business over there. (For example,) Matthew McConaughey just signed over the rights for AI companies to make models of his voice. They have a legal structure for how to do this.

Do you think Georgia law has caught up with where AI is today?

Not in Georgia. At a state level, we are way behind. But this is going to be a national issue, just like libel is a national issue.

What would you like to see from Georgia law?

There are countries that are trying to turn that tide and say, “Look, you have an inherent right to yourself in the real as well as in the digital.” I’d love to see our state and our country go more in that direction, like a Bill of Rights (about) our data.

What are the political consequences of deploying deepfakes?

I’m imagining to some it’ll be even more of a turnoff for (Collins). Like, “Wow, this was really low. This isn’t cool. I’m not going to vote for that guy.”

Some people might like that he’s fighting dirty like this because he’ll do whatever it takes. There’s going to be that population, too. How it shakes out in terms of reception societally, I think, is what’s going to dictate (whether) we are going to be OK with AI.

About the Author

Michelle Baruchman is a politics news and enterprise reporter covering the Georgia House of Representatives and statewide issues.
