Welcome to the Political Insider’s four-part series, The Big Idea, exploring which big ideas will change Georgia politics in the future. Today, it’s AI, or artificial intelligence. Next week, we’ll look at how a manufacturing boom in parts of rural Georgia could remake the politics there. If you’ve got a big idea, send it to Patricia.Murphy@ajc.com.
You may have seen reporting recently on AI, the rapidly expanding technology that learns and improves as it gets more information, data, and experience.
Its job is to learn on the job, and its potential to remake the political fields of speechwriting, polling, data collection, and analysis has plenty of people asking how AI could change politics.
But the truth is AI is already being used by candidates and campaigns across the country, including in Georgia, to supercharge their work, and to come up with ways to influence voters that nobody has thought of yet.
From the speeches voters hear to the ads on their computers to the flyers in their mailboxes and the lists they’re on, Georgians may have already interacted with AI technology without even knowing it.
A recent attack ad from the Republican National Committee featured a post-apocalyptic image of San Francisco, with a crush of people crowding the street and moving toward the Golden Gate Bridge in the distance.
“Who’s in charge here? It feels like the train is coming off the tracks,” a voice warns as the video closes.
But a close look at the Golden Gate image and others in the ad shows that something’s off. The road toward the bridge does not really exist and the angles on the Golden Gate don’t line up. A small disclaimer at the end reads, “Built entirely with AI imagery.”
Other images that have surfaced online have shown former President Donald Trump getting arrested or running from police.
The Washington Post and New York Times have become so attuned to the chance of featuring AI-generated “deep fakes” that they now have entire digital staffs trained to spot maliciously manipulated images and verify real ones.
Those are examples of the dark uses of AI in politics, so far.
But data firms, media consultants, and ad makers are all using AI in their fields to maximize the effectiveness of the work they do for their clients, too.
Will Long is a Harvard-trained data scientist who, in his dorm room (naturally), came up with the AI modeling tool that became the starting point for his political data firm, Numinar Analytics, which calls itself “the world’s first artificially intelligent political campaign data platform.” State Rep. Houston Gaines, R-Athens, is a client.
“It’s essentially like a Moneyball for politics,” Long said. “Where you can solve every district in terms of what gives you the best possible chance of winning, in terms of who you talk to, about what, and where are you spending your money. And that’s all powered by A.I.”
Long said he gives his AI tool access to data on voter preferences: candidate preference, the issues voters care about most, and how likely they are to turn out. “Then it can learn from that data it’s collecting from the field, match that against the voter file data that we have, and then come up with an optimized, data-driven strategy.”
A Democratic data firm, MissionWired, advertises its “Advantage AI digital modeling” and includes Raphael Warnock’s campaign on its client list.
When I asked veteran consultant Rick Dent if he would ever use AI in the future, he said he’s already using it.
Dent described AI programs like ChatGPT, which generates written content, as another research tool he can use to help his clients.
“For the time being, I think AI is only as good as the human using the tool,” he said. “If used properly, it should help deliver to your clients excellent work. ... But I’m also sure there will be those lazy consultants passing off 100% AI-generated content as their own, and I do have an ethical problem with that.”
Dent has also started sending out an AI newsletter for clients to keep them up to speed on applications of the rapidly changing technology.
The power and availability of AI tools are what make AI so exciting to political professionals, but they are also what worry them the most.
“It’s a new kind of power, and power can be used for good or bad,” said Mark DiMassimo, a New York-based ad maker. “A lot of it depends on who’s using it. Certainly, intention is a massive part of it, but so is wisdom and often wisdom comes out of really negative experience.”
The negative uses for AI are not hard to imagine.
DiMassimo said “deep fake” videos and manipulated audio of candidates have the potential to be a more powerful source of disinformation than any false story or conspiracy from 2020.
“This is way more powerful stuff,” he said. “This is crack versus cannabis.”
Long warned that candidates should take steps to use AI only to supplement human contact and input, not replace it. And he said disclosure of AI-generated content is important.
“Right now, campaigns have to include disclaimers that tell voters who paid for the ad. I can easily envision a scenario where, if this becomes a problem, you could legislate that there needs to be some sort of disclaimer that something has been AI-generated to flag the use of AI.”
Of even more concern for some campaign staffers is the worry that they could be replaced by AI tools that could not only do their job, but someday do it better than they could.
What’s stopping a candidate from using ChatGPT to write their speeches and press releases instead of a young staff member?
“Let’s just say I’m glad I’m near the end of my career and don’t have to worry about it,” Dent said.