
Have you thought about how AI will change cybersecurity? You should.

Most Atlanta companies are now using AI, but so are hackers, scammers and cybercriminals.
(Illustration: Marcie LaCerte for the AJC)

Artificial intelligence is being presented as the next technological revolution poised to permeate most aspects of our lives at home and work.

Some Atlantans and companies are dabbling in AI, while others have fully embraced it. But with any fast-changing technology promising opportunity, there’s a dark side.

Generative AI as a cybersecurity tool cuts both ways, Atlanta-based security firms and the FBI told The Atlanta Journal-Constitution. The technology can be wielded as a protective shield or an offensive sword, requiring vigilance and preparation to stay ahead of bad actors.

“If you stay still, you start to lag behind,” said Flavio Villanustre, senior vice president and chief information security officer for Alpharetta-based LexisNexis Risk Solutions. “It is building a better lock, while the attackers find a better way to bypass that lock.”

The risks of making a mistake have only gotten greater as AI becomes more sophisticated and widespread, said Joe Zadik, FBI cybersecurity special agent for the Southeast region.

AI can generate phishing emails without typos or wonky grammar. It can create deepfake audio or video impersonating public figures, such as a CEO. Hackers can infiltrate AI systems to get past safety guardrails, leak sensitive data or mess up responses to prompts.

It used to take a skilled hacker to truly threaten a company, but AI has lowered that barrier to entry, Zadik said.

“The idea of having a sophisticated hands-on-keyboard actor who has to fully understand various coding languages, that time is out the door,” he said. “Now, we can have lower-level cybercriminals who don’t have the technical chops (but are) able to pull off sophisticated malware attacks or even ransomware attacks.”

‘Out in the wild’

The internet radically changed how companies, organizations and governments had to approach protecting their information.

Password-protected accounts became commonplace, but initial solutions didn’t remain secure for long, said Blake Brannon, chief product and strategy officer at Atlanta-based AI governance and compliance startup OneTrust.

A compromised account could ripple through an organization and its clients, and that exposure exponentially increased with the innovation of cloud computing. Instead of all sensitive data being stored internally, that information was spread among third-party applications, which created more potential weak points for cybercriminals or security lapses.

“Your company data is no longer inside the company network,” Brannon said. “That perimeter that you built your whole security program around … now it’s out in the wild using all kinds of cloud apps and services and mobile devices.”

A view of the building that houses software firm OneTrust's new global headquarters in Atlanta on Thursday, May 22, 2025. (Arvin Temkar/AJC)

Incorporating AI widens the risk even further, and it has done so quickly.

Generative AI, in which large language models trained on gigantic data sets generate unique responses to prompts, only became widespread in 2023 with the launch of OpenAI’s GPT-4 model in ChatGPT.

By the end of 2023, about a third of all organizations used generative AI in some capacity, according to a recent study by management consulting giant McKinsey & Co. By early November 2025, that figure had increased to nearly 80%.

But not every organization uses AI in the same way. Only 7% of study respondents who said they use it have fully scaled its deployment across the entire organization. More than 60% said they’re either experimenting with or piloting AI.

“As an organization, large or small, you have to figure that out and navigate it,” Brannon said. “Getting it wrong can be pretty impactful.”

Best practices

The FBI is among the organizations using AI, leveraging the technology to analyze mammoth amounts of video footage or data for law enforcement purposes.

Zadik said the power and risks of AI are clear, requiring new tactics to safely use it while staving off bad actors.

Using a large corporation as an example, he said employees need to think critically if they receive a phone call from an executive asking to initiate a large financial transaction. Audio and video are no longer trustworthy sources, so verification is vital.

“That’s a bad day for the company if (they receive an AI-powered scam call) and if companies are not preparing their employees to think critically,” Zadik said. “Where is this phone call coming from? Is this a trusted source?”

Villanustre said that verification process applies to hiring practices as well. Interviews used to take place solely in person, but the shift to remote work and video interviews opens up the potential for scams. Cybersecurity systems are fairly good at vetting returning users, but there are limited tools to screen someone new, which is why initial meetings are so important, he said.

When using AI for important work tasks, Villanustre said it’s important to remember how this technology works. Generative AI is known to “hallucinate” responses to prompts if the answer isn’t within its training materials, which can lead to mistakes that open up security risks.

“AI is a little bit of a pleaser,” he said. “If you ask an AI to give information about something that the AI has absolutely no clue about, the AI will come up with something. It’s trying to fulfill your prompt no matter what.”

Flavio Villanustre is senior vice president and chief information security officer for Alpharetta-based LexisNexis Risk Solutions. (Courtesy of LexisNexis Risk Solutions)

Constantly reevaluating AI policies, compliance and cybersecurity measures is a worthwhile habit for any organization. Having mitigation plans that limit the harm of an attack, along with detection systems to flag a compromised system, is also vital in a digital world.

An effective lock built yesterday might be vulnerable today. And yesterday’s preventive measures may not apply to the tactics bad actors use today.

“If companies are still sending phishing test messages to their employees with misspellings and grammar flags in it, they’re doing their employees a disservice,” Zadik said, citing a common cybersecurity training exercise. “That’s not how phishing messages work in an age of AI.”

About the Author

Zachary Hansen, a Georgia native, covers economic development and commercial real estate for the AJC. He's been with the newspaper since 2018 and enjoys diving into complex stories that affect people's lives.
