Should we be concerned about Google's new human-like AI assistant?

This week Google unveiled Duplex, a new AI technology for conducting natural conversations to carry out specific tasks over the phone. It even throws in the occasional “um…” to give the impression of natural human conversation. At its core, Duplex is a recurrent neural network trained on a collection of anonymized phone conversation data; it uses the output of Google's automatic speech recognition technology along with other features of the conversation.
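
Google has not released Duplex's code or the exact details of its model, so the snippet below is only a rough, hypothetical sketch of what "a recurrent neural network running over speech-recognition output" looks like in practice. The toy vocabulary, the layer sizes and every name in it are assumptions made purely for illustration, not anything Google has described.

# Hypothetical sketch only: a tiny recurrent network, written with NumPy,
# that folds a transcript (a stand-in for speech-recognition output) into a
# hidden state and scores a possible next word. Vocabulary and sizes are invented.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<s>", "book", "a", "table", "for", "two", "please", "</s>"]
V, H = len(vocab), 16          # vocabulary size, hidden size (arbitrary)

E  = rng.normal(0, 0.1, (V, H))    # word embeddings
Wh = rng.normal(0, 0.1, (H, H))    # recurrent weights
Wo = rng.normal(0, 0.1, (H, V))    # output projection

def step(h, word_id):
    # One recurrent step: combine the next transcribed word with the running state.
    return np.tanh(E[word_id] + h @ Wh)

def next_word_scores(transcript):
    # Run the network over the whole transcript, then score every word in the vocabulary.
    h = np.zeros(H)
    for word in transcript:
        h = step(h, vocab.index(word))
    return dict(zip(vocab, np.round(h @ Wo, 3)))

print(next_word_scores(["<s>", "book", "a", "table", "for"]))

Because the weights above are random and untrained, the printed scores mean nothing; the point is only the shape of the computation that the description implies.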

Google CEO Sundar Pichai revealed his company's new and improved Assistant feature – Duplex – at the annual Google I/O event earlier this week, showcasing how far artificial intelligence technology has come.

During the demonstration, Duplex calls a salon to schedule a hair appointment, sounding remarkably human. While the audience in the video laughs at the AI's uncannily human intonation and use of filler words, the demonstration has also raised concerns.

In a blog post accompanying the demonstration, Google described Duplex as "a new technology for conducting natural conversations to carry out 'real world' tasks over the phone."

"The Google Duplex technology is built to sound natural, to make the conversation experience comfortable," the tech company wrote.

We reached out to experts at the Georgia Institute of Technology for perspective on just how human-like the new technology is and whether there is any cause for concern.

"From what I can tell, this is an impressive research and engineering demonstration. However, as the Google blog reports, it's important to keep in mind that these agents can handle narrow-domain conversations, not open-domain – chit-chat – conversations," Dhruv Batra, an assistant professor at the School of Interactive Computing at Georgia Tech, told the AJC.

Batra said that "open-domain conversations" – similar to those you'd have with friends or colleagues – are much more difficult to program an AI to carry out, as they rely heavily on human common sense. Google's technology, while impressive, is capable only of performing narrow, well-defined tasks.

Mark Riedl, an associate professor in the School of Interactive Computing, said he sees the demonstration as "undoubtedly a technically impressive engineering feat."

But he believes there are reasons to be "concerned about technologies that pass themselves off as human."

"Failing to reveal that a speaker is an algorithm can create unrealistic expectations or even resentment. Google has stated that they are looking into whether the system should declare itself to be an AI upfront," he said.

Batra echoed his sentiments, saying there are certainly some "legitimate ethical questions" to be raised about such technologies.

But "I don't think there's any basis for AI fear mongering," he said.

As Riedl pointed out, people are often too quick to assume that a machine doing human-like tasks has more capabilities than it actually does.

"There is an inclination to see an AI system do something that reminds them of humans and to infer greater capability than exists or to anthropomorphize the system," he said.

But problems may arise if people choose to abuse the technology.

"Technology is neither good nor bad, but can always be put to bad purposes," Riedl said. "For example, could this technology be used for phone scams or political influence campaigns?"

In the future, AI will likely become a bigger part of daily life. Already, people are using AI systems in their homes, in less human-sounding phone answering services and in many other applications.

"I expect personal assistants such as Google Assistant, Alexa, Siri, and Cortana to become better at providing services. Instead of a singular AI system that takes on more and more services, it will be more and more specialized," Riedl said.

Batra posited that society is currently seeing the "simplification of the human-machine interface," pointing out that computers have progressed from punch cards a few decades ago to language interfaces today.

Moving forward, Batra believes "we will want to talk to our computers and simply point at the things around us."

"We'll need natural language understanding and generation, computer vision to see, and machine learning for our agents to be able to learn from data and take actions in the world for us," he said.

While neither Batra nor Riedl believes the Duplex technology is anything to be afraid of, other leading scientists and tech experts have raised questions about the rapid rise of AI.

Back in 2014, renowned British theoretical physicist and cosmologist Stephen Hawking warned against the careless development of AI.

"The development of full artificial intelligence could spell the end of the human race," he said, according to the BBC.

However, Hawking also said that he believed the primitive forms of AI already developed have proven very useful. His concern centered on creating something that would equal or surpass human capabilities. Duplex, for its part, doesn't yet come close to matching a human's conversational abilities.

Elon Musk, the prominent investor and engineer behind SpaceX and Tesla Inc., has also issued dire warnings about AI, suggesting its development could lead to World War III. He has also said that "AI is a fundamental risk to the existence of human civilization."

Microsoft co-founder Bill Gates disagreed with Musk's claims.

"The so-called control problem that Elon is worried about isn't something that people should feel is imminent," Gates told The Wall Street Journal in September.

"We shouldn't panic about it."