We reached out to some experts at the Georgia Institute of Technology to get some perspective on just how human-like the new technology is and whether there’s any cause for concern.
"From what I can tell, this is an impressive research and engineering demonstration. However, as the Google blog reports, it's important to keep in mind that these agents can handle narrow-domain conversations, not open-domain – chit-chat – conversations," Dhruv Batra, an assistant professor at the School of Interactive Computing at Georgia Tech, told the AJC.
Batra said that "open-domain conversations" – similar to those you'd have with your friends or colleagues – are much more difficult to program an AI to carry out, as they rely a lot on a human's common sense. While impressive, Google's technology is only capable of performing well-defined tasks.
Mark Riedl, an associate professor in the School of Interactive Computing, said he sees the demonstration as "undoubtedly a technically impressive engineering feat."
But he believes there are reasons to be "concerned about technologies that pass themselves off as human."
"Failing to reveal that a speaker is an algorithm can create unrealistic expectations or even resentment. Google has stated that they are looking into whether the system should declare itself to be an AI upfront," he said.
Batra echoed his sentiments, saying there are certainly some "legitimate ethical questions" to be raised about such technologies.
But "I don't think there's any basis for AI fear mongering," he said.
As Riedl pointed out, people are often too quick to assume that a machine performing human-like tasks has more capabilities than it actually does.
"There is an inclination to see an AI system do something that reminds them of humans and to infer greater capability than exists or to anthropomorphize the system," he said.
But problems may arise if people choose to abuse the technology.
"Technology is neither good nor bad, but can always be put to bad purposes," Riedl said. "For example, could this technology be used for phone scams or political influence campaigns?"
In the future, AI will likely become a bigger part of daily life. Already, people are using AI systems in their homes, in automated phone answering services, and in many other applications.
"I expect personal assistants such as Google Assistant, Alexa, Siri, and Cortana to become better at providing services. Instead of a singular AI system that takes on more and more services, it will be more and more specialized," Riedl said.
Batra posited that society is currently seeing the "simplification of the human-machine interface," pointing out that computers have progressed from using punch cards a few decades ago, to language interfaces today.
Moving forward, Batra believes "we will want to talk to our computers and simply point at the things around us."
"We'll need natural language understanding and generation, computer vision to see, and machine learning for our agents to be able to learn from data and take actions in the world for us," he said.
While neither Batra nor Riedl believes Duplex technology is anything to be afraid of, other leading scientists and tech experts have raised questions about the rapid rise of AI.
Back in 2014, renowned British theoretical physicist and cosmologist Stephen Hawking warned against the careless development of AI.
"The development of full artificial intelligence could spell the end of the human race," he said, according to the BBC.
However, Hawking also said that he believed the primitive forms of AI already developed have proven very useful. His concern centered on creating something that would equal or surpass human capabilities. Duplex does not yet come close to matching a human's conversational abilities.
Prominent investor and engineer Elon Musk, of SpaceX and Tesla Inc., has also issued dire warnings about AI, suggesting its development could lead to World War III. He has also said that "AI is a fundamental risk to the existence of human civilization."
Microsoft co-founder Bill Gates disagreed with Musk's claims.
"The so-called control problem that Elon is worried about isn't something that people should feel is imminent," Gates told The Wall Street Journal in September.
"We shouldn't panic about it."