Ronald Arkin is a bearded, bespectacled Georgia Tech roboticist who says he hates war. He is also a global front man for what others love to call “killer robots.”

Today, at a United Nations session in Geneva, Arkin and a computer scientist from Britain will debate the pros and cons of machines that — after gathering and assessing certain data — could decide, on their own, to use deadly force.

Arkin bristles at any suggestion that exploring the implications of such technology makes him a warmonger.

“I abhor warfare,” he said Monday, responding by email to questions from The Atlanta Journal-Constitution. “My belief is that if these systems are properly designed, they will be able to reduce civilian casualties when compared to human war fighters.”

He said he encourages pacifists to find solutions to the problem of war.

“If we could figure out ways to stop war on Earth, I would encourage you to do so,” he said in a TEDx talk at Georgia Tech in 2011. “Unfortunately, I don’t know how to make that happen. But I do think I know how to assist in making warfare, as oxymoronic as it sounds, more humane.”

Today’s UN debate kicks off a four-day meeting of experts on the ethical and legal concerns surrounding fully autonomous robot weapons — weapons that have the potential to change the future of warfare.

Unlike unmanned drones, which are operated remotely by humans, these robots would make their own decisions. They would do it by assessing their situation against certain criteria — including ethical and legal standards — that are programmed into them.
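In his published research, Arkin has proposed what he calls an “ethical governor”: a software layer that checks any proposed use of force against constraints derived from the laws of war before the weapon is allowed to act. The sketch below is purely illustrative; the class, fields, and rules are invented for this example and are not drawn from Arkin’s system or any fielded weapon.

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool          # classification supplied by the perception system
    near_protected_site: bool   # e.g. hospital, school, cultural site
    expected_collateral: int    # estimated civilian harm from engagement

def permitted_to_engage(target: Target) -> bool:
    """Illustrative constraint check: every rule must pass
    before lethal force is even considered."""
    if not target.is_combatant:
        return False            # discrimination: combatants only
    if target.near_protected_site:
        return False            # protected objects are off-limits
    if target.expected_collateral > 0:
        return False            # proportionality, set conservatively here
    return True

# A non-combatant must never be engaged, regardless of other factors.
print(permitted_to_engage(Target(is_combatant=False,
                                 near_protected_site=False,
                                 expected_collateral=0)))   # -> False
```

Even in this toy form, the sketch makes the crux of the debate visible: the rules are only as good as the perception feeding them, and whether a machine can reliably classify a person as a combatant is precisely what Sharkey disputes.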

The debate will pit Arkin against Professor Noel Sharkey, an expert in artificial intelligence who helped launch the Campaign to Stop Killer Robots. Sharkey, who favors a ban on further development of such weapons, asserts that robots lack the capability to distinguish combatants from noncombatants.

Moreover, Sharkey says, robots do not have the common sense and moral judgment needed to make on-the-spot, life-and-death decisions. And, in the end, they cannot be held accountable for their actions, he said.

“Autonomous weapons systems cannot be guaranteed to predictably comply with international law,” Sharkey told the BBC. “Nations aren’t talking to each other about this, which poses a big risk to humanity.”

Human Rights Watch has said international discussions should lead to a treaty banning lethal autonomous weapons. Days ago, the International Committee of the Red Cross said: “major concerns persist over whether a fully autonomous weapon could make the complex, context-dependent judgements required by international humanitarian law.”

Arkin has suggested in the past that enforcing any ban on autonomous weapons would be challenging.

He told the AJC that although much research remains, he is optimistic that robots will be able to make split-second decisions that comply with international humanitarian law and ultimately save lives. In fact, he believes machines, free of the high emotions that can cloud battlefield judgment, may someday be able to outperform humans with respect to moral behavior.

“I have the utmost respect for our young men and women in the battlefield, but human beings are being put into conditions in modern warfare that no one was ever designed to function in,” he said. “The tempo of the battle space outpaces human reasoning and reaction in many cases, compromising human decision-making.”

And the battlefield isn’t the only place where he foresees robots outperforming their biological counterparts. Years ago, his Georgia Tech group worked on a robot dog that could walk on all fours, wag its tail, sit, beg and even dance.

“The hope is that in the future, every home could have a robot pet,” he told a reporter for a Connecticut newspaper. “You want a robot dog that can provide the kind of support the biological model can, without the downsides. No kennels. Doesn’t mess up the carpet.”

Arkin, who is director of Georgia Tech’s Mobile Robot Laboratory, has written several books on robotics. He has done work for the Pentagon exploring the application of international law to robotic systems.

He said he hopes current research won’t be stifled by the fears created by movies and stories in which intelligent war machines inevitably turn on the human race. And he’s glad the debate has reached the level of the UN, hoping it will enhance discussions of how technology can reduce the number of civilians killed in wars.

“This is a humanitarian endeavour,” he said. “It is not clear if this goal can be achieved, but I believe it is the responsibility of a scientist to find ways to better protect noncombatants and reduce atrocities. This may be one such way.”