From New Scientist, Feb. 27:
Governments around the world are rushing to develop military robots capable of killing autonomously without considering the legal and moral implications, warns a leading roboticist. But another robotics expert argues that robotic soldiers could perhaps be made more ethical than human ones.
Noel Sharkey of Sheffield University, UK, says he became “really scared” after researching plans outlined by the US and other nations to roboticise their military forces. He will outline his concerns at a one-day conference in London, UK, on Wednesday.
Over 4000 semi-autonomous robots are already deployed by the US in Iraq, says Sharkey, and other countries (including several European nations, Canada, South Korea, South Africa, Singapore and Israel) are developing similar technologies.
Crucial decisions
In December 2007, the US Department of Defense (DoD) published an “Unmanned systems roadmap” proposing to spend about $4 billion by 2010 on robotic weapons, a figure expected to rise later to about $24 billion.

Sharkey is most concerned about the prospect of having robots decide for themselves when to “pull the trigger”. Currently, a human is always involved in decisions of this nature. But the Pentagon is nearly two years into a research programme aimed at having robots identify potential threats without human help.
“The main problem is that these systems do not have the discriminative power to do that,” he says, “and I don’t know if they ever will.”

The US and other governments have also set a very short timeframe to achieve such sophistication, says Sharkey. “It is based, I think, on a mythical view of AI.”
Temporary ban
Governments and robotics engineers should re-examine current plans, and perhaps consider an international ban on autonomous weapons for the time being, he suggests. “We have to say where we want to draw the line and what we want to do, and then get an international agreement.”

After writing publicly of his concerns, he says engineers working for the US military have contacted him with similar worries. “Some wrote to thank me for speaking out,” he says.
Ronald Arkin, a robotics researcher at the Georgia Institute of Technology in the US, says that Sharkey is right to be concerned. “We definitely need to be discussing this more,” he says. However, he believes that robots could ultimately become a more ethical fighting force.
‘Moral responsibility’
As governments seem determined to invest in robotic weapons, Arkin suggests trying to design ethical control systems that make military robots respect the Geneva Convention and other rules of engagement on the battlefield.

“I have a moral responsibility to make sure that these weapons are introduced responsibly and ethically, and reduce risks to non-combatants,” he says.
Arkin also notes that human combatants are far from perfect on the battlefield. “With a robot I can be sure that it will never harbour the intention to hurt a non-combatant,” he says. “Ultimately they will be able to perform better than humans.”
Arkin is using computer simulations to test whether ethical control systems can be used in battlefield scenarios, some of which are modelled on real-life events.
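The article doesn’t say how such an ethical control system works, but the core idea, a constraint layer that checks each proposed lethal action against codified rules before it is carried out, can be sketched in a few lines. The Python below is purely illustrative: the Target and Order types and the two rules (the principle of distinction between combatants and non-combatants, and the protection of incapacitated fighters) are assumptions for the sake of the example, not Arkin’s actual architecture.

```python
from dataclasses import dataclass

# Illustrative sketch only: these types and rules are assumptions,
# not Arkin's actual ethical-governor design.

@dataclass
class Target:
    is_combatant: bool       # currently taking part in hostilities
    is_incapacitated: bool   # wounded and no longer able to fight

@dataclass
class Order:
    action: str              # e.g. "engage" or "observe"
    target: Target

def evaluate(order: Order) -> tuple[bool, str]:
    """Return (permitted, explanation) for a proposed action."""
    if order.action != "engage":
        return True, "permitted: no lethal force involved"
    if not order.target.is_combatant:
        return False, ("refused: target is a non-combatant "
                       "(principle of distinction)")
    if order.target.is_incapacitated:
        return False, ("refused: target is hors de combat and "
                       "may not be attacked")
    return True, "permitted: target is an active combatant"
```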
Refusing orders
One involved an Apache helicopter attacking three men laying roadside bombs in Iraq. Two were killed, and the third was clearly wounded. The pilot is ordered by a superior to kill the incapacitated man, and reluctantly does so.

“I still find the video of that event disturbing,” says Arkin. “I hope an autonomous system could realise that the man was clearly incapacitated, effectively a prisoner of war, and should not have been killed.”
“One of the fundamental abilities I want to give [these systems] is to refuse an order and explain why.”
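Run against the Apache scenario above, the hypothetical governor sketched earlier would refuse the order and say why, which is exactly the ability Arkin describes:

```python
# The wounded bomber: a combatant, but clearly incapacitated.
order = Order(action="engage",
              target=Target(is_combatant=True, is_incapacitated=True))

permitted, explanation = evaluate(order)
print(permitted)    # False
print(explanation)  # refused: target is hors de combat and may not be attacked
```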
Yet Arkin does not think battlefield robots can be made as smart as human soldiers. “We cannot make them that generally intelligent; they will be more like dogs, used for specialised situations,” he says.
But so far he is concentrating his research on scenarios involving armies. “For those situations we have very clear-cut guidance from the Geneva Convention, the Hague and elsewhere about what is ethical,” he explains.
Ethical robots? Sounds like the sugar coating on the robotification of reality…