Human Rights Watch: ban killer robots

Human Rights Watch (HRW) released a report (PDF) on May 12 finding that the use of fully autonomous weapons by militaries or law enforcement would be an affront to basic human rights, and calling for such weapons to be preemptively banned by international convention. The report, entitled "Shaking the Foundations: The Human Rights Implications of Killer Robots," was jointly authored by HRW and Harvard Law School's International Human Rights Clinic. It questions the ability of autonomous weapons to comply with international humanitarian law. According to the report, robots could not be pre-programmed to handle every circumstance, so fully autonomous weapons would be prone to carrying out arbitrary killings when confronted with unforeseen situations. Because it is highly unlikely that robots could be developed in the foreseeable future to possess human qualities such as judgment and empathy, fully autonomous weapons would be unable to comply effectively with human rights law.

Last year HRW urged UN member states at the 2013 meeting of the Convention on Certain Conventional Weapons (CCW) to support France's initiative to add fully autonomous weapons to the CCW's work program for 2014. In 2012, the US became the first country to issue a governmental policy statement (PDF) on the use of partially and fully autonomous weapons.

However, the US has also drawn negative international attention for its use of unmanned military weapons, or drones. Two UN rights experts have urged greater accountability and transparency in the use of drone strikes, and a subsequent UN report claimed that the US military had killed more individuals in a series of drone attacks than it publicly stated.

From Jurist, May 13. Used with permission.

See our last post on the coming rule of the robots.

  1. Robots advance, humanity retreats

    Scientists at Harvard University's Self-Organizing Systems Research Group have created a "swarm" of more than a thousand coin-sized robots, or "Kilobots," that can assemble themselves into two-dimensional shapes by communicating with their neighbors. The self-organization techniques used by the tiny machines could aid the development of "transformer" robots that reconfigure themselves. (Scientific American, IEEE Spectrum, Aug. 14)

  2. Robocop arrives in Silicon Valley

    Crime-fighting robots, equipped with microphones, speakers, cameras, laser scanners and sensors, are already on patrol in California's Silicon Valley. The security robots, called Knightscope K5 Autonomous Data Machines, were designed by Knightscope, a robotics company based in Mountain View. They are programmed to notice unusual behavior and alert controllers. Reports are unclear as to whether they are being deployed by police or the private sector; the first place they are patrolling is Knightscope's own headquarters. (RT, Nov. 20; KPIX, Nov. 18)

  3. Bigshot science dudes: ban ‘autonomous weapons’

    From The Guardian, July 27:

    Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a "military artificial intelligence arms race" and calling for a ban on "offensive autonomous weapons".

    The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla's Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.

    The letter states: "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."

    The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

    Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.

    "The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," said the authors.

    A good start, but the line about AI being used to "make the battlefield a safer place" just lends legitimacy to exactly what they seek to oppose. And the phrase "practically if not legally" indicates that the problem is far deeper than one that can be effectively addressed through a new international convention…

  4. Robots to become ‘electronic persons’

    And so it begins. From Reuters, June 21:

    MUNICH — Europe's growing army of robot workers could be classed as "electronic persons" and their owners liable to paying social security for them if the European Union adopts a draft plan to address the realities of a new industrial revolution.

    Robots are being deployed in ever-greater numbers in factories and also taking on tasks such as personal care or surgery, raising fears over unemployment, wealth inequality and alienation.

    Their growing intelligence, pervasiveness and autonomy require rethinking everything from taxation to legal liability, a draft European Parliament motion, dated May 31, suggests.