France Calls For Talks Over Banning Cylons, The Terminator, And Other Killer Robots


(Credit: The QinetiQ Group's Flickr account)

Lethal robots — drones that can select targets and fire without direct human say-so, essentially — are officially no longer a matter for science fiction. Yesterday, France gave a statement to the United Nations demanding a formal disarmament conversation about these weapons. Egypt actually went a step further, calling for international regulations on so-called “killer robots” even before any such system has been built.

There are two things you need to know about this. First, this issue isn’t going away anytime soon — there’s a strong and growing movement looking to prevent lethal autonomous weapons from ever being built. Second, the debate over lethal robots is in large part a debate over how we understand the morality of war.

The movement to ban lethal autonomy centers on the Campaign to Stop Killer Robots, an umbrella organization that coordinates anti-robot activism in at least 19 different countries. The Campaign, housed in Human Rights Watch’s DC office, is led by Mary Wareham, HRW’s arms control advocacy director.

HRW wrote a report, “Losing Humanity,” that serves as a blueprint of sorts for the Campaign’s goals. Their endgame is an international treaty banning “killer robots,” patterned on the Campaign to Ban Landmines, which ultimately succeeded in getting 161 nations to become party to an international treaty proscribing landmine use.

Of course, there’s a major difference between the two: we already know the horrible things landmines do to people. No one is quite sure how any of the proposed mechanisms for letting robots make kill decisions would work out in practice, and likely won’t know for at least a decade. While some weapons have been banned before use, such as blinding lasers, there’s never been a preemptive international treaty as comprehensive as the proposed ban on lethal autonomy. That has some critics — like American University’s Kenneth Anderson, one of the architects of the landmines campaign — up in arms, as they believe (to simplify greatly) that a blanket ban wouldn’t work and might proscribe life-saving technology.

Regardless of the Campaign’s merits, they’ve certainly made headway. Legislators in Germany and Britain are openly debating the merits of a preemptive ban on lethal autonomous machines. And France appears to be committed to pushing the envelope.

“France seems to have been doing a lot of work to see if there is support for the Convention on Conventional Weapons (CCW) to take up ‘lethal autonomous robotics’ in 2014,” Wareham told me. “The decision-point on that will be at the CCW’s annual meeting in Geneva on November 14-15.” If France succeeds in getting a CCW “mandate” to consider lethal autonomous weapons, then international deliberations will begin on amending the Convention to include killer autonomous robots in some fashion. It’s not clear whether France has the votes to do that — it needs consensus for a mandate, which means any state (like, say, South Korea or India) could veto it.

At its core, this coming debate about lethal autonomy is a debate about ethics. The French statement’s central qualm about killer robots was moral: “this is a key debate as it raises the fundamental question of the place of Man in the decision to use lethal force,” it reads. There are two interlocking ethical concerns here: whether robots can ever make appropriately discerning decisions about who to target and, more fundamentally, whether it’s ever acceptable for a robot to make decisions about who should live and who should die, regardless of whether it comes to the correct ones.

It’s these concerns that really fuel the growing public alarm about these weapons. Charli Carpenter, a professor of international relations at UMass-Amherst whose work focuses on global human rights movements, found wide opposition to the deployment of killer robots among Americans. The reason, she wrote in a follow-up piece, was “a visceral fear among the public of outsourcing killing decisions to machines.”

“If a convention to ban the use of killer robots becomes the next big multilateral arms control treaty,” Carpenter writes, “it will be because global civil society tapped into a serious and pervasive moral conviction: that just because we can do something does not mean that we should.” France and Egypt just took big steps toward proving her prediction right.


An earlier version of this article incorrectly identified a translation of France’s statement to the U.N. as a “letter.”
