
Campaign Launched To Ban Autonomous ‘Killer Robots’

Tuesday morning, a consortium of human rights organizations launched the Campaign to Stop Killer Robots, a joint project to enact an international treaty banning the use of “fully autonomous robots” (machines that kill without direct human oversight) in combat. The launch highlights a web of thorny ethical and legal issues surrounding the use of these weapons, issues that have yet to be fully resolved by the U.S. government or the international community.

The campaign to ban robot soldiers began in response to rapid advances in military robotics over roughly the past decade and a half, developments that most famously produced the armed Predator and Reaper drones in common use by U.S. forces today. In 2009, several concerned researchers founded the International Committee for Robot Arms Control (ICRAC), the first NGO dedicated to pushing for an international treaty that (among other things) would ban autonomous weapons. Debate over the topic heated up in late 2012, when Human Rights Watch released a much-debated report arguing that autonomous weapons were inconsistent in principle with international humanitarian law.

The Campaign to Stop Killer Robots joins ICRAC and HRW with 20 other like-minded organizations, including Amnesty International and Code Pink, in a renewed effort to codify a ban on autonomous weapons. No fully autonomous weapons platform currently exists, and the U.S. Department of Defense, which supervises what is by far the most robotically advanced military in the world, has a self-imposed moratorium on deploying weapons capable of autonomously using lethal force. However, the Campaign’s member groups worry that, absent a treaty ban, technological advancement will make the deployment of such weapons inevitable:

Over the past decade, the expanded use of unmanned armed vehicles or drones has dramatically changed warfare, bringing new humanitarian and legal challenges. Now rapid advances in technology are permitting the United States and other nations with high-tech militaries, including China, Israel, Russia, and the United Kingdom, to move toward systems that would give full combat autonomy to machines…

“We cannot afford to sleepwalk into an acceptance of these weapons. New military technologies tend to be put in action before the wider society can assess the implications, but public debate on such a change to warfare is crucial,” said Thomas Nash, Director of Article 36. “A pre-emptive ban on lethal autonomous robots is both necessary and achievable, but only if action is taken now.”

These efforts are still in their infancy, and militaries have some pretty obvious reasons to want autonomous weapons, so it’s unlikely that we’ll see a treaty banning robot soldiers anytime soon. Moreover, it’s not even clear that such a ban would be a good thing: lawyers and ethicists are sharply divided as to whether autonomous weapons would be illegal, unimportant, or potentially even an improvement over human soldiers.


Critics of autonomous weapons argue that they’re incapable of complying with critical provisions of international humanitarian law aimed at protecting civilians. That Human Rights Watch report concludes that no algorithm or artificial intelligence could reliably distinguish, for example, between civilians and insurgents in an Afghanistan-style counterinsurgency, meaning that no army employing autonomous weapons could satisfy the legal principle of “distinction” (the requirement that all armies, over the course of fighting, identify civilian populations and military targets and treat the two differently). Critics also believe that autonomous weapons would have difficulty making the kinds of contextual moral judgments necessary to comply with the principle of proportionality (the idea that any unintentional cost to civilian life must be proportionate to the military benefit) or with “military necessity” (the legal principle requiring armies to hold off on attacks, even on military targets, that aren’t necessary for winning).

Opponents of a treaty ban, by contrast, argue that such criticisms miss the point. All weapons, they hold, can be used illegally; obviously, armies wielding nothing but machetes can violate the principles of distinction, proportionality, or military necessity. Indeed, human soldiers might be more likely to do so: while humans are driven to massacre by anger or sadism, a properly programmed robot will never lash out (Ronald Arkin at the Georgia Institute of Technology is working on just this sort of programming). It’s much smarter, they argue, to identify the specific circumstances under which autonomous weapons could be used lawfully or unlawfully than to tilt at the windmill of an international treaty.

While some of these quandaries depend on technical assessments (How well can we program robots? How good are their sensors?), others are more conceptual. One such challenge comes from Monash University Professor Robert Sparrow, who argues that robots create problems of moral responsibility for atrocities that are in principle impossible to resolve. On his view, the more autonomous a combat machine is, the less predictable its behavior in combat zones becomes, and hence the less fair it is to hold either its programmer or its commanding officer responsible for any atrocities it commits. But if it’s wrong to hold anyone responsible for atrocities, then the entire system of international law and the morality of war, which depends on being able to hold particular individuals responsible for war crimes, falls apart.

Others, like Jeffrey S. Thurnher and Michael Schmitt, counter that it’s hard to imagine any scenario in which a robot could commit a war crime without the person who ordered it into combat knowing that atrocity was a possible outcome of the order, suggesting that a person could always be held responsible for the machine’s actions.

These issues have yet to be resolved as a matter of law, philosophy, or even robotics. Yet one thing is clear: the launch of the Campaign to Stop Killer Robots won’t end the debate over the use of robots in war. If anything, it’ll escalate it.