Americans oppose the use of autonomous military robots that can kill absent a direct command from a human operator, according to a new University of Massachusetts-Amherst poll.
Autonomous weapons or, as their critics call them, “killer robots,” are becoming less science fiction and more military reality. Though the United States military, by far the world’s most robotically advanced, has placed a moratorium on the development of autonomous robots, the technology is easily within reach. Indeed, in early June, the Indian government launched a program to develop and deploy fully autonomous weapons.
Concern about these weapons has grown as the technology has become more feasible, most famously producing both an organized campaign to ban so-called killer robots and a UN report echoing the campaign’s concerns.
The UMass poll asked a random sample of Americans to weigh in on the debate. By a wide 55-26 margin, Americans opposed “the trend towards using completely autonomous” weapons. A similarly large majority, 53-19, supported the campaign to preemptively ban the weapons. These results generally held across demographic groups, the principal exception being a disparity between active-duty military, who oppose autonomous weapons by enormous margins, and their families, who are somewhat more supportive than other groups.
The poll’s results also held steady whether the question was phrased in more or less loaded terms. To gauge how serious Americans’ opposition to these weapons is, the questions alternated between describing the technology as “robotic weapons” and as “lethal robots.” That the framing made no difference to the results suggests Americans object to the technology itself rather than reacting to hyperbolic media coverage; as Charli Carpenter, the poll’s author, puts it, “people are afraid of ‘killer robots’ because of [the fact that] ‘killer robots’ are scary, not because of the ‘killer robot’ label.”
This poll should be taken with a grain of salt. Since autonomous robots have yet to be deployed, both critics and defenders of the technology are operating on hypotheticals; we can’t yet be sure how good autonomous weapons’ targeting software will be, making it hard to assess whether they’d be more or less likely to kill civilians than manned weapons. We also don’t yet know whether international law can develop a mechanism for holding someone accountable if a robot commits a war crime, one of the thorniest issues in the autonomous weaponry debate.