
How scared are people of “killer robots” and why does it matter?

There is one question in this controversy that social scientists will agree is amenable to empirical inquiry: how do people feel about the idea of outsourcing targeting decisions to machines?

Warfare is increasingly carried out by machines. Automated systems already routinely defuse explosives, conduct reconnaissance, serve as mechanical beasts of burden over inhospitable terrain, and assist medics, but militaries worldwide are taking steps toward developing armed robots with the capacity to use lethal force as well.

Weapon systems with various degrees of autonomy are already in use by the United States, Israel, South Korea and the United Kingdom; India is the latest nation to establish a program to develop fully autonomous weapons.

While governments have so far kept humans “in the loop” or “on the loop,” as is the case with tele-operated drones, military doctrine leaves open the possibility of full autonomy down the line in specific scenarios.

This slippery slope toward increasingly outsourcing kill decisions to machines has made a growing number of roboticists, ethicists and now a transnational network of human security campaigners uneasy. Concerned scientists formed the International Committee for Robot Arms Control in 2009; in 2012, Human Rights Watch released a report on the perils of fully autonomous weapons, and in April of this year an NGO campaign kicked off on the steps of Parliament in London. The campaign is endorsed by Nobel Laureate and renowned anti-landmine campaigner Jody Williams and now includes a coalition of over 30 NGOs.  

Are you shocked?

Campaigners make a number of arguments against fully autonomous weapons: that they would lack the situational awareness to comply with humanitarian law, that their ease of use could increase levels of armed conflict, that their use would undermine existing war law by limiting accountability for mistakes, and that there are ethical problems with allowing machines to make life or death decisions, among others.

Skeptics counter that most of these concerns are hypothetical at this point: until the weapons are deployed, no one can know for certain whether they would be harmful, and they might even do some good; proponents claim they would protect troops and commit fewer war crimes than human soldiers.

In a campaign characterized by hypotheticals and philosophical questions, how might social scientists weigh in? There is at least one point in this discussion that is amenable to empirical inquiry: how people feel about the idea of outsourcing targeting decisions to machines. This matters because international humanitarian law assumes that the “dictates of the public conscience” constitute a barometer of appropriate conduct in military affairs, especially where existing rule-sets provide inadequate guidance and the concerns being raised have not yet come to pass.

The Martens Clause, inserted into the Hague Conventions as a sort of back-up plan for situations not foreseen by the drafters, reads:

“Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.”

Human Rights Watch invoked this argument explicitly last November in its report Losing Humanity (p. 34), pointing out that regardless of utilitarian arguments for autonomous weapons or uncertainty about their risks, the weapons should be banned because the entire idea of outsourcing killing decisions to machines is morally offensive, frightening, even repulsive, to many people:

Both experts and laypeople have expressed a range of strong opinions about whether or not fully autonomous machines should be given the power to deliver lethal force without human supervision. While there is no consensus, there is certainly a large number for whom the idea is shocking and unacceptable. States should take their perspective into account when determining the dictates of public conscience.

Human Rights Watch cites anecdotal evidence in its report, but just how widespread is this sense of “shock” and “unacceptability”?

Force protection

A new survey I’ve just completed suggests there is indeed a general level of concern over “death by algorithm” among the US public. As part of my ongoing research into human security norms, I embedded questions on YouGov’s Omnibus survey asking how people feel about the potential for outsourcing lethal targeting decisions to machines.

One thousand Americans were surveyed, matched on gender, age, race, income, region, education, party identification, voter registration, ideology and political interest, and asked about their military status. Across the board, 55% of Americans opposed autonomous weapons (nearly 40% were “strongly opposed”), and in a second question a majority (53%) expressed support for the new ban campaign.
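Proportions like these come with sampling uncertainty. As a rough illustration (assuming simple random sampling, which YouGov’s matched design only approximates), the 95% margin of error for a 55% proportion in a sample of 1,000 works out to about ±3 percentage points:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion,
    assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# 55% opposition among 1,000 respondents
moe = margin_of_error(0.55, 1000)
print(f"55% +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```

So the headline figures (55% opposed, 53% supporting the ban) comfortably clear the 50% mark even after accounting for sampling error.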

As these break-out charts detail, this finding is consistent across the political spectrum, with the strongest opposition coming from both the far right and the far left. Men are slightly more likely than women to oppose the weapons; women are likelier to acknowledge they don’t have sufficient information to hold a strong opinion.

Opposition is most highly concentrated among the highly-educated, the well-informed, and the military. Many people are unsure; most who are unsure would prefer caution. Very few openly support the idea of machines autonomously deciding to kill.

Do these findings convey a sense of “shock” or concern over matters of “conscience”? An initial qualitative breakdown of 500 respondents’ open-ended comments explaining their feelings suggests the answer may be yes. The visualization below is a frequency distribution of the codes used to sort open-ended responses from the 55% of respondents who “somewhat” or “strongly” opposed autonomous weapons, along with some representative quotations that illustrate how the codes were applied.


As the tag cloud shows, the most frequent response was a sense that machines cannot equal human beings in situational judgment. Many stressed the importance of accountability, of human empathy, and the possibility of machine error. A significant number of responses referred to the idea as “terrifying,” “frightening,” or repulsive. One respondent said:

“This is a too nightmarish… divorcing human intervention from the actions of these machines means surrendering all consideration of right and wrong from the decision making process.”

Open-ended responses showed that even many people who “somewhat support” the idea often do so only with careful qualifications, such as the importance of “picking targets without killing innocent women and children.”

Interestingly, among the 26% of Americans who at least “somewhat supported” the idea of man-out-of-the-loop weapons, the most common reason given in the open-ended answers coded so far (about half the dataset) is “force protection” – the idea that these weapons will be beneficial for US troops.

Many of those who opposed an NGO-run ban campaign did so not because they disagreed with a ban but because they thought only the military should be making these kinds of decisions. For example, one respondent said of campaigners: “If they had to put their lives on the line they would think differently,” and another said, “Let those who want the ban be the ones who fight in person.”

What do the military think?

Given supporters’ concerns over protecting troops and their deference to the concerns of the military, it is interesting to note that military personnel, veterans and those with family in the military are more strongly opposed to autonomous weapons than the general public, with the highest opposition among active duty troops.


For example, one active duty respondent said, “Humans are the moral check on military actions.” Another said, “I do not believe in removing empathy or moral action in conflicts. A person knows they are hurting others.”

One respondent who “served previously” and whose family had served stated:

“Why would we, as a race, allow machines to kill others of our race? A machine has no remorse, no compromise, nothing influences it's decisions other than what it was programed with. A human could see that what was thought to be enemy fighters it actually a group of children. A machine won't.”

In short, there is significant evidence in these data that the majority of the American public opposes autonomous weapons, that those who support them would defer to the feelings of those most affected by them (the military), and that individuals closer to the military stand in even greater opposition to the weapons themselves, often using the very language of “conscience” and “principles of humanity” that campaigners invoke.

In addition to supporting the Martens Clause argument, these data also provide support for a “precautionary principle” regarding these weapons systems. Among those who did not outright oppose fully autonomous weapons on the basis of moral revulsion, only 10% of respondents “strongly favored” them; 16% “somewhat favored” them, and 18% were “not sure.” By examining open-ended answers, however, I found that even those who clicked “not sure,” as well as a good many “somewhat favor” respondents, actually have grave reservations in the absence of information and lean toward a precautionary principle given the uncertainty about the impacts of these systems.

This suggests both that the descriptive statistics may underestimate public opposition and that there is a general sense that great caution is needed in this area. For example, open-ended answers among the “not sure” respondents who claimed to “need more information” included the following:

“Generally, without additional info, I’m opposed. Sounds like too much room for error. Or the terminator in real life.”

“It seems like ideas with good potential eventually end up in the hands corrupt or dangerous people with bad intentions.”

“I have no concrete opinion, but I believe that it is dangerous enough with a human controller, let alone a robot.”

“Not sure, don’t know, really don’t think it’s a good idea…”

“I don’t know enough about this subject to have a fully formed opinion. It sounds like something I would oppose.”

What’s in a name?

Of course, some have argued that the campaign itself has been contributing to this climate of fear and uncertainty through its use of the term “killer robots” and roboapocalyptic imagery. This survey also goes some way toward disproving that claim: I included a priming element in the survey to see whether there was any such effect.

Five hundred respondents were asked how they felt about the weapons themselves using dry military jargon stressing the autonomy of “weapons,” while the other 500 were asked about “lethal robots,” invoking the idea of robots as agents – as soldiers rather than weapons. In the second question, on whether they supported the ban campaign, 500 were asked about “autonomous weapons” and the other 500 were asked about “killer robots.”

I found almost no effect on citizens’ sentiment based on this varied wording, and what effect was observed fell within the survey’s margin of error. The majority of Americans across all ideologies and demographic groups oppose the development and use of autonomous weapons; nearly 40% are “strongly opposed.” This barely changes if you ask them about “lethal robots” versus “robotic weapons.” Moreover, a majority of Americans surveyed support the idea of a ban campaign, and this too doesn’t change if you refer to the campaign as the “Campaign to Stop Killer Robots” versus “a campaign to ban the use of fully autonomous weapons” (question 2).
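The claim that a framing difference falls within sampling error can be checked with a standard two-proportion z-test across the two 500-person split samples. The counts below are illustrative placeholders, not the survey’s actual cell counts (the post does not report them): with, say, 56% opposition under one framing and 54% under the other, the z statistic stays well below the 1.96 threshold for significance at the 95% level:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z statistic for H0: p1 == p2,
    using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 280/500 (56%) opposed under one framing,
# 270/500 (54%) under the other.
z = two_proportion_z(280, 500, 270, 500)
print(f"z = {z:.2f}")  # |z| < 1.96: the difference is within sampling error
```

With samples of 500 per condition, differences of a few percentage points are simply too small to distinguish from noise, which is what “no framing effect” means in practice here.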


In short, people are afraid of “killer robots” because “killer robots” are scary, not because of the “killer robot” label. NGOs are channeling popular concern about autonomous systems, not manufacturing it.

These findings come with two caveats.

First, this survey captures US public opinion only. It is likely that by “dictates of the public conscience” diplomats were referring to global public opinion, so this study would be most useful if replicated in other country settings.

Still, to the extent that US policymakers are making decisions about development of autonomous weapons or their position with respect to an international ban movement, they are required under international law to take US public opinion into account.

Second, this is a preliminary cut at the coding, and the results may change as the rest of the dataset is annotated more rigorously. But even this initial cut at the raw data illustrates the ways some Americans describe their intuitive concern over autonomous weapons. There is certainly an “ugh” factor among many respondents, a concern about machine “morality,” and reference to a putative warrior ethic that requires lethal decision-making power to be constrained by human judgment.

But primarily, there is a sense of “human nationalism,” whether rational or not: the notion that at a moral level certain types of acts simply belong in the hands of humans, that “death by algorithm” is “just wrong.” And these data show that this concern is not being manufactured by the campaign rhetoric itself but represents participants’ general reaction whether or not they are primed to think in terms of “killer robots.” This poll generally supports the claims of humanitarian disarmament campaigners who are trying to channel this public concern into preventive government action to keep meaningful human control over life and death decisions in war.

About the author

Charli Carpenter is a Professor of Political Science at University of Massachusetts-Amherst and the author of ‘Lost’ Causes: Agenda-Vetting in Global Issue Networks and the Shaping of Human Security. She blogs at Duck of Minerva.

