
Real Life Terminators: The Inevitable Rise of Autonomous Weapons

By D'Arcy Blaxell
11 Aug 2022
[Image: Predator drones in formation]

Slaughterbots, a YouTube video by the Future of Life Institute, has racked up nearly three and a half million views for its dystopian nightmare in which automated killing machines use facial recognition to track down and murder dissident students. Meanwhile, New Zealand and Austria have called for a ban on autonomous weapons, citing ethical and equity concerns, and a group of parliamentarians from thirty countries has advocated for a treaty banning the development and use of so-called “killer robots.” In the U.S., however, a bipartisan committee found that a ban on autonomous weapons “is not currently in the interest of U.S. or international security.”

Despite the sci-fi futurism of Slaughterbots, autonomous weapons are not far off. Loitering munitions, which can hover over an area before self-selecting and destroying a target (and themselves), have proliferated since the first reports of their use by Turkish-backed forces in Libya last year. They were used on both sides of the conflict between Armenia and Azerbaijan, while U.S.-made Switchblade and Russian ZALA KYB kamikaze drones have recently been employed in Ukraine. China has even revealed a ship which can not only operate and navigate autonomously but also deploy drones of its own (although the ship is, mercifully, unarmed).

Proponents hope that autonomous weapons will reduce casualties overall by replacing front-line soldiers on the battlefield.

As well as getting humans out of harm’s way, autonomous weapons might be more precise than their human counterparts, reducing collateral damage and risk to civilians.

A survey of Australian Defence Force officers found that the possibility of risk reduction was a significant factor in troops’ attitudes to autonomous weapons, although many retained strong misgivings about operating alongside them. Yet detractors of autonomous weapons, like the group Stop Killer Robots, worry about the ethics of turning life-or-death decisions over to machines. Apart from the dehumanizing nature of the whole endeavor, there are concerns about a lack of accountability and the potential for algorithms to entrench discrimination – with deadly results.

If autonomous weapons can reduce casualties, the concerns over dehumanization and algorithmic discrimination might fade away. What could be a better affirmation of humanity than saving human lives? At this stage, however, data on precision is hard to come by. And there is little reason to think that truly autonomous weapons will be more precise than ‘human-in-the-loop’ systems, which require a flesh-and-blood human to sign off on any aggressive action (although arguments for removing the human from the loop do exist).

There is also the risk that the development of autonomous weapons will lower the barrier of entry to war: if we only have to worry about losing machines, and not people, we might lose sight of the true horrors of armed conflict.

So should we trust robots with life-or-death decisions? Peter Maurer, President of the International Committee of the Red Cross, worries that abdicating responsibility for killing – even in the heat of battle – will decrease the value of human life. Moreover, the outsourcing of such significant decisions might lead to an accountability gap, where we are left with no recourse when things go wrong. We can hold soldiers to account for killing innocent civilians, but how can we hold a robot to account – especially one which destroys itself on impact?

Technology ethicist Steven Umbrello dismisses the accountability gap, arguing that autonomous weapons are no more troubling than traditional ones. If we focus on the broader system, accountability can be assigned to decision-makers in the military chain of command and to the designers and engineers of the weapons themselves. There is never a case where the robot is solely at fault: if something goes wrong, we will still be able to find out who is accountable. This response can also apply to the dehumanization problem: it isn’t truly robots who are making life-or-death decisions, but the people who create and deploy them.

The issue with this approach is that knowing who is accountable isn’t the only factor in accountability: it will, undoubtedly, be far harder to hold those responsible to account.

They won’t be soldiers on the battlefield, but programmers in offices and on campuses thousands of kilometers away. So although the accountability gap may not be an insurmountable philosophical problem, it will still be a difficult practical one.

Although autonomous weapons are currently confined to the battlefield, we also ought to consider their inevitable spread into the domestic sphere. As of last year, over 15 billion dollars in surplus military technology had found its way into the hands of American police. There are already concerns that the proliferation of autonomous systems in Southeast Asia could lead to increases in “repression and internal surveillance.” And Human Rights Watch worries that “Fully autonomous weapons would lack human qualities that help law enforcement officials assess the seriousness of a threat and the need for a response.”

But how widespread are these ‘human qualities’ in humans? Police kill over a thousand people each year in the U.S. Robots might be worse – but they could be better. They are unlikely to reflect the fear, short tempers, poor self-control, or lack of training of their human counterparts.

Indeed, an optimist might hope that autonomous systems can increase the effectiveness of policing while reducing danger to both police and civilians.

There is a catch, however: not even AI is free of bias. Studies have found racial bias in algorithms used in risk assessments and facial recognition, and a Microsoft chatbot had to be shut down after it started tweeting offensive statements. Autonomous weapons with biases against particular ethnicities, genders, or societal groups would be a truly frightening prospect.

Finally, we can return to science fiction. What if one of our favorite space-traveling billionaires decides that a private human army isn’t enough, and they’d prefer a private robot army instead? In 2017, a group of billionaires, AI researchers, and academics – including Elon Musk – signed an open letter warning about the dangers of autonomous weapons. That warning wasn’t heeded, and development has continued unabated. With the widespread military adoption of autonomous weapons already occurring, it is only a matter of time before they wind up in private hands. If dehumanization and algorithmic discrimination are serious concerns, then we’re running out of time to address them.

 

Thanks to my friend CAPT Andrew Pham for his input.

D’Arcy Blaxell is completing his PhD in Philosophy at the University of New South Wales in Sydney, Australia. His work focuses on the philosophy of perception and perceptual experience, and he also writes on ethical and political philosophy.