
Battlefield A.I. and the Future of War

By Conner Gordon
8 Jun 2015

The robots are coming. Slowly and clumsily, perhaps, but they’re coming. At least, it would seem that way, with competitions like the DARPA Robotics Challenge taking place. Funded by the Department of Defense, the challenge brought together 24 teams to show off the best and most versatile robots they could build, with a $2 million prize for the winner.

Prompted by shortcomings in the response to the 2011 Fukushima reactor meltdown, the competition aimed to explore how robots could better function in a disaster environment. While this goal seems innocent enough, however, many are concerned that disaster relief will be only the first step toward a future of armed, autonomous military robots. Indeed, with increasingly advanced technologies being deployed on the battlefield, The Verge’s Adrianne Jeffries writes that the debate around killer robots has never been more important.

With the development of increasingly mechanized and semi-autonomous weapon systems like drones, it appears to be only a matter of time before fully autonomous robots become a battlefield possibility. According to Russell Brandom, the impact of such developments raises not only logistical questions, but also the moral question of whether the act of killing a person should ever be an inhuman affair.

In light of this controversy, steps to dehumanize combat through “killer robots” can be seen as both positive and negative. On one hand, well-programmed robots could theoretically reduce unintentional civilian casualties, or even prevent the kinds of human actions that brought about massacres like those at My Lai in 1968 and Kandahar in 2012. On the other hand, robots and their programming are equally human creations; as such, they could just as easily be purposed for atrocity, with the added threat that such atrocity would be carried out even more efficiently.

Additionally, there is the question of whether any programming could be refined enough to limit civilian casualties. In the chaos of battle, even the most well-trained soldiers have killed civilians, both intentionally and unintentionally. Whether autonomous machines could avoid such actions remains to be seen.

The development of mechanized forms of combat also stands to change the way war crimes are prosecuted. In the hypothetical event of an intentional robot-involved atrocity or massacre, the resulting investigation would need to examine the factors that caused it. If the machine acted on programming designed to carry out the massacre, the threshold for proving intentionality or organization behind a war crime would be easier to meet. After all, outside of science fiction, robots cannot become fully autonomous and commit massacres on their own. Rather, they always operate on their programming, and that programming could clearly establish the chain of command behind a massacre.

However, as Jeffries’ article notes, this fact also raises a question of culpability when a robot does commit a war crime. If an automated turret kills ten civilians at a checkpoint, who is at fault? The machine’s programmer, or perhaps the soldiers stationed at the checkpoint? Could blame be deflected entirely, the deaths written off as an unfortunate glitch in the system? And could such “glitches” be used to excuse or cover up war crimes? In this way, the mechanization of warfare also stands to make war crimes and other atrocities even more difficult to prosecute.

It is clear that introducing autonomous combat machines to the battlefield stands to cause a paradigm shift in how we think about war. Their development sparks questions not only of how we should think about combat and atrocity, but also of whether it is ethical to remove humans from the battlefield at all. Such questions need to be carefully considered by those designing and implementing these robots.

However, as Brandom argues, one thing is clear: the appearance of the killer robot on the battlefield is more likely than ever, and it may come sooner than expected. Now, then, is the time to give these questions due consideration.

Conner was a Graduate Fellow at the Prindle Institute from 2016-2018. Conner's writing focuses on memory, politics and culture. He is currently an MFA candidate at the University of Oregon.