
Military AI and the Illusion of Authority

Israel has recruited an AI program called Lavender into its ongoing assault against Palestinians. Lavender processes military intelligence that previously would have been processed by humans, producing a list of targets for the Israel Defense Forces (IDF) to kill. This novel use of AI, which has drawn swift condemnation from legal scholars and human rights advocates, represents a new role for technology in warfare. In what follows, I explore how the technological aspects of AI such as Lavender contribute to a false sense of its authority and credibility. (All details and quotations not otherwise attributed are sourced from this April 5 report on Lavender.)

While I will focus on the technological aspect of Lavender, let us be clear about the larger ethical picture. Israel’s extended campaign — with tactics like mass starvation, high-casualty bombing, dehumanizing language, and destroying health infrastructure — is increasingly being recognized as a genocide. The evil of genocide almost exceeds comprehension; and in the wake of tens of thousands of deaths, there is no point quibbling about methods. I offer the below analysis as a way to help us understand the role that AI actually plays — and does not play — not because its role is central in the overall ethical picture, but because it is a new element in the picture that bears explaining. It is my hope that identifying the role of technology in this instance will give us insight into AI’s ethical and epistemic dangers, as well as insight into how oppression will be mechanized in the coming years. As a political project, we must use every tool we have to resist the structures and acts of oppression that make these atrocities possible. Understanding may prove a helpful tool.

Let’s start with understanding how Lavender works. In its training phase, Lavender used data concerning known Hamas operatives to determine a set of characteristics, each of which indicates that an individual is likely to be a member of Hamas. Lavender scans data regarding every Gazan in the IDF’s database and, using this set of characteristics, generates a score from 1 to 100. The higher the number, the more likely that individual is to be a member of Hamas, according to the set of characteristics the AI produced. Lavender outputs these names onto a kill list. Then, after a brief check to confirm that a target is male, commanders turn the name over to additional tracking technologies, ordering the air force to bomb the target once their surveillance technology indicates that he is at home.

What role does this new technology play in apparently authorizing the military actions that are causally downstream of its output? I will highlight three aspects of its role. The use of AI such as Lavender alienates the people involved from their actions, inserting a non-agent into an apparent role of authority in a high-stakes process, while relying on its technological features to boost the credibility of ultimately human decisions.

This technology affords a degree of alienation for the human person who authorizes the subsequent violence. My main interest here is not whether we should pity the person pushing their lever in the war machine, alienated as they are from their work. The point, rather, is that alienation from the causes and consequences of our actions dulls the conscience, and in this case the oppressed suffer for it. As one source from the Israeli military puts it, “I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago…. The machine did it coldly. And that made it easier.” Says another, “even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end.” The swiftness and ease of the technology separates people from the reality of what they are taking part in, paving the way for an immensely deadly campaign.

With Lavender in place, people are seemingly relieved of their decision-making. But the computer is not an agent, and its technology cannot properly bear moral responsibility for the human actions that it plays a causal role in. This is not to say that no one is morally responsible for Lavender’s output; those who put it in place knew what it would do. However, the AI’s programming does not determinately cause its output, giving the appearance that the creators have invented something independent that can make decisions on its own. Thus, Lavender offers a blank space in the midst of a causal chain of moral responsibility between genocidal intent and genocidal action, while paradoxically providing a veneer of authority for that action. (More on that authority below.) Israel’s use of Lavender offloads moral responsibility onto the one entity in the process that can’t actually bear it — in the process obscuring the amount of human decision-making that really goes into what Lavender produces and how it’s used.

The technological aspect of Lavender is not incidental to its authorizing role. In “The Seductions of Clarity,” philosopher C. Thi Nguyen argues that clarity, far from always being helpful to us as knowers, can sometimes obscure the truth. When a message seems clear — easily digested, neatly quantified — this ease can lull us into accepting it without further inquiry. Clarity can thus be used to manipulate, depriving us of the impetus to investigate further.

In a similar fashion, Lavender’s output offers a kind of ease and definiteness that plausibly acts as a cognitive balm. A computer told us to! It’s intelligent! This effect is internal to the decision-making process, reassuring the people who act on Lavender’s output that what they are doing is right, or perhaps that it is out of their hands. (This effect could also be used externally in the form of propaganda, though Israel’s current tactic is to downplay the role of AI in their decisions.)

Machines have long been the tools that settle disputes when people can’t agree. You wouldn’t argue with a calculator, because the numbers don’t lie. As one source internal to the IDF put it, “Everything was statistical, everything was neat — it was very dry.” But the cold clarity of technology cannot absolve us of our sins, whether moral or epistemic. Humans gave this technology the parameters in which to operate. Humans entrust it with producing its death list. And it is humans who press play on the process that kills the targets the AI churns out. The veneer of credibility and objectivity afforded by the technical process obscures a familiar reality: that the people who enact this violence choose to do so. That it is up to the local human agents, their commanders, and their government.

So in the end we find that this technology is aptly named. Lavender — the plant — has long been known to help people fall asleep. Lavender — the AI — can have an effect that is similarly lulling. When used to automate and accelerate genocidal intelligence, this technology alienates humans from their own actions. It lends the illusion of authority to an entity that can’t bear moral responsibility, easing the minds of those involved with the comforting authority of statistics. But it can only have this effect if we let it — and we should rail against its use when so much is at stake.

How I Learned to Worry About the Bombs: Cluster Munitions in Ukraine


On Friday, July 7th, Colin Kahl, the Undersecretary of Defense for Policy, announced that the U.S. is sending additional military aid to Ukraine, including cluster munitions or cluster bombs. Cluster munitions are, effectively, bombs that contain many smaller bombs – they are designed to open mid-air before reaching their target, releasing smaller bombs, called submunitions or “bomblettes,” into a large area. Military decision makers view them as more effective than traditional artillery but cheaper than more advanced weaponry like guided missiles. Nonetheless, the Biden administration has faced pushback for this decision. U.S. allies have criticized the decision and some Democrats in Congress have expressed concerns.

The problem is that cluster munitions pose significant risks to civilians. All munitions can fail – there are still unexploded bombs from the Second World War found in Europe. Since cluster munitions may contain hundreds of submunitions, failure of at least one submunition is significantly more likely than the failure of a conventional explosive. Further, because submunitions are numerous and small (some the size of a tennis ball), it is difficult to determine when one has failed to detonate and to track it down afterwards.

For instance, the United States made heavy use of cluster munitions to bomb Laos during the Vietnam War, in an effort to disrupt North Vietnamese supply lines. As much as 30% of the munitions may have failed to detonate as intended. The Mines Advisory Group reports that it has disarmed its 300,000th bomb in Laos since beginning operations there in 1994. It also estimates that 50,000 people, half of them children, were killed by unexploded ordnance in Laos, with 20,000 of those deaths occurring after the war concluded. Civilians may unintentionally activate the explosives after unknowingly stepping on them, or when attempting to move or scrap the munitions.

As a result, many nations have sought to eliminate the use of cluster munitions. The UN Convention on Cluster Munitions has 108 signatories. These nations agreed to never produce, store, transfer or use cluster munitions, and to destroy their stockpiles of these weapons. Notably, the United States, Ukraine, and Russia did not sign on to the agreement.

Is it morally justifiable for the U.S. to send cluster munitions to Ukraine? I have previously written on just war theory and the war in Ukraine. In that discussion, I enumerated three criteria that theorists assess to determine the permissibility of wartime acts. To be morally justified, first, acts must not intentionally target civilians. However, acts which knowingly result in the deaths of civilians may be permissible. This is known as the doctrine of double effect. But justification requires two further standards. The harms of the act must be proportionate to the good that it aims to secure, meaning the outcomes it aims to achieve must “fit” the harms produced; the greater the harm, the greater the gain must be. Additionally, the harms ought to be necessary to achieve the goal – if there is some other, less harmful measure that can bring about the same results, then the more harmful act is not justified. With this in mind, let us consider some of the arguments public officials have offered to defend this decision.

Secretary of State Antony Blinken argued, among other points, that providing cluster munitions to Ukraine would not radically change the risk to civilians — after all, the Russian military has already used cluster munitions. In fact, independent investigators believe that the Russian military deliberately targeted civilian locations with cluster munitions. Indeed, then-White House Press Secretary Jen Psaki commented on these reports, stating that “If that were true, it would potentially be a war crime.”

Ultimately, this point fails to demonstrate anything from a moral point of view. Past wrong acts do not justify further, albeit lesser, wrongdoing in response. Even if there is already a risk posed to civilians from prior Russian use of cluster munitions, further use increases that risk. Thus, this point raised by Secretary Blinken is either irrelevant or incorrect, and perhaps both.

Several members of the Biden administration have emphasized the low dud rate of the cluster munitions the U.S. is sending to Ukraine. Purportedly 2.5% or fewer of these munitions fail to detonate as intended. They have contrasted this with the Russian military’s use of munitions with a dud rate they claim is as high as 30 to 40%. However, previous reports from the Pentagon state that some submunitions in U.S. cluster bombs have a failure rate of 14% or greater.

Even if we take for granted the lower of the reported dud rates, it is unclear what this is supposed to demonstrate, morally speaking. Again, this dovetails with the previous point about the Russian military’s use of cluster weapons. Some of these munitions will undoubtedly fail. Thus, U.S. provided cluster munitions may still kill innocents. At best, this line of thought shows that the use of U.S. provided cluster munitions is more likely to be proportionate than the Russian use, given the lesser risk to civilians. But it is possible that both would be unjustified.

A more plausible defense of sending cluster munitions to Ukraine stems from the notion that they are necessary. President Biden has argued that the munitions are currently needed for a “transitionary period” so Ukraine’s military can keep up pressure while conventional munitions are restocked. Secretary Blinken also claimed that Ukraine would be defenseless without these additional munitions.

I lack the expertise to comment directly on the military necessity of using these weapons. So, in this sense, my analysis here is limited. Perhaps Ukrainian armed forces could hold the line until restocked with conventional munitions. Alternatively, perhaps waiting would give the Russian forces more time to dig in their defenses. This could make any Ukrainian counter-offensive more difficult and deadly. More troublingly, it could help the Russian military regain previously lost ground in Ukraine.

Regardless, this line of argument seems suspect in that it appears self-undermining. Even proponents of sending cluster munitions to Ukraine want their use limited; specifically, they should be used only until conventional weapon stockpiles can be replenished. This, ultimately, suggests skepticism that cluster munitions are morally justified. If their use were justified, it would be quite odd to insist that Ukrainian forces utilize them only temporarily.

Of course, one might argue that even questioning whether the U.S. ought to send these munitions misses the point. The people of Ukraine are fighting a war against an unjust aggressor and defending their sovereignty. In the face of this existential threat to a democracy posed by an authoritarian regime, perhaps the least we can do is arm them with whatever munitions they request, within reason of course.

In the Republic, when discussing the nature of justice with Cephalus, Socrates asks his interlocutor to consider the following scenario: Suppose you borrowed a sword from a friend. Later your friend, now in a crazed state, asks you to return the sword. Both agree that even if justice requires paying one’s debts, surely justice would require withholding the sword if innocent lives are at risk. The consequences of what happens after you give your friend the sword, or in this case, the bombs, affect the morality of your act as well. The supplier does not have moral carte blanche.

Ultimately, whether supplying cluster munitions to Ukraine is justified turns on what these munitions will accomplish — are they necessary to achieve some substantive good that is worth the risk to civilian lives? If there is some less risky alternative with a reasonable chance of achieving the same goals, then that alternative appears morally preferable. Further, even if cluster munitions are required to achieve some current objective, there must be an honest examination of whether those gains are worth the loss of civilian lives. Thus far, the arguments offered by the Biden administration speak minimally to these points.

Considered Position: Thinking Through Sanctions – The Ethics of Targeting Civilians


This piece continues a Considered Position series investigating the purpose and permissibility of economic sanctions.

In this series of posts, I want to investigate some of the ethical questions surrounding the use of sanctions. Each post will be dedicated to one important ethical question.

Part 1: Do sanctions work to change behavior?

Part 2: Do sanctions unethically target civilians?

Part 3: What obligations do we as individuals have with regard to sanctions?

In the first post I suggested reasons to think that imposing economic sanctions generally has a good effect. In this post, I want to consider what I think is the strongest objection to the use of sanctions – namely, that they target civilians in an unjust manner.

Double Effect and The Combatant/Non-Combatant Distinction

One of the fundamental principles of just war theory is the distinction between combatants and non-combatants. In war, you are not supposed to target enemy civilians even if you think doing so might terrorize an enemy into giving up.

Now, this does not mean that you cannot ever harm civilians. Just war theorists acknowledge that sometimes civilians will die as a result of military action. You cannot wage a war without some collateral damage. Nevertheless, you are not supposed to target civilians. You are not supposed to intend that they be harmed.

We can illustrate this distinction by considering two different hypothetical cases of military bombing.

Case 1 – Strategic Bomber: A pilot is told that by destroying an enemy’s munitions factory, she will be able to end the enemy’s ability to wage war. By ending the war, the pilot will be able to save 200,000 lives. However, she is also told that the enemy has placed the munitions factory near a retirement center. If the pilot blows up the munitions factory, the secondary explosion will destroy the retirement center as well, killing 2,000 elderly civilians.

Case 2 – Terror Bomber: A pilot is told that the enemy is near the breaking point and might soon give up. However, it will require one last decisive strike against morale. The military’s psychologists have realized that the other country particularly values the lives of the elderly, and so if the pilot could kill several thousand elderly civilians that would demoralize the enemy, ending their ability to wage war. By ending the war, the pilot will be able to save 200,000 lives.

In both cases, the pilot faces a choice of whether to drop a bomb which will both end the war and kill 2,000 civilians. However, there is an important difference. In the strategic bomber case, she is not targeting the enemy civilians; in the terror bomber case, she is.

Here is one way to see the difference.

Suppose that in the first case, after the bombing the pilot comes home and turns on the TV. The TV announcer explains that a surprise bombing destroyed the enemy’s primary munitions factory. The announcer then goes on to explain that in a weird twist of fate, the bombing happened at the exact same time that everyone at the retirement center had left for a group trip to the zoo, and so no civilians were killed.

In the first case, the pilot would be thrilled. This is great news. The munitions factory was destroyed, and no civilians were harmed.

In contrast, suppose the second pilot targeted the same retirement center. When she gets home she also hears that no civilians were killed. But in this second case, the pilot will not be thrilled. The reason the pilot bombed the retirement center was to kill civilians. Killing civilians was the means to the end of ending the war. If the people don’t die, the pilot will not have helped stop the war at all.

In the terror bombing case, the pilot intends civilian deaths, because the harm to civilians is how the pilot plans to end the war. The civilians are, therefore, used as a means to an end. The civilians are viewed, as Warren Quinn says, “as material to be strategically shaped or framed.”

This distinction is core to just war theory, and for good reason.

But the distinction is often misunderstood. For example, many people mistakenly think that one’s ‘intention’ is just one’s ultimate goal (ending the war). Thus, some have tried to use the intention/foresight distinction to say that Harry Truman did not intend civilian deaths when he authorized dropping atomic weapons on Japan. The thought was that Truman only intended to win the war.

But this is not how the principle of double effect works. Truman still intended those civilians’ deaths, because it was by killing civilians that Truman hoped to win the war. This is why Harry Truman was a murderer and a war criminal (as the great ethicist Elizabeth Anscombe argued).

The Problem for Sanctions

How do these principles apply to sanctions?

They create a real ethical challenge for the use of sanctions. That is because sanctions tend to directly target civilians. The goal of most sanctions is to inflict damage to a nation’s economy in order to change the government’s cost-benefit calculation. But it seems to do this damage by harming civilians.

Thus, sanctions seem to be a direct violation of the principle of double effect. Or so Joy Gordon argues:

Although the doctrine of double effect would seem to justify “collateral damage,” it does not offer a justification of sanctions. . . . The direct damage to the economy is intended to indirectly influence the leadership, by triggering political pressure or uprisings of the civilians, or by generating moral guilt from the “fearful spectacle of the civilian dead.” Sanctions directed against an economy would in fact be considered unsuccessful if no disruption of the economy took place. We often hear commentators objecting that “sanctions didn’t work” in one situation or another because they weren’t “tight” enough — they did not succeed in disrupting the economy. Thus, sanctions are not defensible under the doctrine of double effect.

Now, this objection does not apply to all sanctions. Some ‘smart sanctions’ do try to directly target the leaders of a military, and so do respect a distinction between civilians and combatants. But many other sanctions do not, including many of the sanctions that the west is currently levying against Russia.

A Possible Reply

There is a plausible reply that one can make on behalf of sanctions. That is because there is a big difference between dropping a bomb on someone and refusing to trade with someone.

The difference is that people have a right not to be killed, but it is not at all clear that anyone has the right to trade in Western markets. It is wrong for me to threaten to take your money unless you clean my house. But it is not wrong for me to offer to pay you if you clean my house. In both cases, you have more money if you clean my house than if you don’t, but in one case your rights are being violated and in the other they are not. If I threaten to sabotage your children’s grades unless you give me money, then I am using your children as a means to an end. But there is nothing wrong with me saying I will only tutor your kids if you give me money.

So, you might think that sanctions are not designed to harm civilians unless the government changes behavior. Instead, we are just refusing to help unless the government changes behavior. And that seems, on the whole, far more ethically justifiable.

Real World Complications

So which view is right? Do sanctions violate the right of innocent civilians, using them as a means to an end to put pressure on a foreign government?

It’s a difficult question. And partly I think it might depend on the details of the sanction. Take the action of PepsiCo as an example. The company recently announced that they would no longer sell Pepsi, 7 Up, or other soft drinks in Russia. However, the company will continue to sell milk, baby food, and formula.

This strikes me as, plausibly, the right balance. I think it is plausible that people have a right to certain basic goods (like food, water, or baby formula), but not rights to Diet Pepsi. As such, it would make sense to refuse to sell luxuries, even if one continues to supply civilians with necessities.

Thus, it seems that we should probably oppose any sanctions that prevent the sale of life-saving medications to Russian civilians; but it seems justifiable to support sanctions that prevent the sale of American-made cars.

If North Korea Launches a Nuclear Attack, How Should the U.S. Respond?


North Korea’s regime has taken a bolder step in its confrontation with the United States: it threatened to launch an attack against Guam, a U.S. territory in the Pacific. Then, it walked the threat back. But we have seen this kind of behavior from Kim Jong Un many times, so we may foresee that, sooner or later, he will again threaten to attack Hawaii, Guam, South Korea, or some other target within North Korea’s range. If such an attack takes place, and it is a nuclear attack, how should the U.S. ethically respond?
