Israel has recruited an AI program called Lavender into its ongoing assault against Palestinians. Lavender processes military intelligence that would previously have been processed by humans, producing a list of targets for the Israel Defense Forces (IDF) to kill. This use of AI, which has drawn swift condemnation from legal scholars and human rights advocates, represents a new role for technology in warfare. In what follows, I explore how the technological aspects of AI systems such as Lavender contribute to a false sense of their authority and credibility. (All details and quotations not otherwise attributed are sourced from this April 5 report on Lavender.)
While I will focus on the technological aspect of Lavender, let us be clear about the larger ethical picture. Israel’s extended campaign — with tactics like mass starvation, high-casualty bombing, dehumanizing language, and destroying health infrastructure — is increasingly being recognized as a genocide. The evil of genocide almost exceeds comprehension; and in the wake of tens of thousands of deaths, there is no point quibbling about methods. I offer the analysis below as a way to help us understand the role that AI actually plays — and does not play — not because its role is central in the overall ethical picture, but because it is a new element in the picture that bears explaining. It is my hope that identifying the role of technology in this instance will give us insight into AI’s ethical and epistemic dangers, as well as into how oppression will be mechanized in the coming years. As a political project, we must use every tool we have to resist the structures and acts of oppression that make these atrocities possible. Understanding may prove a helpful tool.
Let’s start by understanding how Lavender works. In its training phase, Lavender used data concerning known Hamas operatives to determine a set of characteristics, each of which indicates that an individual is likely to be a member of Hamas. Lavender scans data regarding every Gazan in the IDF’s database and, using this set of characteristics, generates a score from 1 to 100. The higher the number, the more likely that individual is to be a member of Hamas, according to the characteristics the AI produced. Names with sufficiently high scores are output onto a kill list. Then, after a brief check to confirm that a target is male, commanders turn the name over to additional tracking technologies, ordering the air force to bomb the target once their surveillance technology indicates that he is at home.
What role does this new technology play in apparently authorizing the military actions that are causally downstream of its output? I will highlight three aspects of that role: the use of AI such as Lavender alienates the people involved from their actions, inserts a non-agent into an apparent position of authority in a high-stakes process, and relies on its technological features to boost the credibility of ultimately human decisions.
This technology affords a degree of alienation to the person who authorizes the subsequent violence. My main interest here is not whether we should pity the person pushing their lever in the war machine, alienated as they are from their work. The point, rather, is that alienation from the causes and consequences of our actions dulls the conscience, and in this case the oppressed suffer for it. As one source from the Israeli military puts it, “I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago…. The machine did it coldly. And that made it easier.” Says another, “even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end.” The swiftness and ease of the technology separate people from the reality of what they are taking part in, paving the way for an immensely deadly campaign.
With Lavender in place, people are seemingly relieved of their decision-making. But the computer is not an agent, and its technology cannot properly bear moral responsibility for the human actions in which it plays a causal role. This is not to say that no one is morally responsible for Lavender’s output; those who put it in place knew what it would do. However, the AI’s programming does not determinately cause its output, which gives the appearance that its creators have invented something independent, capable of making decisions on its own. Thus, Lavender offers a blank space in the midst of a causal chain of moral responsibility between genocidal intent and genocidal action, while paradoxically providing a veneer of authority for that action. (More on that authority below.) Israel’s use of Lavender offloads moral responsibility onto the one entity in the process that can’t actually bear it, obscuring along the way the amount of human decision-making that really goes into what Lavender produces and how it is used.
The technological aspect of Lavender is not incidental to its authorizing role. In “The Seductions of Clarity,” philosopher C. Thi Nguyen argues that clarity, far from always being helpful to us as knowers, can sometimes obscure the truth. When a message seems clear — easily digested, neatly quantified — this ease can lull us into accepting it without further inquiry. Clarity can thus be used to manipulate, depriving us of the impetus to investigate further.
In a similar fashion, Lavender’s output offers a kind of ease and definiteness that plausibly acts as a cognitive balm. A computer told us to! It’s intelligent! This effect is internal to the decision-making process, reassuring the people who act on Lavender’s output that what they are doing is right, or perhaps that it is out of their hands. (This effect could also be used externally in the form of propaganda, though Israel’s current tactic is to downplay the role of AI in its decisions.)
Machines have long been the tools that settle disputes when people can’t agree. You wouldn’t argue with a calculator, because the numbers don’t lie. As one source internal to the IDF put it, “Everything was statistical, everything was neat — it was very dry.” But the cold clarity of technology cannot absolve us of our sins, whether moral or epistemic. Humans gave this technology the parameters in which to operate. Humans entrust it with producing its kill list. And it is humans who press play on the process that kills the targets the AI churns out. The veneer of credibility and objectivity afforded by the technical process obscures a familiar reality: that the people who enact this violence choose to do so; that it is up to the local human agents, their commanders, and their government.
So in the end we find that this technology is aptly named. Lavender — the plant — has long been known to help people fall asleep. Lavender — the AI — can have an effect that is similarly lulling. When used to automate and accelerate genocidal intelligence, this technology alienates humans from their own actions. It lends the illusion of authority to an entity that can’t bear moral responsibility, easing the minds of those involved with the comforting authority of statistics. But it can only have this effect if we allow it to, and we should rail against the use of it when so much is at stake.