
Military AI and the Illusion of Authority

Israel has recruited an AI program called Lavender into its ongoing assault against Palestinians. Lavender processes military intelligence that previously would have been processed by humans, producing a list of targets for the Israel Defense Forces (IDF) to kill. This use of AI, which has drawn swift condemnation from legal scholars and human rights advocates, represents a new role for technology in warfare. In what follows, I explore how the technological features of a system like Lavender contribute to a false sense of its authority and credibility. (All details and quotations not otherwise attributed are sourced from this April 5 report on Lavender.)

While I will focus on the technological aspect of Lavender, let us be clear about the larger ethical picture. Israel’s extended campaign — with tactics like mass starvation, high-casualty bombing, dehumanizing language, and destroying health infrastructure — is increasingly being recognized as a genocide. The evil of genocide almost exceeds comprehension; and in the wake of tens of thousands of deaths, there is no point quibbling about methods. I offer the below analysis as a way to help us understand the role that AI actually plays — and does not play — not because its role is central in the overall ethical picture, but because it is a new element in the picture that bears explaining. It is my hope that identifying the role of technology in this instance will give us insight into AI’s ethical and epistemic dangers, as well as insight into how oppression will be mechanized in the coming years. As a political project, we must use every tool we have to resist the structures and acts of oppression that make these atrocities possible. Understanding may prove a helpful tool.

Let’s start by understanding how Lavender works. In its training phase, Lavender used data concerning known Hamas operatives to determine a set of characteristics, each of which indicates that an individual is likely to be a member of Hamas. Lavender scans data regarding every Gazan in the IDF’s database and, using this set of characteristics, generates a score from 1 to 100. The higher the number, the more likely that individual is to be a member of Hamas, according to the characteristics the AI produced. Those whose scores are high enough are placed on a kill list. Then, after a brief check to confirm that a target is male, commanders turn the name over to additional tracking technologies, ordering the air force to bomb the target once surveillance technology indicates that he is at home.

What role does this new technology play in apparently authorizing the military actions that are causally downstream of its output? I will highlight three aspects of its role. The use of AI such as Lavender alienates the people involved from their actions, inserting a non-agent into an apparent role of authority in a high-stakes process, while relying on its technological features to boost the credibility of ultimately human decisions.

First, this technology affords the person who authorizes the subsequent violence a degree of alienation from their own actions. My main interest here is not whether we should pity the person pushing their lever in the war machine, alienated as they are from their work. The point, rather, is that alienation from the causes and consequences of our actions dulls the conscience, and in this case the oppressed suffer for it. As one source from the Israeli military puts it, “I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago…. The machine did it coldly. And that made it easier.” Says another, “even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end.” The swiftness and ease of the technology separates people from the reality of what they are taking part in, paving the way for an immensely deadly campaign.

Second, with Lavender in place, people are seemingly relieved of their decision-making. But the computer is not an agent, and its technology cannot properly bear moral responsibility for the human actions that it plays a causal role in. This is not to say that no one is morally responsible for Lavender’s output; those who put it in place knew what it would do. However, the AI’s programming does not determinately cause its output, giving the appearance that the creators have invented something independent that can make decisions on its own. Thus, Lavender offers a blank space in the midst of a causal chain of moral responsibility between genocidal intent and genocidal action, while paradoxically providing a veneer of authority for that action. (More on that authority below.) Israel’s use of Lavender offloads moral responsibility onto the one entity in the process that can’t actually bear it — thereby obscuring the amount of human decision-making that really goes into what Lavender produces and how it’s used.

Third, the technological aspect of Lavender is not incidental to its authorizing role. In “The Seductions of Clarity,” philosopher C. Thi Nguyen argues that clarity, far from always being helpful to us as knowers, can sometimes obscure the truth. When a message seems clear — easily digested, neatly quantified — this ease can lull us into accepting it without further inquiry. Clarity can thus be used to manipulate, depriving us of the impetus to investigate further.

In a similar fashion, Lavender’s output offers a kind of ease and definiteness that plausibly acts as a cognitive balm. A computer told us to! It’s intelligent! This effect is internal to the decision-making process, reassuring the people who act on Lavender’s output that what they are doing is right, or perhaps that it is out of their hands. (This effect could also be used externally in the form of propaganda, though Israel’s current tactic is to downplay the role of AI in its decisions.)

Machines have long been the tools that settle disputes when people can’t agree. You wouldn’t argue with a calculator, because the numbers don’t lie. As one source internal to the IDF put it, “Everything was statistical, everything was neat — it was very dry.” But the cold clarity of technology cannot absolve us of our sins, whether moral or epistemic. Humans give this technology the parameters in which to operate. Humans entrust it with producing its death list. And it is humans who press play on the process that kills the targets the AI churns out. The veneer of credibility and objectivity afforded by the technical process obscures a familiar reality: that the people who enact this violence choose to do so. That it is up to the local human agents, their commanders, and their government.

So in the end we find that this technology is aptly named. Lavender — the plant — has long been known to help people fall asleep. Lavender — the AI — can have an effect that is similarly lulling. When used to automate and accelerate genocidal intelligence, this technology alienates humans from their own actions. It lends the illusion of authority to an entity that can’t bear moral responsibility, easing the minds of those involved with the comforting authority of statistics. But it can only have this effect if we let it — and we should rail against the use of it when so much is at stake.

Real Life Terminators: The Inevitable Rise of Autonomous Weapons

image of predator drones in formation

Slaughterbots, a YouTube video by the Future of Life Institute, has racked up nearly three and a half million views for its dystopian nightmare in which automated killing machines use facial recognition to track down and murder dissident students. Meanwhile, New Zealand and Austria have called for a ban on autonomous weapons, citing ethical and equity concerns, and a group of parliamentarians from thirty countries has advocated for a treaty banning the development and use of so-called “killer robots.” In the U.S., however, a bipartisan committee found that a ban on autonomous weapons “is not currently in the interest of U.S. or international security.”

Despite the sci-fi futurism of slaughterbots, autonomous weapons are not far off. Loitering munitions, which can hover over an area before self-selecting and destroying a target (and themselves), have proliferated since the first reports of their use by Turkish-backed forces in Libya last year. They were used on both sides of the conflict between Armenia and Azerbaijan, while U.S.-made Switchblade and Russian Zala KYB kamikaze drones have recently been employed in Ukraine. China has even revealed a ship which can not only operate and navigate autonomously, but also deploy drones of its own (although the ship is, mercifully, unarmed).

Proponents of autonomous weapons hope that they will reduce casualties overall, as they replace front-line soldiers on the battlefield.

As well as getting humans out of harm’s way, autonomous weapons might be more precise than their human counterparts, reducing collateral damage and risk to civilians.

A survey of Australian Defence Force officers found that the possibility of risk reduction was a significant factor in troops’ attitudes to autonomous weapons, although many retained strong misgivings about operating alongside them. Yet detractors of autonomous weapons, like the group Stop Killer Robots, worry about the ethics of turning life-or-death decisions over to machines. Apart from the dehumanizing nature of the whole endeavor, there are concerns about a lack of accountability and the potential for algorithms to entrench discrimination – with deadly results.

If autonomous weapons can reduce casualties, the concerns over dehumanization and algorithmic discrimination might fade away. What could be a better affirmation of humanity than saving human lives? At this stage, however, data on precision is hard to come by. And there is little reason to think that truly autonomous weapons will be more precise than ‘human-in-the-loop’ systems, which require a flesh-and-blood human to sign off on any aggressive action (although arguments for removing the human from the loop do exist).

There is also the risk that the development of autonomous weapons will lower the barrier of entry to war: if we only have to worry about losing machines, and not people, we might lose sight of the true horrors of armed conflict.

So should we trust robots with life-or-death decisions? Peter Maurer, President of the International Committee of the Red Cross, worries that abrogating responsibility for killing – even in the heat of battle – will decrease the value of human life. Moreover, the outsourcing of such significant decisions might lead to an accountability gap, where we are left with no recourse when things go wrong. We can hold soldiers to account for killing innocent civilians, but how can we hold a robot to account – especially one which destroys itself on impact?

Technological ethicist Steven Umbrello dismisses the accountability gap, arguing that autonomous weapons are no more troubling than traditional ones. If we focus on the broader system, accountability can be conferred upon decision-makers in the military chain of command and upon the designers and engineers of the weapons themselves. There is never a case where the robot is solely at fault: if something goes wrong, we will still be able to find out who is accountable. This response can also apply to the dehumanization problem: it isn’t truly robots who are making life-or-death decisions, but the people who create and deploy them.

The issue with this approach is that knowing who is accountable isn’t the only factor in accountability: it will, undoubtedly, be far harder to hold those responsible to account.

They won’t be soldiers on the battlefield, but programmers in offices and on campuses thousands of kilometers away. So although the accountability gap may not be an insurmountable philosophical problem, it will still be a difficult practical one.

Although autonomous weapons are currently confined to the battlefield, we also ought to consider their inevitable spread into the domestic sphere. As of last year, over 15 billion dollars in surplus military technology had found its way into the hands of American police. There are already concerns that the proliferation of autonomous systems in Southeast Asia could lead to increases in “repression and internal surveillance.” And Human Rights Watch worries that “Fully autonomous weapons would lack human qualities that help law enforcement officials assess the seriousness of a threat and the need for a response.”

But how widespread are these ‘human qualities’ in humans? Police kill over a thousand people each year in the U.S. Robots might be worse – but they could be better. They are unlikely to reflect the fear, short tempers, poor self-control, or lack of training of their human counterparts.

Indeed, an optimist might hope that autonomous systems can increase the effectiveness of policing while reducing danger to both police and civilians.

There is a catch, however: not even AI is free of bias. Studies have found racial bias in algorithms used in risk assessments and facial recognition, and a Microsoft chatbot had to be shut down after it started tweeting offensive statements. Autonomous weapons with biases against particular ethnicities, genders, or societal groups would be a truly frightening prospect.

Finally, we can return to science fiction. What if one of our favorite space-traveling billionaires decides that a private human army isn’t enough, and they’d rather have a private robot army? In 2017, a group of billionaires, AI researchers, and academics – including Elon Musk – signed an open letter warning about the dangers of autonomous weapons. That warning wasn’t heeded, and development has continued unabated. With the widespread military adoption of autonomous weapons already occurring, it is only a matter of time before they wind up in private hands. If dehumanization and algorithmic discrimination are serious concerns, then we’re running out of time to address them.

 

Thanks to my friend CAPT Andrew Pham for his input.

Considered Position: Thinking Through Sanctions – The Ethics of Targeting Civilians

photograph of ATM line in Kyiv

This piece continues a Considered Position series investigating the purpose and permissibility of economic sanctions.

In this series of posts, I want to investigate some of the ethical questions surrounding the use of sanctions. Each post will be dedicated to one important ethical question.

Part 1: Do sanctions work to change behavior?

Part 2: Do sanctions unethically target civilians?

Part 3: What obligations do we as individuals have with regard to sanctions?

In the first post I suggested reasons to think that imposing economic sanctions generally has a good effect. In this post, I want to consider what I think is the strongest objection to the use of sanctions – namely, that they target civilians in an unjust manner.

Double Effect and the Combatant/Non-Combatant Distinction

One of the fundamental principles of just war theory is the distinction between combatants and non-combatants. In war, you are not supposed to target enemy civilians even if you think doing so might terrorize an enemy into giving up.

Now, this does not mean that you cannot ever harm civilians. Just war theorists acknowledge that sometimes civilians will die as a result of military action. You cannot wage a war without some collateral damage. Nevertheless, you are not supposed to target civilians. You are not supposed to intend that they be harmed.

We can illustrate this distinction by considering two different hypothetical cases of military bombing.

Case 1 – Strategic Bomber: A pilot is told that by destroying an enemy’s munitions factory, she will be able to end the enemy’s ability to wage war. By ending the war, the pilot will be able to save 200,000 lives. However, she is also told that the enemy has placed the munitions factory near a retirement center. If the pilot blows up the munitions factory, the secondary explosion will destroy the retirement center as well, killing 2,000 elderly civilians.

Case 2 – Terror Bomber: A pilot is told that the enemy is near the breaking point and might soon give up. However, it will require one last decisive strike against morale. The military’s psychologists have realized that the other country particularly values the lives of the elderly, and so if the pilot could kill 2,000 elderly civilians, that would demoralize the enemy, ending their ability to wage war. By ending the war, the pilot will be able to save 200,000 lives.

In both cases, the pilot faces a choice of whether to drop a bomb which will both end the war and kill 2,000 civilians. However, there is an important difference. In the strategic bomber case, she is not targeting the enemy civilians; in the terror bomber case, she is.

Here is one way to see the difference.

Suppose that in the first case, after the bombing the pilot comes home and turns on the TV. The TV announcer explains that a surprise bombing destroyed the enemy’s primary munitions factory. The announcer then goes on to explain that in a weird twist of fate, the bombing happened at the exact same time that everyone at the retirement center had left for a group trip to the zoo, and so no civilians were killed.

In the first case, the pilot would be thrilled. This is great news. The munitions factory was destroyed, and no civilians were harmed.

In contrast, suppose the second pilot targeted the same retirement center. When she gets home she also hears that no civilians were killed. But in this second case, the pilot will not be thrilled. The reason the pilot bombed the retirement center was to kill civilians. Killing civilians was the means to the end of ending the war. If the people don’t die, the pilot will not have helped stop the war at all.

In the terror bombing case, the pilot intends civilian deaths, because the harm to civilians is how the pilot plans to end the war. The civilians are, therefore, used as a means to an end. The civilians are viewed, as Warren Quinn says, “as material to be strategically shaped or framed.”

This distinction is core to just war theory, and for good reason.

But the distinction is often misunderstood. For example, many people mistakenly think that one’s ‘intention’ is just one’s ultimate goal (ending the war). Thus, some tried to use the intention/foresight distinction to say that Harry Truman did not intend civilian deaths when he authorized dropping atomic weapons on Japan. The thought was that Truman only intended to win the war.

But this is not how the principle of double effect works. Truman still intended those civilians’ deaths because it was by killing civilians that Truman hoped to win the war. This is why Harry Truman is a murderer and a war criminal (as was argued by the great ethicist Elizabeth Anscombe).

The Problem for Sanctions

How do these principles apply to sanctions?

They create a real ethical challenge for the use of sanctions. That is because sanctions tend to directly target civilians. The goal of most sanctions is to inflict damage on a nation’s economy in order to change the government’s cost-benefit calculation. But sanctions seem to do this damage by harming civilians.

Thus, sanctions seem to be a direct violation of the principle of double effect. Or so Joy Gordon argues:

Although the doctrine of double effect would seem to justify “collateral damage,” it does not offer a justification of sanctions. . . . The direct damage to the economy is intended to indirectly influence the leadership, by triggering political pressure or uprisings of the civilians, or by generating moral guilt from the “fearful spectacle of the civilian dead.” Sanctions directed against an economy would in fact be considered unsuccessful if no disruption of the economy took place. We often hear commentators objecting that “sanctions didn’t work” in one situation or another because they weren’t “tight” enough — they did not succeed in disrupting the economy. Thus, sanctions are not defensible under the doctrine of double effect.

Now, this objection does not apply to all sanctions. Some ‘smart sanctions’ do try to directly target the leaders of a military, and so do respect a distinction between civilians and combatants. But many other sanctions do not, including many of the sanctions that the West is currently levying against Russia.

A Possible Reply

There is a plausible reply that one can make on behalf of sanctions. That is because there is a big difference between dropping a bomb on someone and refusing to trade with someone.

The difference is that people have a right not to be killed, but it is not at all clear that anyone has the right to trade in Western markets. It is wrong for me to threaten to take your money unless you clean my house. But it is not wrong for me to offer to pay you if you clean my house. In both cases, you have more money if you clean my house than if you don’t, but in one case your rights are being violated and in the other they are not. If I threaten to sabotage your children’s grades unless you give me money, then I am using your children as a means to an end.  But there is nothing wrong with me saying I will only tutor your kids if you give me money.

So, you might think that sanctions do not threaten to harm civilians unless the government changes its behavior; rather, we are just refusing to help unless the government changes its behavior. And that seems, on the whole, far more ethically justifiable.

Real World Complications

So which view is right? Do sanctions violate the rights of innocent civilians, using them as a means to an end to put pressure on a foreign government?

It’s a difficult question. And partly I think it might depend on the details of the sanction. Take the action of PepsiCo as an example. The company recently announced that it would no longer sell Pepsi, 7 Up, or other soft drinks in Russia. However, it will continue to sell milk, baby food, and formula.

This strikes me as the right balance. I think it is plausible that people have a right to certain basic goods (like food, water, or baby formula), but no right to Diet Pepsi. As such, it would make sense to refuse to sell luxuries, even if one continues to supply civilians with necessities.

Thus, it seems that we should probably oppose any sanctions that prevent the sale of life-saving medications to Russian civilians; but it seems justifiable to support sanctions that prevent the sale of American-made cars.

Ukraine, Digital Sanctions, and Double Effect: A Response

image of Putin profile, origami style

Kenneth Boyd recently wrote a piece on the Prindle Post on whether tech companies, in addition to governments, have an obligation to help Ukraine by way of sanctions. Various tech companies and media platforms, such as TikTok and Facebook, are ready sources of misinformation about the war. This raises the question of whether imposing bans on such platforms would help deter Putin by raising the costs of the invasion of Ukraine and silencing misinformation. It is no surprise, then, that the digital minister of Ukraine, Mykhailo Fedorov, has approached Apple, Google, Meta, Netflix, and YouTube to block Russia from their services in different capacities. These methods would undoubtedly be less effective than financial sanctions, but the question is an important one: Are tech companies permitted or obligated to intervene?

One of the arguments Kenneth entertains against this position is that there could be side effects on the citizens of Russia who do not support the attack on Ukraine. As such, there are bystanders whom a move to ban media platforms would harm (how will some people reach their loved ones?). While such sanctions are potentially helpful in the larger picture of deterring Putin from continuing acts of aggression, is the potential cost morally acceptable in this scenario? If the answer is no, that is a mark against tech and media companies enacting such sanctions.

I want to make two points. First, this question of permissible costs is equally applicable to any government deciding to put sanctions on Russia. When the EU, Canada, the U.K., and the U.S. imposed economic sanctions on Russia’s central bank and its involvement in SWIFT, for instance, this effectively caused a run on cash and is likely the beginning of an inflation problem for Russians. This affects everyone in Russia, from those in the government to ‘mere civilians,’ including those protesting the war. As such, this cost must be weighed in the moral deliberation over whether to execute such an act.

Second, the Doctrine of Double Effect (DDE) helps us see why unintentionally harming bystanders is morally permissible in this scenario (not, mind you, in the case of innocent bystanders in Ukraine). So long as non-governmental institutions are the kind of entities morally permitted or obligated to respond (a question worth discussing, which Kenneth also raises), DDE applies equally to both types of institution when imposing sanctions with possible side effects.

What does the Doctrine of Double Effect maintain? The bumper sticker version is the following from the BBC: “[I]f doing something morally good has a morally bad side-effect, it’s ethically OK to do it providing the bad side-effect wasn’t intended. This is true even if you foresaw that the bad effect would probably happen.”

The name, one might guess, refers to the two effects one action produces. This bumper sticker version has considerable appeal. For instance, killing in self-defense falls under it. DDE is also applicable to certain cases of administering medicine with harmful side effects, and it explains the difference between suicide and self-sacrifice.

A good litmus question is whether and when a medical doctor is permitted to administer a lethal dose of medicine. It depends on the intentions, of course, but the bumper sticker version doesn’t capture whether the patient must be mildly or severely ill, whether there are other available options, etc.

The examples and litmus question should prime the intuitions for this doctrine. The full version of DDE (which the criteria below roughly follow) maintains that an agent may intentionally perform an action that will bring about an evil side effect (or effects) so long as the following conditions are simultaneously and entirely satisfied:

  1. The action performed must in itself be morally good or neutral;
  2. The good action and effect(s), and not the evil effect, are intended;
  3. The evil effect cannot be the means to achieve the good effect — the good must be achieved at least as directly as the evil;
  4. There must be a proportionality between the good and the evil, in which the evil is lesser than or equal to the good, which serves as a good reason for the act in question.

One can easily see how this applies to killing in self-defense. While it is impermissible to kill someone in cold blood, or even to kill someone who is plotting your own death, it is morally permissible to kill someone in self-defense. This is the case even if one foresees that the act of defense will require lethal effort.

As is evident, DDE does not justify the deaths of individuals in Ukraine who are unintentionally killed (say, in a bombing). For the act of untempered aggression is itself immoral, and so it fails the first criterion.

Now, apply these criteria to the question of tech companies that may impose sanctions to achieve a certain good and, with it, an evil.

What are the relevant goods and evils? In this case, the good is at least that of deterring Putin from further aggression and stopping misinformation. The bad is the cost to locals: for instance, the anti-war protestors in Russia who use these platforms to communicate their situation, and perhaps the individuals who rely on these media outlets to stay in touch with loved ones.

This type of act hits all four marks: the action is neutral, the good effects are the ones intended (presumably this is the case), the evil effects are not the means of achieving this outcome and are no more direct than the good effects, and the good far outweighs the evil caused by this.

It might not seem apparent that the evil here is equal to or less than the good achieved. But consider that civilians have other means of reaching loved ones, and that news outlets (not only TikTok and Facebook) remain prominent channels for communicating information. These are both goods. And thankfully, they would not be entirely lost because of such potential sanctions.

As should be clear, the potential bad side effects are not a good reason to refrain from imposing media and tech sanctions on Russia. This is not to say that there is therefore a good reason to impose sanctions. All we have done in this case is see how the respective side effects are not sufficient to rule out sanctions and how the action meets all four criteria. And this shows that it is morally permissible.

Do Police Intentions Matter?

photograph of armed police officer

Imagine if it became widely reported that police officers had been intentionally killing Black Americans for the express reason that they are Black. Public outrage would be essentially universal. But, while it is true that Black Americans are disproportionately the victims of police use of force, including lethal force, it seems unlikely that these rights violations are part of a conscious, intentional scheme on the part of those in power to oppress or terrorize Black citizens. At any rate, the official statements from law enforcement regarding these incidents invariably deny discriminatory motivations on the part of officers. Why, then, are we seeing calls to defund the police?

The slogan “Defund the Police” has been clarified by Black Lives Matter co-founder Alicia Garza on NBC’s Meet the Press: “When we talk about defunding the police, what we’re saying is, ‘Invest in the resources that our communities need.’” The underlying problem runs deep: it is rooted in an unrelenting devaluation of communities of color. Rights violations by police are part of a larger picture of racial inequality that includes economic, health, and educational disadvantages.

The sources of this inequality are mostly implicit and institutional: a product of the unconscious biases of individuals, including police officers, and prejudicial treatment “baked into” our institutions, like the justice system. That is, social inequality seems to be systemic and not an intentional program of overtly racist policies. In particular, most of us feel strongly that the all-too-frequent killing of unarmed Black citizens, though repellent, has been unintentional.

But does this distinction matter? A plausible argument could be made that the chronic, unintentional killing of unarmed Black men and women by police is morally on a par with the intentional killing of these citizens. Let me explain.

Let’s begin with the reasonable assumption that implicit racial bias, specifically an implicit devaluation of Black lives, impacts decisions made by all members of our society, including police officers. What is devaluation? Attitudes toward enemy lives in war throw some light on the concept: each side invariably comes to view enemy lives as less valuable than their own. Even unintended enemy civilian casualties, euphemistically termed “collateral damage,” become tolerable if the military objective is important enough. On the battlefield, tactical decisions must conform to a “tolerable” relation between the value of an objective and the anticipated extent of collateral damage. This relation is called “proportionality.”

By contrast, policing is intended to be a preventative exercise of authority in the interest of keeping the peace and protecting the rights of citizens, including suspected criminals. Still, police do violate rights on occasion, and police officials operate with their own concept of proportionality: use of force must be proportional to the threat or resistance the officer anticipates.

Ironically, rights violations usually occur in the name of the protection of rights, as when an officer uses excessive force to subdue a thief. Often, these violations are regarded as regrettable, but unavoidable; they are justified as the price we pay for law and order. But, in reality, these violations frequently stem from implicit racial biases. What’s more, the policy of “qualified immunity” offers legal protections for police officers, and this disproportionately deprives Black victims of justice in such cases. This combination of factors has led some to argue that police authority amounts to a form of State-sponsored violence. These rights violations resemble wartime collateral damage: they are unintended consequences deemed proportional to legitimate efforts to protect citizens’ rights.

Now consider the following question posed by philosopher Igor Primoratz regarding wartime collateral damage: is the foreseeable killing of civilians as a side-effect of a military operation morally any better than the intentional killing of civilians? Specifically, he asks, “suppose you were bound to be killed, but could choose between being killed with intent and being killed without intent, but as a side-effect of the killer’s pursuit of his end. Would you have any reason for preferring the latter fate to the former?”

Imagine two police officers, each of whom has killed a Black suspect under identical circumstances. When asked whether the suspect’s race was relevant to the use of force, the first officer says, “No, and I regret that deadly force was proportional to the threat I encountered.” The second officer says, “Yes, race was a factor. Cultural stereotypes predispose me to view Black men as likely threats, and institutional practices in the justice system keep the stakes for the use of lethal force relatively low. Thus, I regret my use of deadly force that I considered proportional to my perception of the threat in the absence of serious legal consequences.”

The second officer’s response would be surprising, but honest. Depictions of Black men in particular as violent “superpredators” abound in the media, in movies, and in the rhetoric of politicians. Furthermore, the doctrine of qualified immunity, which bars people from recovering damages when police violate their rights, offers protection to officers whose actions implicitly manifest bias.

In the absence of damning outside testimony, the first officer will be held blameless. The second officer will be said to have acted on conscious biases and his honesty puts him at risk of discipline or discharge. Although the disciplinary actions each officer faces will differ, the same result was obtained, under identical circumstances. The only difference is that the second officer made the implicit explicit, and the first officer simply denied that his own implicit bias was a factor in his decision.

Where, then, does the moral difference lie between, on one hand, the foreseeable violation of the rights of Black citizens in a society that systemically devalues Black lives, and, on the other hand, the intentional violation of those same rights? If the well-documented effect of racial bias in law enforcement leads us to foresee the same pattern of disproportionate rights violations in the future, and we do nothing about it, our acceptance of those violations is no more morally justified than the acceptance of intentional rights violations.

That is, if we can’t say why the intentional violation of Black rights is morally worse than giving police a monopoly on sanctioned violence under social conditions that harbor implicit racial biases, then sanctioning police violence looks morally unjustifiable in principle. That is enough to validate the call to divert funding from police departments into better economic, health, and educational resources for communities of color.

If North Korea Launches a Nuclear Attack, How Should the U.S. Respond?

A photo of the North Korean-South Korean border

North Korea’s regime has taken a bolder step in its confrontation with the United States: it has threatened to launch an attack against Guam, a U.S. territory in the Pacific. Then it walked the threat back. But we have seen this kind of behavior from Kim Jong Un many times, so we may foresee that, sooner or later, he will again threaten to attack Hawaii, Guam, South Korea, or any other target within North Korea’s range. If such an attack takes place, and it is a nuclear attack, how should the U.S. ethically respond?
