
The Speaker’s Climate Change Skepticism

photograph of Louisiana state flag before clouded sky

After a tumultuous election process, House Republicans finally elected Louisiana Representative Mike Johnson as Speaker of the House on October 25th. As the relatively unknown politician takes up one of the most powerful positions in American governance, he has come under scrutiny for his strident Evangelicalism, his denial of the legitimacy of the 2020 election, and his climate change skepticism.

Beyond his staunchly pro-oil-and-gas voting record, a crucial red flag was a 2017 comment: “The climate is changing, but the question is, is it being caused by natural cycles over the span of Earth’s history, or is it changing because we drive SUVs?” There is little reason to believe Johnson has become any more serious about climate change since then. For the majority of the American public, who want the government to do more to address climate change, Johnson’s position is disturbing. Such comments are also at odds with the consensus of the scientific community. But are they morally wrong?

Typically, we would consider someone’s position on climate change a factual matter, as opposed to a moral one. However, the factual and the moral are often entangled. First, our belief in certain facts often has direct moral implications; with climate change, lives are literally on the line. Second, and more controversially, we might consider some beliefs themselves to be immoral, for example, pseudoscientific beliefs about racial differences in intelligence. Part of the reason this is controversial is that some philosophers hold that people should not generally be held morally accountable for their beliefs, but only for their actions.

Let us assume going forward that Speaker Johnson’s climate change skepticism is sincere. Because he represents a state boasting a sizable oil and gas industry and has received oil and gas campaign donations, it can’t be ruled out that he is simply lying about his personal beliefs to line his pockets. This would not be good, of course, but it would hardly present a moral conundrum. More challenging is what to make of sincere climate skepticism.

To be clear, sincerity does not legitimate climate skepticism. Johnson is not a scientist; he is a lawyer, and it is unlikely that his position was formed through serious confrontation with the scientific evidence. Like most of us, his stance on climate change probably came from some casual reading and the views of his social circle. We might suspect that motivated reasoning – where what he wants to believe subconsciously impacts what he does believe – is at play. But none of this seems especially unethical; it’s merely human. In fact, given the long history of disinformation deployed by the fossil fuel industry to promulgate doubt about human-caused global warming, to hold someone accountable for sincere climate change skepticism is essentially to condemn them for believing someone else’s lies.

Given that Johnson is sincere and did not “choose” his climate change skepticism, most philosophers would find it difficult to hold him morally accountable for this ignorance. But it is not impossible. One option would be if his climate skepticism is not so accidental after all. Imagine, for example, that Mike Johnson’s clerks prepared a report on the evidence for human-caused climate change, and Representative Johnson declined to read it because he worried it would lead him to doubt his oil and gas initiatives in Louisiana. Here, in an important sense, Johnson chose ignorance.

Alternatively, we may argue that Johnson failed to live up to a reasonable expectation to form more accurate beliefs about climate change. This argument may apply especially strongly to someone in his position. As a member of Congress, he has access to extensive resources and, so the argument goes, he should have used some of these resources to understand climate change more fully. In other words, this line of argument contends that his ignorance is unreasonable, and therefore worthy of moral condemnation.

We can slightly tweak this argument: it is not simply about the resources he has available, but rather the nature of his position. He is a public servant with a responsibility to the people, and therefore his climate skepticism could be viewed as a failure to uphold the obligations of his office. Under this analysis, the moral concern about climate change skepticism does not attach to everyone, but rather specifically to those with certain kinds of responsibilities, such as politicians. Admittedly, the precise ethical contours of Johnson’s responsibilities to his voters, his party, his state, and his country are a complicated question in a representative democracy, especially with so few Republican voters seeing climate change as a priority.

Then there are attributionist approaches. Attributionist theories of moral responsibility aren’t concerned with whether or not Speaker Johnson chose to be a climate change skeptic, but rather with what his being a climate skeptic says about him as a person and how he regards others. In some cases, like racist beliefs, it is clear how one’s belief leads to a negative evaluation of one’s moral character. If you learn that someone holds a bushel of racist beliefs, you learn something about who they are. Climate change skepticism does not involve vicious beliefs about others in the same way racism does, but could there be relevant similarities? Mike Johnson’s home state of Louisiana is uniquely vulnerable to the impacts of climate change, and has been deeply affected by hurricanes, tropical storms, heat waves, invasive species, and other environmental harms at least partly stemming from a warming climate. From an attributionist perspective, one can argue that Johnson’s dismissal of the human harms of climate change illustrates a morally troubling callousness, and that he deserves moral condemnation on this ground.

Setting aside challenging metaphysical questions about when people are morally responsible for their beliefs, we might also focus on the real-world expressions and practices of moral blame in response to Mike Johnson. For even if public moral condemnation does not intersect with some ultimate truth about moral responsibility, it can tell a story about what Americans care about, and what beliefs they will or will not accept in their leaders.

Just Say No to Christmas Turkey

top-down photograph of holiday table set with turkey and sides

My colleague Benjamin Rossi recently defended buying meat. With Christmas dinners approaching and readers’ New Year’s resolutions up for grabs, it’s a good time to consider the other side of that debate.

Rossi’s argument gets a lot right. Buying and consuming meat doesn’t harm the animal from which it came — that animal is already dead. He’s also right that an individual consumer buying meat, in our globalized mass-production food chain, can’t raise demand for meat significantly enough to cause more animals to be farmed or killed. Neither does purchasing meat violate the animal’s rights. If the animal’s rights were violated, that took place before the animal’s death. It appears, then, that buying meat makes no moral difference; our individual consumer choices produce so little positive or negative change that they don’t seem to have any moral significance at all.

But there’s an objection to this argument that I wish Rossi had considered: that buying and eating meat is wrong because it makes us complicit with an industry that is committing very serious wrongs.

The philosopher Tristram McPherson imagines a case in which a woman called Alice wants to buy a house in a quiet neighborhood near an elementary school that her young children could attend. The realtor has a great selection of houses in this neighborhood. The catch is that the realtor is a racist “who uses his business to promote the racial homogeneity of ‘nice’ neighborhoods. He does this by showing houses in these neighborhoods only to members of Alice’s race.”

Supposing there were other realtors available with reasonable alternative homes, it seems wrong for Alice to use the racist realtor’s services. Why? Because the “realtor has a wrongful plan” and by using his services to buy a home, “Alice would be seeking to benefit by cooperating with that plan. And it is wrong to do that.” In other words, using the racist realtor’s services would make Alice complicit with the racist plan of the realtor.

What’s interesting about being complicit with evil is that it seems wrong even if it doesn’t end up causing any additional harm.

If the racist realtor didn’t sell the house to Alice, he would sell it to another member of Alice’s race. So Alice wouldn’t cause greater racial segregation by doing business with the realtor and purchasing the house herself. She also wouldn’t reduce the level of racial segregation by buying a different house in the same neighborhood from a non-racist realtor. Nonetheless, it would be wrong for Alice to use the racist realtor’s services.

According to McPherson, the moral principle that explains this judgment is that it is wrong to cooperate with the “wrongful plans” of others. The realtor’s racist plan was wrongful, and that’s why Alice shouldn’t be complicit with that plan.

The meat industry, McPherson argues, also has a “wrongful plan.” It systematically inflicts terrible and unnecessary suffering on billions of animals each year (and causes profound environmental damage), with the goal of generating a profit by selling meat products to consumers. This is a plan that cannot be ethically justified. The suffering caused by the industry is so massive it is impossible to truly comprehend. In the U.S., 99% of animals raised for food are factory farmed, often kept inside in small, overcrowded cages for their entire lives (strangely, most Americans believe that they are consuming the 1% of animals that aren’t factory farmed).

The suffering caused by the meat industry might be justifiable if a huge supply of meat were required for some morally significant purpose — if humans were obligate carnivores, for instance. But the truth is, we don’t actually need it.

We would get by just fine without meat in our diets. The meat industry causes massive needless suffering so that it can sell you meat at a profit. Hence McPherson’s claim that the meat industry has a wrongful plan.

By buying that meat, you’re cooperating with the meat industry’s wrongful plan — you’re complicit with the industry’s massive wrongdoing. Just as it would be wrong for Alice to cooperate with the racist realtor’s plan, it would be wrong for you to cooperate with the meat industry’s plan.

This argument from complicity only works on the assumption that you have reasonable alternatives to being complicit. If Alice had to either do business with the racist realtor or become homeless, it seems morally permissible for her to cooperate with the wrongful plan of the realtor. Likewise, suppose a meat eater could not afford to adopt a vegetarian or vegan diet or had a medical condition that demanded the consumption of meat. In that case, it seems permissible to cooperate with the meat industry by purchasing its products. But for almost all of us, reasonable alternatives exist.

Rossi may be right that individual consumer choices don’t (individually) cause more suffering, animal rights violations, or environmental damage. But being complicit with the meat industry is enough to make it wrong to buy that Christmas turkey.

Neurodivergence, Diagnosis, and Blame

photograph of woman using TikTok on iphone

If your For You page on TikTok looks anything like mine, you know that there is a veritable trove of content about autism and ADHD, much of it focused on questions about diagnosis. The spread of this online content and discussion has been lauded for the potential good it can do, whether by giving women and non-binary people access to information about conditions that are often missed in those populations or by giving voice to traditionally marginalized groups who often deal with others speaking inaccurately on their behalf.

At the same time, the algorithm may function in ways that trend towards stereotyping the populations in question or pushing content that associates ADHD and autism with things not necessarily related to diagnostic criteria (e.g., ADHD with talking fast, or autism with disliking the big room light). This can lead to misunderstandings and poor self-diagnosis that misses underlying issues, such as someone mistaking bipolar disorder for ADHD. While similar misunderstandings and misdiagnoses can happen in medical contexts, those who rely on questionably-credentialed social media influencers may be more susceptible to misinformation.

But why is having a diagnosis so appealing? What does the diagnosis do for autistic and ADHD individuals?

I suspect that at least one part of the answer is found in our practices of blame and our beliefs about who deserves support: the diagnosis promises less self-blame and blame from others and more understanding and accommodations.

How might a diagnosis lead to less self-blame and blame from others? There are several possible philosophical answers to this question.

The first answer is relatively common: ADHD and autism are caused by brain chemistry and structure — they should be seen as medical or neurological conditions, not moral ones. On the purely medical view, ADHD and autism have nothing to do with character or who a person is as a moral agent. So, if someone is diagnosed with ADHD or autism, they shouldn’t be blamed for anything resulting from those conditions, because these are simply medical problems outside their control.

This answer has a few benefits:

The medical diagnosis adds a sense of legitimacy to the experience of individuals with ADHD and autism, it provides access to medical care, and it gives a clear conceptual apparatus for communicating to others about the specific accommodations that are needed.

At the same time, the purely medical answer has key drawbacks.

First, the medical mode is often moralized in its own way, with its own norms about health, disease, and disorder. Sometimes this is appropriate, but other times natural variations in human expression become labeled as disorders or deficits when they should not be (see how intersex people have been treated, or the history of eugenics). The aim of medicine is often to provide a cure, but some things do not need to be cured. Medical care can be, and often has been, helpful for individuals needing access to Adderall or Ritalin to function, but the purely medical mode has its limits for understanding the experiences of individuals with ADHD and autism.

Second, the medical mode tends to locate the problem in the individual, though some public health approaches have started to move towards structural and social thinking. Those with ADHD and autism may experience their condition as a disability in large part because of a lack of social support and understanding, rather than because of a purely internal discomfort.

Third, the medical mode cannot always be separated from character. See, for example, the overlap of depression and grief or the fact that even normal psychological states are also caused by brain chemistry and structure.

In the case of autistic and ADHD individuals, the condition isn’t something that can be easily carved off from the person, because it affects broad domains of the person’s life. In trying to separate out the autism or ADHD, others can easily create the idea of a “real” non-autistic, non-ADHD person, which can lead to failing to love and appreciate the actual person.

The second philosophical answer to the question as to how a diagnosis might lead to less blame is a capacities-based view of moral responsibility. This view is similar to the medical mode, in that the focus is often primarily on the individual, but it differs in its decidedly moral focus. On the capacities view, agents are morally responsible if they have some normal (or minimally normal) capacities of reasoning and choice. Agents are not responsible if they lack these capacities. There are ways of refining this kind of view, but let’s take the basic idea for now.

If we combine this philosophical idea with the claim that ADHD and autistic people are deficient in some of the capacities necessary to be a morally responsible agent, then it would make sense that ADHD and autistic folks would be either less responsible or not responsible at all in certain domains. And if the point of accommodations is to increase capacities, then accommodations should be supported. However, like the medical approach, this view has a few drawbacks, at least in some of its versions.

First, there isn’t a clear capacities hierarchy between neurotypical people and neurodivergent people. While someone with ADHD may have trouble starting on a large project in advance, they may work exceptionally well under pressure. Someone with autism may have more difficulty in social situations but could have the ability to focus their time and energy to learn immense amounts of knowledge about a special interest. While parts of the ADHD and autistic experience involve deficits in certain capacities, the overall assessment is much less clear.

Second, claiming that someone with autism or ADHD can’t be a fully morally responsible agent also seems to have the troubling implication that they might not be full, self-legislating members of the moral community. This kind of view places people with autism and ADHD in the position of, say, a child who has some understanding of moral principles but isn’t yet a full agent.

Neither the medical model nor at least some versions of the capacities model seem to fully provide what people are looking for in a diagnosis. While both offer rationales for removing blame, they can have a dehumanizing effect. The drawbacks of these views, however, teach us some lessons: a good view should 1) consider the whole, actual person, 2) think about the person in their social context, and 3) avoid making the autistic or ADHD person out to be less than a full moral agent.

I think the right question to ask isn’t “how is this person deficient in some way that removes responsibility?” but instead “what expectations are reasonable to place on this person, given who they are at this point in time?”

This is a rough suggestion that requires more development than I can give it here.

There are ethical considerations that enter in at the level of expectations which go beyond questions about capacity. What would it look like to be kind? To give each other space to be comfortable? To accept parts of ourselves we can’t change? To build a world that works for everyone? Capacity is certainly implicated by these questions, but it isn’t the whole picture.

By shifting our focus to the question about what expectations are reasonable to place on an individual person, we are recentering the whole person and recognizing the dis/abilities that the individual experiences.

Experiences with autism and ADHD can be very different from person to person, and the accommodations needed will vary from person to person. The expectations we can reasonably place on people with ADHD and autism may not be any less than those without — they may just be different.

And neurotypical people who interact with ADHD and autistic people may also be reasonably expected to provide certain accommodations. Everyone’s needs should be considered, and no one should be othered.

For example, say that an autistic person says something that comes off as rude to a neurotypical friend. This has happened a few times before, each time within a new domain of conversation. Every time, the autistic individual apologizes and explains how autism affects their social communication and understanding of social norms and how they’re trying to get things right. Eventually the neurotypical friend gets upset and says, “Why do you always use the autism as an excuse to get out of responsibility?”

In this case, it doesn’t seem that the autistic person is abdicating responsibility; it seems that they’re clarifying what they are actually responsible for. The autistic person isn’t responsible for intentionally saying something rude; they’re responsible for accidentally saying something rude despite their best intentions otherwise. And the autistic person still apologizes for the hurt caused and promises that they will continue to try to do better in the future. Whichever way the two friends negotiate this part of their relationship, it seems important that they each understand where the other is coming from and that each friend’s feelings are given space.

What does this example tell us about the relationship between diagnosis and blame? Perhaps we need to develop alternative frameworks that recontextualize responsibility, rather than simply diminish it.

Driving with the Machine: Self-Driving Cars, Responsibility, and Moral Luck

photograph of driver sleeping in self-driving car

Charlie and Ego pick me up from a university parking lot on a balmy September afternoon. A soccer match is about to start nearby, and Charlie, eyeing the fans bustling around his car, carefully drives us off the lot. Charlie is a former student of mine. I gather as we catch up that Charlie now does something lucrative and occult with cryptocurrencies, of which the Tesla Model S we are riding in is presumably an alchemical product. It’s nice to see Charlie. As pleasant as our conversation is, though, that’s not why I’m here. I want to meet Ego.

There’s a map of our sleepy little town of San Luis Obispo, California on the touch screen in the center console of Charlie’s car. As we approach the edge of campus, Charlie casually selects a location downtown, clicks a few buttons, and lets go of the wheel. The wheel and the five-thousand-pound car begin to move of their own accord. Ego is driving.

Ego, despite the moniker, is not a person. “Ego” is what Charlie calls the Full Self-Driving beta function on his car, a test version of Tesla’s self-driving program that is designed to navigate city streets and highways.

When Ego takes over, my riding experience immediately feels different, less familiar. Charlie’s driving was smooth and confident; Ego’s driving feels jerky and neurotic.

Ego drives us down suburban streets, past cars, bicyclists, and pedestrians. It doesn’t come close to hitting anyone (you can tell Ego is programmed to be extra careful around pedestrians), and it gets us where we want to go. But it moves unnaturally. The wheel jitters. Sometimes the car moves haltingly, slowing on empty streets or stopping abruptly in intersections. At other times it moves like a missile, accelerating rapidly into left-hand turns or sweeping within inches of inanimate obstacles. You wouldn’t mistake it for a bad human driver if your eyes were closed. It feels unmistakably robotic. I’m sure that many of Ego’s peculiarities reflect temporary technical problems, but it’s hard to shake the sense that there’s something fundamentally alien at the wheel.

Despite my unease about Ego, I never felt unsafe while Ego was driving. That’s because Charlie was attending assiduously to Ego’s movements. Whenever Ego would do something weird, Charlie would fiddle with the console to inform Tesla’s algorithms that something went wrong. And when Ego started to do something egregious or annoying, Charlie would grab the wheel and manually navigate us to a new situation. I soon realized that it wasn’t accurate to say that Ego is driving or that Charlie is driving. The better thing to say is that they’re driving together.

This is how Charlie sees things, too.

Over time it’s started to feel like it’s a team effort, that we’re working together. And I think that way because it messes up in the same spots. It’s very predictable. It shows me what it’s going to do, and I can override some of those functions. So it’s kind of like it’s doing the actual task of driving, but I’m overseeing it and making sure that it’s, you know, not crashing. So I do feel like it’s a team effort.

This dynamic piques my interest. I’ve spent a lot of time thinking about moral responsibility in contexts of shared agency. Participants in shared agency are often praised or blamed for actions or outcomes that originate outside the sphere of their own individual agency. For example, if a medical provider working as part of a healthcare team goes above and beyond in a moment of crisis to save a patient’s life, the team members who enabled or supported the provider’s care may share some praise for saving the patient even though they weren’t directly involved in the crisis.

Whenever a person’s moral status (including their praiseworthiness or blameworthiness) depends upon factors that are at least partly outside of their control, they are subject to what’s called moral luck.

Moral luck is controversial because in the abstract we tend to think that a person’s moral status should be based on the quality of their intentions, choices, or character, on things they can fully control. However, our intuitions about particular cases often suggest otherwise.

A classic example involves drunk driving: we tend to morally blame drunk drivers who hit and kill children much more harshly than equally negligent drunk drivers who luckily get home safely.

In the past, I’ve argued that moral luck is a common feature of shared agency because when you act jointly with other people your moral status can be affected by them in ways you can’t fully anticipate or control. You might find yourself to blame for another agent’s actions. And as I watched Charlie and Ego drive around town together, I couldn’t help but feel that their shared activity exhibited a similar dynamic.

Ego does not meet the conditions required for moral responsibility. But Charlie does. He is the responsible adult in this activity, which is inherently risky and could result in serious harms. It’s natural to think that he is responsible for it, even if, because he and Ego are sharing the reins, he is not fully in control of how it unfolds.

If that’s right, then people who use self-driving programs are susceptible to moral luck because they can be on the moral hook for what these programs do. And this luck is analogous to the luck involved in shared agency between people.

It’s possible to complicate this line of thought. For one, it will not always be feasible for people to productively intervene to prevent harmful self-driving malfunctions, especially as the technology becomes more sophisticated and reliable. Accidents often happen quickly, and intervening can make things worse. When an accident involving a self-driving car is not due to the human driver’s negligence (or some other morally criticizable error), many people will say that the human driver is not morally responsible. Moreover, the human driver is not the only potentially responsible person in the mix. As my colleague Patrick Lin has pointed out, those who design self-driving cars can bear responsibility for bad outcomes that result from criticizable design choices. In fact, in many situations designers would seem to be a better candidate for blame than drivers, since, unlike drivers, designers have the luxury of time and forethought.

These points are both important, but they are compatible with the claim that human drivers are subject to a significant sort of moral luck by way of self-driving cars. At least when a human driver’s negligence leads to a harmful self-driving accident that would not have occurred had the driver not been negligent, it seems reasonable to say that the driver is blameworthy for that accident, even if other parties, such as designers or other drivers, bear some responsibility, too.

Reactions like praise and blame perform important functions in human life. Thus, thinking about the specific conditions under which humans are and are not morally responsible for self-driving cars is worthwhile. However, it is perhaps possible to overemphasize the importance of fault and blameworthiness here.

The more reliable and autonomous self-driving cars become, the more tempting it will be for human drivers to morally, socially, and personally distance themselves from harmful accidents involving their self-driving cars with the thought: “There’s nothing I could have done; I am unfortunate but as blameless as a mere spectator.”

This thought may be true, but it threatens to obscure in the driver’s conscience the fact that the driver’s own agency bears a special relation to the accident. There is something unsavory about someone who refuses to acknowledge this special relation. It’s appropriate for the driver, even if blameless, to feel a special type of first-personal regret about her choice to take the self-driving car out for a spin that day, a regret that is different from the sadness a spectator might feel and that might motivate her to make amends or apologize if she can. The willingness to take responsibility for those aspects of one’s embodied agency that fall outside of one’s control is a manifestation of a virtuous spirit and seems wholly appropriate – indeed, requisite – for those who choose to risk others’ lives by using self-driving cars.

The upshot is that using a self-driving car is morally risky, even for the most conscientious users. This is true of conventional cars as well. But the risk associated with self-driving cars is special because it originates in the actions of an artificial agent that has the potential to do great harm.

For now, I suspect that most self-driving car users are acutely aware of this. Charlie certainly is.

“If I was not paying attention, and I hit someone, I would feel 100% responsible. And I probably would feel at least mostly responsible if I was paying attention. So it very much feels like I am responsible for what it does.”

Victim-blaming and Guilty Victims

photograph of two hands pointing fingers at one another

Since the heinous attack on The Satanic Verses author Salman Rushdie, there has been a lively debate in The Prindle Post about free speech, victim blaming, and self-censorship. Giles Howdle argued that authors have no obligation to censor themselves, even if they are aware that publishing certain material might incite violence against themselves or others. Benjamin Rossi replied that although Rushdie might be blameless for the retribution leveled against him, we need a stronger principle to determine the cases in which victims might be, at least partially, responsible for their own misfortune.

Rossi’s argument hinges on a fascinating comparison between the actions of Salman Rushdie and those of Terry Jones, an American pastor who organized Quran burnings in an ironic protest against the intolerance supposedly inherent in Islam. Both men’s actions led to violent reprisals and widespread, deadly protests. Whereas we are inclined to excuse Rushdie for the furor his publication caused, we are less sympathetic to Jones, who was widely condemned, and even arrested, for his actions.

I agree with both Howdle and Rossi on important points. Like Howdle, I think that Rushdie should not be blamed for the response to The Satanic Verses, and certainly not for his own stabbing. Like Rossi, I don’t think that all victims are blameless, or that it is always wrong to blame the victim.

But there are a couple of important clarifications which I think ought to be made in the debate.

The first clarification is to do with the idea of victimhood. If we are focusing on ‘victim-blaming’, we ought to have a firm idea of what constitutes a victim. The second clarification centers on intent, which I think can be a useful measure in deciding when and where to apportion blame.

Victim-blaming

‘Victim-blaming’ itself is a charged term. In the media, it most often crops up in misogynistic backlash to the stories of victims of sexual assault. “She was asking for it,” “she shouldn’t have worn that dress,” “she led him on”: these are paradigm examples of victim-blaming that claim women are responsible for the crimes men commit against them. These statements indicate that victims should be held at fault for the reprehensible actions of others.

Victim-blaming in sexual assault might be a form of the just-world delusion, where people think that because the world is inherently fair, people must deserve the things that happen to them. Or it might be a result or reflection of patriarchal rape culture. The important thing, for now, is that in these cases, the victims are genuinely, completely, utterly innocent. No fashion decisions or quirks of personality can make you culpable for your own sexual assault, because there is nothing morally wrong with dressing or acting the way you like. But if these victims are innocent, it raises the question: can victims ever be guilty?

Guilty Victims

Imagine your friend goes out for the night and comes home bloody and bruised, the victim of an assault. “How terrible!” you might think. As you tease the story out of your friend, however, something becomes clear: they had been roaming the streets, intentionally offending everybody in sight, until somebody reacted violently. Now your sympathy might start to subside – you might even think that on some level, your friend got what they deserved.

In this case, you might want to dismiss your friend’s victimhood entirely: you might think that because they wanted to offend people, they’re not really a victim at all. But this doesn’t quite capture the reality of the situation. Sure, your friend was being insufferable. But they probably didn’t deserve the beating they received. In that sense, they are still a victim.

A better approach is to say that although your friend is a victim, they are not an innocent victim: they are culpable in their own misfortune. They set out to cause offense and ended up the victim of physical harm. In a sense, they reaped what they sowed. They are a guilty victim and because they are guilty, they are worthy of blame. True, they might not have deserved the level of retribution they received. But that does not make them innocent. In the same way, we might feel sorry for somebody who receives an excessively harsh prison sentence for a relatively minor crime. Although we ought to have sympathy for their plight, we shouldn’t deceive ourselves into thinking that they are entirely innocent.

So as well as innocent victims, we can have guilty victims: those who, in doing something morally wrong, set in motion a chain of events in which they themselves are harmed. The next question is: how do we tell the difference between innocent and guilty victims?

Careless Victims

You might think that guilty victims are just those who play a causal role in their own misfortune. But this approach would cast too wide a net. Consider somebody walking down the footpath, daydreaming, who suddenly gets hit by a drunk driver mounting the curb. Their carelessness is a factor in the misery that befalls them: had they been paying attention to the road instead of singing along to Harry Styles, they would have been able to jump out of the way and escape unharmed. Although this victim is careless, it would be a stretch to say that they are guilty. After all, they haven’t done anything morally wrong. So careless victims are probably best considered a special type of innocent victim.

Intent and Guilt

A more promising method of determining whether a victim is innocent or guilty is to consider their intent in authoring the action that leads to their misfortune. If you intend to cause harm to others but are unlucky enough that the harm boomerangs around to hit you, you are probably a guilty victim.

If you make an offhanded, inoffensive comment to somebody who promptly wallops you, you can’t – or shouldn’t – be blamed. You are an innocent victim. But if your intent is to offend somebody, and you goad them into throwing a punch, then you can be held morally accountable for your actions.

Your accountability doesn’t absolve the puncher of theirs – but neither does their action absolve you of the role you played.

If your intent is to offend everybody, and you walk around hurling insults until a fight breaks out, then once again, you are morally responsible. Although your intent isn’t to personally cause physical harm, you are not only indifferent to the potential for physical harm to occur but also perpetrating exactly the kind of morally dubious action that makes that harm more likely. For this, you ought to be held morally accountable.

So the basic rule is this: innocent victims are never worthy of blame, but guilty victims can be. Guilty victims are those who intend to cause harm (or offense), and, in doing so, set in motion a chain of events resulting in harm to themselves.

Rushdie and Jones

Finally, we can apply our notion of guilty victimhood to the cases of Salman Rushdie and Terry Jones. Both were victims of retribution for their non-violent expression. To decide whether they are blameworthy, we ought to consider whether they are innocent or guilty victims. To do that, we must consider the intent behind their inflammatory actions.

If Jones’s intent was to inflame and cause offense, then he is worthy of criticism, and we might be inclined to say that he is not just causally, but also morally, responsible for the backlash to his actions. He is a guilty victim. On the other hand, if Rushdie’s goal was to entertain and engage, and inflammation and insult were mere by-products of that intention, then we ought to be more forgiving. He is an innocent victim (or, at worst, a careless one). He remains causally responsible – as does Jones – but is less morally culpable for the ultimate outcomes of the chain of events in which he was only one link. Both actors played a role in a causal chain which led to violence and death. But to be equally morally responsible, they would have to have had the same (or similar) intentions.

Now this is not a defense of violence as a response to being offended.

No matter the intent behind Jones’s Quran burning, he doesn’t deserve violent retribution. Nor does his intent to offend excuse the actions of those who would seek to harm him.

But I think that the difference between guilty and innocent victims can help explain our different reactions to the Jones and Rushdie cases.

I suggest that it is not just that Rushdie seems to otherwise be a more sympathetic character than Jones. The difference is that Rushdie’s intent in writing and publishing The Satanic Verses might have been fundamentally different to Jones’s intent in burning Qurans. Of course, intent is incredibly hard to judge, and we may never be certain of the intent of either man. And there is certainly the counter that I might only think Rushdie’s intent was better because I am already more positively inclined towards him. But it might equally be the case that I am more positively inclined towards Rushdie because I think his intent was purer. So, we can call this a wash.

Nonetheless, intent ought to make a difference in assessing the morality of actions. If Rushdie was an innocent victim, and Jones a guilty one, then it makes sense that we are more sympathetic to Rushdie. This sympathy has nothing to do with the value of Rushdie’s work (I wouldn’t know, I haven’t read it!), and everything to do with the intention behind it. Rossi is right to say that there is no ‘consistent commitment’ in terms of apportioning blame to people whose non-violent acts contribute to violence. But my point is: there shouldn’t be a consistent commitment. We should praise or condemn actions not just on whether they contribute to a causal chain leading to violence, but on the intentions behind those actions. Not all victims are created equal.

Climate Change and the Defense of Ignorance

photograph of factory air pollution silhouette

Although first uncovered some years ago, a New Zealand newspaper article from 1912 warning of the environmental dangers of carbon emissions has again been making the rounds. But why is information like this morally relevant? And what does it mean for the responsibility of particular parties?

Successfully combating the climate crisis will involve huge burdens for certain countries, corporations, and individuals. Some of these burdens will be in the form of mitigation – that is, taking action to do all we can to reduce the effects of climate change. In 2011, nearly all countries agreed to limit the global average temperature rise to no more than 2°C compared to preindustrial levels – the maximum global temperature rise we can tolerate while avoiding the most catastrophic effects of climate change. According to the Intergovernmental Panel on Climate Change, achieving this with a probability of >66% will require us to keep our global carbon expenditure below 2,900 GtCO2. As of the time of writing, only 562 GtCO2 remains. Note that this is already 2 GtCO2 less than when I wrote another article on climate harms only three weeks ago. In order to ensure we don’t go over budget, certain parties will have to severely reduce their consumption: forgoing the cheap and easily accessible fossil fuels we’ve been exploiting for hundreds of years, and investing heavily in new, cleaner sources of energy.

But there will also be adaptation burdens – that is, costs associated with dealing with the effects of climate change that already exist. Examples of these burdens include building seawalls, fighting floods and fires, and potentially rehoming those who find themselves displaced by extreme weather events and abandoned by their insurance companies.

Usually when a problem creates costs, we look to pass those costs on to the person or persons who caused the problem.

Suppose I find a large, deep hole on what I believe to be an empty plot of land adjacent to my property. I then begin to use this hole as a dumping ground for organic waste – grass clippings, tree trimmings, and the like. It seems to be a fortuitous arrangement. I no longer have to pay for the expensive disposal of large amounts of green waste, while at the same time filling in a potential hazard to others. Suppose, however, that a few weeks later I’m approached by an angry neighbor who claims that I’m responsible for going onto their property and filling in their newly dug well. Our intuition would most likely be that if anyone needs to compensate the neighbor for this wrong, it’s me – the one who created the problem. This approach is commonly referred to as the “Polluter Pays Principle.”

In some cases, however, this principle doesn’t apply so well. Suppose that I’m particularly lazy, and instead pay someone to dispose of my green waste in that same hole. In that case it seems less appropriate to place responsibility on the one who is technically doing the polluting (the person I employ). Instead, it still seems apt to make me responsible. Why? Well, even though I’m not the one putting the refuse in the hole, I am the one benefiting from the outcome – disposing of my waste and saving money. This approach is referred to as the “Beneficiary Pays Principle.”

Both of these principles play a huge role in establishing – at the global level – who should take on the mitigation and adaptation burdens required to combat the climate crisis. But they also rely heavily on something we’ve not yet discussed: knowledge.

Consider the application of the Polluter Pays Principle to the well example above. Arguably, we might say that even if I’m responsible for filling the hole, it wouldn’t be right to hold me responsible so long as I had no reasonable idea that it was, in fact, somebody’s well. It seems that I should only be responsible for the actions I take after I’m informed that what I’m doing is wrong. The same is true of the Beneficiary Pays Principle. Suppose that I pay someone to remove the green waste from my property – but have no idea that they are, in fact, dumping it down someone’s well. Once again, this lack of knowledge would seem to make it inappropriate to hold me responsible. Ignorance would be an excuse.

Nineteen-ninety is often held up as the watershed moment for the climate crisis. This is when the IPCC issued its first assessment report, and when the world came to officially learn of “climate change” and the existential risk it posed to us.

Countries and corporations often attempt to avoid responsibility for any contribution to the crisis (i.e., carbon emissions) made prior to 1990 – citing ignorance. But it’s a lot more complicated than that.

The Center for International Environmental Law has outlined how Humble Oil (now ExxonMobil) was aware of the impending climate crisis as early as 1957, with the American Petroleum Institute coming into this same information only a year later. By 1968, the U.S. oil industry was receiving warnings from its own scientists about the environmental risks posed by the climate crisis, such that – by the 1980s – these companies were spending millions of dollars to protect their own assets, including modifying oil rig designs to account for rising sea levels.

And then there’s that little New Zealand article from 1912. In fact, this is predated by an even earlier warning, with Swedish scientist Svante Arrhenius publishing a paper in 1896 predicting a global increase in temperature as a result of increasing carbon emissions. All of this means that while ignorance might sometimes be an excuse when attributing responsibility, no such ignorance can be claimed by those who have created – and continue to contribute to – the global climate crisis.

Is It Always Wrong to Blame the Victim?

photograph of burning match near a bunch of unlit matches

In July 2010, Terry Jones, the pastor of a small church in Florida, announced he would burn 200 Qurans on the ninth anniversary of the 9/11 attacks — an event he dubbed “International Burn the Quran Day.” The pastor blamed the Quran for the attacks and other terrorist violence. When September came, Jones was temporarily dissuaded from acting through the personal intervention of religious leaders and government officials, including a phone call from Defense Secretary Robert Gates. Nevertheless, in March 2011, Jones livestreamed a “trial” of the holy book, complete with Arabic subtitles. After a brief recitation of the “charges,” the pastor condemned a copy of the Quran to be torched in a portable fire pit. A few weeks later an Afghan mob, whipped into a frenzy by sermons and speeches denouncing the act, attacked a U.N. compound, killing seven U.N. employees. Subsequent riots left nine dead and more than ninety injured. Days later, two U.S. soldiers were shot and killed by an Afghan policeman in an attack that was later attributed to his anger over the burning.

Condemnation of Jones was nearly universal. A frequent theme in the chorus of opprobrium was the argument that Jones was responsible for putting American lives at risk overseas.

Prior to the burning, President Obama said that “I just want [Jones] to understand that this stunt that he is talking about pulling could greatly endanger our young men and women in uniform who are in Iraq, who are in Afghanistan.” After the riots, a Pentagon spokesman said the violence showed that “irresponsible words and actions do have consequences.” Some commentators also blamed the U.S. media for “recklessly” amplifying the story. Only a few, mostly conservative writers focused attention on the “eternal flame of Muslim outrage” that made Quran-burning such an explosive act.

This incident came to mind as I read Giles Howdle’s recent column on the assassination attempt against Salman Rushdie. Howdle argues that Rushdie is not responsible for any of the violence provoked by his novel, The Satanic Verses — including, but not limited to, violence directed at his own person.

To support his claim, Howdle points out that Rushdie’s actions, while part of a causal chain that predictably produced violence, were themselves non-violent, and that Rushdie never encouraged or desired violence.

According to Howdle, blaming Rushdie is akin to blaming the victim of sexual assault for having worn “provocative” clothing. Moreover, Howdle contends that placing responsibility for violence on Rushdie instead of the Muslim perpetrators treats the latter as “lacking full moral agency.”

These arguments are compelling, but I wonder if they derive some of their plausibility from the fact that Rushdie is an immensely sympathetic character: a brilliant writer and man of the left, persecuted for nothing more than a witty novel. Jones is a much less appealing figure; and yet, in its essentials, his act and Rushdie’s seem comparable. Jones’ act was non-violent, albeit part of a causal chain that predictably caused violence. While it is debatable whether Jones set out to incite violence, assume arguendo that his act expressed his sincerely held, if deeply bigoted, beliefs, and that he merely foresaw the possibility of violence resulting from his act rather than wanting or intending it to occur. Doubtless, Rushdie’s novel is more valuable than Jones’ political stunt; but Howdle’s case does not turn on the value, aesthetic or otherwise, of Rushdie’s work.

If your intuitions about these cases still differ, I suggest it has something to do with your sympathy for Rushdie and aversion to Jones, rather than any consistent commitment to the proposition that those who, through their non-violent acts, provoke others to commit acts of violence as a foreseen or foreseeable but unwanted side effect are not responsible for that outcome.

Consider this thought experiment. Smith is walking briskly to a job interview for which he is already five minutes late. Suddenly, out of an alley appears a man holding a woman at gunpoint, blocking Smith’s path. The man warns Smith that if he takes one step closer, he will shoot the woman. Unfortunately, Smith has to move in the man’s direction if he wants to make his interview. Resolving to set up a college fund for any children the woman might have, Smith takes a step toward the man, who promptly shoots the woman. Here, Smith’s act is non-violent, though it has predictably violent consequences given the man’s credible threat. In addition, Smith does not want any misfortune to befall the victim: if, say, the man’s gun jammed and the woman were able to escape his clutches, Smith would be delighted. Yet surely he bears some responsibility for her death, and in the scenario in which the gun jams, he is still responsible for risking her life. It might be argued that by taking the step, Smith somehow encouraged or incited the man. But if simply doing what will predictably trigger the execution of another person’s threat constitutes incitement or encouragement, then writing, publishing, or not recalling a book in the face of credible threats that these acts will cause violence is also encouragement or incitement.

My point is not that the Smith case is analogous in every respect to the Rushdie case.

Rather, my argument is that we are sometimes partially responsible for other people’s violent acts and the harm that results, even if we don’t encourage or welcome them in any way.

If that’s true, then any argument for Rushdie’s lack of responsibility for the violence that occurred as the result of his novel’s publication needs to be more nuanced. It is not sufficient that Rushdie’s own acts were non-violent and that he did not encourage or incite violence or want it to occur.

What we need, in other words, is a more sophisticated theory of when we are morally responsible for causing others to harm third parties — notably including, but not limited to, situations in which we trigger the execution of another person’s credible threat to harm another. The range of cases is immense.

For example, when a government decides to abide by its policy never to pay a ransom in the face of a credible threat to a hostage’s life, and that decision leads to the hostage’s death, that is not generally considered an outcome for which the government is blameworthy. On the other hand, the media has sometimes been blamed for causing “copycat” acts of violence by publicizing the names or manifestos of mass shooters.

What distinguishes these cases? By carefully examining the differences between cases like these, we can start to build a theory that hopefully better explains our moral intuitions.

There is, of course, an obvious distinction between the Smith and Jones cases on the one hand, and Rushdie’s case on the other: Rushdie himself was a victim. Even if we grant that we are sometimes responsible for harm that others cause third parties, that is not the same as blaming the victim. The question, then, is whether we are ever responsible for self-harm that occurs as a foreseen or foreseeable but unwanted result of our actions’ influence on others.

There are actually two things we might mean when we say that we are “responsible” for this kind of self-harm. The first is that by knowingly running the risk of provoking harm to ourselves, we tacitly consent to the risk, thereby waiving our right against the perpetrator that she not harm us: the “he asked for it” defense. The second interpretation is that by knowingly running the risk of provoking harm to ourselves, we are blameworthy for the perpetrator’s acts and resulting self-harm. Space constraints prevent me from analyzing these interpretations in depth here, so a few general points must suffice.

As with responsibility for provoking others to harm third parties, it seems unlikely that we are either never or always responsible for self-harm in either of these senses.

The idea of holding sexual assault victims responsible for their perpetrators’ actions is morally repugnant, but this may be best explained in light of our attitudes and expectations related to sexual violence, rather than some general moral principle barring liability for self-harm in all cases. Again, it seems that we need a more nuanced theory than “the victim is never responsible.”

Despite the foregoing, I am confident that blaming Rushdie for the violence his novel provoked is morally perverse. However, as I hope to have shown, we need better arguments for why this is so.

Blaming the Blasphemer

photograph of Salman Rushdie

As I write, Salman Rushdie is in hospital on a ventilator, having been stabbed in the neck and torso while on stage in New York. His injuries are severe. It is, at this moment, unknown if he will survive.

Rushdie’s novel The Satanic Verses, a work of fiction, is considered blasphemous by many Muslims, including the late Ayatollah Khomeini. For those who don’t know, the Ayatollah issued a public fatwa (religious judgment) against Rushdie, calling for all Muslims to kill him and receive a reward of $3,000,000 and immediate passage to paradise. The cash reward was recently raised by $600,000, though the Iranians seem to have struggled to improve on the offer of eternal paradise.

In 1990, Rushdie attempted to escape his life in hiding. He claimed to have renewed his Muslim faith of birth, stating that he did not agree with any character in the novel and that he does not agree with those who question “the authenticity of the holy Qur’an or who reject the divinity of Allah.” Rushdie later described the move as the biggest mistake of his life. In any case, it made no difference. The fatwa stood. “Even if Salman Rushdie repents and becomes the most pious man of all time,” Khomeini stated, “it is incumbent on every Muslim to employ everything he has got, his life and his wealth, to send him to hell.”

There are now reports of celebration in Tehran. “I don’t know Salman Rushdie,” Reza Amiri, a 27-year-old deliveryman told a member of the Associated Press, “but I am happy to hear that he was attacked since he insulted Islam. This is the fate for anybody who insults sanctities.” The conservative Iranian newspaper Khorasan’s headline reads “Satan on the path to hell,” accompanied by a picture of Rushdie on a stretcher.

Rushdie is not the only victim of the religious backlash to his novel. Bookstores that stocked it were firebombed. There were deadly riots across the globe. And others involved with the publication and translation of the book were also targeted for assassination including Hitoshi Igarashi, the Japanese translator (stabbed to death), Ettore Capriolo, the Italian translator (stabbed multiple times), the Norwegian publisher William Nygaard (shot three times in the back outside his Oslo home), and Aziz Nesin, the Turkish translator (the intended target of a mob of arsonists who set fire to a hotel, brutally murdering 37 people).

These attacks, including the latest on Rushdie, and the issuing of the fatwa are all very obviously morally reprehensible. But there is perhaps a bit more room for discussion when it comes to the choice of Rushdie to publish his novel.

Is it morally permissible to write and publish something that you know, or suspect, will be taken to be blasphemous, and that you think will result in the deaths of innocents?

At the time of the original controversy, this question divided Western intellectuals.

Western critics of Rushdie included the Archbishop of Canterbury, Prince Charles, John le Carré, Roald Dahl, Germaine Greer, John Berger, and Jimmy Carter. “Nobody has a God-given right to insult a great religion and be published with impunity,” wrote le Carré, calling on Rushdie to withdraw the book from publication.

In The New York Times, Jimmy Carter wrote: “Rushdie’s book is a direct insult to those millions of Moslems whose sacred beliefs have been violated.” Rushdie, Carter contended, was guilty of “vilifying” Muhammad and “defaming” the Qur’an. “The author, a well-versed analyst of Moslem beliefs,” complained Carter, “must have anticipated a horrified reaction through the Islamic world.” John Berger, author, Marxist, and literary critic, provided a similar condemnation of Rushdie and his publishers in The Guardian, noting that his novel “has already cost several human lives and threatens to cost many, many more.” Roald Dahl, the well-loved children’s book writer, concurred: “he must have been totally aware of the deep and violent feelings his book would stir up among devout Muslims. In other words, he knew exactly what he was doing and he cannot plead otherwise.”

These intellectuals’ central contention was that Rushdie had acted immorally by publishing the book and thereby causing unnecessary loss of life.

(Both Carter and Berger also offered clear condemnations of both the violence and the fatwa.)

A peculiar thing about this critique is that Rushdie never attacked anyone. Other people did. And these murderers and attempted murderers were not encouraged by Rushdie, nor were they acting in accordance with Rushdie’s beliefs or wishes. The criticism of Rushdie is merely that his actions were part of a causal chain that (predictably) produced violence, ultimately directed at Rushdie himself.

But such arguments look a lot like victim-blaming. It would be wrong to blame a victim of sexual assault for having worn “provocative” clothing late at night. “Ah!” our intellectual might protest, “But she knew so much about what sexual assaulters are like; it was foreseeable that by dressing this way she might cause a sexual assault to occur, so she bears some responsibility, or at least ought not to dress that way.” I hope it is obvious how feeble an argument this is. The victim, in this case, is blameless; the attacker bears full moral responsibility.

Similarly, it would be wrong to blame Rushdie for having written a “provocative” work of fiction, even if doing so would (likely) spark religious violence. The moral responsibility for any ensuing violence would lie squarely at the feet of those who encourage and enact it.

It is not the moral responsibility of an author to self-censor to prevent mob violence, just as it is not the moral responsibility of a woman to dress conservatively to prevent sexual assault on herself or others.

“I do not expect many to listen to arguments like mine,” wrote Rushdie-critic John Berger, a bit self-pityingly (as Christopher Hitchens noted) for one of the country’s best-known public intellectuals writing in one of the largest newspapers in Britain, “The colonial prejudices are still too ingrained.” Berger’s suggestion is that Rushdie and his defenders are unjustifiably privileging values many of us find sacred in the West — such as free expression — over those found sacred in the Muslim world.

But there is another colonial prejudice that is also worth considering; the insulting presumption that Muslims and other “outsiders” have less moral agency than ourselves. According to this prejudice, Muslims are incapable of receiving criticism or insult to their religion without responding violently.

This prejudice is, of course, absurd. Many Muslims abhor the violent response to The Satanic Verses and wish to overturn the blasphemy laws which are so common in Muslim-majority countries. It is an insult to the authors who jointly wrote and published For Rushdie: Essays by Arab and Muslim Writers in Defense of Free Speech. It ignores the 127 imprisoned Iranian writers, artists, and intellectuals who signed a declaration stating:

We underline the intolerable character of the decree of death that the Fatwah is, and we insist on the fact that aesthetic criteria are the only proper ones for judging works of art. To the extent that the systematic denial of the rights of man in Iran is tolerated, this can only further encourage the export outside the Islamic Republic of its terroristic methods which destroy freedom.

Rushdie’s critics, keen as they were to protect a marginalized group, condemned Rushdie for causing the violence committed by individual Muslims. But in doing so, these intellectuals treated the Muslim perpetrators of that violence as lacking full moral agency. You can’t cause autonomous people to do something – it is up to them! Implicitly, Rushdie’s Western critics saw Muslims as mere cogs in a machine run by Westerners, or “Englishmen with dark skin” such as Rushdie, as feminist Germaine Greer mockingly referred to him. Rushdie’s critics saw Muslims as less than fully capable moral actors.

True respect, the respect of moral equals, does not ask that we protect each other from hurt feelings. Rather, it requires that we believe that each of us has the capacity to respond to hurt feelings in a morally acceptable manner – with conversation rather than violence. In their haste to protect a marginalized group, Rushdie’s critics forgot what true respect consists of. And in doing so, they blamed the victim for the abhorrent actions of a small number of fully capable and fully responsible moral agents. This time around, let’s not repeat that moral mistake.

Can Santa Know Who’s Naughty or Nice?

photograph of child with present watching silhouette of Santa's sleigh

Reliable sources indicate that every year, Santa makes a list of boys and girls he deems “naughty” or “nice.” The stakes could not be higher: on the basis of this classification, a child receives either wonderful gifts or coal in his or her stocking. Thus, it would appear that a serious inquiry into Santa’s methods is in order. In short: how does Santa know who’s naughty or nice?

There are actually two parts to this inquiry. The first concerns Santa’s definition of “naughty” and “nice.” I’ll leave this interesting question to the serious normative ethicists. The issue I’m interested in is this: even if you’ve been naughty, I assume that Santa will not judge you deserving of coal unless you are responsible for your naughtiness. Naughtiness and responsibility for naughtiness are distinct. After all, some people behave naughtily but are blameless for doing so: for example, those who are blamelessly ignorant of what they are doing. So, the question I want to focus on is how Santa knows who is responsible for being naughty, thus deserving coal in their stockings.

Most philosophers agree that responsibility for wrongdoing has two components: a control component and a mental component. Plausibly, you are not responsible for what is not under your control. Likewise, you are not responsible if you don’t know what you’re doing. So, responsibility requires at least some sort of awareness of what one does and some sort of control over what one does. (There is much more to be said about both components, but for our purposes this will suffice).

However, as the philosopher Michael Zimmerman has observed, if you are only responsible for what is under your control, then it would appear that those who attempt naughtiness are no more and no less responsible (read: blameworthy) than those who are naughty. (Most of what follows is taken from Zimmerman’s book.) Consider Sue and Sara. Sue throws a baseball at a neighbor’s window, shattering it. Sara throws a baseball at a neighbor’s window, but a strong gust of wind blows the baseball off course and it lands harmlessly in the neighbor’s yard. Is Sue more to blame than Sara? If we are responsible only for what is under our control, the answer appears to be “no.” After all, Sara would have shattered the window with the baseball had it not been for something entirely outside of her control: namely, the direction and strength of the wind. Arguably, if a person would have done something blameworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as blameworthy as if she had done that thing. A pithier way of making the same point is to say that Sara was lucky that she didn’t hit the window, but since her not hitting the window was a matter of luck, Sara’s blameworthiness is the same as Sue’s.

If Santa accepts this reasoning, he will put those who attempted naughtiness on the naughty list with those who succeeded. Perhaps this expansion of the list is tolerable to the old man. The problem is that this same line of reasoning threatens to scramble the whole system. Consider first that what we do is determined not only by what we decide to do, but also by the opportunities and character that we happen to have. Consider Susanna, whose parents have refused to buy her a baseball; and Shirley, who would never dream of throwing a baseball at a window because her parents have effectively taught her never to destroy other people’s property. It may be true that Susanna would have shattered her neighbor’s window had it not been for something outside of her control: whether or not her parents had bought her a baseball. And it may be true that Shirley would have shattered her neighbor’s window had her parents raised her differently, which is also something outside of her control. Once again, if a person would have done something blameworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as blameworthy as if she had done that thing. Thus, the same reasoning that consigned those who attempted naughtiness to the naughty list also seems to consign those who lack opportunity for naughtiness and those whose characters preclude naughtiness to the naughty list.

Even worse, the same line of reasoning implies that everyone is blameless and, indeed, praiseworthy. Just as it may be true that Sara, Susanna, and Shirley would have shattered the window had they been free to do so were it not for some factor outside of their control, so it is also true that the three girls would have performed praiseworthy acts that they did not actually perform were it not for lack of opportunity or their own bad characters, all of which are beyond their control. If a person would have done something praiseworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as praiseworthy as if she had done that thing.

Now we can see why, if the argument is sound, Santa’s whole system is in trouble. The claims we have been making about the three girls — that they would have done such-and-such, were they free to do so — are sometimes called “counterfactuals of freedom.” According to the argument just developed, for any child there are an indefinite number of “counterfactuals of freedom” that make that child blameless, praiseworthy, and blameworthy. Santa will never know all of these counterfactuals, so he is never in a position to weigh them up to make a final judgment on the child’s naughtiness or niceness.

So, what is poor Santa to do? Zimmerman thinks that he should put everyone on the nice list, since it is far worse to punish the innocent than to not punish the guilty. As the ultimate niceness or naughtiness of a child is unknowable to Santa, he should assume that all children are nice to avoid punishing some nice children.

But there is an alternative response. For utilitarians, blame and praise (as well as punishment and reward) are tools that we ought to use to bring about good effects – such as deterrence or moral growth – rather than ways of registering a person’s desert. From this perspective, it would make no sense to blame or punish someone for something someone would have done in some counterfactual circumstance of which we have no knowledge. Such punishment would be arbitrary. Even if we somehow could know the truth of some counterfactual of freedom, people can’t choose to avoid being the kind of person who would commit naughty acts under some counterfactual circumstance, so there is no deterrence value in punishing them for being that kind of person. By contrast, it does make sense from this perspective to punish someone for something they have actually done — not because that person is more deserving, but because in response to such punishment people (including the punishee) can choose to avoid committing naughty acts in the future.

So, if Zimmerman’s argument is sound, then Santa has at least two choices: put everyone on the nice list, or turn utilitarian. I recommend the latter.

Dispatch from the Monument Wars

photograph of Ulysses S. Grant Monument in Chicago

The nationwide protests sparked by George Floyd’s murder that roiled the nation this summer provided additional impetus to a process that has been ongoing since 2015: the dismantling of Confederate monuments. The Southern Poverty Law Center recently reported that at least 168 Confederate symbols in public spaces — including statues, institution names, plaques, and markers — were removed or renamed last year. Increasingly, however, other monuments have come into the crosshairs of activists and community leaders.

Chicago’s experience is emblematic of this new trend. Over the summer, statues of Christopher Columbus became focal points of demonstrations across the city, leading Mayor Lori Lightfoot to remove the statues in the middle of the night. Lightfoot also formed a committee composed of community leaders, artists, architects, scholars, curators, and city officials to conduct a thorough review of other public works of art to assess if they should be removed or changed, promising an “inclusive and democratic public dialogue” about the future of Chicago’s internationally acclaimed public art collection.

A few weeks ago, the Chicago Monuments Project Advisory Committee released a list of 41 “problematic” artworks slated for review. The list included numerous statues of Abraham Lincoln and Christopher Columbus, as well as statues of Ulysses S. Grant, George Washington, Benjamin Franklin, and William McKinley. In an editorial in the Chicago Sun-Times, the co-chairs of the Committee suggested that the reason for Lincoln and Grant’s inclusion concerned their roles in the forcible removal of Native Americans from their land.

Despite, or perhaps because of, the fact that the Committee has not said what will be done with the monuments, the reaction in the local media has been mostly negative, with concerns about the transparency of the Committee’s deliberations — the Committee’s meetings during the first six months of work were kept secret — mingled with general alarm about the inclusion of great men on the list of works scheduled for review and possible removal. (No one seemed to shed a tear for the Italo Balbo Monument, which was gifted to the City of Chicago by fascist dictator Benito Mussolini). It is likely that those process concerns will be at least partly allayed by the Committee’s recent shift to a more public-facing posture, inviting feedback on its website, hosting a number of interactive speaker events, and soliciting proposals from artists for new monument ideas.

Still, an editorial in the Chicago Tribune is fairly representative of the local media response. “Take down Chicago statues of Lincoln? No.” makes two arguments. The first is that the moral standards of those who wish to reassess the commemorative landscape are too demanding: “some critics think every person we memorialize must be perfectly blameless by the standards of modern America.” The editorial sensibly replies that if this is the standard, then there will be no (or at least, very few) monuments. Because this is an obviously (?) undesirable result, those high standards ought to be abandoned. Instead, we should “weigh the good done by those who have been honored against their shortcomings, and in the context of their generation, not ours.” The second argument is that critics of the monuments are guilty of arrogance, believing that they are morally superior to “yesterday’s heroes.” Presumably, however, everyone has feet of clay, particularly by the standards of future generations. To avoid arrogance, then, we should not presume to stand in judgment of our forebears.

The Tribune’s arguments have the flavor of straw men, although it’s impossible to say whether they are right about at least some activists. At the heart of the Tribune’s argument is the assumption that activists are primarily interested in whether the subjects of monumental representations are blameworthy for what they’ve done by the standards of our own time. However, the much more relevant consideration seems to be what effect publicly expressing admiration for these men has on members of marginalized groups, regardless of whether they are to blame for what they did. Philosophers would distinguish these two considerations by calling the blameworthiness consideration backward-looking, concerning the basic desert of the subjects of monuments, while the consideration of effects is forward-looking, concerning the present and future consequences of honoring these individuals.

I have argued elsewhere that honoring individuals who either took part in, or expressed approval of, the oppression of currently marginalized groups can undermine the assurance of members of those groups that their basic moral and constitutional entitlements will be respected in their everyday interactions with others. Imagine for a moment living in a society in which individuals who approved of, or took part in, rights violations against members of the group to which you belong are the subjects of honorific monuments. Surely, this would make you doubt whether your society took your rights seriously. In my view, this is the primary, though not sole, reason why there is a strong prima facie moral case for modifying monuments to such individuals, and it has nothing to do with whether those individuals are blameworthy for what they did. Put another way: the movement to change the commemorative landscape should be about upholding the dignity of those who are currently marginalized, not punishing historical figures for past injustices. We need not stand in smug judgment of these figures in order to be concerned about the effects of honoring them.

If I am right about why we should care about these monuments, then there is good reason to consider how we honor figures like Lincoln and Grant. It’s not just that they were both morally flawed; more importantly given the considerations highlighted above, both of them played well-known roles in the oppression of Native Americans. Those roles should be emphasized in any honorific representation of these men in order to convey a properly balanced admiration tempered by acknowledgement of the injustices to which they contributed. This does not, it should be said, entail removal of the monuments. Various forms of recontextualization are possible and perhaps preferable, including the addition of signage or other monuments and artworks.

Another conservative argument against modifying or removing monuments goes like this. We owe a debt of gratitude to people like Lincoln and Grant for helping to build a more just society. We express that gratitude through honoring them. Thus, we are positively obligated to honor these individuals by creating and maintaining honorific representations of them. The problem with this argument is that even if we concede that we have a gratitude-based obligation to our illustrious forebears, there are many ways we could conceivably discharge that obligation other than creating monuments to them. Furthermore, even if creating monuments were the only way to discharge the obligation, there is no reason why those monuments could not be properly contextualized so as to avoid the damaging effects highlighted above.

It must be conceded, however, that there is an inherent tension between the goals of honoring an individual and providing proper historical context for understanding that individual’s attitudes and actions. The former goal aims at having an emotional impact; the latter aims at encouraging a less emotional, reflective attitude. Again, the former goal aims at encouraging admiration and appreciation for an individual; the latter aims at tempering that admiration. Perhaps my preference for a properly balanced appreciation reflects an intellectual cast of mind that does not fully appreciate the role of emotion in civic life.

Nevertheless, for the reasons set out above, I believe that the attitudes of admiration and esteem that monuments encourage us to develop towards their subjects can be dangerous, and should be kept within their proper bounds. I would rather live in a world in which there is less unqualified admiration for Christopher Columbus or even Abraham Lincoln, if that meant that members of marginalized groups had greater assurance that their rights would be respected.

Liability and Luck

photograph of lone firefighter standing before small wildfire blaze

In the unlikely event that you have not yet experienced your daily dose of despair concerning the fate of humanity, I’d highly encourage you to read Elizabeth Weil’s ProPublica piece “They Know How to Prevent Megafires. Why Won’t Anybody Listen?” The article makes two basic points. 1) Extensive controlled burns would be an effective precautionary strategy that would prevent recurring megafires. 2) Political and financial incentives trap us into a reactionary rather than a precautionary fire strategy.

There are clearly lots of perverse incentives at play, but one part of the article was especially interesting:

“How did we get here? Culture, greed, liability laws and good intentions gone awry. There are just so many reasons not to pick up the drip torch and start a prescribed burn even though it’s the safe, smart thing to do. . . . Burn bosses in California can more easily be held liable than their peers in some other states if the wind comes up and their burn goes awry. At the same time, California burn bosses typically suffer no consequences for deciding not to light. No promotion will be missed, no red flags rise. ‘There’s always extra political risk to a fire going bad,’ Beasley said. ‘So whenever anything comes up, people say, OK, that’s it. We’re gonna put all the fires out.'”

It is risky to engage in controlled burns. Things can go wrong, and when they do, the results can be pretty bad: someone could lose their home, maybe even their life. Of course, it is far riskier, in one sense, not to engage in controlled burns. So why, then, are our incentives set up the way they are?

At least two different explanations are likely at play.

Explanation 1: Action vs. Inaction. First, in general, we are more responsible for actions than for inactions. The priest who ‘passed by the other side’ of a man left for dead did something terrible, but did not do something as terrible as the thieves who beat the man up in the first place. As a society we jail murderers; we don’t jail the charitably apathetic, even if the apathetic are failing to save lives they could save.

And indeed, this point does have an appropriate corollary when talking about fire suppression. I am not responsible for houses burning in California — this is true even though last spring I could have bought a plane ticket, flown to California, and started burning stuff. Had I done so, things would likely have gone terribly wrong, and in that case I really would have been responsible for whatever property I destroyed. This seems appropriate: it could be catastrophic if my incentives were structured such that I was punished for not starting vigilante fires.

Elizabeth Anscombe gives us a similar example. If the on-duty pilot and I are both asleep in our cabins, then we are doing the very same thing when our ship hits an iceberg. Yet it was the pilot, and not I, who sank the ship. Indeed, had I, a random passenger, tried to navigate the ship, we would absolutely have held me responsible when something went wrong.

So, what is the principle here? Is it that amateurs are specially responsible for actions? No, because we can also identify cases where we indemnify amateurs for their actions. Perhaps the best example here is good Samaritan laws. These laws protect untrained people, like myself, if we make a mistake when trying to render emergency first aid.

What is really going on is that we don’t want passengers trying to navigate ships. Nor do we want aspiring philosophers attempting unsupervised controlled burns in California. But we do want pilots to navigate ships, and we do want burn bosses attempting controlled burns. As such, we should construct incentives which encourage that, and protect people from culpability even if things occasionally go wrong.

Explanation 2: Causal Links. Second, we trace responsibility through causality. Because you caused a house to burn down you are, at least partially, responsible for that damage. The problem is, it is almost always easier to trace causality to actions than to inactions. We can identify exactly which active burning causes damage. We can easily say, “the fire you started on February 14th destroyed these two houses.” It’s much harder to say “the controlled burn you didn’t carry out on February 14th was what allowed the fire to get out of hand.”

And indeed, I think we probably can’t really hold people responsible for any particular failure to burn. We can hold people responsible for how much controlled burning they do in general, but we can’t trace causal paths to hold them responsible for any particular bad result of inaction. Indeed, it would be unfair to do so: no burn boss can foresee when a particular failure to burn will destroy a house (in the way they can sometimes foresee when burning in a particular area might destroy a house). This creates a problem, though. Because we can’t hold people fully responsible for their inaction, we end up holding them disproportionately responsible for their actions, thus perversely incentivizing inaction.

This also parallels our interpersonal lives. For example, we generally want people willing to think for themselves. But we are also far more likely to condemn people for reaching terrible views they came up with themselves than for failing to recognize what is wrong with the conventional view. This can create perverse incentives, however. It might really be true that we are justly responsible for coming to terrible conclusions, but because it is so hard to hold people responsible for the majority view it might be important to forgive even egregious mistakes to keep incentives favoring original thought.

So here is the general point. Assessing responsibility is far more complicated than just establishing whether someone played a causal role. Sometimes holding people responsible for things they really should not have done can perversely disincentivize them from taking risks we want them to be willing to take. The fires in California give one clear example of this, but the point generalizes to our lives as well.

When Are Leaders Culpable?

photograph of pyramid of wooden cubes indicating people on yellow background

When are leaders, especially politicians, morally culpable for the deaths their decisions and actions cause? This is a hard question of course because culpability comes in degrees. For example, Sally is culpable for murder if she knowingly kills someone without moral reason (e.g., self-defense); however, Sam is less culpable than Sally if he knowingly sells someone a defective automotive part which results in a fatal car accident. By the same token, the culpability of leadership comes in degrees too. This issue was made especially salient recently when Kristen Urquiza, at the Democratic National Convention, shared how she lost her father due to coronavirus complications, arguing her father likely wouldn’t have died had he ignored President Trump’s downplaying of the threat. This isn’t an isolated problem. President Trump misled Americans about the impact of the pandemic, with disastrous results, in an attempt to revive his reelection prospects. We may wonder then about the blame leaders deserve for the deaths they cause.

There is an obvious way leaders, and politicians in particular, are directly culpable for the deaths of their citizens: starting an unjust conflict, like a war, without accurately assessing the long-run consequences. Leaders look blameworthy here because of the incentive structure at play: soldiers on a battlefield often face perverse incentives, like the prospect of prison, if they don’t carry out an order. This of course isn’t to deny that soldiers share some blame for following orders they know are wrong. However, leaders share in this responsibility given the position of power they hold, especially if they order something they know is unjust.

For example, we should be hesitant to accept that a proposed war is legitimate given the historical record: throughout history, especially recently, wars are often justified with moral language. Perhaps a group living in the targeted nation or region is claimed to have wronged us somehow; perhaps our invasion would help set things right; perhaps we would be justified using force to get back what was wrongly taken from us. If these kinds of justifications for war sound familiar, it is because they are. It is too easy to use flimsy moral appeals to justify things we would otherwise think morally wrong. We are susceptible to this sort of thing as individuals; so it wouldn’t be surprising if politicians and governments routinely abuse the trust placed in them, leveraging baseless moral justifications to convince their citizens and constituents that a proposed war would be morally permissible.

Things are less clear when morally weighing an order from a leader or politician not intended to cause harm, but with foreseeable negative consequences. Some ethicists appeal here to what is known as the doctrine of double effect: an order or action is morally acceptable, even if it has bad and foreseen consequences, provided those consequences are the by-product of a morally good, intended action. For the sake of argument: even if abortion is morally bad, on this doctrine a doctor may still abort a fetus if the intention is to save the pregnant mother’s life: the intended, morally good outcome (saving the mother’s life) can’t occur without the bad, unintended outcome (aborting the fetus). Whether the doctrine of double effect exonerates leaders and politicians for ordering a war, even a just war, with very bad foreseen consequences is controversial.

What about indirect culpability of leaders and politicians? Things are dicier here. However, we can still call to mind cases that may help us think through indirect culpability. An obvious and recent case is that of managing the coronavirus in the United States: the current United States President, Donald Trump, downplayed the threat of the coronavirus and gave poor advice to U.S. citizens. This is not of course to say that the current U.S. president intended for people to die of coronavirus; but it does illustrate he could well have indirectly contributed to citizens’ deaths by downplaying the virus, and playing up ‘cures’ that ultimately failed.

We should pause here to reflect on why the current U.S. President — or any leader similarly situated — looks indirectly culpable for such deaths, even if he isn’t nearly as culpable, say, when starting an unjust war. There is an obvious source of indirect culpability here: abusing the trust placed in him by his followers and constituents. If Harry knows his constituents trust him (whether this is poor judgment on their part or not), he bears indirect culpability for what happens to them if he knowingly gives them bad advice, and they act on it, especially if they wouldn’t have acted that way had they not trusted him. This would be wrong, just as it would be wrong for a physician to knowingly give dangerous medical advice to her patients, especially knowing they only took her advice because they trusted her good intentions and competence.

This is because, broadly speaking, when there is trust, there is vulnerability. When I trust that someone is competent and has my best interests at heart, I place myself in a vulnerable position that can be exploited by those with bad intent. The point generalizes to the ethics of leadership: a leader may be in a position to exploit their followers because of the trust placed in them by their followers, even though such trust is only placed in them on the condition that the leader has their best interests at heart. And if the leader used that trust to knowingly put their followers in harm’s way for their own ends, they bear some responsibility for that bad outcome, even if it was unintended.

Moral Luck and the Judgment of Officials

photograph of empty tennis court and judge's chair

Novak Djokovic was defaulted from the US Open last week for violating the Abuse of Balls rule. During the first set of his fourth-round match with Pablo Carreño Busta, he struck a ball to the back of the court without looking. This resulted in the ball hitting a line judge. The referee, Soeren Friemel, after consulting with other officials, made a ruling to bypass the Point Penalty Schedule and issue an immediate default. In other words, Djokovic lost the match, left the tournament, forfeited all of his winnings in the tournament, and is subject to further fines. In the aftermath of this incident, many of the TV commentators discussed the severity of the injury to the judge, whether the ruling was correct, and Djokovic’s bad luck. The bad luck was in reference to the fact that, just as Djokovic struck the ball, the line judge straightened up from her bent-over position, which put her head in the direct path of the ball.

As I watched the events unfold and before the ruling was made, I immediately began to think about the fact that the referee’s judgment was going to hinge on the problem of moral luck. This problem was initially discussed by Bernard Williams and Thomas Nagel in a two-part article in 1976. Dana Nelkin describes the problem as one that “occurs when an agent can be correctly treated as an object of moral judgment despite the fact that a significant aspect of what she is assessed for depends on factors beyond her control.” In other words, judgments of moral approval or disapproval, including the imposition of sanctions, can depend upon accidents or choices by third parties. The problem can be exemplified by considering two teenagers drag racing. Both of them are using poor judgment as well as speeding. The car on the right is clearly pulling ahead of the car on the left (due, let’s say, to crummy spark plugs in the left car) when an animal darts out into the street from the left. Neither teen attempts to avoid hitting the animal because neither sees it. As luck would have it, even though the animal darts into the road from the left, the car on the left misses the animal but the car on the right strikes it. Is it really the case that the driver on the left is morally innocent compared to the driver on the right? Had it not been for the crummy spark plugs the driver on the left would have struck the animal; had it not been for the presence of the animal the accident would not have occurred at all.

What seems to be at issue here, Nelkin explains, is the acceptability of two ideas, one called the Control Principle and the other a corollary of that principle.

Control Principle (CP): We are morally assessable only to the extent that what we are assessed for depends on factors under our control.

CP-Corollary: Two people ought not to be morally assessed differently if the only other differences between them are due to factors beyond their control.

At first, these ideas seem to be intuitively acceptable. To accept them means that luck should play no role in moral assessment. But notice that they imply, in our stipulated example of drag racing, that the driver on the left is just as culpable as the driver on the right for hitting the animal — either both are culpable or neither is. After all, the only differences between the two drivers are factors beyond the control of either driver, and both were in control of the decision to drag race. So, what is to be questioned? Should the judgment that the two drivers have different levels of culpability be jettisoned or should CP and its corollary be abandoned?

This hypothetical case is analogous to the situation with Djokovic. A few points before the offending event, Djokovic had slammed a ball into a side wall of the court, much more angrily and with much more force. No one was injured. He was not warned, given a point penalty, or given a game penalty. But, given the rule, the earlier event was just as much of a violation of the rule as the later event. It is worth seeing the rule in its entirety:

ARTICLE III: PLAYER ON-SITE OFFENSES

  1. ABUSE OF BALLS Players shall not violently, dangerously or with anger hit, kick or throw a tennis ball within the precincts of the tournament site except in the reasonable pursuit of a point during a match (including warm-up). Violation of this Section shall subject a player to fine up to $20,000 for each violation. In addition, if such violation occurs during a match (including the warmup) the player shall be penalised in accordance with the Point Penalty Schedule hereinafter set forth. For the purposes of this Rule, abuse of balls is defined as intentionally hitting a ball out of the enclosure of the court, hitting a ball dangerously or recklessly within the court or hitting a ball with negligent disregard of the consequences.

What should be noticed is that the mere act of hitting a ball “violently, dangerously or with anger,” regardless of whether anyone is injured, is sufficient to violate the rule. So, the earlier act by Djokovic was sufficient for Friemel to issue a warning in accordance with the Point Penalty Schedule. Nowhere in the code does it specify that Friemel may skip directly to default based on the poor luck of the ball hitting and injuring someone, though, as with all officials in sports, part of his job is to use judgment to make decisions. But it seems as if part of the decision not to issue a warning for the earlier outburst and to default Djokovic for the later outburst included a rejection of the control principle and its corollary. Otherwise it seems as if the only difference between the two events was the placement of the line judge and the fact that just as Djokovic hit the ball she stood up in a way that placed her head in the direct path of the ball. Both of these elements were beyond the control of Djokovic. So, if CP is operative, then Djokovic seems to be equally culpable and equally deserving of being defaulted for the earlier outburst as for the one that resulted in the injury to the line judge. By abandoning CP, the officials could hold that, while Djokovic clearly violated the rule earlier, he did not need to be sanctioned because, luckily, the outcome was different.

But now comes the twist. It looks like other officials at the match bear some responsibility for the line judge’s injury.

What do we say about Friemel’s non-application of the rule earlier in the match? Furthermore, what do we say about the officials at the Western & Southern Open just a few days before who did not default Aljaz Bedene for hitting a camera operator in a similar situation? Here we have an almost identical set of facts, but the injury sustained by the camera operator did not require immediate medical attention, unlike that of the line judge injured by Djokovic. The rules do not make an explicit allowance for the severity of the injury to factor into the judgment of the officials, but in these three cases, the severity of the injury was considered. The different decisions make sense if we abandon the control principle because those different outcomes, which were due in part to factors beyond the control of the players, seem to allow for different judgments.

Now, all we have to do is accept that luck plays a role when making moral judgments. This implies that you can be morally culpable for things beyond your control. Friemel and the other tennis officials seem to be committed to this idea. But now that we know that consequences matter, it appears that Friemel and other officials should also be culpable in the injury of the US Open line judge. After all, if we let consequences matter, then we have to confront the suggestion that acts of omission resulting in bad outcomes are open to moral censure. By not giving Bedene a harsher penalty a few days before, and not even issuing a warning a few minutes before in the Djokovic – Busta match, the officials performed acts of omission. These acts of omission appear to support the claim that Djokovic could vent his frustration in violation of the Abuse of Balls rule without fear of serious sanction. The officials are thus, oddly, morally implicated in Djokovic’s transgression. They seem to be responsible for creating a situation in which Djokovic could behave this way. The resulting injury involves actions beyond their control (the line judge standing up and Djokovic hitting the ball). But by abandoning the CP and its corollary, they nevertheless appear to share in the responsibility for the injury.

These observations — to accept or reject the CP as well as the implications of doing so — apply beyond sports. In any social arena, officials who are entrusted with making judgments may have more responsibility for the outcomes of their silence than they want to recognize.

Clifford and the Coronavirus

photograph of empty ship helm

In 1877, mathematician and philosopher W.K. Clifford published a classic essay entitled “The Ethics of Belief.” In it, he asks us to consider a case involving a negligent shipowner:

“A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not overwell built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind, and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such ways he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.”

Clifford then asks: what should we think of the shipowner? The answer, he thinks, is obvious: he is responsible for the death of the passengers. This is because he had all the evidence before him that his ship needed repairs and really wasn’t very safe, and instead of forming his beliefs in accordance with the evidence, he stifled his doubts and believed what he wanted.

As far as philosophical thought experiments go, Clifford’s case is easy to imagine happening in real life. In fact, there have recently been a number of real-life nautical disasters, although instead of ships sinking, they involve coronavirus outbreaks, the most recent being a Norwegian cruise ship that reported a number of coronavirus cases among crew and passengers earlier in August. In response to the incident, the CEO of the company owning the cruise line stated that “We have made mistakes” and that the outbreak was ultimately the product of a failure of several “internal procedures.” Indeed, the cruise line’s website states that they followed all the relevant guidelines from the Norwegian Institute for Public Health, implemented measures to encourage social distancing and good hygiene, and set sail with only 50% capacity. Despite these measures, though, people still got sick. This is not an isolated event: numerous businesses worldwide — that have adhered to government and other reopening guidelines — have seen spikes in cases of coronavirus among staff and customers.

In introducing his case, Clifford argued that what the shipowner did wrong was to form a belief on insufficient evidence. And it is easy enough to agree with Clifford’s diagnosis when it comes to such egregious belief-forming behavior as he describes. However, real-life cases are typically more subtle. Cases like the Norwegian cruise ship, and other businesses whose reopenings have proved problematic, should lead us to question how much evidence is good enough when it comes to making the decision to reopen one’s business, and whom we should find deserving of blame when things don’t work out.

To be fair, there are certainly differences between Clifford’s case and the case of the Norwegian cruise ship: there is no reason to think, for instance, that anyone in charge of the latter actively stifled doubts they knew to be significant. But there are also similarities, in that the evidence that cruise ships are generally not safe places to be right now is abundant and readily available. Even if one adheres to relevant health guidelines, we might wonder whether that is really good enough given what other evidence is available.

We might also wonder who is ultimately to blame. For instance, if guidelines concerning the re-opening of businesses that are provided by a relevant health agency turn out to be inadequate, perhaps the blame should fall on those in charge of the guidelines themselves, and not those who followed them. There have, after all, been a number of countries that have reinstated stricter conditions on the operation of businesses after initially relaxing them in response to increases in new infections, Norway recently among them. When cases of coronavirus increased as a result of businesses being allowed to reopen, we might then put the blame on the government as opposed to the business owners themselves.

Clifford also makes an additional, more controversial argument that he illustrates in a second example:

“Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot. When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out. The question of right or wrong has to do with the origin of his belief, not the matter of it; not what it was, but how he got it; not whether it turned out to be true or false, but whether he had a right to believe on such evidence as was before him.”

Using this second case, Clifford argues that whether things turn out okay or not really isn’t important for determining whether someone has done something wrong: even if everyone on the ship made it safely the shipowner would still be guilty, he just got lucky that everyone survived. While we might think that Clifford is being harsh in his judgment, we might also wonder whether other businesses that have re-opened early in the face of some evidence that doing so may still be dangerous should be considered blameworthy, as well, regardless of the consequences.

Judgment, Condemnation, and Historical Context

photograph of statue of Thomas Jefferson seated in profile

Is it right to condemn historical figures for moral beliefs that, while common during their time, are now known to be odious?

Our attitudes toward historical figures matter. Our attitudes bear on the question of what public honors should be bestowed on morally flawed historical figures, and our attitudes toward historical figures will influence our contemporary moral thinking. How I view historical figures may influence my trust in moral and institutional traditions I have received from those thinkers. If I believe our Founding Fathers were good and noble people with certain, though largely isolatable, tragic flaws, I’ll trust our constitutional system more than if I believe our Founding Fathers were mostly moral degenerates skilled at couching their corruption in the propagandistic rhetoric of admirable ideals. This trust need not be self-conscious. If you present people with the exact same policy proposals while varying who you say supports them, you can flip which proposals they endorse. Just seeing an idea as presented by someone ‘on my side’ or ‘on my team’ or ‘within my in-group’ (to use the language of social psychology) will incline you to find it plausible. The extent to which I’ll instinctively trust the political structure set up by our Founding Fathers will depend, at least in part, on the extent to which I see the Founding Fathers as patriotic exemplars.

So how should we think of historical figures with odious beliefs? There are two lines of argument against judging them the way we would judge contemporaries.

The first line is often expressed by language like “they belonged to a particular time.” The argument suggests that these thinkers were, because of their historical context, blamelessly morally ignorant of things we now know.

If you heard about a doctor who, in their rush to treat patients as quickly as possible, did not bother to sterilize materials between amputations, you would reasonably condemn that person as culpably negligent or heartless. However, we do not make similar moral judgements about doctors in the seventeenth century. Sure, it would have been better had they sterilized their instruments, but these doctors did not have the germ theory of disease, they had no reason to think that boiling their surgical instruments would do anything, and indeed they had every reason to think that the longer they took to perform amputations the further infections could spread.

We do not judge historical figures for terrible surgical practices because we think that at least some forms of non-moral ignorance exculpate. But if non-moral ignorance can exculpate, can’t moral ignorance as well? Just as we, the beneficiaries of the modern medical awakening, cannot fairly judge historical figures for the poor choices they made as a consequence of their worse scientific environment, so, the thought goes, we, the beneficiaries of various moral awakenings, cannot fairly judge historical figures for the poor choices they made as a consequence of their worse moral environment.

There are, however, good reasons to doubt the extrapolation from the non-moral case to the moral case. One contemporary philosopher who argues for an asymmetry between moral and non-moral ignorance is Elizabeth Harman. Harman, following Nomy Arpaly, thinks you are blameworthy if you betray an inadequate concern for what is morally significant.

This account would explain why non-moral ignorance sometimes excuses. If I mistakenly believe a certain charitable organization does good work, then donating to that actually harmful charity need not display an inadequate concern for the plight of the poor, for example. I might really care as much as I should for justice, but simply be misled about what would best serve others.

This account of blameworthiness would also explain why moral ignorance does not excuse. If I’m morally ignorant that I ought to give to the poor, then that very ignorance displays a lack of concern for the poor, and thus a lack of concern for what really is morally significant. Circumstances where we fail to grasp the character of our acts (say I thought the backpack I grabbed on my way out of class was mine, when really it was your very similar-looking bag) do not communicate moral indifference (I may still be fully concerned to respect your property). In contrast, being aware that I was taking your property but not appreciating that it was wrong, would actually prove my lack of concern.

But not all philosophers agree with drawing this strong asymmetry between moral and non-moral ignorance. Why, for instance, is it wrong for us to morally condemn vicious people raised as child soldiers? One plausible answer is that child soldiers cannot be blamed for their ignorance of the moral law.

Of course, even if we accept moral ignorance can, in principle, excuse, it remains an open question if it does in the historical cases we’re considering. There is a difference between having had one’s conscience systematically flayed by the brutal brainwashing that goes into creating a child soldier, and simply growing up in a society with a high tolerance for evil.

Consider the view of Elizabeth Anscombe, who thinks there are some examples of moral ignorance that really do excuse. Anscombe describes an executioner who has private knowledge of a condemned man’s innocence, but who cannot use that knowledge to exonerate the man. She asks us to further suppose the executioner knows the man had a fair trial under a rightful legal authority. Anscombe thinks since the greatest moral theologians can’t agree about this case, the simple executioner might really be blameless for choosing wrongly.

But even if there are cases of excusing moral ignorance, Anscombe thinks they are exceedingly rare. They don’t cover the controversial cases of historical figures. Anscombe follows Aristotle and Aquinas in thinking that the main outlines of morality are accessible by the light of natural human reason, and while humans are incredibly self-deceived, that does not get us off the hook given that we should, and can, almost always know the core of what is right or wrong if we don’t give in to vicious self-deception. The actions of the historical figures in question betrayed ignorance of basic moral truths which Anscombe thinks were clearly accessible to them. Thus, Anscombe ends up thinking that while there is no principled asymmetry between moral and non-moral ignorance, there is a practical asymmetry. The main outlines of science (say, germ theory) are not truths available to everyone just in light of common human reason, but the main outlines of morality (say, the evil of chattel slavery) are truths available to everyone just in light of common human reason. Thus, it is far more common for non-moral ignorance to excuse; not because non-moral ignorance alone can excuse, but because moral ignorance is rarely blameless.

Perhaps this first line of argument could be salvaged, but for now I will put it aside, because…

there is a second line of argument I want to consider. This is the argument often expressed by the honest voice in the back of my head saying: “but are you really that confident that if you were a white kid growing up in the Antebellum South you would have had the moral clarity to see the right of things?” Sure, maybe I agree with Anscombe and Aquinas that I should have been able to see the right of things. But am I really so certain I would have?

The force of this thought comes from an extension of the norm against hypocritical blame. We generally think it inappropriate to blame someone for things we expect we might have done were we in their situation. Since I’m not particularly exceptional amongst my own moral cohort, I don’t have good reason to think I’d be exceptional if transplanted to a historical cohort, so I should temper my outrage at historical figures.

However, here we tend to draw the wrong lesson. We’re tempted to think something like: I’m a morally decent person; I’m probably not in a position to judge many of these historical figures as far worse than me; so many of these historical figures must not have been that bad.

That is almost the opposite of the conclusion suggested here. We have already seen, when considering the first line of argument, that there are good reasons for thinking historical figures are fully responsible for their bad beliefs. My hypocrisy does not show the other person is not evil; rather, it shows I might be evil as well.

Thoughts on hypocrisy should not lead us to think better of historical figures, but rather to think worse, and more humbly, of ourselves. We should recognize that many of the beliefs about which we are self-righteous might be largely chosen, not from principle, but because it helps us gain the glowing approval of those whose opinions we prize. And likewise, we should perhaps recognize that whatever moral clarity we do have is an undeserved grace.

This does require a pessimistic view of humanity. Yet it is a sort of pessimism shared by many of the great moral traditions and thinkers. Plato thought that our material bodies, filled with appetites, continually pull us away from virtue. Aristotle thought that only someone with an exceptionally fortunate and unearned upbringing could ever become good. The Stoics doubted there ever were, or even could be, any true sages. Christians taught that humans were slaves to original sin absent the intervention of divine grace. Kant famously proclaimed that humans were by nature evil.

If we accept this pessimism, what attitude should we take towards historical figures? On the one hand, it allows us to acknowledge the utter evil and depravity of historical figures who defended odious practices. On the other hand, it also discourages the hatred that inclines us to divide the world into the virtuous in-group and the vicious out-group. We should willingly acknowledge the evil of historical figures, but should be skeptical that it gives us any standing to look down on them, as though we have any moral height from which to condescend.

There are three principled attitudes to take towards historical figures. First, following Harman, we could think there is a real asymmetry between our own blameworthiness and theirs because our differing moral values really show differing levels of blameworthiness. Second, we could see them as similar to ourselves — largely good people though victims of largely blameless ignorance. Or third, and this one seems right to me, we could again see them as similar to ourselves, but as also blameworthy in their ignorance of their own depravity, and so conclude that we are actually closer to their wickedness than we realized.

Rio Tinto and the Distinction between Saying ‘Sorry’ and Being Sorry

photograph of Rio Tinto train cutting through landscape

“…we haven’t apologised for the event itself, per se, but apologised for the distress the event caused.” – Chris Salisbury, Rio Tinto Iron Ore CEO

In late May, mining giant Rio Tinto shocked Australia, and the world, by blasting an ancient and sacred Aboriginal site to expand an iron ore mine.

The blast destroyed a cave in the Juukan Gorge, located in the Hamersley Ranges in northern Western Australia. The cave was one of the oldest of its kind in the western Pilbara region, and the only known inland site on the entire Australian continent to show signs of continual occupation through the last ice age (between 23,000 and 19,000 years ago), during which, evidence suggests, most of inland Australia was abandoned as the continent dried out and water sources disappeared. The cave site itself was found to be around 46,000 years old.

The blast received ministerial approval in 2013, with consent obtained under Western Australia’s outdated heritage laws, drafted in 1972 to favor mining interests. Following the 2013 approval, archaeological work carried out at the site discovered it to be much older than originally thought, and to be rich with artifacts and sacred objects.

The 1972 Heritage Act does not allow approvals to be renegotiated on the basis of new information; the act is due to be replaced by new legislation, but various factors have delayed its replacement. The new draft bill currently in preparation includes a process of review based on new information. In its response to the draft legislation, Rio Tinto has requested that consent orders granted under the current system be carried over.

The blasting of the site was conducted without prior notification to the traditional Indigenous owners or the state government, and has caused deep distress to the Puutu Kunti Kurrama and Pinikura people (PKKP). Among the precious and rare items recovered from the site prior to the blast was a 4,000-year-old plaited length of human hair from several people, which DNA testing revealed to belong to the direct ancestors of the living PKKP people.

“It’s one of the most sacred sites in the Pilbara region … we wanted to have that area protected,” PKKP director Burchell Hayes told Guardian Australia.

Peter Stone, Unesco’s Chair in Cultural Property and Protection, said that the destruction at Juukan Gorge was among the worst in recent history, likening it to the destruction of the Bamiyan Buddhas in Afghanistan and the ancient Syrian city of Palmyra.

Rio Tinto claims it was not aware of the importance of the site, nor of the traditional owners’ wish for it to be preserved. But the PKKP Aboriginal Corporation rejected Rio’s suggestion that its representatives had failed to make clear their concerns for the site, and their wish for it to be preserved. “The high significance of the site was further relayed to Rio Tinto by PKKP Council as recently as March,” Burchell Hayes said.

Following the blast, Rio Tinto issued an apology to the PKKP people. “We are sorry for the distress we have caused,” Rio Tinto Iron Ore chief executive Chris Salisbury said in a public statement.

Several days after the public apology, a leaked recording from a private Rio Tinto staff meeting found its way to The Australian Financial Review, which reported that Salisbury told staff: “… we haven’t apologised for the event itself, per se, but apologised for the distress the event caused.” In a subsequent interview, Salisbury did not contradict the report, and repeatedly refused to answer directly when asked whether the company was wrong to blow up the site, only repeating that it was sorry for the distress.

So, what is going on here — what can we make of Salisbury’s remark that the company had apologized not for the event itself but the distress it caused?

In taking the line that it did not know about the site’s significance, and attempting to insulate its apology from an admission of responsibility, Rio Tinto is trying to avoid moral blame. But does the separation hold? Can an agent be sorry for causing distress without ipso facto being responsible for causing it? And if so, does Rio’s attempt to excuse its actions from moral blameworthiness succeed?

The attribution of moral blame is not straightforwardly connected to the objective wrongness of an action carried out or caused by an agent. One way to assess the connection in any given case is to consider what conditions would have to be present for an agent to be held morally responsible, that is, to be blameworthy for an action.

It is possible to identify cases in which an agent is blameworthy even if an action is not in itself wrong; or, conversely, in which an agent is not blameworthy even if an action is wrong.

To give a relatively simple example, Jane intends to poison Joe by putting a white substance, which she believes to be arsenic, in his tea. It turns out Jane was mistaken and the powder was only sugar; nothing happens to Joe, so no objective moral wrong is committed. However, Jane’s intention to poison him is blameworthy. Conversely, if Jane accidentally poisons Joe by putting what she believes to be sugar, but what in fact turns out to be arsenic, in his tea, she is not (necessarily) blameworthy, though the act of poisoning Joe is itself an objective moral wrong.

Here we can see that the salient elements for establishing blame are intention and knowledge. In the first case, Jane’s intention is morally blameworthy, even if the outcome is neutral. In the second case, though Jane has no intention to harm Joe, further questions arise about how Jane came to make this mistake, and whether she should reasonably have been expected to know that the substance she put in Joe’s tea was in fact arsenic rather than sugar.

In the case of Rio’s destruction of the Juukan Gorge cave, we cannot know whether it was Rio’s intention to blast the site over the strong objections of the PKKP owners, though some suspect that it was.

For an action to be morally wrong yet the agent not blameworthy, the agent must have an excuse for carrying it out which absolves them of responsibility. As Holly Smith suggests, “Ignorance of the nature of one’s act is the preeminent example of an excuse that forestalls blame.”

The question, then, is epistemic — for an agent to be held responsible, certain epistemic conditions need to be fulfilled: first, that the agent is aware of the action (that she knows what she is doing); second, that the agent is cognizant of the moral significance of the action; and third, that the agent is aware of the consequences of the action.

The first condition is obviously fulfilled, as the action of blasting the site was deliberate. It is the second and third conditions, cognizance of the action’s moral significance and of its consequences, that Rio disputes.

In another statement, made subsequent to the leaked tape of his remarks, Salisbury said he had “taken accountability that there clearly was a misunderstanding about the future of the Juukan Gorge.”

It isn’t clear what having ‘taken accountability’ means, but the claim that there was a misunderstanding is an attempt to avoid blameworthiness by maintaining that an epistemic condition was not fulfilled.

However, ignorance can itself be morally culpable. If (in the above example) Jane did not read the box when she could have, or if she ignored reasonable suspicions that someone had replaced the sugar with arsenic, then her ignorance does not excuse her from blame for poisoning Joe. It must be noted that there is disagreement among philosophers on this point; while some argue that an agent can be blamed for their ignorance, others maintain that, however criticizable it is, ignorance nonetheless exculpates the agent from moral blameworthiness.

On the former view, if Rio is culpable for its ignorance, that ignorance fails to shield the company from moral blame. This, to me, seems correct — and I would argue that even if we take Rio Tinto at its word that it was not aware of the significance of the site and the PKKP people’s wish for it to be preserved, the company has failed in its responsibility to the traditional owners and is indeed blameworthy.

I might add that taking Rio at its word here seems to me exceedingly generous, and I remind the reader that the PKKP people strenuously denied the suggestion that they had not made their wishes known to the company.

So, regarding the dubious distinction between apologizing for the distress caused but not for the action which caused it: Rio Tinto may say it is sorry, but without an accompanying willingness to accept responsibility, its apology is hollow. It appears the company has apologized out of an ostensible obligation to do so, while showing little genuine remorse for this act of cultural destruction.

Airplane Crashes and the Diffusion of Responsibility

Photograph of a Sky airplane taking off

Air travel has become steadily safer and more reliable for decades, but the second crash involving Boeing’s new 737 Max aircraft has created newfound uncertainty among potential flyers. The crash of an Ethiopian Airlines plane on March 10 has been linked to a similar crash of a Lion Air plane on October 29, 2018, pointing to a disturbing trend. In the wake of such a tragedy, we are often left looking for answers out of both pragmatic and moral motivations: we want to prevent future accidents, but we are also looking for someone to blame. Ultimately, such searches are often unsatisfying, particularly in the latter respect.

Although investigations are ongoing, early information seems to absolve the pilots, both of whom were highly experienced, and the focus has shifted to concern about the planes themselves. Software on Boeing’s new 737 Max airplanes called the Maneuvering Characteristics Augmentation System (MCAS) seems to have malfunctioned, causing the planes to angle downward and become uncontrollable. In light of this possibility, the United States Federal Aviation Administration (FAA) has grounded all 737 Max aircraft, and Boeing has slowed down—although not halted—production on what was their fastest-selling model.

If there is a problem with the airplane, who is to blame? Most fingers point to Boeing, the company which designed and manufactured the aircraft. Not only did Boeing’s software critically malfunction, but a report by The New York Times found potentially vital safety features being sold as optional extras for the airplanes, calling into question the excessively profit-oriented strategy of the company. Some, including the Ethiopian government, have criticized regulators like the FAA for their failure to enforce more stringent testing and safety requirements. It is also worth noting that airlines are responsible for safety inspections of aircraft before they fly. However, leaving the blame on any one of these corporations seems insufficient. All of these entities—manufacturers, regulators, airlines—represent vast networks of individuals, each of whom seems to bear little to no individual responsibility. What do we do when everyone does their job correctly, but things still go wrong?

This problem is exceptionally acute for a company like Boeing, which (one would assume) includes many fail-safes in its quality-control procedures. Even with careful review, it can be difficult to pinpoint exactly where in the process the error or oversight was introduced. Although these fail-safes are valuable and necessary for the safety and success of Boeing’s enterprise, they also create an ethical problem by diffusing responsibility across a wide network of individuals and systems. At worst, a network of diffused responsibility can create a bystander effect in which every individual assumes that someone else will deal with a problem, while in reality the problem goes unaddressed.

There is a tendency to reduce this problem to one of a few simple parameters. Who was the last person to inspect the aircraft or test the software? Perhaps they should have caught the problem after others had failed to do so. But one person’s position along a chain of safety checks is accidental: if the first person and the last person were switched, the result would be the same. Who has the greatest power in the organization? Often it is the C.E.O.s and presidents of companies that are left giving statements to the media when disaster strikes, but they are so distant from the daily operations of their company that it seems unreasonable to expect them to vet every action and decision. Neither these nor any other attempts at singling someone out will ever settle the larger question of culpability. Individuals caught up in this game of passing the buck are faced with two bad options: to point fingers is ignoble, but to accept responsibility can be a discrediting and thankless feat.

An alternative would be to blame the system itself. Because the system allowed for the diffusion of responsibility, the system itself must be flawed. One interpretation is that the way in which Boeing goes about producing aircraft is an unethical system which fails to protect the basic right to life of its customers. We could point a finger at the architects of the system, which in this case would probably comprise some past and current executives at Boeing, but again, no architect could possibly predict every outcome of the system they design. A broader interpretation might condemn the corporation model itself: in the ruthless pursuit of profit, corporations make forgetting human lives all too easy. A cynic might look at the common initialism “LLC,” or “limited liability company,” as an indicator of the role of the corporate structure in reducing liability and responsibility among its owners.

The search for answers regarding these plane crashes will probably arrive at many ways to prevent further tragedies. In all likelihood, new systems for design and quality control will be implemented, and greater oversight and cross-checking will be mandated. While the path forward on the pragmatic side is clear, the ethical dimension of these systemic failures is more uncertain. Humans desire obvious villains on whom to place blame, but, unfortunately, true tragedies are rarely so simple. Too often, we fall into the practice of scapegoating to resolve these dilemmas, but it would be better to embrace and negotiate the inherent complexities of these situations.