
Neurodivergence, Diagnosis, and Blame

photograph of woman using TikTok on iphone

If your For You page on TikTok looks anything like mine, you know that there is a veritable trove of content about autism and ADHD, much of it focused on questions about diagnosis. The spread of this online content and discussion has been lauded for the potential good it can do: giving women and non-binary people access to information about conditions that are often missed in those populations, and giving voice to traditionally marginalized groups who often deal with others speaking inaccurately on their behalf.

At the same time, the algorithm may function in ways that trend towards stereotyping the populations in question or pushing content that associates ADHD and autism with things not necessarily related to diagnostic criteria (e.g., ADHD with talking fast or autism with disliking the big room light). This can lead to misunderstandings and poor self-diagnosis that misses underlying issues, such as someone mistaking bipolar disorder for ADHD. While similar misunderstandings and misdiagnoses can happen in medical contexts, those who rely on questionably credentialed social media influencers may be more susceptible to misinformation.

But why is having a diagnosis so appealing? What does the diagnosis do for autistic and ADHD individuals?

I suspect that at least one part of the answer is found in our practices of blame and our beliefs about who deserves support: the diagnosis promises less self-blame and blame from others and more understanding and accommodations.

How might a diagnosis lead to less self-blame and blame from others? There are several possible philosophical answers to this question.

The first answer is relatively common: ADHD and autism are caused by brain chemistry and structure — they should be seen as medical or neurological conditions, not moral ones. On the purely medical view, ADHD and autism have nothing to do with character or who that person is as a moral agent. So, if someone is diagnosed with ADHD or autism, they shouldn’t be blamed for anything resulting from those conditions, because those conditions are simply medical problems outside their control.

This answer has a few benefits:

the medical diagnosis adds a sense of legitimacy to the experience of individuals with ADHD and autism, provides access to medical care, and gives a clear conceptual apparatus for communicating to others about the specific accommodations that are needed.

At the same time, the purely medical answer has key drawbacks.

First, the medical mode is often moralized in its own way, with its own norms about health, disease, and disorder. Sometimes this is appropriate, but other times natural variations in human expression become labeled as disorders or deficits when they should not be (see how intersex people have been treated or the history of eugenics). The aim of medicine is often to provide a cure, but some things do not need to be cured. Medical care can be, and often has been, helpful for individuals needing access to Adderall or Ritalin to function, but the purely medical mode has its limits for understanding the experiences of individuals with ADHD and autism.

Second, the medical mode tends to locate the problem in the individual, though some public health approaches have started to move towards structural and social thinking. Those with ADHD and autism may experience their condition as a disability in large part because of a lack of social support and understanding, rather than because of a purely internal discomfort.

Third, the medical mode cannot always be separated from character. See, for example, the overlap of depression and grief or the fact that even normal psychological states are also caused by brain chemistry and structure.

In the case of autistic and ADHD individuals, the conditions aren’t something that can easily be carved off from the person, because they affect broad domains of the person’s life. In trying to separate out the autism or ADHD, others can easily create the idea of a “real” non-autistic, non-ADHD person, which can lead to failing to love and appreciate the actual person.

The second philosophical answer to the question as to how a diagnosis might lead to less blame is a capacities-based view of moral responsibility. This view is similar to the medical mode, in that the focus is often primarily on the individual, but it differs in its decidedly moral focus. On the capacities view, agents are morally responsible if they have some normal (or minimally normal) capacities of reasoning and choice. Agents are not responsible if they lack these capacities. There are ways of refining this kind of view, but let’s take the basic idea for now.

If we combine this kind of philosophical idea with the idea that ADHD and autistic people are deficient with regard to some of these capacities necessary to be a morally responsible agent, then it would make sense that ADHD and autistic folks would be either less responsible or not responsible at all in certain domains. But if the point of accommodations is to increase capacities, then accommodations should be supported. However, like the medical approach, there are a few drawbacks to at least some versions of this view.

First, there isn’t a clear capacities hierarchy between neurotypical people and neurodivergent people. While someone with ADHD may have trouble starting on a large project in advance, they may work exceptionally well under pressure. Someone with autism may have more difficulty in social situations but could have the ability to focus their time and energy to learn immense amounts of knowledge about a special interest. While parts of the ADHD and autistic experience involve deficits in certain capacities, the overall assessment is much less clear.

Second, claiming that someone with autism or ADHD can’t be a fully morally responsible agent also seems to have a troubling implication that they might not be full, self-legislating members of the moral community. This kind of view places people with autism and ADHD in the position of, say, a child who has some understanding of moral principles but isn’t yet a full agent.

Neither the medical model nor at least some versions of the capacities model seem to fully provide what people are looking for in a diagnosis. While both offer rationales for removing blame, they can have a dehumanizing effect. The drawbacks to these views, however, teach us some lessons: a good view should 1) consider the whole, actual person, 2) think about the person in their social context, and 3) avoid making the autistic or ADHD person out to be less than a full moral agent.

I think the right question to ask isn’t “how is this person deficient in some way that removes responsibility?” but instead “what expectations are reasonable to place on this person, given who they are at this point in time?”

This is a rough suggestion that requires more development than I can give it here.

There are ethical considerations that enter in at the level of expectations which go beyond questions about capacity. What would it look like to be kind? To give each other space to be comfortable? To accept parts of ourselves we can’t change? To build a world that works for everyone? Capacity is certainly implicated by these questions, but it isn’t the whole picture.

By shifting our focus to the question about what expectations are reasonable to place on an individual person, we are recentering the whole person and recognizing the dis/abilities that the individual experiences.

Experiences with autism and ADHD can be very different from person to person, and the accommodations needed will vary from person to person. The expectations we can reasonably place on people with ADHD and autism may not be any less than those without — they may just be different.

And neurotypical people who interact with ADHD and autistic people may also be reasonably expected to provide certain accommodations. Everyone’s needs should be considered, and no one should be othered.

For example, say that an autistic person says something that comes off as rude to a neurotypical friend. This has happened a few times before, each within a new domain of conversation. Every time, the autistic individual apologizes and explains how autism affects their social communication and understanding of social norms and how they’re trying to get things right. Eventually the neurotypical friend gets upset and says “why do you always use the autism as an excuse to get out of responsibility?”

In this case, it doesn’t seem that the autistic person is abdicating responsibility; it seems that they’re clarifying what they are actually responsible for. The autistic person isn’t responsible for intentionally saying something rude; they’re responsible for accidentally saying something rude despite their best intentions otherwise. And the autistic person still apologizes for the hurt caused and promises that they will continue to try to do better in the future. Whichever way the two friends negotiate this part of their relationship, it seems important that they each understand where the other is coming from and that each friend’s feelings are given space.

What does this example tell us about the relationship between diagnosis and blame? Perhaps we need to develop alternative frameworks to recontextualize responsibility, rather than simply diminish it.

Driving with the Machine: Self-Driving Cars, Responsibility, and Moral Luck

photograph of driver sleeping in self-driving car

Charlie and Ego pick me up from a university parking lot on a balmy September afternoon. A soccer match is about to start nearby, and Charlie, eyeing the fans bustling around his car, carefully drives us off the lot. Charlie is a former student of mine. I gather as we catch up that Charlie now does something lucrative and occult with cryptocurrencies, of which the Tesla Model S we are riding in is presumably an alchemical product. It’s nice to see Charlie. As pleasant as our conversation is, though, that’s not why I’m here. I want to meet Ego.

There’s a map of our sleepy little town of San Luis Obispo, California on the touch screen in the center console of Charlie’s car. As we approach the edge of campus, Charlie casually selects a location downtown, clicks a few buttons, and lets go of the wheel. The wheel and the five-thousand-pound car begin to move of their own accord. Ego is driving.

Ego, despite the moniker, is not a person. “Ego” is what Charlie calls the Full Self-Driving beta function on his car, a test version of Tesla’s self-driving program that is designed to navigate city streets and highways.

When Ego takes over, my riding experience immediately feels different, less familiar. Charlie’s driving was smooth and confident; Ego’s driving feels jerky and neurotic.

Ego drives us down suburban streets, past cars, bicyclists, and pedestrians. It doesn’t come close to hitting anyone (you can tell Ego is programmed to be extra careful around pedestrians), and it gets us where we want to go. But it moves unnaturally. The wheel jitters. Sometimes the car moves haltingly, slowing on empty streets or stopping abruptly in intersections. At other times it moves like a missile, accelerating rapidly into left-hand turns or sweeping within inches of inanimate obstacles. You wouldn’t mistake it for a bad human driver if your eyes were closed. It feels unmistakably robotic. I’m sure that many of Ego’s peculiarities reflect temporary technical problems, but it’s hard to shake the sense that there’s something fundamentally alien at the wheel.

Despite my unease about Ego, I never felt unsafe while Ego was driving. That’s because Charlie was attending assiduously to Ego’s movements. Whenever Ego would do something weird, Charlie would fiddle with the console to inform Tesla’s algorithms that something went wrong. And when Ego started to do something egregious or annoying, Charlie would grab the wheel and manually navigate us to a new situation. I soon realized that it wasn’t accurate to say that Ego is driving or that Charlie is driving. The better thing to say is that they’re driving together.

This is how Charlie sees things, too.

Over time it’s started to feel like it’s a team effort, that we’re working together. And I think that way because it messes up in the same spots. It’s very predictable. It shows me what it’s going to do, and I can override some of those functions. So it’s kind of like it’s doing the actual task of driving, but I’m overseeing it and making sure that it’s, you know, not crashing. So I do feel like it’s a team effort.

This dynamic piques my interest. I’ve spent a lot of time thinking about moral responsibility in contexts of shared agency. Participants in shared agency are often praised or blamed for actions or outcomes that originate outside the sphere of their own individual agency. For example, if a medical provider working as part of a healthcare team goes above and beyond in a moment of crisis to save a patient’s life, the team members who enabled or supported the provider’s care may share some praise for saving the patient even though they weren’t directly involved in the crisis.

Whenever a person’s moral status (including their praiseworthiness or blameworthiness) depends upon factors that are at least partly outside of their control, they are subject to what’s called moral luck.

Moral luck is controversial because in the abstract we tend to think that a person’s moral status should be based on the quality of their intentions, choices, or character, on things they can fully control. However, our intuitions about particular cases often suggest otherwise.

A classic example involves drunk driving: we tend to morally blame drunk drivers who hit and kill children much more harshly than equally negligent drunk drivers who luckily get home safely.

In the past, I’ve argued that moral luck is a common feature of shared agency because when you act jointly with other people your moral status can be affected by them in ways you can’t fully anticipate or control. You might find yourself to blame for another agent’s actions. And as I watched Charlie and Ego drive around town together, I couldn’t help but feel that their shared activity exhibited a similar dynamic.

Ego does not meet the conditions required for moral responsibility. But Charlie does. He is the responsible adult in this activity, which is inherently risky and could result in serious harms. It’s natural to think that he is responsible for it, even if, because he and Ego are sharing the reins, he is not fully in control of how it unfolds.

If that’s right, then people who use self-driving programs are susceptible to moral luck because they can be on the moral hook for what these programs do. And this luck is analogous to the luck involved in shared agency between people.

It’s possible to complicate this line of thought. For one, it will not always be feasible for people to productively intervene to prevent harmful self-driving malfunctions, especially as the technology becomes more sophisticated and reliable. Accidents often happen quickly, and intervening can make things worse. When an accident involving a self-driving car is not due to the human driver’s negligence (or some other morally criticizable error), many people will say that the human driver is not morally responsible. Moreover, the human driver is not the only potentially responsible person in the mix. As my colleague Patrick Lin has pointed out, those who design self-driving cars can bear responsibility for bad outcomes that result from criticizable design choices. In fact, in many situations designers would seem to be better candidates for blame than drivers, since, unlike drivers, designers have the luxury of time and forethought.

These points are both important, but they are compatible with the claim that human drivers are subject to a significant sort of moral luck by way of self-driving cars. At least when a human driver’s negligence leads to a harmful self-driving accident that would not have occurred had the driver not been negligent, it seems reasonable to say that the driver is blameworthy for that accident, even if other parties, such as designers or other drivers, bear some responsibility, too.

Reactions like praise and blame perform important functions in human life. Thus, thinking about the specific conditions under which humans are and are not morally responsible for self-driving cars is worthwhile. However, it is perhaps possible to overemphasize the importance of fault and blameworthiness here.

The more reliable and autonomous self-driving cars become, the more tempting it will be for human drivers to morally, socially, and personally distance themselves from harmful accidents involving their self-driving cars with the thought: “There’s nothing I could have done; I am unfortunate but as blameless as a mere spectator.”

This thought may be true, but it threatens to obscure in the driver’s conscience the fact that the driver’s own agency bears a special relation to the accident. There is something unsavory about someone who refuses to acknowledge this special relation. It’s appropriate for the driver, even if blameless, to feel a special type of first-personal regret about her choice to take the self-driving car out for a spin that day, a regret that is different from the sadness a spectator might feel and that might motivate her to make amends or apologize if she can. The willingness to take responsibility for those aspects of one’s embodied agency that fall outside of one’s control is a manifestation of a virtuous spirit and seems wholly appropriate – indeed, requisite – for those who choose to risk others’ lives by using self-driving cars.

The upshot is that using a self-driving car is morally risky, even for the most conscientious users. This is true of conventional cars as well. But the risk associated with self-driving cars is special because it originates in the actions of an artificial agent that has the potential to do great harm.

For now, I suspect that most self-driving car users are acutely aware of this. Charlie certainly is.

“If I was not paying attention, and I hit someone, I would feel 100% responsible. And I probably would feel at least mostly responsible if I was paying attention. So it very much feels like I am responsible for what it does.”

Blaming the Blasphemer

photograph of Salman Rushdie

As I write, Salman Rushdie is in hospital on a ventilator, having been stabbed in the neck and torso while on stage in New York. His injuries are severe. It is, at this moment, unknown if he will survive.

Rushdie’s novel The Satanic Verses, a work of fiction, is considered blasphemous by many Muslims, including the late Ayatollah Khomeini. For those who don’t know, the Ayatollah issued a public fatwa (religious judgment) against Rushdie, calling on all Muslims to kill him, with a reward of $3,000,000 and immediate passage to paradise on offer. The cash reward was recently raised by $600,000, though the Iranians seem to have struggled to improve on the offer of eternal paradise.

In 1990, Rushdie attempted to escape his life in hiding. He claimed to have renewed his Muslim faith of birth, stating that he did not agree with any character in the novel and that he does not agree with those who question “the authenticity of the holy Qur’an or who reject the divinity of Allah.” Rushdie later described the move as the biggest mistake of his life. In any case, it made no difference. The fatwa stood. “Even if Salman Rushdie repents and becomes the most pious man of all time,” Khomeini stated, “it is incumbent on every Muslim to employ everything he has got, his life and his wealth, to send him to hell.”

There are now reports of celebration in Tehran. “I don’t know Salman Rushdie,” Reza Amiri, a 27-year-old deliveryman told a member of the Associated Press, “but I am happy to hear that he was attacked since he insulted Islam. This is the fate for anybody who insults sanctities.” The conservative Iranian newspaper Khorasan’s headline reads “Satan on the path to hell,” accompanied by a picture of Rushdie on a stretcher.

Rushdie is not the only victim of the religious backlash to his novel. Bookstores that stocked it were firebombed. There were deadly riots across the globe. And others involved with the publication and translation of the book were also targeted for assassination, including Hitoshi Igarashi, the Japanese translator (stabbed to death), Ettore Capriolo, the Italian translator (stabbed multiple times), the Norwegian publisher William Nygaard (shot three times in the back outside his Oslo home), and Aziz Nesin, the Turkish translator (the intended target of a mob of arsonists who set fire to a hotel, brutally murdering 37 people).

These attacks, including the latest on Rushdie, and the issuing of the fatwa are all very obviously morally reprehensible. But there is perhaps a bit more room for discussion when it comes to the choice of Rushdie to publish his novel.

Is it morally permissible to write and publish something that you know, or suspect, will be taken to be blasphemous, that you think will result in the deaths of innocents?

At the time of the original controversy, this question divided Western intellectuals.

Western critics of Rushdie included the Archbishop of Canterbury, Prince Charles, John le Carre, Roald Dahl, Germaine Greer, John Berger, and Jimmy Carter. “Nobody has a God-given right to insult a great religion and be published with impunity,” wrote le Carre, calling on Rushdie to withdraw the book from publication.

In The New York Times, Jimmy Carter wrote: “Rushdie’s book is a direct insult to those millions of Moslems whose sacred beliefs have been violated.” Rushdie, Carter contended, was guilty of “vilifying” Muhammad and “defaming” the Qur’an. “The author, a well-versed analyst of Moslem beliefs,” complained Carter, “must have anticipated a horrified reaction through the Islamic world.” John Berger, author, Marxist, and literary critic, provided a similar condemnation of Rushdie and his publishers in The Guardian, noting that his novel “has already cost several human lives and threatens to cost many, many more.” Roald Dahl, the well-loved children’s book writer, concurred: “he must have been totally aware of the deep and violent feelings his book would stir up among devout Muslims. In other words, he knew exactly what he was doing and he cannot plead otherwise.”

These intellectuals’ central contention was that Rushdie had acted immorally by publishing the book and thereby causing unnecessary loss of life.

(Both Carter and Berger also offered clear condemnations of both the violence and the fatwa.)

A peculiar thing about this critique is that Rushdie never attacked anyone. Other people did. And these murderers and attempted murderers were not encouraged by Rushdie, nor were they acting in concordance with Rushdie’s beliefs or wishes. The criticism of Rushdie is merely that his actions were part of a causal chain that (predictably) produced violence, ultimately on himself.

But such arguments look a lot like victim-blaming. It would be wrong to blame a victim of sexual assault for having worn “provocative” clothing late at night. “Ah!” our intellectual might protest, “But she knew so much about what sexual assaulters are like; it was foreseeable that by dressing this way she might cause a sexual assault to occur, so she bears some responsibility, or at least ought not to dress that way.” I hope it is obvious how feeble an argument this is. The victim, in this case, is blameless; the attacker bears full moral responsibility.

Similarly, it would be wrong to blame Rushdie for having written a “provocative” work of fiction, even if doing so would (likely) spark religious violence. The moral responsibility for any ensuing violence would lie squarely at the feet of those who encourage and enact it.

It is not the moral responsibility of an author to self-censor to prevent mob violence, just as it is not the moral responsibility of a woman to dress conservatively to prevent sexual assault on herself or others.

“I do not expect many to listen to arguments like mine,” wrote Rushdie-critic John Berger, a bit self-pityingly (as Christopher Hitchens noted) for one of the country’s best-known public intellectuals writing in one of the largest newspapers in Britain, “The colonial prejudices are still too ingrained.” Berger’s suggestion is that Rushdie and his defenders are unjustifiably privileging values many of us find sacred in the West — such as free expression — over those found sacred in the Muslim world.

But there is another colonial prejudice that is also worth considering: the insulting presumption that Muslims and other “outsiders” have less moral agency than ourselves. According to this prejudice, Muslims are incapable of receiving criticism or insult to their religion without responding violently.

This prejudice is, of course, absurd. Many Muslims abhor the violent response to The Satanic Verses and wish to overturn the blasphemy laws which are so common in Muslim-majority countries. It is an insult to the authors who jointly wrote and published For Rushdie: Essays by Arab and Muslim Writers in Defense of Free Speech. It disregards the 127 imprisoned Iranian writers, artists, and intellectuals who declared:

We underline the intolerable character of the decree of death that the Fatwah is, and we insist on the fact that aesthetic criteria are the only proper ones for judging works of art. To the extent that the systematic denial of the rights of man in Iran is tolerated, this can only further encourage the export outside the Islamic Republic of its terroristic methods which destroy freedom.

Rushdie’s critics, keen as they were to protect a marginalized group, condemned Rushdie for causing the violence committed by individual Muslims. But in doing so, these intellectuals treated the Muslim perpetrators of that violence as lacking full moral agency. You can’t cause autonomous people to do something – it is up to them! Implicitly, Rushdie’s Western critics saw Muslims as mere cogs in a machine run by Westerners, or “Englishmen with dark skin” such as Rushdie, as feminist Germaine Greer mockingly referred to him. Rushdie’s critics saw Muslims as less than fully capable moral actors.

True respect, the respect of moral equals, does not ask that we protect each other from hurt feelings. Rather, it requires that we believe that each of us has the capacity to respond to hurt feelings in a morally acceptable manner – with conversation rather than violence. In their haste to protect a marginalized group, Rushdie’s critics forgot what true respect consists of. And in doing so, they blamed the victim for the abhorrent actions of a small number of fully capable and fully responsible moral agents. This time around, let’s not repeat that moral mistake.

On the Appropriateness of Shame

photograph of the Statue of Cain in Paris

Shame has taken up a prominent role in the public discourse recently. For instance, The Atlantic’s Conor Friedersdorf recently tweeted, arguing that Americans have an obligation to right past wrongs but not to feel shame over “wrongs perpetrated before our births.” Shame also plays a role in discourse about the pandemic. Earlier on, people might have felt shame over getting COVID-19: “If someone who thought they were being careful got the virus, well…maybe they weren’t being so careful.” And now the issue of vaccine shaming arises, with debates over whether people should be shamed for not getting the vaccine.

But shame is a nuanced thing. It is an emotion we feel, but it is also something we do to other people. I might feel shame, but I might also try to get you to feel shame: I shame you. This leads to two different questions: When is it appropriate to feel shame? When is it appropriate to shame somebody?

One mistake, a mistake that Friedersdorf makes, is to tie shame too tightly to wrongdoing. Some emotions are linked to wrongdoing. For instance, guilt tends to be linked to having done something morally wrong. And you certainly can be ashamed of your own wrongdoing. But there are more things in heaven and earth than moral rightness and wrongness. Some things are ugly, pitiful, or bad in non-moral ways. You might also be ashamed that you have a large nose, or you might be ashamed that you were too cowardly to take an exciting opportunity.

If shame were tied only to your own wrongdoing, then shame over wrongs perpetrated before your birth would be nonsensical. But shame isn’t even just tied to what you have done, hence the possibility of being ashamed of your nose. Shame is instead based on who we are. And shame is distinctly interpersonal: much of the time we feel shame because we know others think poorly of us (perhaps because of our looks or our inability to better ourselves). Further, who we are is based on our broader connections to other people: being in a family, being a fan of a certain sports team, or being a citizen of someplace or other.

So, you might be ashamed not of your own wrongdoing, but of the wrongdoing of your father. And you might be ashamed of your country, too. Nikole Hannah-Jones said that she was ashamed of America’s bombing of Hiroshima.

Now, you might question whether we should feel ashamed of things we haven’t done, of things we are merely associated with. But such shame does seem to make sense. For one, it seems perfectly reasonable to care about our non-moral qualities and to care about what others think of us. Secondly, shame and pride come hand-in-hand. Parents are proud of what their kids have done, and people are proud of their country’s achievements. Hannah-Jones was right when, responding to Friedersdorf, she pointed out that if you want to feel proud of your country – for what it does well now, and what it has done well through its history – you better be willing to be ashamed of it, too, for what it does badly and what it did badly in the past.

So, we can be ashamed of many things, including things we haven’t done. What about shaming somebody else? When should we shame people? Perhaps the obvious answer is: when they have done something shameful.

Though there might be a variety of forms of shaming, how shaming works should be fairly obvious: if you fail to meet certain standards, other people – remember, shame is interpersonal – can point out that they think less of you. For this to be effective, you need to then reflect on your failures, and this can involve feeling shame: you see why they think less of you, and you think less of yourself for it. Perhaps this process even must involve shame: to fully appreciate your failure might require that you do in fact feel ashamed of it.

So, when should we shame people? Again, the obvious answer is “when they do something shameful,” but that is too simple. It can depend on the relationship between people. You – a serial card cheat – might have no right to tell me that it’s wrong to count cards. You – a stranger on the street – might have no right to tell me not to be so rude to my wife (whereas our friends can step in and say something). So, shaming might be inappropriate if you are a hypocrite or if you have no business in judging me, whereas if you are a respected member of my community and my actions negatively affect my community, you might be well placed to shame me.

We must also keep in mind that some forms of shaming might carry costs: rather than making somebody feel mildly ashamed for a past misdeed, you might make them feel awful. And we need to be careful, as Kenneth Boyd noted in this venue, because shaming can be unfair, picking out individuals for something that was more acceptable at the time, and it can be a tool of bigotry, shaming people for being a minority and perpetuating harmful systems of oppression.

So, should we shame people for not getting vaccinated? Firstly, not all the unvaccinated have acted shamefully. In places where it can be hard to get time off of work to get the jab (or where people are not aware that they are entitled to time off), or in places where misinformation is rife, perhaps they are meeting or exceeding the standards we should expect of them as fellow members of the public. Or they may have genuine, conscientious objections.

But it is more likely that opposition to “vaccine shaming” turns on the idea that shaming is ineffective. Somebody might be acting shamefully: they might be failing to protect others, relying upon an overly individualized notion of rights (and failing to recognize how they interact with others in a society), and failing to evaluate the evidence properly because – though they should know better – they have been captured by petty, angry politics. It can be frustrating to be told not to shame these people. But if our aim is to get them to take the vaccine, we need to find an alternative strategy that doesn’t prompt a retreat into deeper skepticism.

Or, so the argument goes. But maybe that argument is wrong: there is some evidence that appealing to the sense of shame or embarrassment someone would feel if they spread COVID to a loved one is somewhat effective at increasing the vaccination rate. Ultimately, I don’t know when Americans should feel shame for what happened in the past. And I don’t know when we should shame people for their behavior in this pandemic. I do know that to have a well-informed public discussion, we need to understand the many facets of shame.

Liability and Luck

photograph of lone firefighter standing before small wildfire blaze

In the unlikely event that you have not yet experienced your daily dose of despair concerning the fate of humanity, then I’d highly encourage you to read Elizabeth Weil’s ProPublica piece “They Know How to Prevent Megafires. Why Won’t Anybody Listen?” The article makes two basic points. 1) Extensive controlled burns would be an effective precautionary strategy that would prevent recurring megafires. 2) There are political and financial incentives which trap us into reactionary rather than precautionary fire strategies.

There are clearly lots of perverse incentives at play, but one part of the article was especially interesting:

“How did we get here? Culture, greed, liability laws and good intentions gone awry. There are just so many reasons not to pick up the drip torch and start a prescribed burn even though it’s the safe, smart thing to do. . . . Burn bosses in California can more easily be held liable than their peers in some other states if the wind comes up and their burn goes awry. At the same time, California burn bosses typically suffer no consequences for deciding not to light. No promotion will be missed, no red flags rise. ‘There’s always extra political risk to a fire going bad,’ Beasley said. ‘So whenever anything comes up, people say, OK, that’s it. We’re gonna put all the fires out.'”

It is risky to engage in controlled burns. Things can go wrong, and when they do go wrong it could be pretty bad: someone could lose their home, maybe even their life. Of course, it is far riskier, in one sense, to not engage in controlled burns. So why, then, are our incentives set up the way they are?

At least two different explanations are likely at play.

Explanation 1: Action vs Inaction. First, in general, we are more responsible for actions than for inactions. The priest who ‘passed by the other side’ of a man left for dead did something terrible, but did not do something as terrible as the thieves who beat the man up in the first place. As a society we jail murderers; we don’t jail the charitably apathetic, even if the apathetic are failing to save lives they could save.

And indeed, this point does have an appropriate corollary when talking about fire suppression. I am not responsible for houses burning in California — this is true even though last spring I could have bought a plane ticket, flown to California, and started burning stuff. Had I done so, things would likely have gone terribly wrong, and in that case I really would have been responsible for whatever property I had destroyed. This seems appropriate; it could be catastrophic if my incentives were structured such that I was punished for not starting vigilante fires.

Elizabeth Anscombe gives us a similar example. If the on-duty pilot and I are both asleep in our cabins, then we are doing the very same thing when our ship hits an iceberg. Yet it was the pilot, and not I, who sank the ship. Indeed, had I, a random passenger, tried to navigate the ship, we would absolutely have held me responsible when something went wrong.

So, what is the principle here? Is it that amateurs are specially responsible for actions? No, because we can also identify cases where we indemnify amateurs for their actions. Perhaps the best example here is good Samaritan laws. These laws protect untrained people, like myself, if we make a mistake when trying to render emergency first aid.

What is really going on is that we don’t want passengers trying to navigate ships. Nor do we want aspiring philosophers attempting unsupervised controlled burns in California. But we do want pilots to navigate ships, and we do want burn bosses attempting controlled burns. As such, we should construct incentives which encourage that, and protect people from culpability even if things occasionally go wrong.

Explanation 2: Causal Links. Second, we trace responsibility through causality. Because you caused a house to burn down you are, at least partially, responsible for that damage. The problem is, it is almost always easier to trace causality to actions than to inactions. We can identify exactly which active burning causes damage. We can easily say, “the fire you started on February 14th destroyed these two houses.” It’s much harder to say “the not burning that you didn’t do on February 14th was what allowed the fire to get out of hand.”

And indeed, we probably can’t really hold people responsible for any particular failure to burn. We can hold people responsible for how much controlled burning they do in general, but we can’t trace causal paths to hold them responsible for any particular bad result of inaction. Indeed, it would be unfair to do so: no burn boss can foresee when a particular failure to burn will destroy a house (in the way they can sometimes foresee when burning in a particular area might destroy a house). This creates a problem, though. Because we can’t hold people fully responsible for their inaction, we end up holding people disproportionately responsible for actions, thus perversely incentivizing inaction.

This also parallels our interpersonal lives. For example, we generally want people willing to think for themselves. But we are also far more likely to condemn people for reaching terrible views they came up with themselves than for failing to recognize what is wrong with the conventional view. This can create perverse incentives, however. It might really be true that we are justly responsible for coming to terrible conclusions, but because it is so hard to hold people responsible for the majority view it might be important to forgive even egregious mistakes to keep incentives favoring original thought.

So here is the general point. Assessing responsibility is far more complicated than just establishing whether someone played a causal role. Sometimes holding people responsible for things they really should not have done can perversely disincentivize them from taking risks we want them to be willing to take. The fires in California give one clear example of this, but the point generalizes to our lives as well.

When Are Leaders Culpable?

photograph of pyramid of wooden cubes indicating people on yellow background

When are leaders, especially politicians, morally culpable for the deaths their decisions and actions cause? This is a hard question, of course, because culpability comes in degrees. For example, Sally is culpable for murder if she knowingly kills someone without moral reason (e.g., self-defense); however, Sam is less culpable than Sally if he knowingly sells someone a defective automotive part which results in a fatal car accident. By the same token, the culpability of leadership comes in degrees too. This issue was made especially salient recently when Kristen Urquiza, at the Democratic National Convention, shared how she lost her father due to coronavirus complications, arguing her father likely wouldn’t have died had he ignored President Trump’s downplaying of the threat. This isn’t an isolated problem. President Trump misled Americans about the impact of the pandemic, with disastrous results, in an attempt to revive his reelection prospects. We may wonder, then, about the blame leaders deserve for the deaths they cause.

There is an obvious way leaders, and politicians in particular, are directly culpable for the deaths of their citizens: starting an unjust conflict, like a war, without accurately assessing the long-run consequences. Leaders look blameworthy here because of the incentive structure at play: soldiers on a battlefield often face perverse incentives, like the prospect of prison, if they don’t carry out an order. This of course isn’t to deny that soldiers share some blame for following orders they know are wrong. However, leaders share in this responsibility given the position of power they hold, especially if they order something they know is unjust.

For example, we should be hesitant to accept that a proposed war is legitimate given the historical record: throughout history, especially recently, wars are often justified with moral language. Perhaps a group living in the targeted nation or region is claimed to have wronged us somehow; perhaps our invasion would help set things right; perhaps we would be justified using force to get back what was wrongly taken from us. If these kinds of justifications for war sound familiar, it is because they are. It is too easy to use flimsy moral appeals to justify things we would otherwise think morally wrong. We are susceptible to this sort of thing as individuals, so it wouldn’t be surprising if politicians and governments routinely abuse their trust to leverage baseless moral justifications to convince their citizens and constituents that the proposed war would be morally permissible.

Things are less clear when morally weighing an order from a leader or politician not intended to cause harm, but with foreseeable negative consequences. Some ethicists appeal here to what is known as the doctrine of double effect: an order or action is morally acceptable, even if it has bad and foreseen consequences, if those consequences are the by-product of a morally good, intended action. For the sake of argument: even if abortion is morally bad, on this doctrine a doctor may still abort a fetus if the intention is to save the pregnant mother’s life: the intended, morally good outcome (saving the mother’s life) can’t occur without the bad, unintended outcome (aborting the fetus). Whether the doctrine of double effect exonerates leaders and politicians for ordering a war, even a just war, with very bad foreseen consequences is controversial.

What about indirect culpability of leaders and politicians? Things are dicier here. However, we can still call to mind cases that may help us think through indirect culpability. An obvious and recent case is that of managing the coronavirus in the United States: the current United States President, Donald Trump, downplayed the threat of the coronavirus and gave poor advice to U.S. citizens. This is not, of course, to say that the current U.S. president intended for people to die of coronavirus; but it does illustrate he could well have indirectly contributed to citizens’ deaths by downplaying the virus, and playing up ‘cures’ that ultimately failed.

We should pause here to reflect on why the current U.S. President — or any leader similarly situated — looks indirectly culpable for such deaths, even if he isn’t nearly as culpable as he would be for, say, starting an unjust war. There is an obvious source of indirect culpability here: abusing the trust placed in him by his followers and constituents. If Harry knows his constituents trust him (whether this is poor judgment on their part or not), he bears indirect culpability for what happens to them if he knowingly gives them bad advice, and they act on it, especially if they wouldn’t have acted that way had they not trusted him. This would be wrong, just as it would be wrong for a physician to knowingly give dangerous medical advice to her patients, especially knowing they only took her advice because they trusted her good intentions and competence.

This is because, broadly speaking, when there is trust, there is vulnerability. When I trust that someone is competent and has my best interests at heart, I place myself in a vulnerable position that can be exploited by those with bad intent. The point generalizes to the ethics of leadership: a leader may be in a position to exploit their followers because of the trust placed in them by their followers, even though such trust is only placed in them on the condition that the leader has their best interests at heart. And if the leader used that trust to knowingly put their followers in harm’s way for their own ends, they bear some responsibility for that bad outcome, even if it was unintended.

Moral Luck and the Judgment of Officials

photograph of empty tennis court and judge's chair

Novak Djokovic was defaulted from the US Open last week for violating the Abuse of Balls rule. During the first set of his fourth-round match with Pablo Carreño Busta, he struck a ball to the back of the court without looking. This resulted in the ball hitting a line judge. The referee, Soeren Friemel, after consulting with other officials, made a ruling to bypass the Point Penalty Schedule and issue an immediate default. In other words, Djokovic lost the match, left the tournament, forfeited all of his winnings in the tournament, and is subject to further fines. In the aftermath of this incident, many of the TV commentators discussed the severity of the injury to the judge, the correctness of the ruling, and Djokovic’s bad luck. The bad luck was in reference to the fact that just as Djokovic was striking the ball, the line judge straightened up from her bent-over position, which put her head in the direct path of the ball.

As I watched the events unfold and before the ruling was made, I immediately began to think about the fact that the referee’s judgment was going to hinge on the problem of moral luck. This problem was initially discussed by Bernard Williams and Thomas Nagel in a two-part article in 1976. Dana Nelkin describes the problem as one that “occurs when an agent can be correctly treated as an object of moral judgment despite the fact that a significant aspect of what she is assessed for depends on factors beyond her control.” In other words, judgments of moral approval or disapproval, including the imposition of sanctions, can depend upon accidents or choices by third parties. The problem can be exemplified by considering two teenagers drag racing. Both of them are using poor judgment as well as speeding. The car on the right is clearly pulling ahead of the car on the left (due, let’s say, to crummy spark plugs in the left car) when an animal darts out into the street from the left. Neither teen attempts to avoid hitting the animal because neither sees the animal. As luck would have it, even though the animal darts into the road from the left, the car on the left misses the animal but the car on the right strikes it. Is it really the case that the driver on the left is morally innocent compared to the driver on the right? Had it not been for the crummy spark plugs the driver on the left would have struck the animal; had it not been for the presence of the animal the accident would not have occurred at all.

What seems to be at issue here, Nelkin explains, is the acceptability of two ideas, one called the Control Principle and the other a corollary of that principle.

Control Principle (CP): We are morally assessable only to the extent that what we are assessed for depends on factors under our control.

CP-Corollary: Two people ought not to be morally assessed differently if the only other differences between them are due to factors beyond their control.

At first, these ideas seem to be intuitively acceptable. To accept them means that luck should play no role in moral assessment. But notice that they imply, in our stipulated example of drag racing, that the driver on the left seems to be just as culpable as the driver on the right for hitting the animal — either both are culpable or neither is culpable. After all, the only differences between the two drivers are factors beyond the control of either driver, and both were in control of the decision to drag race. So, what is to be questioned? Should the judgment that the two drivers have different levels of culpability be jettisoned or should CP and its corollary be abandoned?

This hypothetical case is analogous to the situation with Djokovic. A few points before the offending event, Djokovic much more angrily and with much more force slammed a ball into a side wall of the court. No one was injured. He was not warned, given a point penalty, or given a game penalty. But, given the rule, the earlier event was just as much of a violation of the rule as the later event. It is worth seeing the rule in its entirety:

ARTICLE III: PLAYER ON-SITE OFFENSES

  1. ABUSE OF BALLS Players shall not violently, dangerously or with anger hit, kick or throw a tennis ball within the precincts of the tournament site except in the reasonable pursuit of a point during a match (including warm-up). Violation of this Section shall subject a player to fine up to $20,000 for each violation. In addition, if such violation occurs during a match (including the warmup) the player shall be penalised in accordance with the Point Penalty Schedule hereinafter set forth. For the purposes of this Rule, abuse of balls is defined as intentionally hitting a ball out of the enclosure of the court, hitting a ball dangerously or recklessly within the court or hitting a ball with negligent disregard of the consequences.

What should be noticed is that the mere act of hitting a ball “violently, dangerously or with anger,” regardless of whether anyone is injured, is sufficient to violate the rule. So, the earlier act by Djokovic was sufficient for Friemel to issue a warning in accordance with the Point Penalty Schedule. Nowhere in the code does it specify that Friemel may skip directly to default based on the poor luck of the ball hitting and injuring someone, though, as with all officials in sports, part of his job is to use judgment to make decisions. But it seems as if part of the decision not to issue a warning for the earlier outburst and to default Djokovic for the later outburst included a rejection of the control principle and its corollary. Otherwise, it seems as if the only difference between the two events was the placement of the line judge and the fact that just as Djokovic hit the ball she stood up in a way that placed her head in the direct path of the ball. Both of these elements were beyond the control of Djokovic. So, if CP is operative, then Djokovic seems to be equally culpable and equally deserving of being defaulted for the earlier outburst as for the one that resulted in the injury to the line judge. By abandoning CP, we can say that while Djokovic clearly violated the rule earlier, he did not need to be sanctioned, because the outcome was luckily different.

But now comes the twist. It looks like other officials at the match bear some responsibility for the line judge’s injury.

What do we say about Friemel’s non-application of the rule earlier in the match? Furthermore, what do we say about the officials at the Western & Southern Open just a few days before who did not default Aljaz Bedene for hitting a camera operator in a similar situation? Here we have an almost identical set of facts, but the injury sustained by the camera operator did not require immediate medical attention, unlike that of the line judge injured by Djokovic. The rules do not make an explicit allowance for the severity of the injury to factor into the judgment of the officials, but in these three cases, the severity of the injury was considered. The different decisions make sense if we abandon the control principle, because those different outcomes, which were due in part to factors beyond the control of the players, seem to allow for different judgments.

Now, all we have to do is accept that luck plays a role when making moral judgments. This implies that you can be morally culpable for things beyond your control. Friemel and the other tennis officials seem to be committed to this idea. But now that we know that consequences matter, it appears that Friemel and other officials should also bear some culpability for the injury of the US Open line judge. After all, if we let consequences matter, then we have to confront the suggestion that acts of omission resulting in bad outcomes are open to moral censure. By not giving Bedene a harsher penalty a few days before, and not even issuing a warning a few minutes before in the Djokovic – Busta match, the officials performed acts of omission. These acts of omission appear to support a claim that Djokovic could vent his frustration in violation of the Abuse of Balls rule without fear of serious sanction. The officials are thus, oddly, morally implicated in Djokovic’s transgression. They seem to be responsible for creating a situation in which Djokovic could behave this way. The resulting injury involves actions beyond their control (the line judge standing up and Djokovic hitting the ball). But by abandoning the CP and its corollary, they nevertheless appear to share in the responsibility for the injury.

These observations — to accept or reject the CP as well as the implications of doing so — apply beyond sports. In any social arena, officials who are entrusted with making judgments may have more responsibility for the outcomes of their silence than they want to recognize.

Rio Tinto and the Distinction between Saying ‘Sorry’ and Being Sorry

photograph of Rio Tinto train cutting through landscape

“…we haven’t apologised for the event itself, per se, but apologised for the distress the event caused.” – Chris Salisbury, Rio Tinto Iron Ore CEO

In late May, mining giant Rio Tinto shocked Australia, and the world, by blasting an ancient and sacred Aboriginal site to expand an iron ore mine.

The blast destroyed a cave in the Juukan Gorge, located in the Hamersley Ranges in northern Western Australia, that was one of the oldest of its kind in the Western Pilbara region, and was the only known inland site on the entire Australian continent to show signs of continual occupation through the last ice age (between 23,000 and 19,000 years ago), during which, evidence suggests, most of inland Australia was abandoned as the continent dried out and water sources disappeared. The cave site itself was found to be around 46,000 years old.

The blast received ministerial approval in 2013, consent obtained under Western Australia’s out-dated heritage laws drafted in 1972 to favor mining interests. Following the 2013 approval, archaeological work carried out at the site discovered it to be much older than originally thought, and to be rich with artefacts and sacred objects.

The 1972 Heritage Act does not allow for renegotiation of approvals based on new information. The act is due to be replaced by new legislation, but various factors have delayed the replacement. The new draft bill currently in preparation includes a process of review based on new information. In its response to the new draft legislation, Rio Tinto has submitted a request that consent orders granted under the current system should be carried over.

The blasting of the site was conducted without prior notification to the traditional Indigenous owners or the state government, and has caused deep distress to the Puutu Kunti Kurrama and Pinikura people (PKKP). Among the precious and rare items recovered from the site prior to the blast was a 4000-year-old plaited length of human hair from several people, which DNA testing revealed belonged to the direct ancestors of the living PKKP people.

“It’s one of the most sacred sites in the Pilbara region … we wanted to have that area protected,” PKKP director Burchell Hayes told Guardian Australia.

Peter Stone, Unesco’s Chair in Cultural Property and Protection, said that the destruction at Juukan Gorge was among the worst in recent history, likening it to the destruction of the Bamiyan Buddhas in Afghanistan and the ancient Syrian city of Palmyra.

Rio Tinto claims it was not aware of the importance of the site, nor of the traditional owners’ wish for it to be preserved. But the PKKP Aboriginal Corporation rejected Rio’s suggestion that its representatives had failed to make clear their concerns for the site and their wish for it to be preserved. “The high significance of the site was further relayed to Rio Tinto by PKKP Council as recently as March,” Burchell Hayes said.

Following the blast, Rio Tinto issued an apology to the PKKP people. “We are sorry for the distress we have caused,” Rio Tinto Iron Ore chief executive Chris Salisbury said in a public statement.

Several days after the public apology, a leaked recording from a private Rio Tinto staff meeting found its way to The Australian Financial Review, which reported that Salisbury told staff, “… we haven’t apologised for the event itself, per se, but apologised for the distress the event caused.” In a subsequent interview, Salisbury did not contradict the report and repeatedly refused to answer directly when asked whether the company was wrong to blow up the site, repeating only that it was sorry for the distress.

So, what is going on here? What can we make of Salisbury’s remark that the company had apologized not for the event itself but for the distress it caused?

In taking the line that it did not know about the site’s significance, and attempting to insulate its apology from an admission of responsibility, Rio Tinto is trying to avoid moral blame. But does the separation hold? Can an agent be sorry for causing distress without ipso facto being responsible for causing it? And if so, does Rio’s attempt to excuse its actions from moral blameworthiness succeed?

The attribution of moral blame is not straightforwardly connected to the objective wrongness of an action carried out or caused by an agent. One way to assess the connection in any given case is to consider what conditions would have to be present for an agent to be held morally responsible, that is, to be blameworthy for an action.

It is possible to identify cases in which an agent is blameworthy even if an action is not in itself wrong; or, conversely, in which an agent is not blameworthy even if an action is wrong.

To give a relatively simple example: Jane intends to poison Joe by putting a white substance, which she believes to be arsenic, in his tea. It turns out Jane was mistaken, and the powder was only sugar; nothing happens to Joe, so no objective moral wrong is committed, yet Jane’s intention to poison him is blameworthy. Conversely, if Jane accidentally poisons Joe by putting what she believes to be sugar, but what in fact turns out to be arsenic, in his tea, she is not (necessarily) blameworthy, though the act of poisoning Joe is itself an objective moral wrong.

Here we can see that the salient elements for establishing blame are intention and knowledge. In the first case, Jane’s intention is morally blameworthy, even if the outcome is neutral. In the second case, though Jane has no intention to harm Joe, further questions arise about how Jane came to make this mistake, and whether she should reasonably have been expected to know that the substance she put in Joe’s tea was in fact arsenic rather than sugar.

In the case of Rio’s destruction of the Juukan Gorge cave, we cannot know whether it was Rio’s intention to blast the site over the strong objection of the PKKP owners, though some suspect that it was.

For an action to be morally wrong yet the agent not blameworthy, the agent must have an excuse for carrying it out which absolves them of responsibility. As Holly Smith suggests, “Ignorance of the nature of one’s act is the preeminent example of an excuse that forestalls blame.”

The question, then, is epistemic: for an agent to be held responsible, certain epistemic conditions need to be fulfilled. The first condition is that the agent was aware of the action (that she knew what she was doing); the second, that she was cognizant of the action’s moral significance; the third, that she was aware of its consequences.

The first condition is obviously fulfilled, as the action of blasting the site was deliberate. The second and third conditions, cognizance of the action’s moral significance and awareness of its consequences, are what Rio disputes.

In another statement, made subsequent to the leaked tape of his remarks, Salisbury said he had “taken accountability that there clearly was a misunderstanding about the future of the Juukan Gorge.”

It isn’t clear what having ‘taken accountability’ means, but the appeal to a misunderstanding is an attempt to avoid blameworthiness by denying that an epistemic condition was fulfilled.

However, ignorance can itself be morally culpable. If (in the above example) Jane did not read the box when she could have, say, or if she ignored reasonable suspicions that someone had replaced the sugar with arsenic, then her ignorance does not excuse her from blame for poisoning Joe. It must be noted that there is disagreement among philosophers on this point; while some argue that an agent can be blamed for their ignorance, others maintain that, however criticizable it is, ignorance nonetheless exculpates the agent from moral blameworthiness.

On the former view, if Rio is culpable for its ignorance, that ignorance fails to shield the company from moral blame. This, to me, seems correct — and I would argue that even if we take Rio Tinto at its word that it was not aware of the significance of the site and the PKKP people’s wish for it to be preserved, the company has failed in its responsibility to the traditional owners and is indeed blameworthy.

I might add that taking Rio at its word here seems to me exceedingly generous, and I remind the reader that the PKKP people strenuously denied the suggestion that they had not made their wishes known to the company.

So, regarding the dubious distinction between apologizing for the distress caused and apologizing for the action that caused it: Rio Tinto may say it is sorry, but without an accompanying willingness to accept responsibility, its apology is hollow. It appears the company has apologized out of an ostensible obligation to do so, while showing little genuine remorse for this act of cultural destruction.

Aging and Blaming in the Criminal Justice System

Photograph of a long hall of cells with light and a dome at the end



A recent study in the medical journal The Lancet suggests that, if trends hold, 50% of babies born today will live to be over 100 years old. Though long life is typically thought of as a good thing, some of our ordinary practices may need to change to address the philosophical and practical challenges posed by longer life spans. In particular, we need to reflect on whether our attitudes about blame and punishment need to be adjusted. For example, last year, John “Sonny” Franzese was released from an American prison at the age of 100. Franzese had been sentenced to fifty years for a bank robbery. The unique challenges and philosophical questions posed by extreme old age call into question the moral permissibility of incarcerating the elderly.

Arguably, we need to think critically about the duration of punishment. The criminal justice system in the United States relies heavily on retributivism as a justification for sentencing. The concept of blame is central to a retributivist philosophy of justice. As an act of retribution, criminals are often given multiple life sentences or are sentenced to a number of years in prison that far exceeds the amount of time the criminal could reasonably expect to be alive. There is room for debate concerning the usefulness of blame as a moral concept. Supposing, however, that blame is an important evaluative attitude in our moral lives, there is good reason to reflect on whether, and under what conditions, other moral considerations are more important than whether an agent is morally blameworthy. As lifespans increase, a life sentence becomes a still more serious proposition. At what point, if any, does respect for human dignity outweigh the retributivist concern to ensure that a blameworthy agent is held responsible for their actions?

Intuitively, regardless of the nature of the crime, there are some upper limits to how long it is appropriate to punish someone.  For example, in his paper Divine Evil, David Lewis points out that it could never be just to punish a person infinitely for a finite crime.  Of course, in the context of the paper, Lewis is arguing that an omnibenevolent God couldn’t sentence a person to an eternity of torment in hell for a finite sin, but the main point here holds.  If human beings were immortal, it would be unjust to hold them in prison forever with no chance of release as punishment for a single crime or series of crimes.  That suggests that there is a time at which continuing to punish a blameworthy person is no longer morally justified. Some countries, like Portugal, Norway, and Spain, don’t sentence convicted criminals to life in prison at all.  In many other European nations, a life sentence always includes the possibility of parole. The understanding seems to be that a life sentence without the possibility of parole is a human rights violation. Even if the United States does not come around to thinking about the issue in this way, as human lifespans continue to get longer, it’s important to identify the point at which punishment is no longer morally permissible.

For retributivism to be justified, our assessments of blame must be apt. For our judgments of blameworthiness to be apt, it must be the case that we are blaming one and the same person who engaged in the wrongdoing for which they are being blamed. Increased lifespans muddy the waters of identity judgments. An extremely elderly person may have little to no psychological continuity with the being they were when they engaged in wrongdoing. In his paper The Makropulos Case: Reflections on the Tedium of Immortality, Bernard Williams argues that if a being were immortal, or even if that being were to live an exceptionally long life, that being would either become extremely bored or would change so much that they would no longer be justified in judging future experiences as their own. Living a flourishing human life is a matter of setting goals and completing projects. The kinds of goals we set go a long way toward establishing who we are as people. If we continue to set goals of the same type, Williams argues, we will inevitably get bored. If we set different goals, we will eventually become totally different people, unrecognizable to our former selves.

Aging criminals aren’t immortal, but as human lifespans continue to increase, it may well be the case that they resemble their former selves in very few respects.  If this is the case, it is far from clear that our identity judgments are justified or that our assessments of blameworthiness are apt. This recognition should also cause us to reevaluate our goals when it comes to punishment.  As prisoners age, should our philosophy of punishment still be retributivism?

If blame is a useful moral concept, it is, at least in part, because a moral community that makes use of blame has a mechanism for encouraging bad actors to change their behavior in the future. To successfully bring about this change in behavior, it is important that the behavior in question is a salient thread in the life narrative of the wrongdoer. Once enough time has passed that this is no longer true, it’s possible that continuing to blame a wrongdoer no longer serves this important social function in our moral community.