
Neurodivergence, Diagnosis, and Blame

photograph of woman using TikTok on iPhone

If your For You page on TikTok looks anything like mine, you know that there is a veritable trove of content about autism and ADHD, much of it focused on questions about diagnosis. The spread of this online content and discussion has been lauded for the potential good it can do, whether by giving women and non-binary people access to information about conditions that are often missed in those populations or by giving voice to traditionally marginalized groups who often deal with others speaking inaccurately on their behalf.

At the same time, the algorithm may function in ways that trend towards stereotyping the populations in question or pushing content that associates ADHD and autism with things not necessarily related to diagnostic criteria (e.g., ADHD with talking fast or autism with disliking the big room light). This can lead to misunderstandings and poor self-diagnosis that misses underlying issues, such as someone mistaking bipolar disorder for ADHD. While similar misunderstandings and misdiagnoses can happen in medical contexts, those who rely on questionably-credentialed social media influencers may be more susceptible to misinformation.

But why is having a diagnosis so appealing? What does the diagnosis do for autistic and ADHD individuals?

I suspect that at least one part of the answer is found in our practices of blame and our beliefs about who deserves support: the diagnosis promises less self-blame and blame from others and more understanding and accommodations.

How might a diagnosis lead to less self-blame and blame from others? There are several possible philosophical answers to this question.

The first answer is relatively common: ADHD and autism are caused by brain chemistry and structure — they should be seen as medical or neurological conditions, not moral ones. On the purely medical view, ADHD and autism have nothing to do with character or who that person is as a moral agent. So, if someone is diagnosed with ADHD or autism, they shouldn’t be blamed for anything resulting from those conditions because they’re simply medical problems that are out of one’s control.

This answer has a few benefits: the medical diagnosis adds a sense of legitimacy to the experience of individuals with ADHD and autism, it provides access to medical care, and it gives a clear conceptual apparatus for communicating to others about the specific accommodations that are needed.

At the same time, the purely medical answer has key drawbacks.

First, the medical mode is often moralized in its own way, with its own norms about health, disease, and disorder. Sometimes this is appropriate, but other times natural variations in human expression become labeled as disorders or deficits when they should not be (see how intersex people have been treated or the history of eugenics). The aim of medicine is often to provide a cure, but some things do not need to be cured. Medical care can be, and often has been, helpful for individuals needing access to Adderall or Ritalin to function, but the purely medical mode has its limits for understanding the experiences of individuals with ADHD and autism.

Second, the medical mode tends to locate the problem in the individual, though some public health approaches have started to move towards structural and social thinking. Those with ADHD and autism may experience their condition as a disability in large part because of a lack of social support and understanding rather than a purely internal discomfort.

Third, the medical mode cannot always be separated from character. See, for example, the overlap of depression and grief or the fact that even normal psychological states are also caused by brain chemistry and structure.

In the case of autistic and ADHD individuals, the condition isn’t something that can be easily carved off from the person, because it affects broad domains of the person’s life. In trying to separate out the autism or ADHD, others can easily create the idea of a “real” non-autistic, non-ADHD person, which can lead to failing to love and appreciate the actual person.

The second philosophical answer to the question as to how a diagnosis might lead to less blame is a capacities-based view of moral responsibility. This view is similar to the medical mode, in that the focus is often primarily on the individual, but it differs in its decidedly moral focus. On the capacities view, agents are morally responsible if they have some normal (or minimally normal) capacities of reasoning and choice. Agents are not responsible if they lack these capacities. There are ways of refining this kind of view, but let’s take the basic idea for now.

If we combine this kind of philosophical idea with the idea that ADHD and autistic people are deficient with regard to some of these capacities necessary to be a morally responsible agent, then it would make sense that ADHD and autistic folks would be either less responsible or not responsible at all in certain domains. But if the point of accommodations is to increase capacities, then accommodations should be supported. However, like the medical approach, there are a few drawbacks to at least some versions of this view.

First, there isn’t a clear capacities hierarchy between neurotypical people and neurodivergent people. While someone with ADHD may have trouble starting on a large project in advance, they may work exceptionally well under pressure. Someone with autism may have more difficulty in social situations but could have the ability to focus their time and energy to learn immense amounts of knowledge about a special interest. While parts of the ADHD and autistic experience involve deficits in certain capacities, the overall assessment is much less clear.

Second, claiming that someone with autism or ADHD can’t be a fully morally responsible agent also seems to have the troubling implication that they might not be a full, self-legislating member of the moral community. This kind of view places people with autism and ADHD in the position of, say, a child who has some understanding of moral principles but isn’t yet a full agent.

Neither the medical model nor at least some versions of the capacities model seems to fully provide what people are looking for in a diagnosis. While both offer rationales for removing blame, they can have a dehumanizing effect. The drawbacks to these views, however, teach us some lessons: a good view should 1) consider the whole, actual person, 2) think about the person in their social context, and 3) avoid making the autistic or ADHD person out to be less than a full moral agent.

I think the right question to ask isn’t “how is this person deficient in some way that removes responsibility?” but instead “what expectations are reasonable to place on this person, given who they are at this point in time?”

This is a rough suggestion that requires more development than I can give it here.

There are ethical considerations that enter in at the level of expectations which go beyond questions about capacity. What would it look like to be kind? To give each other space to be comfortable? To accept parts of ourselves we can’t change? To build a world that works for everyone? Capacity is certainly implicated by these questions, but it isn’t the whole picture.

By shifting our focus to the question about what expectations are reasonable to place on an individual person, we are recentering the whole person and recognizing the dis/abilities that the individual experiences.

Experiences with autism and ADHD can be very different from person to person, and the accommodations needed will vary from person to person. The expectations we can reasonably place on people with ADHD and autism may not be any less than those without — they may just be different.

And neurotypical people who interact with ADHD and autistic people may also be reasonably expected to provide certain accommodations. Everyone’s needs should be considered, and no one should be othered.

For example, say that an autistic person says something that comes off as rude to a neurotypical friend. This has happened a few times before, each within a new domain of conversation. Every time, the autistic individual apologizes and explains how autism affects their social communication and understanding of social norms and how they’re trying to get things right. Eventually the neurotypical friend gets upset and says “why do you always use the autism as an excuse to get out of responsibility?”

In this case, it doesn’t seem that the autistic person is abnegating responsibility; rather, they’re clarifying what they are actually responsible for. The autistic person isn’t responsible for intentionally saying something rude; they’re responsible for accidentally saying something rude despite their best intentions otherwise. And the autistic person still apologizes for the hurt caused and promises that they will continue to try to do better in the future. Whichever way the two friends negotiate this part of their relationship, it seems important that they each understand where the other is coming from and that each friend’s feelings are given space.

What does this example tell us about the relationship between diagnosis and blame? Perhaps we need to develop alternative frameworks to recontextualize responsibility, rather than simply diminish it.

Blaming the Blasphemer

photograph of Salman Rushdie

As I write, Salman Rushdie is in hospital on a ventilator, having been stabbed in the neck and torso while on stage in New York. His injuries are severe. It is, at this moment, unknown if he will survive.

Rushdie’s novel The Satanic Verses, a work of fiction, is considered blasphemous by many Muslims, including the late Ayatollah Khomeini. For those who don’t know, the Ayatollah issued a public fatwa (religious judgment) against Rushdie, calling for all Muslims to kill him and receive a reward of $3,000,000 and immediate passage to paradise. The cash reward was recently raised by $600,000, though the Iranians seem to have struggled to improve on the offer of eternal paradise.

In 1990, Rushdie attempted to escape his life in hiding. He claimed to have renewed the Muslim faith of his birth, stating that he did not agree with any character in the novel and that he did not agree with those who question “the authenticity of the holy Qur’an or who reject the divinity of Allah.” Rushdie later described the move as the biggest mistake of his life. In any case, it made no difference. The fatwa stood. “Even if Salman Rushdie repents and becomes the most pious man of all time,” Khomeini stated, “it is incumbent on every Muslim to employ everything he has got, his life and his wealth, to send him to hell.”

There are now reports of celebration in Tehran. “I don’t know Salman Rushdie,” Reza Amiri, a 27-year-old deliveryman told a member of the Associated Press, “but I am happy to hear that he was attacked since he insulted Islam. This is the fate for anybody who insults sanctities.” The conservative Iranian newspaper Khorasan’s headline reads “Satan on the path to hell,” accompanied by a picture of Rushdie on a stretcher.

Rushdie is not the only victim of the religious backlash to his novel. Bookstores that stocked it were firebombed. There were deadly riots across the globe. And others involved with the publication and translation of the book were also targeted for assassination including Hitoshi Igarashi, the Japanese translator (stabbed to death), Ettore Capriolo, the Italian translator (stabbed multiple times), the Norwegian publisher William Nygaard (shot three times in the back outside his Oslo home), and Aziz Nesin, the Turkish translator (the intended target of a mob of arsonists who set fire to a hotel, brutally murdering 37 people).

These attacks, including the latest on Rushdie, and the issuing of the fatwa are all very obviously morally reprehensible. But there is perhaps a bit more room for discussion when it comes to the choice of Rushdie to publish his novel.

Is it morally permissible to write and publish something that you know, or suspect, will be taken to be blasphemous, and that you think will result in the deaths of innocents?

At the time of the original controversy, this question divided Western intellectuals.

Western critics of Rushdie included the Archbishop of Canterbury, Prince Charles, John le Carré, Roald Dahl, Germaine Greer, John Berger, and Jimmy Carter. “Nobody has a God-given right to insult a great religion and be published with impunity,” wrote le Carré, calling on Rushdie to withdraw the book from publication.

In The New York Times, Jimmy Carter wrote: “Rushdie’s book is a direct insult to those millions of Moslems whose sacred beliefs have been violated.” Rushdie, Carter contended, was guilty of “vilifying” Muhammad and “defaming” the Qur’an. “The author, a well-versed analyst of Moslem beliefs,” complained Carter, “must have anticipated a horrified reaction through the Islamic world.” John Berger, author, Marxist, and literary critic, provided a similar condemnation of Rushdie and his publishers in The Guardian, noting that his novel “has already cost several human lives and threatens to cost many, many more.” Roald Dahl, the well-loved children’s book writer, concurred: “he must have been totally aware of the deep and violent feelings his book would stir up among devout Muslims. In other words, he knew exactly what he was doing and he cannot plead otherwise.”

These intellectuals’ central contention was that Rushdie had acted immorally by publishing the book and thereby causing unnecessary loss of life.

(Both Carter and Berger also offered clear condemnations of both the violence and the fatwa.)

A peculiar thing about this critique is that Rushdie never attacked anyone. Other people did. And these murderers and attempted murderers were not encouraged by Rushdie, nor were they acting in accordance with Rushdie’s beliefs or wishes. The criticism of Rushdie is merely that his actions were part of a causal chain that (predictably) produced violence, ultimately against himself.

But such arguments look a lot like victim-blaming. It would be wrong to blame a victim of sexual assault for having worn “provocative” clothing late at night. “Ah!” our intellectual might protest, “But she knew so much about what sexual assaulters are like; it was foreseeable that by dressing this way she might cause a sexual assault to occur, so she bears some responsibility, or at least ought not to dress that way.” I hope it is obvious how feeble an argument this is. The victim, in this case, is blameless; the attacker bears full moral responsibility.

Similarly, it would be wrong to blame Rushdie for having written a “provocative” work of fiction, even if doing so would (likely) spark religious violence. The moral responsibility for any ensuing violence would lie squarely at the feet of those who encourage and enact it.

It is not the moral responsibility of an author to self-censor to prevent mob violence, just as it is not the moral responsibility of a woman to dress conservatively to prevent sexual assault on herself or others.

“I do not expect many to listen to arguments like mine,” wrote Rushdie-critic John Berger, a bit self-pityingly (as Christopher Hitchens noted) for one of the country’s best-known public intellectuals writing in one of the largest newspapers in Britain, “The colonial prejudices are still too ingrained.” Berger’s suggestion is that Rushdie and his defenders are unjustifiably privileging values many of us find sacred in the West — such as free expression — over those found sacred in the Muslim world.

But there is another colonial prejudice that is also worth considering: the insulting presumption that Muslims and other “outsiders” have less moral agency than ourselves. According to this prejudice, Muslims are incapable of receiving criticism or insult to their religion without responding violently.

This prejudice is, of course, absurd. Many Muslims abhor the violent response to The Satanic Verses and wish to overturn the blasphemy laws which are so common in Muslim-majority countries. It is an insult to the authors who jointly wrote and published For Rushdie: Essays by Arab and Muslim Writers in Defense of Free Speech. It denies the 127 signatures of imprisoned Iranian writers, artists, and intellectuals who declared:

We underline the intolerable character of the decree of death that the Fatwah is, and we insist on the fact that aesthetic criteria are the only proper ones for judging works of art. To the extent that the systematic denial of the rights of man in Iran is tolerated, this can only further encourage the export outside the Islamic Republic of its terroristic methods which destroy freedom.

Rushdie’s critics, keen as they were to protect a marginalized group, condemned Rushdie for causing the violence committed by individual Muslims. But in doing so, these intellectuals treated the Muslim perpetrators of that violence as lacking full moral agency. You can’t cause autonomous people to do something – it is up to them! Implicitly, Rushdie’s Western critics saw Muslims as mere cogs in a machine run by Westerners, or “Englishmen with dark skin” such as Rushdie, as feminist Germaine Greer mockingly referred to him. Rushdie’s critics saw Muslims as less than fully capable moral actors.

True respect, the respect of moral equals, does not ask that we protect each other from hurt feelings. Rather, it requires that we believe that each of us has the capacity to respond to hurt feelings in a morally acceptable manner – with conversation rather than violence. In their haste to protect a marginalized group, Rushdie’s critics forgot what true respect consists of. And in doing so, they blamed the victim for the abhorrent actions of a small number of fully capable and fully responsible moral agents. This time around, let’s not repeat that moral mistake.

When Is Someone Responsible for Not Acting?

photograph of empty chair with "Lifeguard On Duty" sign displayed

A recent law in Minnesota legalized edible marijuana for those over the age of 21 with support from both Republicans and Democrats. While this new law is perhaps unsurprising to many locals and lawmakers, it came as a surprise to one Republican who voted for the bill. Minnesota State Senator Jim Abeler claims that he was not aware of what he was voting for. In fact, he called for the legislature to repeal the new law, a call that was shot down as quickly as it was raised.

Abeler’s claim, that he was not aware of what he was voting for, might be met with genuine suspicion. It’s presumably a part of Abeler’s job to know what he is voting for, and such a claim might be disingenuous. But, for the sake of argument, I would like to consider Abeler’s claim to be genuine.

If Abeler was not aware that his vote would support the legalization of edible marijuana, is he still responsible for his vote?

Abeler’s claim effectively amounts to a denial of one if not both of the classical conditions of moral responsibility: (i) awareness and (ii) voluntariness. Classically, individuals are responsible for an action only if both of these conditions are met. Abeler’s claim is a denial of awareness concerning the specific content of the bill. And his claim is a denial of voluntariness with respect to support for the specifics about which he was unaware. Indeed, assuming Abeler’s claim is genuine, how could Abeler willfully vote to legalize marijuana edibles if he wasn’t aware of this aspect of the bill?

This particular question is an application of the following general question:

Is an individual responsible for something the same individual fails to do?

In philosophy and law, this problem is known as the problem of negligent omissions (see here for an overview in law, and here for an excellent piece in philosophy).

Pinning down when a lack of action becomes a failure of action is tricky. For example, we generally think that if a child is drowning and there is an individual nearby who does not act, this individual is culpable in some sense. The individual ought to have acted and did not — there is a negligent omission. However, we do not generally think the individual is culpable if he is not able to save the child. For example, if the individual is a couple of miles away and unaware of the child drowning, he does not seem to be culpable for not acting.

So, under what conditions is an individual responsible for a failure to act or a failure to act knowledgeably? Generally, there are three conditions. An individual is responsible for an omission if the same individual:

1. Is able to act;
2. Is obliged to act;
3. Is aware of the relevant events and obligation.

To see how these conditions are important, consider the following example. Sylvia is a lifeguard at the local beach. Her job is to save people from drowning and to alert people about the weather conditions by placing the correctly colored flags on the beach. For example, if there is a riptide and it is dangerous to swim in the ocean, she is to place a red flag on the beach. As it happens, an individual begins to drown. Sylvia jumps to the rescue and successfully saves the individual.

Notice that, among the individuals on the beach, it is Sylvia’s job to save the person drowning. The other swimmers do not have the training. And, even if they do, they are not obligated in the same way that Sylvia is obligated to save the person drowning.

Imagine now that Sylvia fails to act and the person drowns. Reasonably, she is responsible in some sense. Moreover, imagine the outrage if Sylvia were to turn to a surfer and exclaim, “Why didn’t you save him?” The surfer can legitimately say that he could not have responded because he does not have the proper training. Even if the surfer were to have the training, he would not be obligated to save the individual in terms of his role or job (of course, the surfer still has the more general obligation to help those in need).

The example shows how Sylvia is responsible for her failure to act. She has the ability (the proper training), the obligation in virtue of her role, and the awareness of the event and obligation.

When she fails to save the drowning individual, her omission is negligent.

Now, Abeler’s omission is a bit more subtle. He claims to have not voted knowledgeably. He acted, and yet he failed to act with an awareness and knowledge of the bill for which he voted. To see how Abeler is responsible for this omission, let us revisit the lifeguard. Among Sylvia’s various responsibilities is the task of alerting the beachgoers to the swimming conditions. Imagine that she raises a yellow flag to alert the people that there is a medium hazard to swimming. However, Sylvia did not take into consideration that the tide is coming in. This combination of conditions will cause a riptide — Sylvia should have raised a red flag to alert the beachgoers not to swim. Regardless of whether the beachgoers swim, we have a situation where Sylvia acted and did not act knowledgeably. She is responsible for her failure of knowledge because it is her job to account for the water conditions.

Abeler is likewise obligated to know what he votes for in virtue of his role. So, whether he fails to vote or fails to vote knowledgeably, he is still responsible. His omission is negligent.

All is well at the intuitive level. It seems intuitively correct to ascribe responsibility to Sylvia and Abeler. The conditions are articulated, work together, and have common examples to back them up. But something worrying has happened.

If we can ascribe responsibility for an omission, and more specifically, omission of knowledgeable action, we seem to lose one of the classical conditions of responsibility: awareness.

One way to keep the awareness condition is to maintain that Abeler is generally aware of his responsibilities as a state senator. When Abeler assumed his position, he was aware of his decision and obligated himself to read certain documents. Thus, when he enters his office on voting day and fails to read the bill thoroughly, he has already fulfilled the awareness condition in an important sense. So too with our lifeguard. When Sylvia signs on the dotted line to become a lifeguard, it obligates her to act and be aware in certain ways.

It may also seem that the voluntariness condition is in peril. However, we can offer the same answer that applies to the awareness condition: Abeler voluntarily took on the role of a lawmaker.

There is more to be said about negligent omissions, and there are more ways to pair the classical conditions of responsibility with negligent omissions. What is clear, however, is that Abeler is still responsible for his vote.

Wanda Maximoff and the Metaphysics of Responsibility

photograph of Dr. Strange movie display

This article contains spoilers for the Disney+ series Wandavision and the films Avengers: Infinity War, Avengers: Endgame, and Doctor Strange and the Multiverse of Madness.

In the latest entry to the Marvel Cinematic Universe, Doctor Strange and the Multiverse of Madness, the titular hero squares off against a former ally in a race across universes. After losing the love of her life (twice) at the end of Avengers: Infinity War and watching almost everyone else miraculously resurrected at the climax of Avengers: Endgame, Wanda Maximoff retreated to a small town in New Jersey to mourn. As shown in the Disney+ series Wandavision, she instead ends up (mostly accidentally) trapping the town inside a painful illusion wherein she could pretend that her beloved Vision was still alive; her powerful magic even creates two children (Billy and Tommy) to complete the couple’s happy life of domestic bliss — until everything unravels, that is, and Wanda is again forced to say goodbye to the people she loves.

Last March, I wrote about Wanda’s journey through grief and love for the Post; at that point, MCU fans had a number of reasons to be hopeful for a genuine Maximoff family reunion. Now, the newest Doctor Strange film has buried those chances firmly under the rubble of Mount Wundagore.

In brief, Wandavision ends by revealing Wanda as a being of immense (and ominous) power known as the “Scarlet Witch” — she frees the town of her illusion, apologizes for the harm she caused, and escapes with a mysterious spellbook called the Darkhold, seemingly intending to somehow use it to reconnect with Billy and Tommy. But from her first scene in Multiverse of Madness, it’s clear that Wanda Maximoff is no longer sorry for what she plans to do: namely, absorb an innocent teenager’s soul and travel to a different universe (where Billy and Tommy are still alive) to kill and replace her counterpart, then live out her days as a mother to the alternate versions of her children. Moreover, Wanda is fully comfortable with killing anyone who tries to stop her — something she does in spades before the story’s end (including to most of the film’s celebrity cameos). Ultimately, it turns out that the Darkhold is a thoroughly evil book which taints whoever reads it with darkness and madness — by searching its pages for a spell to save her children, Wanda was also unknowingly corrupting her once-heroic soul. After Doctor Strange and his allies manage to cut through the Darkhold’s influence, Wanda sacrifices her own life to destroy the demonic book and spare the multiverse from the threat of the Scarlet Witch.

So, here’s where we can ask a more philosophical question:

Wanda brutally murders dozens of people in her quest to save her children, but — if she was under the influence of the Darkhold’s power — was she responsible for her actions?

One common idea (connected to the philosophical idea of “libertarian free will”) is that for an agent to be fully responsible for some action, they must be fully free or in control of the choice to perform the action — as it is often put, the responsible person must have been “able to do otherwise than they actually did” (more technically, they must satisfy the “Principle of Alternative Possibilities,” or PAP). If I were to cast a spell that hypnotically forces you to transfer your life savings into my bank account, you would not have the power to do otherwise, so you would not be free and I would be responsible for the money transfer.

On the other hand, some philosophers believe that a strong commitment to PAP is scientifically untenable: if our actions are ultimately rooted in the material interactions of molecules in our brains (as opposed to something like an immaterial soul), and if those material conditions necessarily obey regular laws of physics, then it seems like no one can ever satisfy PAP (because you will only ever do what the material conditions of the universe dictate). On this view (typically called “determinism”), notions like “free will” and “moral responsibility” are often written off as mere intuitions or illusions that, though sometimes useful in certain conversations, shouldn’t ultimately be taken too seriously.

The middle ground between these views is an interesting position called “compatibilism” which argues that determinism (as described in the preceding paragraph) actually is compatible with a robust sense of freedom and moral responsibility, but not one that requires PAP.

Instead, compatibilists argue that a person is free (and therefore responsible) for a choice if that choice aligns with their dispositions (like wanting or believing certain things). Often, compatibilists will frame responsibility for determined-but-free choices as a matter of “getting what you want” (even if you couldn’t have “gotten” anything else).

For example, suppose that you want to sit in a particular chair and read a book, so you enter a room, close the door, sit in your chair, and read the book — unbeknownst to you, the door locks after you close it, but that doesn’t matter, because you just want to sit and read — are you responsible for the choice to stay in the room? The compatibilist will easily say yes: you’re satisfying your desire, so the fact that you couldn’t have chosen otherwise (violating PAP, thanks to the locked door) is unimportant.

So, what does this mean for Wanda?

Admittedly, the MCU has given only sparse explanations about the metaphysical nature of the Darkhold (so we have to engage in a bit of speculation here), but the film does make clear that the demonic book exerts some kind of influence on (and extracts a price from) its readers. Which means that we can ask two questions:

1. Was Wanda “able to do otherwise than she actually did” while under the Darkhold’s influence?

2. Regardless of the Darkhold’s influence, did Wanda want to do what she did?

If the answer to (1) is “No,” then Wanda’s condition fails to satisfy PAP — just like how Wanda-838 (the actual mother to Billy and Tommy from the Illuminati’s universe) isn’t responsible for the actions that Wanda-616 (from the standard MCU reality) performs while dreamwalking across the multiverse, Wanda-616 would be similarly at the mercy of the Darkhold. If the answer to (2) is also “No,” the compatibilists will also be able to recognize that Wanda wasn’t responsible for her murderous choices, even though she couldn’t have done otherwise.

One of the most interesting things about this whole conversation, though, is that it’s actually not clear that the answer to (2) is “No.” While the movie takes pains to signpost the dangerous nature of the Darkhold (most notably by implicating it in the deaths of multiple versions of Stephen Strange), Wanda repeatedly suggests that her (understandable) desire to find her children is fully her own. If this is the case, then the Darkhold’s influence might have provoked her to act in extreme ways (to say the least), but the compatibilist might not be able to draw a sharp line between Wanda’s dispositions and the book’s suggestions.

However, though Wanda fans might balk at the notion that she authentically “broke bad” and is responsible for murdering whole armies of sorcerers and superheroes, this narrative might make Wanda’s decision to destroy both the Darkhold and herself at the film’s end all the more impressive.

It remains to be seen whether Wanda Maximoff’s tenure in the MCU has come to an end (the movie notoriously avoids offering conclusive proof of her death), just as it is unclear how her character might handle questions of guilt and responsibility, should she return. (For what it’s worth, I’m still hoping that the MCU will grant her a happy ending!) One thing, though, is certain: having grossed nearly a billion dollars in its first month, Doctor Strange and the Multiverse of Madness proves that Marvel Studios is all but determined to continue making MCU films — and audiences will absolutely choose to keep watching them.

Can Machines Be Morally Responsible?

photograph of robot in front of chalkboard littered with question marks

As artificial intelligence becomes more advanced, we find ourselves relying more and more on the decision-making of neural nets and other complex AI systems. If the machine can think and decide in ways that cannot be easily traced back to the decision of one or multiple programmers, who do we hold responsible if, for instance, the AI decision-making reflects the biases and prejudices that we have as human beings? What if someone is hurt by the machine’s discrimination?

To answer this question, we need to know what makes someone or something responsible. The machine certainly causes the processing it performs and the decisions it makes, but is the AI system a morally responsible agent?

Could artificial intelligence have the basic abilities required to be an appropriate target of blame?

Some philosophers think that the ability that is core to moral responsibility is control or choice. While sometimes this ability is spelled out in terms of the freedom to do otherwise, let’s set aside questions of whether the AI system is determined or undetermined. There are some AI systems that do seem to be determined by fixed laws of nature, but there are others that use quantum computing and are indeterminate, i.e., they won’t produce the same answers even if given the same inputs under the same conditions. Whether you think that determinism or indeterminism is required for responsibility, there will be at least some AI systems that will fit that requirement. Assume for what follows that the AI system in question is determined or undetermined, according to your philosophical preferences.

Can some AI systems exercise control or engage in decision-making? Even though AI decision-making processes will not, as of this moment, directly mirror the structure of decision-making in human brains, AI systems are still able to take inputs and produce a judgment based on those inputs. Furthermore, some AI decision-making algorithms outcompete human thought on the same problems. It seems that if we were able to get a complex enough artificial intelligence that could make its own determinations that did not reduce to its initial human-made inputs and parameters, we might have a plausible autonomous agent who is exercising control in decision-making.

The other primary capacity that philosophers take to be required for responsibility is the ability to recognize reasons. If someone couldn’t understand what moral principles required or the reasons they expressed, then it would be unfair to hold them responsible. It seems that sophisticated AI can at least assign weights to different reasons and understand the relations between them (including whether certain reasons override others). In addition, AI that are trained on images of a certain medical condition can come to recognize the common features that would identify someone as having that condition. So, AI can come to identify reasons that were not explicitly plugged into them in the first place.

What about the recognition of moral reasons? Shouldn’t AI need to have a gut feeling or emotional reaction to get the right moral answer?

While some philosophers think that moral laws are given by reason alone, others think that feelings like empathy or compassion are necessary to be moral agents. Some worry that without the right affective states, the agent will wind up being a sociopath or psychopath, and these conditions seem to inhibit responsibility. Others think that even psychopaths can be responsible, so long as they can understand moral claims. At the moment, it seems that AI cannot have the same emotional reactions that we do, though there is work to develop AI that can.

Do AI need to be conscious to be responsible? Insofar as we allow that humans can recognize reasons unconsciously and that they can be held responsible for those judgments, it doesn’t seem that consciousness is required for reasons-recognition. For example, I may not have the conscious judgment that a member of a given race is less hard-working, but that implicit bias may still affect my hiring practices. If we think it’s appropriate to hold me responsible for that bias, then it seems that consciousness isn’t required for responsibility. It is a standing question as to whether some AI might develop consciousness, but either way, it seems plausible that an AI system could be responsible at least with regard to the capacity of reasons-recognition. Consciousness may be required for choice on some models, though other philosophers allow that we can be responsible for automatic, unconscious, yet intentional actions.

What seems true is that it is possible that there will at some point be an artificial intelligence that meets all of the criteria for moral responsibility, at least as far as we can practically tell. When that happens, it appears that we should hold the artificial intelligence system morally responsible, so long as there is no good reason to discount responsibility — the mere fact that the putative moral agent was artificial wouldn’t undermine responsibility. Instead, a good reason might look like evidence that the AI can’t actually understand what morality requires it to do, or maybe that the AI can’t make choices in the way that responsibility requires. Of course, we would need to figure out what it looks like to hold an AI system responsible.

Could we punish the AI? Would it understand blame and feel guilt? What about praise or rewards? These are difficult questions that will depend on what capacities the AI has.

Until that point, it’s hard to know who to blame and how much to blame them. What do we do if an AI that doesn’t meet the criteria for responsibility has a pattern of discriminatory decision-making? Return to our initial case. Assume that the AI’s decision-making can’t be reduced to the parameters set by its multiple creators, who themselves appear without fault. Additionally, the humans who have relied on the AI have affirmed the AI’s judgments without recognizing the patterns of discrimination. Because of these AI-assisted decisions, several people have been harmed. Who do we hold responsible?

One option would be to attach a liability fund to the AI, such that in the event of discrimination, those affected can be compensated. There is some question here as to who would pay into the fund, whether that be the creators or the users or both. Another option would be to place the responsibility on the person relying on the AI to aid in their decision-making. The idea here would be that the buck stops with the human decision-maker and that the human decision-maker needs to be aware of possible biases and check them. A final option would be to place the responsibility on the AI creators, who, perhaps without fault, created the discriminatory AI but took on the burden of that potential consequence by deciding to enter the AI business in the first place. They might be required to pay a fine or take measures to retrain the AI so as to avoid such discrimination going forward.

The right answer, for now, is probably some combination of the three that can recognize the shared decision-making happening between multiple agents and machines. Even if AI systems become responsible agents someday, shared responsibility will likely remain.

What Does It Take to Make an Apology?

caricature of a resigned Boris Johnson

Boris Johnson recently apologized for attending a party at No 10 Downing Street in March 2020, just after a national lockdown had been ordered. His apology has been met with general outrage and calls for his resignation. Individuals respecting the lockdown measures couldn’t see their loved ones on their deathbeds, attend funerals, celebrate weddings, and more. The Queen of England, for instance, had to attend the funeral of her own husband, the Duke of Edinburgh, in relative isolation. At this point, the consensus is that it is not a matter of whether Boris Johnson will resign, but when and how.

Unsurprisingly, most parties accuse Boris of delivering a weak apology. Worse, some accuse him of failing to apologize altogether. This raises the question, “What does it take to make an apology?” Boris Johnson’s apology provides a good case study of what does and does not qualify as an apology. In what follows, I will outline three conditions for making an apology and contrast this with the related concepts of excuse, justification, and explanation. Given the notion of apology I put forth, Boris Johnson fails to make a successful apology.

Let us stipulate that an apology consists of three things: the individual who apologizes must (i) take ownership of the action and the damaging effects, (ii) regard the action as bad, and (iii) be open to making amends or reconciling in some fashion. Consider a trivial case of apology (modeled on a favorite poem of mine). Juan and Sarah are work colleagues. Juan always places his snack of choice, some tangerines, in the work cooler. As it happens, one day Sarah eats Juan’s tangerines. Juan sees the tangerine peels in Sarah’s bin, so naturally addresses the issue. Upon being confronted, Sarah apologizes. She says, “I’m sorry, Juan. Those were your tangerines, and I ate them. I don’t want to take your food and treat you that way.”

While somewhat trivial, notice a couple of things about this scenario. First, Sarah owns up to the action. She doesn’t deny that she is the tangerine-eating-culprit — she takes responsibility for the action of eating Juan’s tangerines. Particularly, she takes responsibility for a wrong action and the effects of that wrong action. Next, Sarah is open to making amends. That is, she apologizes for the sake of reconciling. If this is not clear, simply consider the opposite: Sarah says, “I apologize,” and turns around in a huff for having been confronted. Like children on a playground who only verbally apologize, this would seem to only be half an apology (or no apology at all!). So, it seems like being open to amends is necessary for an apology. Of course, even if Sarah does successfully apologize, Juan can still refuse the apology. But this does not mean that Sarah has failed to apologize. Rather, it means that reconciliation has not occurred.

So, let us consider these three conditions as necessary and sufficient for an apology. You need them all for an apology to happen; and when you have them all, an apology happens.

Now, apologies are not usually as clean-cut as in the above example. Apologies are usually muddled with justifications and excuses. Imagine the same scenario where Sarah has eaten Juan’s tangerines. Excuses are frequent companions to apologies. Sarah could reply: “Oh goodness, I had no clue those were your tangerines in the cooler — sorry I ate them!” Excuses aim to deflect responsibility for the action in question. Sure, Sarah ate the tangerines. But she didn’t knowingly eat Juan’s tangerines from the cooler. That’s not the type of action that she is willing to own up to. Thus, when present, the excuse deflates the apology. Sometimes, excuses simply replace the apology. Sarah maintains, legitimately or not, that she should not and even cannot be held responsible for the action in question. When an excuse accompanies an apology, it at least deflates it, for it calls into question the first condition of taking responsibility for the action in question.

Justifications are also frequent companions or alternatives to apologies. Sarah could try to justify her action when Juan confronts her by saying, “Yeah, I ate it. But it wasn’t so bad after all — I was starving! Sorry, Juan.” Where excuses deny responsibility for the action in question, justification takes responsibility for it. Moreover, the person who offers a justification disowns the fact that the action was bad and thus deflects the blame. Perhaps the justification is legitimate, perhaps not.

Illegitimate justifications, as we know all too well, are a common way to deflect blame. When an illegitimate justification is present, it defeats the apology. The person who says ‘sorry’ at this point has simply not taken responsibility for the wrongness of the action and the bad effects.

Now, we don’t want to conflate justifications with explanations. To see the difference, imagine that Sarah says, “Sorry I ate your tangerines. I was hungry and not thinking of you — I regret doing that.” Similar to a justification, the explanation serves to provide context. Unlike a justification, the explanation does not seek to deflect blame. Explanation is a way of making amends and helping the offended person understand. The explanation aligns with the purpose of the apology, to take responsibility and make amends of some sort. When done correctly, explanations can go a long way in making an apology and the aspired reconciliation successful.

How does this help us analyze Boris Johnson’s apology? The presence of justifications or excuses calls the apology into question. Indeed, illegitimate justifications or excuses defeat the apology. Turning to his statement, the Prime Minister repeatedly and directly stated “I apologize.” He made note of his regret at the damage caused by his actions. He ended by encouraging the investigation, indicating his openness towards making another statement. We might be charitable and presume this means he is open to making amends. These all meet the conditions of an apology. So far, so good.

When taken as a whole, however, he made both excuses and justifications. Consider one of his excuses: he “believed implicitly that this was a work event.” Really? This classifies as an excuse. If we were to liberally elaborate, the above statement amounts to the following: “you are saying I attended a party and could blame me if this was the case. But that was not what I was doing! I attended a meeting. You can’t hold me responsible and blame me for that action.”

Additionally, the Prime Minister made a justification. He claimed it was “technically legal.” This classifies as a justification: whether it was a party or a work event, it was legal, and therefore he is not blameworthy. In the justification, he takes responsibility but denies that the action was bad.

I won’t get into the details of whether Boris Johnson offered any legitimate excuses or justifications. Neither have I commented on whether leaders have different kinds of responsibilities, or simply heightened responsibilities, for the citizens they serve. But, as is clear from the tangerine theft, the presence of any excuses or justifications at least deflates the apology and calls it into question. And this is no different with Boris.

Considered Position: On Voluntary Non-Vaccination – Real World Complications

photograph of child among masked crowd

This piece concludes a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated. To see the earlier segments, start with Part I and Part II.

In Part I of this series I tried to explain the moral intuition that I have less reason to engage in precautionary social-distancing behavior once those who are unvaccinated are unvaccinated by choice rather than because they could not access the vaccine. This intuition, I argued, is explained by the fact that justice mainly requires us to give people reasonable options for safety and does not require us to make others safe.

In Part II, I looked at what it takes for an option to be reasonable, and we realized that this is often sensitive to the reasons that someone does not want to get a vaccine. If someone’s objection to the vaccine is reasonable, then justice may still require me to adopt precautionary behavior.

In both posts, I idealized the questions at hand. I ignored real life complications, and just tried to identify the abstract principles involved. For that reason, my cases were often unrealistic, involving such absurdities as the unvaccinated engaging in moral blackmail or people taking pills to make themselves allergic to the vaccine.

In this final post, I want to turn to the real world and look at some of the complications which make this a difficult ethical question.

Complication 1: We judge the reasonableness of others in biased ways

We saw in Part II that if the other person has reasonable grounds for refusing the vaccine, then justice still requires me to protect them by social distancing (even if their grounds are mistaken). One challenge, however, is that we tend to be biased in our own assessments of what are reasonable grounds.

Consider, for example, the following two suggested grounds of vaccine hesitancy:

Skepticism 1: Distrust of a Racist Healthcare System

Some Black people in the U.S. are reluctant to get a vaccine due to distrust of the American medical system. While this is sometimes attributed to historical injustices, like the Tuskegee study, it is more plausibly explained by current disparities in health care treatment. (It also, as a whole, might just be overblown; but we will put that aside for now.) The thought might go as follows:

“As Ben Almassi has argued in the context of organ donation, there are good grounds, given persistent racial health inequities, for Black people in the U.S. to distrust that the medical system has their best interest at heart. But if one has good reason to distrust the health system, then one also has good reasons to distrust the recommendations of the health system. This is especially true because we know that drugs and vaccines can sometimes affect different racial groups differently, and we also know that Black people tend to be massively underrepresented in vaccine trials (even when those rates are reported).”

Skepticism 2: Distrust of the Liberal Narrative

Some conservatives are reluctant to get the vaccine due to distrust in the way that mainstream media portrays medical information. They might say things like:

“Putting aside worries that the COVID threat was overhyped to sink former President Trump’s reelection chances; we have seen a systematic tendency for the media to provide unreliable coverage on how to respond to the vaccine in order to fit its preferred political narrative. First, we see the same caseloads suggest different reactions depending on who is in charge. The week after President Biden was sworn in, long before any change in policy could have altered risk, blue states began opening up and schools began seriously trying to return students to in-person education. That was true, even though the absolute case numbers were still worse than they were in the summer when everyone insisted things needed to be shut down under President Trump.

Then, of course, ‘the scientists’ consistently lauded Democratic governors like Andrew Cuomo and panned Republican governors like Ron DeSantis. And yet, we have consistently found that the media narrative was backwards. Florida, despite an extremely old population, came out of the pandemic pretty well, with a much stronger economy, and with many more kids staying in school. This is not just cherry-picking. Republican states, on average, had fewer deaths and caused significantly less damage to the economy. Then ‘the scientists’ told us to take our kids out of school, but didn’t object to massive unmasked Black Lives Matter protests. ‘The scientists’ told us not to wear masks, until they needed a social symbol for purposes of public shaming.”

I bring up these two explanations of skepticism, not because I find them equally plausible, but because I don’t find them equally plausible. Intuitively, I find the first reasonable and the second ludicrous. But here is the issue: it’s hard to identify any objective reason the first is more reasonable. Sure, it fits better with my own preferred political narrative; but I think there are decisive objections to both lines of reasoning, and I don’t think the errors in one are in any sense more egregious than the errors in the other.

The danger, then, is that I am more sympathetic to members of what I see as my political in-group. But that differential in sympathy means I’m especially likely to inconsistently apply a standard as squishy as ‘reasonable.’

I don’t have a good solution here, and so just advise extreme caution when you label political allies reasonable or when you label political opponents unreasonable.

Complication 2: Immunization isn’t up to some people

While there are few, if any, groups in the U.S. for whom it would be dangerous to get the vaccine, there is some evidence that immunocompromised patients, while they can safely get the vaccine, do not always produce the required antibodies.

Similarly, there is a group of people in the U.S. who cannot choose to vaccinate: children. This is true in the limited sense that the vaccines are not currently approved for use in children below the age of 16. But it is also true in the sense that, even once the FDA approves the vaccine for children, children cannot choose to be vaccinated without a parent’s permission. Unvaccinated children, then, might not be unvaccinated by any choice of their own.

These are important complications, but I’m not sure that on their own they would show you must socially distance until we reach herd immunity (on the hypothesis that there are a large percentage of vaccine skeptical holdouts). Children are far less susceptible to COVID-19, and only a very small portion of the population are severely immunocompromised. Given these facts, the threat posed to children and the immunocompromised is far smaller than the risk posed by pre-pandemic activities when most people did not have access to the vaccine. Certainly, you should engage in some precautionary measures, especially if you know you are likely to be around someone who is immunocompromised. But it is unclear that those are any different than the ordinary obligations one has during flu season.

Complication 3: Deception and consent

One further complication is that deception tends to undermine voluntariness. For example, if I lie to you about the results of a surgery, then your consent to that surgery does not actually constitute voluntary consent. Similar issues arise about sexual consent.

Or suppose you told your friend that you would pick them up from the airport. But then I, just for the fun of it, lie to you and tell you that your friend’s flight was delayed, that they were not able to reach you, and that they don’t need a ride after all. If you don’t pick your friend up from the airport, then breaking the promise was involuntary. It was involuntary because I am the one who bears responsibility for your failure.

Now, if it is true that deception can undermine voluntariness, then one worry we might have is that there may be a good number of people who refuse the vaccine because they were lied to, and if so, it is those who lied who bear the actual responsibility for the non-vaccination.

One reason this is an important point to notice is that a lot of people are especially likely to think that those with unreasonable reasons for refusing the vaccine accept those reasons because they are being lied to by their media ecosystem. Thus, many on the left think the vaccine hesitancy on the right is ludicrous, but those same people on the left are also likely to think that Fox News, OAN, or Newsmax are systematically deceiving their viewers. Similarly, many on the right think that concerns of racism are blown way out of proportion, but those same people on the right are also likely to think that mainstream media organizations — like CNN or The New York Times — are providing systematically distorted information on those very issues.

Indeed, it is not just cases of outright lying that might trigger a shift in responsibility. Not only do I preclude the voluntariness of your action by lying to you, I do the same thing if I tell you something false when I should have known better. If I tell you something false but am really honestly trying to tell you the best I know, then your actions are still voluntary. You made the choice to trust me in good faith. But if I am not acting in good faith myself, then I am the one at fault when you err.

Conclusion

So once vaccines are widely available (such that the unvaccinated are mostly unvaccinated by choice) but before we reach herd immunity (due to widespread vaccine hesitancy), can you return to pre-pandemic behavior?

As we’ve seen, this is a difficult question. However, it seems likely that the right answer is generally yes. For the most part, because it is reasonable to expect people to get the vaccine, it is reasonable to return to behaviors that would be safe were others to be vaccinated. This is true, even without factoring in the fact that the vaccinated are very unlikely to spread COVID. And so, it does seem like justice allows life to return to normal.

However, we have also learned an important moral lesson about what it takes to justly live together in a society. For justice to allow us to return to pre-pandemic activities, it is not enough that the vaccine be widely available; it also matters that others’ refusals of the vaccine be genuinely voluntary. And as it turns out, there are complicated ways in which we can undermine the voluntariness of others’ actions. When we are not fastidious about what we tell others, we risk undermining their capacity to make voluntary choices. If I thoughtlessly spread misinformation or repeat something as fact that I am only pretty sure of, then I undermine one of the fundamental building blocks of our system of justice.

My own testimonial irresponsibility undermines the voluntariness of those who believe me in good faith. And systems of justice largely depend on the mutual voluntariness of everyone’s choices. This is one reason why lying and bullshitting are such profound moral wrongs. It is not just that others end up with some wrong beliefs (we all have a bunch of misguided convictions), but that other people are rendered, in a sense, passive in relation to your choices. By breaking down even a small portion of the reciprocal norms of justice, you don’t just lie to another but partially bring the whole system of justice down on top of you.

Considered Position: On Voluntary Non-Vaccination – Types of Reasons

photograph of masked and unmasked people in a crowd

This piece is part of a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated. To see the earlier segments, start here.

Hopefully pretty soon, my state, Florida, will reach the point that anyone who wants to be vaccinated can be vaccinated. In Part I of this series, I argued that once we reach that point, I have fewer reasons of justice to engage in aggressive social distancing. After all, everyone has the option to get the vaccine and so protect themselves from whatever risks I impose by my everyday activities.

In that argument, however, I ignored an important variable in our assessment of justice: why are people not getting vaccinated? This is important because different reasons give rise to different duties on my end.

To see this, let’s start with a particularly extreme contrast of cases:

Coordinated Moral Blackmail: Suppose that herd immunity requires that 80% of the population get vaccinated. But while 60% of the population are excited to return to normal and immediately get vaccinated, the other 40% coordinate together and refuse to get vaccinated until their demands are met. The 40% realize that the other 60% don’t want to put anyone’s health at risk, and so they refuse to receive the free vaccine unless the other 60% agree to collectively pay each person in the 40% camp three hundred dollars. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated as a tool of moral blackmail.

Widespread Deadly Allergy: Suppose that herd immunity requires that 80% of the population get vaccinated. Let us also suppose that one of the weird things about this vaccine is that it produces a severe, possibly fatal, allergic reaction in people with peanut allergies. But while 60% of the population have no allergies and so get the vaccine, the other 40% all have severe peanut allergies and so forgo vaccination until a safer vaccine is approved. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated because the vaccination is not worth the risk of anaphylaxis.

I presume that you share my intuitions about these two cases. In the first case, the 60% have no reasons of justice to forgo returning to pre-pandemic life. But in the second case, the 60% still have reasons to social distance. Indeed, I would think that the 60% in the second case have duties pretty similar to the duties we would have if the vaccine were only available to 60% of the population.

Normally, if someone voluntarily chooses to forgo vaccination, that changes my duties of justice. However, it only makes a difference to duties of justice if we can reasonably expect the person to get the vaccine.

It is unreasonable to expect someone to take a very risky vaccine, just as it would be unreasonable to expect a family to bankrupt themselves to buy the vaccine. But such pragmatic concerns are not the only relevant ones. For instance, if it would be unethical to get the vaccine, say because it was produced by slave labor, and the majority of the population accordingly refuse the slave-made vaccine, then justice again requires me to continue to maintain pandemic precautions.

If it would be a mistake to get the vaccine, then I am still bound by the same norms of justice to engage in precautionary behaviors. If people truly believe that they should not get the vaccine, then I can’t reasonably expect them to vaccinate.

But what if instead people falsely believe there is a problem with vaccines? Well, in that case things become more complicated. There are two kinds of error that could explain why one thinks a right action is wrong, both of which are described by Elizabeth Anscombe. First, one might be wrong about the ‘facts on the ground,’ the circumstantial descriptions of what is going on — for instance, one might think the vaccine is less safe than it is, or one might falsely think it is produced by slaves. Second, one might be wrong about the moral implications of those ‘facts on the ground.’ For instance, one might know the risk is tiny, but have the false principle that one cannot be expected to undergo even the smallest possible risk for the public good. Following a convention that I use in my own work, let’s call the first an error about circumstantial facts and the second an error of classificatory facts.

Error 1: Mistakes about Circumstance

The COVID vaccines are not, on the whole, very risky. The consensus judgment of the medical community is that the vaccines, like most vaccines, are safe and effective. But even if the vaccines are safe, some people might believe them to be unsafe. And this raises a question of whether such honest beliefs change my own reasons of justice.

Widespread Error of Belief: Suppose that herd immunity requires that 80% of the population get vaccinated. But while 60% of the population get the vaccine, truly believing that vaccines are safe and effective, 40% of the population mistakenly believe that the vaccine is dangerous. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated as a result of mistaken beliefs.

In this case, does justice require me to forgo pre-pandemic activities? To answer this question, we need more information. In particular, we need to know why people have the false belief.

Suppose that preliminary scientific studies incorrectly suggest that the vaccine is unsafe for pregnant women. This information is widely announced, and the majority of scientists and doctors accept the result. As it turns out, however, the conclusion is the result of a subtle mathematical error that no one has yet noticed.

If the best scientific evidence suggests that pregnant women should not get the vaccine, then it is clearly unreasonable for you to expect pregnant women to get the vaccine. It does not matter that the studies are wrong, because you cannot expect the average person to realize that the studies are wrong. If this is right, then at least some of the time false beliefs about matters of circumstance (such as the safety of the vaccine) really do make it unreasonable for me to expect you to be vaccinated.

But not all mistaken factual beliefs work that way. Now imagine someone who is profoundly lazy and just assumes vaccines are unsafe. Because they are lazy and do not care about public health, they never bother to do any research to check whether their assumption is right.

We can accept that the person really thinks that the vaccines are unsafe. And we can further accept that the person cannot, by a voluntary act of will, get herself to believe the vaccines are safe (for instance, you cannot, at this very moment, choose to believe there is an elephant behind you, even if I offered you five hundred dollars to form the belief).

So suppose our imagined interlocutor says: “I’m not getting a vaccine because I really believe they are unsafe; ultimately I don’t choose what I believe, so you can’t blame me for not getting vaccinated.” Is this right? Does the fact that we cannot choose our own beliefs mean we cannot be blamed for our false beliefs?

Elizabeth Anscombe, in her article “On Being in Good Faith,” argues that this line of thought is mistaken. While a good faith belief can make one’s behavior reasonable, good faith, in the sense that excuses behavior, requires more than really believing the thing you say you believe:

“Consider a rash and uncharitable judgement which led one to slander someone gravely. One does not – usually at least—say to oneself ‘Now I will judge rashly’ or ‘I am determined, rash as it is, to judge that so-and-so’- one simply judges rashly. What does ‘in good faith’ mean? If it means that one has not got one’s tongue in one’s cheek, is not consciously lying or hypocritical, then good faith is not enough to exonerate. If it is enough to exonerate, then the slander uttered under the influence of a rash and passionate conviction is not a statement made in good faith. . . . Thus good faith or sincerity are either not much good as a defence, or not so easily assured as we might think at first. . . . ‘He ought to have realised…’, ‘He ought to have thought of…’, ‘He had no business to suppose that…’, ‘He ought to have found out…’, are phrases that bear witness to the fact that a man’s beliefs do not suffice to justify him so long as he acts accordingly.”

What Anscombe is arguing is that one can be responsible for false beliefs if one could have been expected to know the truth. If the reason you have a false belief is that the entire scientific community is mistaken, we can’t expect you to know any better. But if the reason you have a false belief is that you are too lazy to do research, then of course we can blame you for your error.

If I accidentally kill a child because I didn’t know they had a deadly carrot allergy, then I’m usually not responsible for that death. However, if the only reason I don’t know about the allergy is because I skipped our camp’s health and safety meeting, then I am at least somewhat responsible. I can’t object that ‘I didn’t know,’ because I should have known.

The same principle applies to vaccines. A false belief might make your non-vaccination reasonable, but it only does so if we could not have expected you to know better.

Error 2: Mistakes about Classification

What about cases of moral ignorance? That is, someone knows all the ‘facts on the ground,’ but she mistakenly thinks that those facts mean she should not take the vaccine.

Some philosophers think moral ignorance is unique, that while circumstantial ignorance can excuse, moral ignorance never can. Other philosophers disagree. (I discussed that debate at length in a previous Prindle Post.)

In this context, however, it seems that moral ignorance is probably not unique. That is because we want justice to allow for at least some moral pluralism. If there are no reasonable, though false, moral principles, then the project of democratic pluralism is in serious danger. If we want to live together in society with at least some level of deep moral disagreement, then we should acknowledge the reasonability of people acting on at least some moral principles we think are wrong.

Now, in our society we don’t have widespread reasonable moral ignorance preventing us from reaching herd immunity. But there are at least some real-life cases where it is reasonable to wrongly refuse a vaccine on moral grounds. A good example is those who don’t want to take a vaccine that was created using aborted fetal tissue. This seems to me to be too morally fastidious, but the reasoning is not unreasonable, and I do not think we can expect people to never make that error.

Conclusion

The reason that people refuse the vaccine matters. If they are right to refuse it, then one is just as obligated to take precautionary actions as one was before the vaccine became available. If they are wrong to refuse, then the question becomes whether or not it is reasonable to expect them to take the right action. If their incorrect refusal is explained by a reasonable form of circumstantial or moral ignorance, then justice continues to make the same demands.

Continue to Part III – “Real World Complications”

Considered Position: On Voluntary Non-Vaccination – The Difference Voluntariness Makes

photograph of people walking and biking in masks

This piece begins a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated.

My state, Florida, recently opened up COVID vaccinations to everyone. This does not quite mean that anyone who wants to be vaccinated can be vaccinated. There are still a limited number of vaccines available, so not everyone who wants to get vaccinated has been able to schedule an appointment. But we are getting close to the point where those who remain unvaccinated are unvaccinated by choice.

This raises a question: does the fact that the vulnerable choose to remain vulnerable make a moral difference to what precautions I should observe? I have the strong intuition that this does make a moral difference; it intuitively seems that imposing risks on the unvaccinated is not as bad when the unvaccinated are unvaccinated by choice. (The evidence increasingly suggests that the vaccinated cannot really spread COVID-19, and if that is confirmed it will render much of this practical discussion moot. However, the underlying philosophical questions are important and worth investigating.)

But is my intuition that I can be less cautious correct? 

In this, and two subsequent posts, I will try to answer that question. Each post will be dedicated to one part of an answer.

  • Part I: What principle underlies the intuition that the voluntariness of non-vaccination makes a difference to my own actions? And is that principle a true moral principle?
  • Part II: Does it matter why others are choosing not to be vaccinated? Are there differences, for example, in how careful I should be around someone who avoids vaccination because they think COVID-19 is overblown or around a pregnant mother concerned about the lack of trial data in pregnant women?
  • Part III: How do the complexities of real life complicate the moral calculation? What are the implications of the fact that children cannot get the vaccine without a parent’s permission? And is someone’s choice really voluntary if that person was lied to about the safety of vaccines?

In this first post, I want to investigate what principle might underlie my intuition that I have fewer obligations of caution to those who are voluntarily unvaccinated. To identify the principle at work, it will be useful to start with a simple argument that voluntariness should not make any difference. The thought goes as follows:

  1. During the pandemic, I avoid certain behaviors — such as licking doorknobs — to avoid spreading illness and death.
  2. If someone forgoes vaccination, the reason they forwent vaccination makes no difference to their susceptibility to illness and death.
  3. So, people being unvaccinated by choice makes no difference to my reason to avoid certain behaviors.

Let us call this the ‘simple utilitarian perspective.’ The simple utilitarian thinks that because voluntarily refusing a vaccine conveys no immunological protection, the fact people voluntarily forgo the vaccine makes no difference to my moral calculation. If you are in a community where 40% of people are unvaccinated by choice and I am in a community where 40% of people are unvaccinated because of a limited supply of vaccines, then the simple utilitarian says we are in a morally equivalent position.

The Utilitarian Explanation of the Difference

I call this the ‘simple utilitarian perspective’ because there is a perfectly good utilitarian argument against this reasoning. It is true that it makes no difference to my own risk whether I cannot get a vaccine or whether I choose not to get a vaccine; in either case I am unvaccinated. However, that does not mean that if you compare a random person who could not get the vaccine to a random person who chose not to get the vaccine, the average risk is the same. Assuming people are at least somewhat rational, people at higher risk are more likely to choose to be vaccinated.

Even if utilitarians ultimately care only about happiness, they will still place some value on freedom. When people are free to make their own choices, they can make the choices that are best for themselves. The elderly are at greater risk than the young; as such, the elderly are more likely to choose to vaccinate. Similarly, those who are very healthy — and without any risk factors for COVID-19 — are more likely to forgo vaccination because the risks COVID-19 poses to them are smaller.

All this means that it’s probably safer to resume licking doorknobs once everyone has had the choice to get the vaccine, because those at highest risk will also be vaccinated at the highest rates.

Going Beyond the Utilitarian Answer  — This might partly explain my intuition, but it cannot be the whole story. This is because my intuition persists, even when I know the utilitarian explanation does not apply; for example, even if I know that the person is forgoing a vaccine for a reason unrelated to personal risk — like because ‘vaccines don’t fit with their personal style’ — I still intuitively feel I have less reason to be cautious.

Distributed Responsibility 

Part of the intuition is explained, I think, by the fact that people who are unvaccinated by choice will share some of the responsibility when they get sick.

If the only way to prevent people from getting sick is if I stop licking doorknobs, then by licking doorknobs I take on complete responsibility for their illnesses. However, if there are two ways to prevent people getting sick — I stop licking doorknobs or they get vaccinated — then at worst I am only partially responsible. They share in responsibility by declining the vaccine.

If we imagine other more ordinary behaviors, like frequent grocery shopping rather than doorknob licking, then the other person actually bears most of the responsibility for getting sick. It seems more reasonable to ask them to get vaccinated than to ask me to stay indefinitely in lockdown; the more reasonable the choice you reject, the more responsible you are for the consequences of that rejection.  (This, then, is why you might feel I am mostly responsible if I really were licking doorknobs; licking doorknobs was not a reasonable thing to be doing in the first place.)

This idea, that the choices of others can mitigate our own responsibility, is prominent in both ethics and law. I like how Christine Korsgaard presents the idea in her discussion of our responsibility for the consequences of lying:

“In a Kantian theory our responsibility has definite boundaries: each person as a first cause exerts some influence on what happens, and it is your part that is up to you. If you make a straightforward appeal to the reason of another person, your responsibility ends there and the other’s responsibility begins. But the liar tries to take the consequences out of the hands of others; he, and not they, will determine what form their contribution to destiny will take. By refusing to share with others the determination of events, the liar takes the world into his own hands, and makes the events his own.”

Going Beyond the Distributed Responsibility Answer — But if this is the explanation of the intuition, then we have a problem. There is something morally vicious about someone who is solely concerned with avoiding responsibility. The virtuous reason to take precautions is not to avoid responsibility for someone’s death, it is to save people’s lives.

To see this, let’s look at an example from my own life (an example I still look back on with shame).

Years ago, an acquaintance of mine expressed an intent to commit suicide. I became deeply distressed, was unsure how to proceed, and grew paralyzed by indecision. So, I reached out to two mentors of mine, both of whom had experience working with suicidal people.

Reaching out was the correct thing to do; I did not know how best to offer help. The problem was the reason I reached out for help. Ultimately, it was not so that I could better support this acquaintance. Rather, I was racked by anxiety about messing up and becoming responsible for the person’s death. I reached out to these mentors because I knew that it would be irresponsible to not follow their advice. Deep down, I wanted to reach out because that way, even if the person did kill herself, at least I would not be blameworthy.

Why think this is morally perverse? Most simply because my own guilt was not the important good at stake in the choice. The thing that mattered was my acquaintance getting the help she needed; decreasing my own culpability if things went badly was not anywhere near as important! (For a more detailed discussion of the way in which a concern for our own responsibility distorts our moral reasoning, see Elizabeth Anscombe’s article “On Being in Good Faith.”)

Reasons of Justice

Even though we should not be strongly motivated by a concern to avoid responsibility, there is a close connection between what we should do and what we would be responsible for not doing. So, this difference in how responsible I would be if someone gets sick might not explain why I have weaker reasons to take precautions, but it is evidence that my reasons are weaker.

But if I do have weaker reasons, then that must mean that my reasons to take precautions are not quite as simple as “I have reasons to keep people from getting sick.” And this is the key to unlocking the puzzle. While I do have reasons to lower the risk that other people get sick, I have especially important reasons of justice to give people control over their own risk.

Before the vaccine is widely available, if I go around engaging in ordinary risky activities, I impose risks on others that they cannot reasonably avoid. They have no control over whether what I do poses a risk to them. As such, it is reasonable to expect me to forgo certain activities for the sake of maintaining some minimal freedom for others.

After the vaccine is widely available, however, the risks I impose on others are risks that can be reasonably avoided. Others have control over how large a risk my frequent grocery shopping imposes on them. People have the option of safety. Whether they take that option makes some difference to my reasons for infrequent grocery shopping, but it is a less stringent reason than my reasons of justice to avoid imposing unavoidable risks.

Justice is that virtue which enables us to live our own life in community with others; as such, it is the virtue that sets boundaries on what I can choose, where those boundaries mutually accommodate the choices of others. We can drive faster now that every car comes equipped with seatbelts. Why? Not because everyone always uses their seatbelts, but because everyone having access to seatbelts ensures that everyone has a free option that allows them to maintain their previous level of safety even as I start driving faster on highways.

Justice is focused on whether people have the option of health, not on whether people are actually healthy. For example, justice requires that we provide those who are starving with food, but it does not require us to force-feed someone who refuses to eat. Were this not true, then justice could actually harm our ability to live our own life in concert with others by giving rise to certain kinds of moral blackmail. Suppose I have no objection to being vaccinated and a high personal risk tolerance. As such, I insist that unless you pay me one hundred dollars I will not go and get a vaccine. If your duties of justice meant that, as long as I forgo the vaccine, you cannot return to pre-pandemic activities, then I would be able to hold your actions hostage by means of your duty of justice.

Justice, of course, is not the only virtue. I also have duties of charity. And indeed, one of the things that makes charity so demanding is precisely that it opens us up to this kind of moral blackmail. To love another person requires caring about even their self-made misery. Charity is not ultimately about living your own life; it demands instead that you live, at least in part, for others. This is why charity is such a high and holy virtue; and in turn why even if everyone who forgoes a vaccination does so for entirely voluntary reasons, that does not end all duties of precaution.

Conclusion

Of course, in real life things are a little more complicated. For example, some people are forgoing the vaccine for trivial reasons while others seem to have reasonable concerns. Does my duty of justice change depending on why others are not being vaccinated? That will be the topic of Part II.

Continue to Part II – “Types of Reasons”

Liability and Luck

photograph of lone firefighter standing before small wildfire blaze

In the unlikely event that you have not yet experienced your daily dose of despair concerning the fate of humanity, I’d highly encourage you to read Elizabeth Weil’s ProPublica piece “They Know How to Prevent Megafires. Why Won’t Anybody Listen?” The article makes two basic points. 1) Extensive controlled burns would be an effective precautionary strategy that would prevent recurring megafires. 2) There are political and financial incentives which trap us in reactionary rather than precautionary fire strategies.

There are clearly lots of perverse incentives at play, but one part of the article was especially interesting:

“How did we get here? Culture, greed, liability laws and good intentions gone awry. There are just so many reasons not to pick up the drip torch and start a prescribed burn even though it’s the safe, smart thing to do. . . . Burn bosses in California can more easily be held liable than their peers in some other states if the wind comes up and their burn goes awry. At the same time, California burn bosses typically suffer no consequences for deciding not to light. No promotion will be missed, no red flags rise. ‘There’s always extra political risk to a fire going bad,’ Beasley said. ‘So whenever anything comes up, people say, OK, that’s it. We’re gonna put all the fires out.'”

It is risky to engage in controlled burns. Things can go wrong, and when they do go wrong it could be pretty bad: someone could lose their home, maybe even their life. Of course, it is far riskier, in one sense, not to engage in controlled burns. So why, then, are our incentives set up the way they are?

At least two different explanations are likely at play.

Explanation 1: Action vs Inaction. First, in general, we are more responsible for actions than for inactions. The priest who ‘passed by on the other side’ of a man left for dead did something terrible, but did not do something as terrible as the thieves who beat the man up in the first place. As a society we jail murderers; we don’t jail the charitably apathetic, even if the apathetic are failing to save lives they could save.

And indeed, this point does have an appropriate corollary when talking about fire suppression. I am not responsible for houses burning in California — this is true even though last spring I could have bought a plane ticket, flown to California, and started burning stuff. Had I done so, things would likely have gone terribly wrong, and in that case I really would have been responsible for whatever property I had destroyed. This seems appropriate: it could be catastrophic if my incentives were structured such that I was punished for not starting vigilante fires.

Elizabeth Anscombe gives us a similar example. If the on-duty pilot and I are both asleep in our cabins, then we are doing the very same thing when our ship hits an iceberg. Yet it was the pilot, and not I, who sank the ship. Indeed, had I, a random passenger, tried to navigate the ship, we would absolutely have held me responsible when something went wrong.

So, what is the principle here? Is it that amateurs are specially responsible for actions? No, because we can also identify cases where we indemnify amateurs for their actions. Perhaps the best example here is good Samaritan laws. These laws protect untrained people, like myself, if we make a mistake when trying to render emergency first aid.

What is really going on is that we don’t want passengers trying to navigate ships. Nor do we want aspiring philosophers attempting unsupervised controlled burns in California. But we do want pilots to navigate ships, and we do want burn bosses attempting controlled burns. As such, we should construct incentives which encourage that, and protect people from culpability even if things occasionally go wrong.

Explanation 2: Causal Links. Second, we trace responsibility through causality. Because you caused a house to burn down, you are, at least partially, responsible for that damage. The problem is that it is almost always easier to trace causality to actions than to inactions. We can identify exactly which active burning causes damage. We can easily say, “the fire you started on February 14th destroyed these two houses.” It’s much harder to say “the not-burning that you didn’t do on February 14th was what allowed the fire to get out of hand.”

And indeed, we probably can’t really hold people responsible for any particular failure to burn. We can hold people responsible for how much controlled burning they do in general, but we can’t trace causal paths to hold them responsible for any particular bad result of inaction. Indeed, it would be unfair to do so; no burn boss can foresee when a particular failure to burn will destroy a house (in the way they can sometimes foresee when burning in a particular area might destroy a house). This creates a problem, though. Because we can’t hold people fully responsible for their inaction, we must hold people disproportionately responsible for their actions, thus perversely incentivizing inaction.

This also parallels our interpersonal lives. For example, we generally want people willing to think for themselves. But we are also far more likely to condemn people for reaching terrible views they came up with themselves than for failing to recognize what is wrong with the conventional view. This can create perverse incentives, however. It might really be true that we are justly responsible for coming to terrible conclusions, but because it is so hard to hold people responsible for the majority view it might be important to forgive even egregious mistakes to keep incentives favoring original thought.

So here is the general point. Assessing responsibility is far more complicated than just establishing whether someone played a causal role. Sometimes holding people responsible for things they really should not have done can perversely disincentivize people from taking risks we want them to be willing to take. The fires in California give one clear example of this, but the point generalizes to our lives as well.

Military Operations and Questions of Collective Responsibility

photograph of soldiers in uniform saluting

On January 3, while at a ceremony for Evangelical Christians in Miami, Donald Trump announced the execution of Iranian General Qasem Soleimani, saying “…he was planning a very major attack and we got him.”

Two days later, multiple news outlets reported that the United States will be deploying roughly 3000 soldiers to the Middle East in response to escalating tensions, with possibly several thousand more to follow.

On Tuesday, several hours before Iran attacked an Iraqi military base housing US troops, U.S. Defense Secretary Mark Esper told CNN that “We are not looking to start a war with Iran, but we are prepared to finish one.”

Although Trump and Esper are clearly referring to groups of people when they say “we” have done (or will do) such things, it is far from clear exactly who they understand to comprise those groups. Is “[the military] prepared to finish a war with Iran” or does Esper mean “[the American people]”?

Similarly, attributing responsibility to collective nouns like “the United States” is vague – what portion of US citizens, for example, made the decision to deploy troops overseas? Clearly, since citizens do not directly vote on either federal or military operations, such a question is confused in several ways. So, perhaps, “the US” should be understood as an abstract concept along the lines of “a nation-state that is different than the sum of its parts” with some individual or sub-group (like “the government”) responsible for making practical decisions.

This is a small example of what philosophers call “The Problem of Collective Responsibility.” Many considerations of the nature of blameworthiness are interested in questions of individual culpability – “what do I deserve as a consequence of my own actions?” However, some philosophers have suggested that collections of agents can be viewed as culpable (or innocent), such as hate groups or terrorist organizations – but this raises a host of questions. Transferring between group-based blame and individual culpability is tricky (if one soldier commits a war crime, should his entire unit be held responsible for it?). Internal disagreement within a group seems problematic as well (is it right to hold a full group responsible for something if 62% of the less-powerful individuals in the group disagreed with the decision? What if only 43% protested?).

Nevertheless, collective-responsibility models are not without precedent. For centuries, the just war tradition has relied on distinctions between “combatants” and “non-combatants” to codify its rules for jus in bello; consider the statement released on Sunday by Hezbollah, which threatened all US military personnel but explicitly stated that US civilians should not be targeted.

So, consider the soundbite “We got him” – who is the “we” actually responsible for killing Soleimani? Multiple interpretations of Trump’s term “we” seem possible:

  1. The individual pilots of the drone that killed Soleimani,
  2. The military unit engaged in the attack,
  3. The military unit and its line of commanding officers (up to and including the Commander-in-Chief),
  4. The US military as a whole,
  5. The US military and the US government as wholes,
  6. The collective citizenry of the US,
  7. The nation-state of the US (as an abstract concept),
  8. The particular group of people in Miami where Trump was delivering his speech.

And this list could go on.

By saying “we” (as opposed to “they”), Trump includes himself in the responsible group, ruling out options 1 and 2. It seems like option 8 could also easily be rejected, but it also seems reasonable to think that Trump was attempting to include his audience in his celebration, at least in part, thereby ruling out options 3, 4, and 5.

If Trump means “[the United States as a collection of citizens] got him” (that is, if he means option 6), then he’s attributing responsibility for Soleimani’s death to millions of people (including children) who have never heard of Soleimani, have never voted, and – in many cases – would explicitly reject such an operation if they had the option to do so. Each of these outcomes seems, at best, odd.

So, at this point, option 7 – the “US as an abstract concept” choice – appears to be the least problematic. Admittedly, this is the sort of tactic we take in other contexts to explain how group identities remain constant over time, even as group membership fluctuates (the 1997 Colorado Rockies and the 2019 Colorado Rockies are, in some sense, the same baseball team, despite no player from the 90s remaining on the roster). But abstract constructs cannot be held morally responsible – only individuals can! If every member of the 1997 Rockies were found to have been using steroids throughout their season, it would be unjust to punish the 2019 Rockies because the individuals are different. If a group cannot be blamed in this way, then it seems it also cannot be praised in this way, leaving Trump’s “we” puzzling once more.

Collective responsibility problems are messy and far from intuitively obvious. This point is always useful to remember when listening to representatives of organizations or governments, but it is especially important when war drums are starting to beat.

Determining Moral Responsibility in the Pizzagate Shooting

On December 4th, North Carolina resident Edgar Welch walked into Comet Ping Pong, a Washington, D.C., pizza restaurant, with an assault rifle strapped to his chest. Inside, he reportedly fired several shots and pointed his rifle at a Comet Ping Pong employee as the restaurant’s patrons scattered. No bystanders were injured, and once Welch failed to find what he came for, he surrendered to police.

This week, Welch will return to court in relation to the incident at Comet Ping Pong, a dramatic turn in what has become known as the “Pizzagate” conspiracy. For weeks prior to the attack, online conspiracy theorists had besieged the restaurant with baseless accusations that it had conspired with politicians like Hillary Clinton to traffic and abuse young children. Welch reportedly latched onto these conspiracies, ultimately deciding to take matters into his own hands through a vigilante “investigation.” While Welch’s legal guilt may seem straightforward, the ethical questions his case raises underscore the complexities of moral responsibility in the time of fake news.

Continue reading “Determining Moral Responsibility in the Pizzagate Shooting”

The Gentrification of Hip-Hop

Hip-hop music began in the 1980s, primarily as a means for African American communities to express commentary and frustration about politics, discrimination, and shared struggles, often rooted in race relations. Crucially, music was being used to give voice to a people that has traditionally been suppressed or discounted because of the effects of systemic racism in American political institutions. One of the most significant groups to pioneer this genre was Public Enemy, whose music focused largely on sociopolitical commentary.

Continue reading “The Gentrification of Hip-Hop”

So I am a racist. What do I do now?

This post originally appeared on October 27, 2015.

Like most human beings, I grew up imbibing racist stereotypes. Since I am Italian, those stereotypes were to some extent different from the kind of stereotypes I would have acquired had I grown up in the United States. For instance, I thought all people “of color” were exotic and more beautiful than “Whites”. This positive, and yet still damaging, stereotype included Black women and men, and Asian men, who in the American dating market are known to be greatly disadvantaged.

My personal attitude was to some extent reflective of Italian culture. The fascination with women of color, for instance, is fairly widespread among Italian men, as you would expect given Italy’s colonial past and its relatively racially homogenous present.

When I started visiting the US academically more than ten years ago, I grew accustomed to a much more sophisticated discussion about race, and went through an awkward and often painful process of realizing how implicitly racist I was. I learned that asking “Where are you really from?” to a Seattle native of Korean descent was racist, or at the very least racially insensitive. I realized the tricky undertones of many expressions that I deemed simply descriptive, such as “Black music”. And I found out, much to my surprise, that even my aesthetic appreciation for non-Caucasian people was highly suspect.

I also discovered that Black women are supposed to be bossy, angry, and dependent on welfare, and that Black men are supposed to be criminals and absent fathers; that East-Asian men are supposed to be unattractive and effeminate, and all Asian women submissive; that Asians in general are good at science… Some of these stereotypes were somewhat in line with my own culture’s, if not necessarily my own, but some were a complete surprise, and that surprise, that sense of “I would never think that” gave me an unwarranted sense of reassurance. When taking the IAT, I even compared positively to White Americans with regard to implicit bias toward Native Americans. So I thought: now that I know all this stuff about race, and given that I am a committed anti-racist, I’ll get rid of all the bad stuff, and I’ll stop being racist!

But, in fact, it didn’t go quite like that… When walking in segregated New Haven, seeing hooded Black men walking behind me made me nervous. I was very aware and ashamed of my own nervousness, but I was nervous nonetheless. Later on, when living in the United Kingdom, I found myself mistaking Black men for store employees. These are only two of the most unnerving instances of my implicit racism surfacing to my uncomfortable consciousness.

And it doesn’t even stop at race: I have become aware of many other forms of discrimination over the years, and that has greatly increased my capacity for catching myself being implicitly homophobic or transphobic, fattist, ableist, and so forth. But, in fact, it seems to have only increased my awareness, not my ability to be less biased.

Philosopher Robin Zheng, whose research is on moral responsibility and implicit bias, has reassured me that I am not alone. Empirical research confirms that fighting implicit bias requires a lot more than just informing people about the reality of discrimination.

This research wouldn’t be surprising to those familiar with more general work on implicit reasoning. For those who are not, I find useful an ancient metaphor from the Buddhist tradition popularized by Jonathan Haidt in his acclaimed pop-psychology book The Happiness Hypothesis. The metaphor describes the human mind as composed of an elephant and its rider. According to Haidt, the elephant roughly corresponds to what has been called System I in dual-processing accounts of reasoning: a system that is old in evolutionary terms and shared with other animals. This system comprises a set of autonomous subsystems that include both innate input modules and domain-specific knowledge acquired by a domain-general learning mechanism. System I is fast, automatic, and operates below the level of consciousness. The rider roughly corresponds to System II: a system that is evolutionarily recent and distinctively human. System II permits abstract reasoning and hypothetical thinking, and is slower, controlled, and conscious. “The rider evolved to serve the elephant,” says Haidt, and while it may sometimes override it, or trick it into obedience, “it cannot order the elephant around against its will” (The Happiness Hypothesis, p. 17).

This tension between the rider and the elephant has many different manifestations, but one that is particularly relevant to the discussion of implicit biases is the case of mental intrusions. If we are explicitly asked not to think about a white bear, all we can think of is, you guessed it, a white bear. This ironic process of mental control is the consequence of automatic and controlled processes firing at each other: the request not to think a certain thought activates System II, which attempts to suppress the thought. System I activates automatic monitoring of one’s progress, which in this case means continuously checking whether one is not thinking about a white bear. That move turns out to be obviously counterproductive, since it reintroduces the thought that one is supposed to ban. But “because controlled processes tire quickly, eventually the inexhaustible automatic processes run unopposed, conjuring up herds of white bears” (The Happiness Hypothesis, p. 20). Dan Wegner, who first studied ironic process in a lab setting, has shown that it also affects people who try to repress unendorsed stereotypes.

While there is interesting research addressing more productive and effective ways of fighting implicit bias and stereotyping, I want to conclude with a remark about the implications of this empirical literature for microaggressions, a topic that has gained much attention recently.

I largely disagree with Haidt’s criticisms of trigger and content warnings in academic settings, for reasons well-articulated by Regina Rini and Kate Manne. But I do share his attention to underlying psychological mechanisms, and I worry that they are sometimes neglected in the political commentary.

Committed anti-racists are unlikely to engage in overtly prejudiced behavior. However, they may still find themselves inadvertently engaging in microaggressions such as those I described at the beginning of the post: inappropriate jokes or questions, or bona fide mistakes stemming from deeply-ingrained stereotypes. The elephant acts against the rider’s wishes, or even awareness: when something that has been internalized as a threat (such as a hooded Black man) appears in view, the elephant doesn’t hesitate, and kicks the rider in the shins, making it jump. The rider will take one or two seconds to realize that there is in fact no threat, and that will be too late: the jump was visible, the offense taken, the harm done.

Not fully understanding how powerful these unconscious mechanisms are affects not only our moral assessment of the perpetrators (which can also be self-assessment). It also produces condemnatory reactions that, while appropriate in theory, are not necessarily fertile in practice, such as a certain relatively widespread paralyzing White guilt of well-intentioned liberals, who go around admitting their White privilege without knowing exactly what to do about it. Realizing that some of the mechanisms motivating our behavior are outside of our direct control allows us to focus on indirect ways to modify our behavior, and to shift from a sterile admission of White privilege to a more proactive commitment to changing the institutional injustice that gives rise to it. You can’t order the elephant at will, but you can change the environment it is raised in.