
“Not Like Other Girls” and Internalized Misogyny

photograph of two young women of different attitudes

If you were a young person with internet access in the early 2010s, you'll almost certainly be able to visualize the "Not Like Other Girls" meme, which proliferated on sites like DeviantArt and Reddit about a decade ago. Two girls stand next to each other; the one on the left (usually blonde and dressed head to toe in pink) is labeled "other girls," and the one on the right (usually brunette, and less cartoonish than her counterpart) is labeled "me." The "other girl" is the archetypal mean high school cheerleader. She wears makeup and lip gloss, loves boy bands, and is ostentatiously vain about her appearance. The girl representing the artist, while rarely overtly tomboyish, rejects traits associated with traditional femininity. She eats voraciously, reads books, wears baggy or modest clothing, and turns up her nose at pop music. She's quirky, unpolished, and raises an eyebrow with condescending confusion at her blonde neighbor.

A thousand versions of this image exist; sometimes the "normal" girl sports a mohawk and leather jacket, other times she's holding an Xbox controller. Regardless of the finer details, the inherent silliness of this dichotomy, and the presumptuous superiority of the "normal" artist, were easy to mock. The meme rose to popularity because it spoke so directly to the experience of preteen tomboyish girls (or really any girl who felt alienated from her peers), but it experienced a wave of backlash as mainstream culture became more sensitive to feminist issues. The "other girl" is almost always a caricature of offensive stereotypes, which is why many have viewed the original meme as a manifestation of the artist's internalized misogyny.

Internalized misogyny happens when we absorb and regurgitate sexist stereotypes, often subconsciously. Even the most diehard feminist is not completely immune to patriarchal socialization, which is why young women are encouraged to be vigilant with regard to gender norms. A 2009 study on this phenomenon published in the Journal of Integrated Social Sciences explains the manifestations of subconscious sexism:

“women and girls may learn to have low expectations of their capabilities . . . may be treated as if they need to be taken care of . . . may be criticized or ostracized for being assertive, visible, or outspoken, may find their opinions discounted, may be disliked as leaders unless they fit female stereotypes by acting nurturing, may be valued and appreciated primarily for their looks, bodies, or sexualities, may face expectations that they will spend considerable time and money modifying their physical appearance, may need to manage unwanted sexual attention or physical contact from men, [and] may be expected to act passive in sex, dating, and relationships.”

The study notes that this process usually begins in the middle school years, when girls are encouraged to relinquish their preadolescent androgyny and conform to femininity. Girls are made to feel incompetent and powerless, and then pressure other girls to conform with standards that hurt all women.

But at the same time, it’s hard to slap an “internalized misogyny” label on this meme and call it a day. After all, internalized misogyny is more than just a dislike for other women; it has to do with reinforcing power structures. Scholar Greta Olson explains that

“Within a system of hegemonic masculinity, women who have successfully internalized misogyny will be rewarded to the degree that they uphold and enforce the structures of this system to the detriment of other women who are less compliant. By contrast, such women will be treated with hostility who refuse to hold up the prevalent system of male privilege.”

In other words, feminine pursuits may be denigrated, but any girl who steps outside those pursuits is doubly ostracized. Sexism remains a double-edged sword, a fact that this meme (albeit unintentionally) captures with its simplistic dichotomy.

At the same time, the notion that gender-non-conforming girls are somehow too confident or arrogant, and that their unwillingness to conform to social expectations is a ploy to gain male attention or approval (as is so often implied in parodies of the original meme), is deeply misogynistic in itself. As Anusha Ashim explains,

“Many of these anti-’I’m not like other girls’ memes mock the girl on the other side rather than stating that both are equal. The unfeminine girl is portrayed as unhygienic, lazy, bitter, and even jealous. Things like baggy clothing, dark hair, and types of music are associated with her. We must ask ourselves this: Why are we creating another sexist caricature to prove that a sexist caricature is false?”

We’ve reached a point where any criticism or minor aversion to traditional femininity is labeled as internalized misogyny; even a dislike of the color pink, which many women were practically force-fed as children, is deemed a manifestation of their hatred for other women. Internalized misogyny is extremely hard to unpack, and often pits women against one another for arbitrary reasons. Empathy is required on both sides of the artificial divide if we’re to achieve any substantial form of gender-consciousness.

Considered Position: On Voluntary Non-Vaccination – Real World Complications

photograph of child among masked crowd

This piece concludes a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated. To see the earlier segments, start here (Part II).

In Part I of this series I tried to explain the moral intuition that I have less reason to engage in precautionary social-distancing behavior once those who are unvaccinated are unvaccinated by choice rather than because they could not access the vaccine. This intuition, I argued, is explained by the fact that justice mainly requires us to give people reasonable options for safety and does not require us to make others safe.

In Part II, I looked at what it takes for an option to be reasonable, and we realized that this is often sensitive to the reasons that someone does not want to get a vaccine. If someone’s objection to the vaccine is reasonable, then justice may still require me to adopt precautionary behavior.

In both posts, I idealized the questions at hand. I ignored real life complications, and just tried to identify the abstract principles involved. For that reason, my cases were often unrealistic, involving such absurdities as the unvaccinated engaging in moral blackmail or people taking pills to make themselves allergic to the vaccine.

In this final post, I want to turn to the real world and look at some of the complications which make this a difficult ethical question.

Complication 1: We judge the reasonableness of others in biased ways

We saw in Part II that if the other person has reasonable grounds for refusing the vaccine, then justice still requires me to protect them by social distancing (even if their grounds are mistaken). One challenge, however, is that we tend to be biased in our own assessments of what are reasonable grounds.

Consider, for example, the following two suggested grounds of vaccine hesitancy:

Skepticism 1: Distrust of a Racist Healthcare System

Some Black people in the U.S. are reluctant to get a vaccine due to distrust of the American medical system. While this is sometimes attributed to historical injustices, like the Tuskegee study, it is more plausibly explained by current disparities in health care treatment. (It also, as a whole, might just be overblown; but we will put that aside for now.) The thought might go as follows:

“As Ben Almassi has argued in the context of organ donation, there are good grounds, given persistent racial health inequities, for Black people in the U.S. to distrust that the medical system has their best interest at heart. But if one has good reason to distrust the health system, then one also has good reasons to distrust the recommendations of the health system. This is especially true because we know that drugs and vaccines can sometimes affect different racial groups differently, and we also know that Black people tend to be massively underrepresented in vaccine trials (even when those rates are reported).”

Skepticism 2: Distrust of the Liberal Narrative

Some conservatives are reluctant to get the vaccine due to distrust in the way that mainstream media portrays medical information. They might say things like:

“Putting aside worries that the COVID threat was overhyped to sink former President Trump’s reelection chances; we have seen a systematic tendency for the media to provide unreliable coverage on how to respond to the vaccine in order to fit its preferred political narrative. First, we see the same caseloads suggest different reactions depending on who is in charge. The week after President Biden was sworn in, long before any change in policy could have altered risk, blue states began opening up and schools began seriously trying to return students to in-person education. That was true, even though the absolute case numbers were still worse than they were in the summer when everyone insisted things needed to be shut down under President Trump.

Then, of course, ‘the Scientists’ consistently lauded Democratic governors like Andrew Cuomo and panned Republican governors like Ron DeSantis. And yet, we have consistently found that the media narrative was backwards. Florida, despite an extremely old population, came out of the pandemic pretty well, with a much stronger economy, and with many more kids staying in school. This is not just cherry-picking. Republican states, on average, had fewer deaths and caused significantly less damage to the economy. Then ‘the scientists’ told us to take our kids out of school, but didn’t object to massive unmasked Black Lives Matter protests. ‘The scientists’ told us not to wear masks, until they needed a social symbol for purposes of public shaming.”

I bring up these two explanations of skepticism, not because I find them equally plausible, but because I don’t find them equally plausible. Intuitively, I find the first reasonable and the second ludicrous. But here is the issue: it’s hard to identify any objective reason the first is more reasonable. Sure, it fits better with my own preferred political narrative; but I think there are decisive objections to both lines of reasoning, and I don’t think the errors in one are in any sense more egregious than the errors in the other.

The danger, then, is that I am more sympathetic to members of what I see as my political in-group. But that differential in sympathy means I’m especially likely to inconsistently apply a standard as squishy as ‘reasonable.’

I don’t have a good solution here, and so just advise extreme caution when you label political allies reasonable or when you label political opponents unreasonable.

Complication 2: Immunization isn’t up to some people

While there are few, if any, groups in the U.S. for whom it would be dangerous to get the vaccine, there is some evidence that immunocompromised patients, while they can safely get the vaccine, do not always produce the required antibodies.

Similarly, there is a group of people in the U.S. who cannot choose to vaccinate: children. This is true in the limited sense that the vaccines are not currently approved for use in children below the age of 16. But it is also true in the sense that, even once the FDA approves the vaccine for children, children cannot choose to be vaccinated without a parent’s permission. Unvaccinated children, then, might not be unvaccinated by any choice of their own.

These are important complications, but I’m not sure that on their own they would show you must socially distance until we reach herd immunity (on the hypothesis that there are a large percentage of vaccine skeptical holdouts). Children are far less susceptible to COVID-19, and only a very small portion of the population are severely immunocompromised. Given these facts, the threat posed to children and the immunocompromised is far smaller than the risk posed by pre-pandemic activities when most people did not have access to the vaccine. Certainly, you should engage in some precautionary measures, especially if you know you are likely to be around someone who is immunocompromised. But it is unclear that those are any different than the ordinary obligations one has during flu season.

Complication 3: Deception and consent

One further complication is that deception tends to undermine voluntariness. For example, if I lie to you about the results of a surgery, then your consent to that surgery does not actually constitute voluntary consent. Similar issues arise about sexual consent.

Or suppose you told your friend that you would pick them up from the airport. But then I, just for the fun of it, lied to you and told you that your friend's flight was delayed, that they were not able to reach you, and that they don't need a ride after all. If you then fail to pick your friend up from the airport, your breaking of the promise was involuntary. It was involuntary because I am the one who bears responsibility for your failure.

Now, if it is true that deception can undermine voluntariness, then one worry we might have is that there may be a good number of people who refuse the vaccine because they were lied to, and if so, it is those who lied who bear the actual responsibility for the non-vaccination.

One reason this point is important to notice is that a lot of people are especially likely to think that those with unreasonable reasons for refusing the vaccine accept those reasons because they are being lied to by their media ecosystem. Thus, many on the left think the vaccine hesitancy on the right is ludicrous, but those same people on the left are also likely to think that Fox News, OAN, or Newsmax are systematically deceiving their viewers. Similarly, many on the right think that concerns of racism are blown way out of proportion, but those same people on the right are also likely to think that mainstream media organizations — like CNN or The New York Times — are providing systematically distorted information on those very issues.

Indeed, it is not just cases of outright lying that might trigger a shift in responsibility. Not only do I preclude the voluntariness of your action by lying to you, I do the same thing if I tell you something false when I should have known better. If I tell you something false but am really honestly trying to tell you the best I know, then your actions are still voluntary. You made the choice to trust me in good faith. But if I am not acting in good faith myself, then I am the one at fault when you err.

Conclusion

So once vaccines are widely available (such that the unvaccinated are mostly unvaccinated by choice) but before we reach herd immunity (due to widespread vaccine hesitancy) can you return to pre-pandemic behavior?

As we’ve seen, this is a difficult question. However, it seems likely that the right answer is generally yes. For the most part, because it is reasonable to expect people to get the vaccine, it is reasonable to return to behaviors that would be safe were others to be vaccinated. This is true, even without factoring in the fact that the vaccinated are very unlikely to spread COVID. And so, it does seem like justice allows life to return to normal.

However, we have also learned an important moral lesson about what it takes to justly live together in a society. For justice to allow us to return to pre-pandemic activities, it is not enough that the vaccine be widely available. It also depends on other people being able to voluntarily refuse the vaccine. And as it turns out, there are complicated ways in which we can undermine the voluntariness of others' actions. When we are not fastidious about what we tell others, we risk undermining their capacity to make voluntary choices. If I thoughtlessly spread misinformation or repeat something as fact that I am only pretty sure of, then I undermine one of the fundamental building blocks of our system of justice.

My own testimonial irresponsibility undermines the voluntariness of those who believe me in good faith. And systems of justice largely depend on the mutual voluntariness of everyone’s choices. This is one reason why lying and bullshitting are such profound moral wrongs. It is not just that others end up with some wrong beliefs (we all have a bunch of misguided convictions), but that other people are rendered, in a sense, passive in relation to your choices. By breaking down even a small portion of the reciprocal norms of justice, you don’t just lie to another but partially bring the whole system of justice down on top of you.

On Climate Refugees and Captain America

image of faded Captain America shield

WARNING: The following article contains spoilers for all six episodes of The Falcon and the Winter Soldier on Disney+.

After the release of Avengers: Infinity War, the 2018 entry in the Marvel Cinematic Universe that ended with the villainous Thanos snapping his fingers to magically erase half of all life in the universe, the internet lit up to debate the (im)morality of his actions. According to the movie, the character’s motivations were, arguably, altruistic (because after seeing his own planet succumb to resource depletion and overpopulation, the “Mad Titan” reportedly wanted to prevent similar sufferings elsewhere). In this way, Thanos joined Black Panther’s Eric Killmonger, Spider-Man: Homecoming’s Adrian “The Vulture” Toomes, and Captain America: Civil War’s Baron Zemo in the ranks of “MCU Bad Guys who might be making some Good Points.” Of course, however defensible or understandable their philosophies might be, the murderous brutality exhibited by each antagonist has consistently kept the MCU’s moral dichotomy more-or-less clear; just as superhero comics have been called “moral pornography” for their oversimplified and exaggerated depictions of good and evil, superhero movies are rarely different.

Although it isn’t, strictly speaking, a movie, the latest MCU story — The Falcon and the Winter Soldier, a limited-series on the Disney+ streaming service — follows basically this same vein, featuring an enemy whose message is far more sympathetic than her methods. In brief, the six-episode miniseries focuses on Avengers characters Sam “Falcon” Wilson and Bucky “Winter Soldier” Barnes as they work to smooth out the geopolitical chaos provoked by their team’s defeat of Thanos in 2019’s Avengers: Endgame. Although the Avengers were able to undo Thanos’ Snap and bring the people killed in Infinity War back to life, it took them five years to do so and, in the meantime, the world soldiered on. During the period between movies (now referred to in-universe as the “Blip”), humanity had done its best to rally together in the anarchy that followed half the globe’s sudden disappearance; the equally sudden return of all those missing people a half-decade later, although joyful in many ways, once again upset the fragile balances built during the Blip. As the series opens, the international Global Repatriation Council has taken charge of the situation and is in the process of essentially “resetting” Earth back to the way it was before the Snap; against this, Karli Morgenthau leads a team of insurgents called the Flag Smashers in an attempt to salvage the more unified way of life they had created in the Blip.

A key thread in the series is the upcoming GRC vote to deport refugees displaced by Thanos’ Snap back to wherever they lived before the Blip. From the perspective of the Council, this would solve many problems: as one character insists in a later episode, imagine a situation where someone was killed by the Snap, returns to life five years later, and discovers that someone else has moved into their house in the interim — who is the house’s rightful owner? In order to simplify these kinds of murky questions, the series sees the GRC poised to forcibly displace thousands of people, many of whom had managed to forge better lives for themselves after the Snap. As Sam explains at one point: “For five years, people have been welcomed into countries that had kept them out using barbed wire. There were houses and jobs. Folks were happy to have people around to help them rebuild. It wasn’t just one community coming together, it was the entire world coming together.” In short, although Karli and the Flag Smashers are initially described as just wanting a “world that’s unified without borders,” their actual goals are more focused on the often-ignored needs of the world’s refugees — particularly those who would be demonstrably harmed by simply “going back to the way things were” before the Blip.

Granted, the Flag Smashers are also revealed to have acquired Captain-America-esque strength and stamina (after double-crossing a Madripoorian crime boss) and they use their newfound superpowers to kill more than a few GRC agents in their crusade to stop the vote — no matter how sympathetic the cause, comic book logic (not to mention corporate incentives and, at times, outright propaganda) demands that Karli and her friends ultimately play a “Bad Guy” role for the MCU (even as one of their own is savagely executed in broad daylight by John Walker, an enraged American agent). Still, the show ends with Sam — as the new Captain America — chastising the rescued GRC leadership for effectively ignoring the refugees, giving at least some credence to the (at that point, mostly dead) Flag Smashers and their message.

Indeed, it’s hard not to sympathize with a group of people who, through no fault of their own (and as an explicit consequence of others’ recklessness), are displaced from their homes and forced into poverty. In a similar way, real-world philosopher Rebecca Buxton has argued that we should attend more carefully to the needs of real-world refugees forced to flee their homes as a result of climate change. Although rising global temperatures make for much less exciting action sequences than a purple-skinned alien fighting the Hulk, their threat is significant and their result is roughly the same: recent years have seen as many as 20 million people become climate refugees for one reason or another. Buxton points out, though, that these displaced citizens are predominantly not from those nations most responsible for the carbon emissions and other pollutants contributing to climate change; for example, the nation of Tuvalu was projected to become the first carbon-neutral state, but is now facing submersion as sea levels rise. Consequently, although debates about climate refugees tend to focus on compensation for certain, specific harms, Buxton instead contends that refugees are owed reparations (which can only be paid by those who actually bear responsibility for the damages). Although this burden of proof is more difficult to satisfy, Buxton argues convincingly that it is possible, at least in principle, to identify specifically who owes whom what before leaving it to policymakers to work out the applications for specific cases.

So, if you enjoyed The Falcon and the Winter Soldier and found yourself agreeing with the new Captain America that Karli and her friends should not simply be written off as “terrorists,” it might be prudent to consider some of the real-world counterparts of the refugees that the Flag Smashers were trying to help.

(It might also be wise to consider how Buxton’s defense of reparations might relate to the story of Isaiah Bradley and the other ways that the miniseries engages with race and racism in America, though I’ll leave that topic for a different article.)

Considered Position: On Voluntary Non-Vaccination – Types of Reasons

photograph of masked and unmasked people in a crowd

This piece is part of a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated. To see the earlier segments, start here.

Hopefully pretty soon, my state, Florida, will reach the point that anyone who wants to be vaccinated can be vaccinated. In Part I of this series, I argued that once we reach that point, I have fewer reasons of justice to engage in aggressive social distancing. After all, everyone has the option to get the vaccine and so protect themselves from whatever risks I impose by my everyday activities.

In that argument, however, I ignored an important variable in our assessment of justice: why are people not getting vaccinated? This is important because different reasons give rise to different duties on my end.

To see this, let’s start with a particularly extreme contrast of cases:

Coordinated Moral Blackmail: Suppose that herd immunity requires that 80% of the population get vaccinated. But while 60% of the population are excited to return to normal and immediately get vaccinated, the other 40% coordinate together and refuse to get vaccinated until their demands are met. The 40% realize that the other 60% don’t want to put anyone’s health at risk, and so they refuse to receive the free vaccine unless the other 60% agree to collectively pay each person in the 40% camp three hundred dollars. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated as a tool of moral blackmail.

Widespread Deadly Allergy: Suppose that herd immunity requires that 80% of the population get vaccinated. Let us also suppose that one of the weird things about this vaccine is that it produces a severe, possibly fatal, allergic reaction in people with peanut allergies. But while 60% of the population have no allergies and so get the vaccine, the other 40% all have severe peanut allergies and so forgo vaccination until a safer vaccine is approved. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated because the vaccination is not worth the risk of anaphylaxis.

I presume that you share my intuitions about these two cases. In the first case, the 60% have no reasons of justice to forgo returning to pre-pandemic life. But in the second case, the 60% still have reasons to social distance. Indeed, I would think that the 60% in the second case have duties pretty similar to the duties we would have if the vaccine were only available to 60% of the population.

Normally, if someone voluntarily chooses to forgo vaccination, that changes my duties of justice. However, it only makes a difference to duties of justice if we can reasonably expect the person to get the vaccine.

It is unreasonable to expect someone to take a very risky vaccine, just as it would be unreasonable to expect a family to bankrupt themselves to buy the vaccine. But such pragmatic concerns are not the only relevant ones. For instance, if it is unethical to get the vaccine, say because it was produced by slave labor, then again, if the majority of the population refuse the slave-made vaccine, justice requires me to continue to maintain pandemic precautions.

If it would be a mistake to get the vaccine, then I am still bound by the same norms of justice to engage in precautionary behaviors. If people truly believe that they should not get the vaccine, then I can’t reasonably expect them to vaccinate.

But what if instead people falsely believe there is a problem with vaccines? Well, in that case things become more complicated. There are two kinds of error that could explain why one thinks a right action is wrong, both of which are described by Elizabeth Anscombe. First, one might be wrong about the ‘facts on the ground,’ the circumstantial descriptions of what is going on — for instance, one might think the vaccine is less safe than it is, or one might falsely think it is produced by slaves. Second, one might be wrong about the moral implications of those ‘facts on the ground.’ For instance, one might know the risk is tiny, but have the false principle that one cannot be expected to undergo even the smallest possible risk for the public good. Following a convention that I use in my own work, let’s call the first an error of circumstantial facts and the second an error of classificatory facts.

Error 1: Mistakes about Circumstance

The COVID vaccines are not, on the whole, very risky. The consensus judgment of the medical community is that the vaccines, like most vaccines, are safe and effective. But even if the vaccines are safe, some people might believe them to be unsafe. And this raises a question of whether such honest beliefs change my own reasons of justice.

Widespread Error of Belief: Suppose that herd immunity requires that 80% of the population get vaccinated. But while 60% of the population get the vaccine, truly believing that vaccines are safe and effective, 40% of the population mistakenly believe that the vaccine is dangerous. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated as a result of mistaken beliefs.

In this case, does justice require me to forgo pre-pandemic activities? To answer this question, we need more information. In particular, we need to know why people have the false belief.

Suppose that preliminary scientific studies, incorrectly, suggest that the vaccine is unsafe for pregnant women. This information is widely announced and the majority of scientists and doctors accept the result. As it turns out, however, the conclusion is the result of a subtle mathematical error that no one has yet noticed.

If the best scientific evidence suggests that pregnant women should not get the vaccine, then it is clearly unreasonable for you to expect pregnant women to get the vaccine. It does not matter that the studies are wrong, because you cannot expect the average person to realize that the studies are wrong. If this is right, then at least some of the time false beliefs about matters of circumstance (such as the safety of the vaccine) really do make it unreasonable for me to expect you to be vaccinated.

But not all mistaken factual beliefs work that way. Now imagine someone who is profoundly lazy and just assumes vaccines are unsafe. Because they are lazy and do not care about public health, they never bother to do any research to check whether their assumption is right.

We can accept that the person really thinks that the vaccines are unsafe. And we can further accept that the person cannot, by a voluntary act of will, get herself to believe the vaccines are safe (for instance, you cannot, at this very moment, choose to believe there is an elephant behind you, even if I offered you five hundred dollars to form the belief).

So suppose our imagined interlocutor says: “I’m not getting a vaccine because I really believe they are unsafe; ultimately I don’t choose what I believe, so you can’t blame me for not getting vaccinated.” Is this right? Does the fact that we cannot choose our own beliefs mean we cannot be blamed for our false beliefs?

Elizabeth Anscombe, in her article “On Being in Good Faith,” argues that this line of thought is mistaken. While a good faith belief can make one’s behavior reasonable, a good faith belief, in the sense that excuses behavior, requires more than really believing the thing you say you believe:

“Consider a rash and uncharitable judgement which led one to slander someone gravely. One does not – usually at least—say to oneself ‘Now I will judge rashly’ or ‘I am determined, rash as it is, to judge that so-and-so’- one simply judges rashly. What does ‘in good faith’ mean? If it means that one has not got one’s tongue in one’s cheek, is not consciously lying or hypocritical, then good faith is not enough to exonerate. If it is enough to exonerate, then the slander uttered under the influence of a rash and passionate conviction is not a statement made in good faith. . . . Thus good faith or sincerity are either not much good as a defence, or not so easily assured as we might think at first. . . . ‘He ought to have realised…’, ‘He ought to have thought of…’, ‘He had no business to suppose that…’, ‘He ought to have found out…’, are phrases that bear witness to the fact that a man’s beliefs do not suffice to justify him so long as he acts accordingly.”

What Anscombe is arguing is that one can be responsible for false beliefs if one could have been expected to know the truth. If the reason you have a false belief is that the entire scientific community is mistaken, we can’t expect you to know any better. But if the reason you have a false belief is that you are too lazy to do research, then of course we can blame you for your error.

If I accidentally kill a child because I didn’t know they had a deadly carrot allergy, then I’m usually not responsible for that death. However, if the only reason I don’t know about the allergy is because I skipped our camp’s health and safety meeting, then I am at least somewhat responsible. I can’t object that ‘I didn’t know,’ because I should have known.

The same principle applies to vaccines. A false belief might make your non-vaccination reasonable, but it does so only if we could not have expected you to know better.

Error 2: Mistakes about Classification

What about cases of moral ignorance? That is, someone knows all the ‘facts on the ground,’ but she mistakenly thinks that those facts mean she should not take the vaccine.

Some philosophers think moral ignorance is unique, that while circumstantial ignorance can excuse, moral ignorance never can. Other philosophers disagree. (I discussed that debate at length in a previous Prindle Post.)

In this context, however, it seems that moral ignorance is probably not unique. That is because we want justice to allow for at least some moral pluralism. If there are no reasonable, though false, moral principles, then the project of democratic pluralism is in serious danger. If we want to live together in society with at least some level of deep moral disagreement, then we should acknowledge the reasonability of people acting on at least some moral principles we think are wrong.

Now, in our society we don’t have widespread reasonable moral ignorance preventing us from reaching herd immunity. But there are at least some real-life cases where it is reasonable to wrongly refuse a vaccine on moral grounds. A good example is those who don’t want to take a vaccine that was created via the use of aborted fetal tissue. This seems to me to be too morally fastidious, but the reasoning is not unreasonable, and I do not think we can expect people to never make that error.

Conclusion

The reason that people refuse the vaccine matters. If they are right to refuse it, then we are just as responsible for taking precautionary actions as we were before the vaccine became available. If they are wrong to refuse, then the question becomes whether it is reasonable to expect them to take the right action. If their incorrect refusal is explained by a reasonable form of circumstantial or moral ignorance, then justice continues to make the same demands.

Continue to Part III – “Real World Complications”

Justice for All?: William Kelly and Kyle Rittenhouse

photograph of police officer with blurred civilians in the background

Last week, a police officer was fired over the details of an anonymous donation he made. Norfolk Police Lieutenant William Kelly contributed $25 to a legal defense fund for Kenosha shooting suspect Kyle Rittenhouse last September. That donation was accompanied by a message:

“God bless. Thank you for your courage. Keep your head up. You’ve done nothing wrong. Every rank and file police officer supports you. Don’t be discouraged by actions of the political class of law enforcement leadership.”

Kelly’s donation was anonymous and only made public following a security breach of the Christian crowdfunding website GiveSendGo, when data was shared with and circulated by Distributed Denial of Secrets and later published by The Guardian.

In the wake of his firing, GiveSendGo has started a fundraising campaign for Kelly. Co-founder Heather Wilson argues that “Regardless of how you feel regarding Kyle Rittenhouse, the fact is that Mr. Kelly’s individual rights have been grossly violated.” His donation “wasn’t against the law, but a criminal hacker group and a biased media outlet decided that was enough to make an example of him.”

This particular framing conforms to a broader (misleading) narrative regarding cancel culture’s all-out assault on individual rights. The story is presented by some as the obvious overreach of the progressive thought police. Kelly, these voices claim, is being persecuted merely for holding private, personal opinions that a powerful bunch have deemed distasteful. Woke mob rule has conspired once again to force the hand of another institution to cut ties with a controversial figure or risk being tarred with the same brush. What was once a call for boycott or an urging to deplatform has transformed into something much bigger. This isn’t a mere public shaming; Kelly’s dismissal highlights the serious threat to professional livelihood: an 18-year veteran and the second-highest ranking officer in the Norfolk Police Department lost his job in less than 72 hours.

Given the situation, labor lawyers like Ray Hogge have suggested that the firing was “inappropriate and illegal.” Kelly’s dismissal is a violation of his rights of speech and association. As a free citizen, Kelly is at liberty to support any charitable cause he chooses, regardless of whether city leaders approve. Employers shouldn’t be in the business of picking and choosing the values their employees can espouse. And this should be especially true in the case of a private, off-duty communication between friends.

The trouble is that Mr. Kelly’s rights are not the only rights at issue. His interests must be weighed against the state’s interest in delivering impartial justice for us all. Kelly’s case is more than just a matter of bad optics or a squeamish politician rolling over to avoid backlash from a mob spoiling for a fight. This is a state official countermanding the expressed purpose and obligations of the post he serves, a post that sometimes requires the use of deadly force. Kelly’s words give us reason to question whether he can adequately execute the functions of his office.

Even out of uniform, officers have a duty to uphold public image and not engage in activities that might erode respect for the badge. As Police Chief Larry D. Boone made clear,

“A police department cannot do its job when the public loses trust with those whose duty is to serve and protect them. We do not want perceptions of any individual officer to undermine the relations between the Norfolk Police Department and the community.”

The effect Kelly’s position as an officer of the law has on this speech act (even in private as a public citizen) appears inescapable (for discussion see A.G. Holdier’s “Pastor Fritts, the First Amendment, and Public and Private Reason”). His incidental use of his police department email in making the donation helps to highlight the trouble: Lt. Kelly is incapable of speaking on this matter while wearing a different hat. A police officer expressing support for a vigilante (publicly or privately) and suggesting that this outlaw is above the law is fundamentally at odds with the sworn duty to protect and serve. It betrays an indifference to the law he is meant to uphold and to the exclusive position that he occupies. It confers legitimacy on some while denying it to others and fails to discourage us from taking the law into our own hands.

But there remains much that needs to be settled. Rittenhouse only stands accused and has pleaded not guilty on the basis of self-defense; the jury is still out. Unfortunately, this fact means that Kelly’s endorsement is more egregious, not less. Choosing to support a suspect before his day in court is a problematic stance for law enforcement to take. The police shouldn’t stand as judge, jury, and executioner. Kelly’s actions are objectionable, then, not because he chose the wrong side in the culture war, but because he chose to take a side at all.

Considered Position: On Voluntary Non-Vaccination – The Difference Voluntariness Makes

photograph of people walking and biking in masks

This piece begins a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated.

My state, Florida, recently opened up COVID vaccinations to everyone. This does not quite mean that anyone who wants to be vaccinated can be vaccinated. There are still a limited number of vaccines available, so not everyone who wants to get vaccinated has been able to schedule an appointment. But we are getting close to the point where those who remain unvaccinated are unvaccinated by choice.

This raises a question: does the fact that the vulnerable choose to remain vulnerable make a moral difference to what precautions I should observe? I have the strong intuition that this does make a moral difference; it intuitively seems that imposing risks on the unvaccinated is not as bad when the unvaccinated are unvaccinated by choice. (The evidence increasingly suggests that the vaccinated cannot really spread COVID-19, and if that is confirmed it will render much of this practical discussion moot. However, the underlying philosophical questions are important and worth investigating.)

But is my intuition that I can be less cautious correct? 

In this, and two subsequent posts, I will try to answer that question. Each post will be dedicated to one part of an answer.

  • Part I: What principle underlies the intuition that the voluntariness of non-vaccination makes a difference to my own actions? And is that principle a true moral principle?
  • Part II: Does it matter why others are choosing not to be vaccinated? Are there differences, for example, in how careful I should be around someone who avoids vaccination because they think COVID-19 is overblown or around a pregnant mother concerned about the lack of trial data in pregnant women?
  • Part III: How do the complexities of real life complicate the moral calculation? What are the implications of the fact that children cannot get the vaccine without a parent’s permission? And is someone’s choice really voluntary if that person was lied to about the safety of vaccines?

In this first post, I want to investigate what principle might underlie my intuition that I have fewer obligations of caution to those who are voluntarily unvaccinated. To identify the principle at work, it will be useful to start with a simple argument that voluntariness should not make any difference. The thought goes as follows:

  1. During the pandemic, I avoid certain behaviors — such as licking doorknobs — to avoid spreading illness and death.
  2. If someone forgoes vaccination, the reason they forwent vaccination makes no difference to their susceptibility to illness and death.
  3. So, people being unvaccinated by choice makes no difference to my reason to avoid certain behaviors.

Let us call this the ‘simple utilitarian perspective.’ The simple utilitarian thinks that because the reason someone is unvaccinated makes no difference to their immunity, the fact that people voluntarily forgo the vaccine makes no difference to my moral calculation. If you are in a community where 40% of people are unvaccinated by choice and I am in a community where 40% of people are unvaccinated because of a limited supply of vaccines, then the simple utilitarian says we are in morally equivalent positions.

The Utilitarian Explanation of the Difference

I call this the ‘simple utilitarian perspective’ because there is a perfectly good utilitarian argument against this reasoning. It is true that it makes no difference to my own risk whether I cannot get a vaccine or whether I choose not to get a vaccine; in either case I am unvaccinated. However, that does not mean that if you compare a random person who could not get the vaccine to a random person who chose not to get the vaccine, the average risk is the same. Assuming people are at least somewhat rational, people at higher risk are more likely to choose to be vaccinated.

Even if utilitarians only ultimately care about happiness, they will still place some value on freedom. When people are free to make their own choices, they can make choices that are best for themselves. The elderly are at greater risk than the young, and so the elderly are more likely to choose to vaccinate. Similarly, those who are very healthy, without any risk factors for severe COVID-19, are more likely to forgo vaccination because the risks the disease poses to them are smaller.

All this means that it is probably safer to resume licking doorknobs once everyone has had the choice to get the vaccine, because those at highest risk will also be vaccinated at the highest rates.

Going Beyond the Utilitarian Answer  — This might partly explain my intuition, but it cannot be the whole story. This is because my intuition persists, even when I know the utilitarian explanation does not apply; for example, even if I know that the person is forgoing a vaccine for a reason unrelated to personal risk — like because ‘vaccines don’t fit with their personal style’ — I still intuitively feel I have less reason to be cautious.

Distributed Responsibility 

Part of the intuition is explained, I think, by the fact that people who are unvaccinated by choice will share some of the responsibility when they get sick.

If the only way to prevent people from getting sick is if I stop licking doorknobs, then by licking doorknobs I take on complete responsibility for their illnesses. However, if there are two ways to prevent people getting sick — I stop licking doorknobs or they get vaccinated — then at worst I am only partially responsible. They share in responsibility by declining the vaccine.

If we imagine other more ordinary behaviors, like frequent grocery shopping rather than doorknob licking, then the other person actually bears most of the responsibility for getting sick. It seems more reasonable to ask them to get vaccinated than to ask me to stay indefinitely in lockdown; the more reasonable the choice you reject, the more responsible you are for the consequences of that rejection.  (This, then, is why you might feel I am mostly responsible if I really were licking doorknobs; licking doorknobs was not a reasonable thing to be doing in the first place.)

This idea, that the choices of others can mitigate our own responsibility, is prominent in both ethics and law. I like how Christine Korsgaard presents the idea in her discussion of our responsibility for the consequences of lying:

“In a Kantian theory our responsibility has definite boundaries: each person as a first cause exerts some influence on what happens, and it is your part that is up to you. If you make a straightforward appeal to the reason of another person, your responsibility ends there and the other’s responsibility begins. But the liar tries to take the consequences out of the hands of others; he, and not they, will determine what form their contribution to destiny will take. By refusing to share with others the determination of events, the liar takes the world into his own hands, and makes the events his own.”

Going Beyond the Distributed Responsibility Answer — But if this is the explanation of the intuition, then we have a problem. There is something morally vicious about someone who is solely concerned with avoiding responsibility. The virtuous reason to take precautions is not to avoid responsibility for someone’s death, it is to save people’s lives.

To see this, let’s look at an example from my own life (an example I still look back on with shame).

Years ago, an acquaintance of mine expressed an intent to commit suicide. I became deeply distressed, was unsure how to proceed, and grew paralyzed by indecision. So, I reached out to two mentors of mine, both of whom had experience working with suicidal people.

Reaching out was the correct thing to do; I did not know how best to offer help. The problem was the reason I reached out for help. Ultimately, it was not so that I could better support this acquaintance. Rather, I was racked by anxiety about messing up and becoming responsible for the person’s death. I reached out to these mentors because I knew that it would be irresponsible to not follow their advice. Deep down, I wanted to reach out because that way, even if the person did kill herself, at least I would not be blameworthy.

Why think this is morally perverse? Most simply because my own guilt was not the important good at stake in the choice. The thing that mattered was my acquaintance getting the help she needed; decreasing my own culpability if things went badly was not anywhere near as important! (For a more detailed discussion of the way in which a concern for our own responsibility distorts our moral reasoning, see Elizabeth Anscombe’s article “On Being in Good Faith.”)

Reasons of Justice

Even though we should not be strongly motivated by a concern to avoid responsibility, there is a close connection between what we should do and what we would be responsible for not doing. So, this difference in how responsible I would be if someone gets sick might not explain why I have weaker reasons to take precautions, but it is evidence that my reasons are weaker.

But if I do have weaker reasons, then that must mean that my reasons to take precautions are not quite so simple as ‘I have reasons to keep people from getting sick.’ And this is the key to unlocking the puzzle. While I do have reasons to lower the risk that other people get sick, I have especially important reasons of justice to give people control over their own risk.

Before the vaccine is widely available, if I go around engaging in ordinary risky activities, I impose risks on others that they cannot reasonably avoid. They have no control over whether what I do poses a risk to them. As such, it is reasonable to expect me to forgo certain activities for the sake of maintaining some minimal freedom for others.

After the vaccine is widely available, however, the risks I impose on others are risks that can be reasonably avoided. Others have control over how large a risk my frequent grocery shopping imposes on them. People have the option of safety. Whether they take that option makes some difference to my reasons for infrequent grocery shopping, but it is a less stringent reason than my reasons of justice to avoid imposing unavoidable risks.

Justice is that virtue which enables us to live our own life in community with others; as such, it is the virtue that sets boundaries on what I can choose, where those boundaries mutually accommodate the choices of others. We can drive faster now that every car comes equipped with seatbelts. Why? Not because everyone always uses their seatbelts, but because everyone having access to seatbelts ensures that everyone has a free option that allows them to maintain their previous level of safety even as I start driving faster on highways.

Justice is focused on whether people have the option of health, not on whether people are healthy. For example, justice requires that we provide those who are starving with food, but it does not require us to force-feed someone who refuses to eat. Were this not true, then justice could actually harm our ability to live our own life in concert with others by giving rise to certain kinds of moral blackmail. Suppose I have no objection to being vaccinated and a high personal risk tolerance. As such, I insist that unless you pay me one hundred dollars I will not go and get a vaccine. If your duties of justice meant that as long as I forgo the vaccine you cannot return to pre-pandemic activities, then I would be able to hold your actions hostage by means of your duty of justice.

Justice, of course, is not the only virtue. I also have duties of charity. And indeed, one of the things that makes charity so demanding is precisely that it opens us up to this kind of moral blackmail. To love another person requires caring about even their self-made misery. Charity is not ultimately about living your own life; it demands instead that you live, at least in part, for others. This is why charity is such a high and holy virtue; and in turn why even if everyone who forgoes a vaccination does so for entirely voluntary reasons, that does not end all duties of precaution.

Conclusion

Of course, in real life things are a little more complicated. For example, some people are forgoing the vaccine for trivial reasons while others seem to have reasonable concerns. Does my duty of justice change depending on why others are not being vaccinated? That will be the topic of Part II.

Continue to Part II – “Types of Reasons”

Earth Day in a Year of Reckoning

image of Japan and Korea landscape from space

Last year’s Earth Day, April 22nd, 2020, was unique. The United States, the country where the now-international holiday originated, had realized it was in the midst of a pandemic just a little more than a month earlier. Lockdowns and closures meant less travel and less pollution. We all got a chance to see a glimmer of what might happen if we in the West reduced our consumption, even if only for a short period of time. Carbon emissions were dramatically reduced. Animals roamed the landscape. We were provided with a poignant visual representation of exactly what we have done to the planet. Most people had no sense at this point of what to expect from the pandemic or of the lessons it would offer, if we would only pay attention.

This year’s Earth Day stands out as well. As a result of our experiences during the long pandemic, many people are both physically and emotionally scarred and battle-weary. In spite of the challenges, or perhaps because of them, many now find themselves in a position to think about human activity on the planet in a more honest way than ever before, even if that means coming to some grim realizations. Human encroachment into wild spaces puts us in contact with non-human animals who spread diseases that may not be dangerous to them but are deadly to us, and vice versa. We never seem to stop encroaching.

How did all of this begin? The Industrial Revolution fundamentally changed the nature of human experience. The relationship between humans and the natural world in which they live changed along with it. Throughout most of the human narrative, humans had short lifespans, interacted in reasonably small groups, and took actions with mostly modest consequences. Humans stood in awe, and often in fear, of the creative and destructive forces of the natural world. They were largely powerless and insignificant against those forces.

Much ink has been spilt over the centuries and through the course of philosophical thought on what many take to be the distinctive feature of human beings — our capacity to reason. Aristotle, for example, set up a natural hierarchy of living things — plants are at the bottom, non-human animals are superior to plants, and humans, guiding the whole enterprise with the reins of reason, preside over all of creation. In the 17th century, philosopher René Descartes argued that human beings are fundamentally different from all other living things in light of our capacity for reason. For Descartes, non-human animals were “mere machines,” unable to form beliefs and to express those beliefs through the use of language.

Childhood stories also focus on reason as a mysterious, precious, and dangerous feature of human experience. Consider Mowgli from The Jungle Book: Mowgli is raised by wolves, but when it becomes clear that he can make tools and manipulate fire through use of reason, two other things also become clear — first, that Mowgli belongs in a community with other reasoners, and second, that his capacity to use reason to make tools makes him very dangerous to those against whom those tools might be used. The story of Tarzan tells a similar tale.

So here we are in the 21st century. We’ve used our capacity to reason to bring us to places early humans never imagined possible (for example, we’ve recently flown our first aircraft on Mars!). We’ve dramatically extended the range of our social encounters. We can now interact with people from radically different places and cultures. Under ordinary conditions, we can hop on a plane and visit a person from another country whenever we can afford it and the urge strikes us. In many places, using technology, humans produce food in abundance, often in so much abundance that there is significant waste. We drive to jobs and to visit family members. We can grow both food and bodily organs in petri dishes. We can predict the weather and respond to it before it happens. We cool our houses in the summer and warm them in the winter. In developed countries, many people are almost never in a position to feel even a moment’s hardship as a result of weather. It may be that through various types of geoengineering, we’ll have control over even the weather itself.

We’ve made some miraculous progress. The impressive degree to which human beings are creative forces is matched only by the degree to which we are often the sources of horrifying acts of destruction. We’ve produced so much non-biodegradable garbage that we created the Great Pacific Garbage Patch stretching from the West Coast of North America to Japan. It covers 1.6 million square kilometers and is twice the size of Texas. We’ve engaged in deforestation at an alarming rate, clearing critical trees to make room for grazing land for cattle and to grow soy to feed animals raised on factory farms. We’ve overfished our oceans. We’ve released tremendous amounts of greenhouse gases into the atmosphere, causing temperatures to rise, melting glaciers and ice caps, and causing ocean acidification that has bleached our coral — essentially killing our reefs.

The tragic irony in all of this is that most individuals love nature. Many people experience an almost religious sense of awe and wonder when they gaze out across a breathtaking natural landscape or when they observe simultaneously fragile and resilient natural systems replenish and renew themselves.

Those are the background conditions that situate our reflections on the current moment. This setup may have been full of doom and gloom, but there remains much reason for hope. Human beings are creative and resilient. Our ability to use reason to manipulate the natural world has caused a tremendous amount of trouble, but, at its best, that capacity makes it possible for us to really reflect on philosophical and moral questions related to the natural world and our place in it. On this Earth Day, we can take the opportunity to reassess — to learn one of the primary lessons imparted by the pandemic, which is that no environmental problem is really an isolated matter. The environmental choices that we make are global and momentous. They are choices we must come together to make as a global community with our eyes wide open, willing to be receptive to evidence and to be motivated to change our behavior.

Qualified Immunity: An Unqualified Disaster?

photograph of police line with riot shields

Calls to end qualified immunity have been ongoing for years, but have intensified throughout the United States after the murder of George Floyd at the hands of then-officer Derek Chauvin of the Minneapolis Police Department. But what exactly is qualified immunity, and what is the case for eliminating it?
Qualified immunity is a doctrine inferred by the Supreme Court of the United States (SCOTUS) in light of the Third Enforcement Act of 1871. This was part of the bundle of constitutional amendments and federal law passed during the Reconstruction Era to codify and protect the rights of Black people in the southern United States. It allowed citizens to sue individuals acting as officials for any of the States when those officials violated the citizens’ constitutional rights. Prior to the Third Enforcement Act, government officials enjoyed nearly absolute immunity from civil suits for damages.  This meant state government officials acting in their official capacity couldn’t be sued at all for violating constitutional rights. The Third Enforcement Act changed that.
Eventually the SCOTUS invoked the history of government official’s civil immunity to create qualified immunity. In Pierson v. Ray (1967) the court ruled that a common law defense was available to government officials when acting in their official capacity, even when sued under 42 U.S. Code Sec. 1983 (the name under which the Third Enforcement Act now goes). So long as they were acting in good faith and on probable cause, an official was immune from civil liability even if their act did violate a constitutional right.
Numerous court cases have modified this statement of immunity, and provided rules for determining whether it applies in a given case. In Harlow v. Fitzgerald (1982) the SCOTUS reasoned that governmental officials (here, aides to the President) need their official actions to be immunized from liability in order to do their jobs effectively. A subsequent series of cases dealt with the fourth amendment and execution of search warrants by law enforcement officials. In Malley v Briggs (1986) the SCOTUS held that officers were only immune to liability from arrests made on faulty warrants if they had an objectively reasonable belief that there was probable cause for the warrant. The requirement of objective reasonability was upheld for warrantless searches in Anderson v. Creighton (1987). The test for whether an officer has qualified immunity in a given case was articulated in Saucier v. Katz (2001). There was a need for such a judicial test, reasoned the SCOTUS, because qualified immunity had to be determined before a trial could begin. Two facts had to be determined according to the Saucier test: (1) whether a constitutional right was violated and (2) whether the right in question was clearly established at the time of the conduct of the officer in question.
The second criterion has been the cause of much of the public anger concerning qualified immunity. For one it removes any consideration of the reasonability of an officer’s action, objective or subjective. Instead of probing whether that particular officer reasonably believed that their conduct was lawful, the test instead simply asks whether there was any extant, and clearly articulated, constitutional right violated by the officer’s conduct. The issue of whether there was a “clearly established right” has often been interpreted extremely narrowly by courts. Among the starkest manifestations of narrow judicial interpretation comes from Nashville, TN. Police officers sent a dog after Alexander Baxter after he had sat down on the ground and raised his hands in surrender. The dog bit Alexander. However, the officers were granted qualified immunity because the clearest previous judicial ruling on the matter only pertained to suspects who were lying down in surrender. This ruling was made by a Tennessee court and upheld by appeals courts in Tennessee and the federal court presiding in that area.
Should qualified immunity exist in the first place? And if so, how can it be pruned back from its current extent to make it acceptable? Qualified immunity is a form of affirmative defense: that is, a legal way of saying, “I admit I broke the law, but I shouldn’t be (fully) punished for it.” Self-defense is an affirmative defense against criminal prosecution for violent crimes. At trial, the defendant would assert that they did commit an illegal action (e.g., homicide) but that their illegal action was either justified, or should be excused. In general, the possibility of affirmative defenses is desirable. It should be possible to escape punishment, or receive reduced punishment, for illegal actions done under mitigating circumstances.
It is instructive to compare self-defense and qualified immunity. In their current forms, both contend that strictly illegal actions are justified — which is to say morally or pragmatically appropriate despite being illegal. We shouldn’t punish people for their justified actions, because punishing people for doing the right thing is perverse. So if someone killed or injured another person in self-defense, we think it would be wrong to punish them. Does this make sense in the case of government officials violating the rights of citizens? It might, if qualified immunity were a defense that had to be proven at trial. However, the issue of qualified immunity is settled by a summary judgment. It is determined before trial by a judge without the benefit of a full process of evidential discovery and the structured arguments of a trial held before a jury. Being a rule which was created by unelected judges and never tried by a jury of citizens, qualified immunity arguably lacks democratic legitimacy.
The pretrial nature of qualified immunity has been defended on both practical and moral grounds, grounds clearly articulated by the Supreme Court in Harlow v. Fitzgerald. As a practical matter, the Court argued that deciding qualified immunity before trial prevents frivolous litigation, saving massive amounts of time and money. From the moral point of view, it argued that it is unfair to hold government officials to the standards of statutes and judicial decisions which are either unclear or unknown to them. Setting aside the practical considerations, the moral considerations are flimsy. After all, as a general rule, ordinary citizens are not allowed to invoke ignorance of the law to excuse their illegal conduct. Government officials are both better placed to know the relevant laws and under a clearer obligation to be familiar with them: it is their job to enforce, interpret, or enact them.
Numerous other objections assail qualified immunity, coming even from SCOTUS justices of diametrically opposed ideological orientations. It does not seem that the balance of reasons lies in favor of this doctrine. Government officials can invoke the same affirmative defenses as regular citizens at trial and submit them to a jury for consideration. Obliterating qualified immunity from the law will not render them unprotected from baseless lawsuits.

Do Police Intentions Matter?

photograph of armed police officer

Imagine if it became widely reported that police officers had been intentionally killing Black Americans for the express reason that they are Black. Public outrage would be essentially universal. But, while it is true that Black Americans are disproportionately the victims of police use of force, including lethal force, it seems unlikely that these rights violations are part of a conscious, intentional scheme on the part of those in power to oppress or terrorize Black citizens. At any rate, the official statements from law enforcement regarding these incidents invariably deny discriminatory motivations on the part of officers. Why, then, are we seeing calls to defund the police?

The slogan “Defund the Police” has been clarified by Black Lives Matter co-founder Alicia Garza on NBC’s Meet the Press: “When we talk about defunding the police, what we’re saying is, ‘Invest in the resources that our communities need.’” The underlying problem runs deep: it is rooted in an unrelenting devaluation of communities of color. Rights violations by police are part of a larger picture of racial inequality that includes economic, health, and educational disadvantages.

The sources of this inequality are mostly implicit and institutional: a product of the unconscious biases of individuals, including police officers, and prejudicial treatment “baked into” our institutions, like the justice system. That is, social inequality seems to be systemic and not an intentional program of overtly racist policies. In particular, most of us feel strongly that the all-too-frequent killing of unarmed Black citizens, though repellent, has been unintentional.

But does this distinction matter? A plausible argument could be made that the chronic, unintentional killing of unarmed Black men and women by police is morally on a par with the intentional killing of these citizens. Let me explain.

Let’s begin with the reasonable assumption that implicit racial bias, specifically an implicit devaluation of Black lives, impacts decisions made by all members of our society, including police officers. What is devaluation? Attitudes toward enemy lives in war throw some light on the concept: each side invariably comes to view enemy lives as less valuable than its own. Even unintended enemy civilian casualties, euphemistically termed “collateral damage,” become tolerable if the military objective is important enough. On the battlefield, tactical decisions must conform to a “tolerable” relation between the value of an objective and the anticipated extent of collateral damage. This relation is called “proportionality.”

By contrast, policing is intended to be a preventative exercise of authority in the interest of keeping the peace and protecting the rights of citizens, including suspected criminals. Still, police do violate rights on occasion, and police officials operate with their own concept of proportionality: use of force must be proportional to the threat or resistance the officer anticipates.

Ironically, rights violations usually occur in the name of the protection of rights, as when an officer uses excessive force to subdue a thief. Often, these violations are regarded as regrettable but unavoidable; they are justified as the price we pay for law and order. But, in reality, these violations frequently stem from implicit racial biases. What’s more, the policy of “qualified immunity” offers legal protections for police officers, and this disproportionately deprives Black victims of justice in such cases. This combination of factors has led some to argue that police authority amounts to a form of State-sponsored violence. These rights violations resemble wartime collateral damage: they are unintended consequences deemed proportional to legitimate efforts to protect citizens’ rights.

Now consider the following question posed by philosopher Igor Primoratz regarding wartime collateral damage: is the foreseeable killing of civilians as a side-effect of a military operation any morally better than the intentional killing of civilians? Specifically, he asks, “suppose you were bound to be killed, but could choose between being killed with intent and being killed without intent, but as a side-effect of the killer’s pursuit of his end. Would you have any reason for preferring the latter fate to the former?”

Imagine two police officers, each of whom has killed a Black suspect under identical circumstances. When asked whether the suspect’s race was relevant to the use of force, the first officer says, “No, and I regret that deadly force was proportional to the threat I encountered.” The second officer says, “Yes, race was a factor. Cultural stereotypes predispose me to view Black men as likely threats, and institutional practices in the justice system keep the stakes for the use of lethal force relatively low. Thus, I regret my use of deadly force that I considered proportional to my perception of the threat in the absence of serious legal consequences.”

The second officer’s response would be surprising, but honest. Depictions of Black men in particular as violent “superpredators” in the media, in movies, and by politicians, are ample. Furthermore, the doctrine of qualified immunity, which bars people from recovering damages when police violate their rights, offers protection to officers whose actions implicitly manifest bias.

In the absence of damning outside testimony, the first officer will be held blameless. The second officer will be said to have acted on conscious biases and his honesty puts him at risk of discipline or discharge. Although the disciplinary actions each officer faces will differ, the same result was obtained, under identical circumstances. The only difference is that the second officer made the implicit explicit, and the first officer simply denied that his own implicit bias was a factor in his decision.

Where, then, does the moral difference lie between, on one hand, the foreseeable violation of the rights of Black citizens in a society that systemically devalues Black lives, and, on the other hand, the intentional violation of those rights? If the well-documented effect of racial bias in law enforcement leads us to foresee the same pattern of disproportionate rights violations in the future, and we do nothing about it, our acceptance of those violations is no more morally justified than the acceptance of intentional rights violations.

That is, if we can’t say why the intentional violation of Black rights is morally worse than giving police a monopoly on sanctioned violence under social conditions that harbor implicit racial biases, then sanctioning police violence looks morally unjustifiable in principle. That is enough to validate the call to divert funding from police departments into better economic, health, and educational resources for communities of color.

AstraZeneca, Blood Clots, and Media Reporting

photograph of patients waiting in gym to be vaccinated

In some ways, it seems that most respectable news media have begun to take science more seriously and to take greater care in making sure that claims about COVID are fact-checked and that misinformation is debunked. But there is more to scientific communication than getting the facts right. Often it is the selection, arrangement, and emphasis of facts that matter most and hold the greatest sway over the average person’s comprehension of scientific matters. This can have very serious consequences, as the coverage of the AstraZeneca vaccine and its potential to fuel vaccine hesitancy demonstrates. Does the media have a responsibility to be more careful in how they cover scientific issues?

Not long after the AstraZeneca vaccine was approved in many nations, reports in March indicated that some who took the vaccine developed blood clots. Since then, more than a dozen nations have either halted the rollout of the vaccine or limited its usage. While such clots can be lethal, they are treatable. The more important considerations, however, are the lack of evidence that the vaccine causes clots and the limited number of cases. There is no direct evidence of a connection between the vaccine and the development of a blood clot, though the European Medicines Agency, in its review of over 80 cases, has concluded that unusual blood clots should be listed as a rare side effect. It is the rarity of the symptoms that is even more important: fewer than one hundred people out of the 20 million who have received the vaccine have developed blood clots.

This is actually lower than what you’d normally see among unvaccinated people, and in the meantime COVID itself can lead to clots showing up in “almost every organ.” All of this leaves regulators with an inductive risk scenario: if they say that the vaccine is safe, and it isn’t, many people could develop clots and potentially die; if they say that the vaccine isn’t safe, and it is, then the rollout of the vaccine will slow down and many more people could die. In fact, the experts have been pretty clear that in terms of risk management, the benefits of the AstraZeneca vaccine still outweigh the risks. In other words, even if the vaccine does cause blood clots, the rates are so low that the risk of people dying is far higher if you don’t use the vaccine than if you do. This is why experts have criticized the suspensions as a “stupid, harmful decision” that will likely lead to more avoidable deaths and will make people more hesitant to get vaccinated. As Dr. Paul Offit of the Vaccine Education Center has said, “While it’s easy to scare people, it’s very hard to unscare them.”

Yet, despite the risk being small and treatable, and despite experts’ determination that it is still better to use the vaccine anyway, the news media hasn’t been helpful in covering this issue. For example, the Canadian media has chosen to cover (apparently) every case of a blood clot developing, despite the messaging ultimately being the same. One story notes, “‘While this case is unfortunate, it does not change the risk assessment that I have previously communicated to Albertans,’ Dr. Deena Hinshaw said during a teleconference,” while another reports, “‘We have been very transparent that there could be one case per 100,000,’ he said. ‘We knew this could happen.’” In other words, this is a situation where the formation of a small number of blood clots is statistically expected but considered acceptable because the risk is so limited compared to the much larger benefits. It is simply unhelpful to report each confirmed case of something that is expected anyway. After all, we are told that the contraceptive pill carries a greater risk of blood clots, so why cherry-pick cases?

As statistician David Spiegelhalter has suggested, the scare over blood clots has demonstrated our “basic and often creative urge to find patterns even where none exist.” Unsurprisingly, a majority of unvaccinated Canadians now report being uncomfortable with potentially receiving the AstraZeneca vaccine. All of this relates to the moral responsibilities of the media in covering scientific topics, where it isn’t merely a matter of reporting facts but of reporting them in context. While the media has been “on a crusade against COVID vaccine skepticism” and promoting science-based medicine, to some this selective skepticism has invited charges of hypocrisy: because “the press has made a habit of giving finger-wagging lectures about ‘following the science,’” “they need to consistently practice what they preach.” After all, the media doesn’t choose to report every case of someone who gets a blood clot from a contraceptive.

In fairness, while no one is suggesting that the risk of clots should be ignored, there may be good reason to raise alarm. As The Atlantic reports,

“The risk of a dangerous vaccine reaction could be very real, if also very rare—and major European vaccine authorities have not, in fact, been overcautious, political, or innumerate in responding to this possibility…regulators must address the possibility (still unproved) that perhaps one in every 1 million vaccinated people could have a potentially fatal drug reaction—as more than 1 million vaccine doses are being injected each day in Europe alone.”

In other words, there is a real risk (even if a small one), and morally speaking it is important to have a public conversation about risks and how to manage them. The public should be aware of the risks and how they are appraised. However, the issue has become confused owing to a lack of scientific literacy as well as to the media’s choice to focus on individual and personal cases. A more constructive focus would have been the larger moral issue of managing risk in the face of uncertainty, such as when and how to use the precautionary principle.

This isn’t the only recent case where cherry-picked media coverage has proven problematic. A recent study found that media coverage of COVID-19 in the US has been excessively negative compared to international media, and a separate study found that a significant number of Americans (mostly those who lean Democratic) were likely to exaggerate the risks of COVID. Further, it is becoming increasingly evident that developing scientific literacy is more difficult than once thought, and that presenting novel scientific findings in the news is problematic in any case. So, if those in the news media wish to present a scientifically-informed picture of public affairs, it is morally imperative that greater attention be paid to the context in which scientific findings are reported.

Our Moral Obligations to the Afghans

photograph of bootprints in the sand

On April 14, President Biden announced the withdrawal of all U.S. combat troops from Afghanistan by September 11, 2021. NATO forces, which today have a far larger presence in Afghanistan, will also depart, European officials reported. “We went to war with clear goals,” Biden said in a short speech. “We achieved those objectives.”

Noticeably absent from Biden’s speech was any mention of the many Afghans who, because of their collaboration with U.S. or NATO forces during the last twenty years, now face persecution under a Taliban government that, given the weakness and corruption of the current American-backed Afghan government, is likely to one day come to power. Nor did Biden mention the presumed fate of the many Afghans who took advantage of opportunities afforded by the U.S. presence. For example, today 40 percent of Afghanistan’s students are women. This is highly unlikely to continue under Taliban rule.

The moral case for opening our doors to Afghans is straightforward. Millions of Afghans have reasonably relied on American largesse and protection, and many have actively aided the U.S. in achieving its war aims. Those facts provide the basis for an obligation to protect those people from the harms they will surely suffer when the U.S. and its allies pull out of the country. In other words, we owe them protection from the Taliban because of what they’ve done for us, and because we allowed them to enjoy benefits under the pretense that we would ensure those benefits would remain available. And since no remedy other than resettlement can guarantee protection from these harms, we must provide the means for Afghans to safely make their way to this country or some allied nation, and we must help them readjust to their new lives in their new homelands.

Unfortunately, during the Trump administration the U.S. abdicated its role as a global leader in refugee resettlement. In 2017, for example, the U.S. for the first time resettled fewer refugees than the rest of the world combined. The refugee ceiling was progressively lowered after that; the FY 2021 refugee ceiling is the lowest in the history of the U.S. refugee resettlement program. Biden has said he will raise the ceiling, but he has not signed the document required to put his announced policy change into effect. Biden can easily reverse Trump’s cruel policy, and should do so with all due haste. And, as I have argued, he should go further than this in the specific case of the Afghans, given our special obligations to them.

For those with a historical bent, our current predicament feels like déjà vu all over again. Some fifty years ago, the United States began its withdrawal from another unpopular “forever war” in South Vietnam. The death of the Republic of Vietnam finally arrived in 1975. Just as now, millions of Vietnamese then found themselves under a totalitarian regime committed to persecuting and “re-educating” large segments of the population to achieve ideological ends. There followed one of the greatest humanitarian crises in modern history, as millions of Vietnamese refugees — many of them former collaborators with the U.S. during the war — desperately took to the seas. Beginning in 1975, the U.S. opened its doors and provided resettlement support to over a million of these refugees, despite waning public support for refugees and a declining U.S. economy during the 1980s and early 90s.

Our country is far wealthier now than in 1975 — far more able to accommodate so many newcomers — despite the coronavirus-related economic downturn. If we could at least come close to doing the right thing then, we can do so now. All it takes is the will and adequate political leadership.

Abusing Public Faith: Brooks, Gladwell, and Journalistic Ethics

photograph of newspaper vending machines with businesses in background


Not long ago it was revealed that David Brooks, well-known opinion columnist for The New York Times, and Malcolm Gladwell, long-time New Yorker journalist, had received financial compensation for lending their journalistic credibility to different corporate ventures. Brooks used his column on multiple occasions to talk up a project to which he had significant financial ties — a fact he failed to disclose to his audience or his editors — while Gladwell continues to feature prominently in General Motors’s recent environmental vehicle campaign. To many, these celebrity endorsements may not seem like grave offenses; Brooks and Gladwell simply leveraged their notoriety to their financial advantage like anyone else might. On what grounds could one possibly object? Surely it would be unfair to demand that journalists be held to a higher standard than their peers and forgo all those financial incentives that so many other professions unabashedly enjoy.

It’s often suggested that over-policing possible conflicts of interest leads to absurd results. We don’t want to demand that journalists be so disinterested as to require their withdrawal from public life. We would be doing ourselves a terrible disservice to bar those often best-informed and civically-minded from public work. We shouldn’t bind the hands of those best-positioned to do the most good. Everyone should have a stake in the social projects of their communities and feel free to get their hands dirty.

Following this line of defense, Brooks and Gladwell’s endorsements have been characterized as nothing more than the benign by-product of a personal hobby. Gladwell speaks of his private passion for autos (a self-professed “MASSIVE car nut”), and Brooks describes Weave as nothing more than a pet project. Their advocacy, then, is simply an immediate reaction to their emotional investment and curiosity in those specific enterprises. There’s no reason to assume nefarious intent; these writers were simply overwhelmed with zeal and couldn’t wait to share the good news with the rest of us.

But there’s a significant distinction that separates championing a cause from promoting a product. Believing in something and rallying support behind it doesn’t require reducing one’s audience to corporate marks. Journalists shouldn’t sully their reputations by engaging in manipulation. Confronted by these allegations, Gladwell has claimed that if he’s guilty of being bought, then all of journalism has been similarly corrupted by relying on advertising dollars to sustain itself. There is, however, a marked difference between the banner ads adorning a periodical’s website and a journalist voicing support and throwing their weight behind a brand. When reporters start delivering the testimonials, the line meant to establish journalistic independence gets blurred and the waters get muddied. (Consider, for instance, this ad presented as an interview and even invoking the name of one of Gladwell’s popular investigative works.)

That said, criticism of Brooks and Gladwell’s behavior tends to draw our focus to the wrong thing. Failure to disclose isn’t the most damning sin Brooks committed, and his after-the-fact admissions can’t rectify the harm. Likewise, the potential for conflicts of interest doesn’t adequately capture the risk Gladwell’s paid endorsement poses. These actions, at bottom, violate the cardinal rule of journalism: Journalism’s first obligation is to the truth, and its first loyalty is to citizens.

It would be naive to think Gladwell’s corporate partners fail to appreciate what they are buying. Gladwell’s position is decidedly different from that of his commercial co-stars. He is not a mere entertainer; the value of his endorsement isn’t based on his ability to define what “cool” is. People give weight to Gladwell’s words because he promotes himself (and is promoted by institutions of journalism) as having the inside track on truth. Gladwell’s work weaves a complex story uniting social science and statistics — connections that are unintelligible to the rest of us. He divines the true way of the world and delivers these pronouncements to the masses. What Gladwell is selling, then, is a unique capacity for truth-telling. His trustworthiness depends on the public’s faith in the profession. His credibility and the credibility of the institutions he represents (just like Brooks’s) relies on transparency, accuracy, and unerring loyalty to the public. We believe him insofar as we believe journalism aims to benefit we, the people. To serve another master is to break this sacred bond. It is fidelity to this purpose – pursuing truth in the people’s name – that separates the devoted journalist from the faithless mercenary or fanatical partisan.

This is hardly the first time Gladwell has come under scrutiny for failing to respect the firewall we’ve erected to divide truth-telling journalists from marketing shills. But whether it’s speaking engagements, product placements, celebrity endorsements, or faux journalism, the rules of neutrality never change. The Society of Professional Journalists’ code of ethics is uncompromising in its guidelines about preserving journalistic independence: “Remain free of associations and activities that may compromise integrity or damage credibility.” The profession’s ethical code exists to defend the virtue of the entire field from those who would undercut it. For journalism to capably serve its necessary functions – as public forum, watchdog, and voice for the voiceless – it must be above suspicion.

It would be easy to dismiss these actions as isolated, one-off transgressions, but the consequences extend far beyond the responsible parties. These dealings not only undermine the credibility of Brooks and Gladwell, as well as that of The New York Times and The New Yorker, but also erode confidence in the profession as a whole. They threaten the finite, shared resource of public trust — a good that we are in greater need of now than ever.

Workers’ Well-Being and Employers’ Duties of Care

photograph of amazon warehouse

If you’ve been working from home during the pandemic then there’s a good chance your employer has sent you an email expressing their concern about your well-being and general level of happiness. Perhaps they’ve suggested some activities you could perform from the comfort of your own home working space, or offered Zoom classes or workshops on things like meditation, exercise, and mindfulness. While most likely well-intentioned, these kinds of emails have become notorious for being out of touch with the scale of the stresses that workers face. It is understandable why: it is, after all, unlikely that a half-hour mindfulness webinar is going to make a dent in the stress accumulated while living in a pandemic over the last year.

It goes without saying that the pandemic has taken a toll on many people’s physical and mental health. And while employers certainly have obligations towards their employees, do they have any specific duties to try to restore the well-being their employees have lost during the pandemic?

In one sense, employers clearly do have some obligations towards the happiness and well-being of their employees. Consider, for instance, a recent scandal involving Amazon: the company baldly denied a statement that some Amazon workers were under so much pressure at their jobs that they were unable to take bathroom breaks, and were forced to urinate in bottles instead. Great quantities of evidence were then quickly accumulated that such practices were, in fact, taking place, and Amazon was forced to issue a weak conciliatory reply. It is reasonable in this case to say that Amazon has put their workers in situations in which their well-being is compromised, and they have an obligation to treat them better.

“Don’t make your workers pee in bottles” is an extremely low bar to clear, and it is an indictment of our times that it has to be said at all. People working from home offices, however, are typically not in the same circumstances: while they likely have access to washrooms, their stressors will instead be those that stem from isolation, uncertainty, and the many potential additional burdens of needing to care for themselves and others. So, as long as an employer allows its employees a certain minimal standard of comfort, and assuming that those working from home during the pandemic meet this standard, does it have any additional obligations to care for employees’ happiness and well-being?

One might think that the answer to this question is “no.” One reason why we might think this is that we typically regard one’s own happiness as being one’s own responsibility. Indeed, much of the recent narrative on happiness and well-being emphasizes the extent to which we have control over these aspects of our lives. For example, consider a passage from a recent Wall Street Journal article, entitled “Forget What You Think Happiness Is,” that considers how the pandemic has impacted how we conceive of happiness:

“Mary Pipher, clinical psychologist and author of ‘Women Rowing North’ and ‘Reviving Ophelia,’ says the pandemic underscored what she long believed: that happiness is a choice and a skill. This past Christmas, she and her husband spent the day alone in their Lincoln, Neb., home, without family and friends, for the first time since their now adult children were born. ‘I thought, ‘What are we going to do?’ We went out for a walk on the prairie and saw buffalo. I ended up that day feeling really happy.’”

If happiness is a choice then it is not a choice that I can make for you; if happiness is a skill then it’s something you have to learn on your own. Perhaps I can help you out – I can help you learn about happiness activities like gratitude exercises, meditation, and mindfulness – but the rest is then up to you. If this is all we’re able to do for someone else, then perhaps the mindfulness webinars really are all we are entitled to expect from our employers.

There are a couple of worries here. First, to say that “happiness is a choice and a skill” is clearly a gross oversimplification: while serendipitous buffalo sightings will no doubt lift the spirits of many, happiness may not be so easily chosen by those who suffer from depression and anxiety. Second, while there is a lot of hype around the “skills” involved in acquiring happiness, empirical studies of gratitude interventions (as well as the notion of “gratitude” itself), meditation, and mindfulness (especially mindfulness, as discussed here and here) have had mixed results, with researchers expressing concerns over vague concepts and a general lack of efficacy, especially for those who are, again, suffering from depression and anxiety. Of course, such studies concern averages across many individuals, meaning that any or all of these activities may work for some while failing to work for others. If you find yourself in the former group, then that’s great. A concern, however, is that claims that there are simple skills that can increase happiness are still very much up for debate within the psychological community.

Of course, the roots of decreased happiness for those working from home are likely to be much more practical; a guided meditation session over Zoom will not, for instance, ameliorate one’s childcare needs. Here, then, is a further worry: there are potentially much more practical measures that employers could take to help increase the happiness and well-being of employees.

For comparison, consider a current debate in my home province of Ontario, Canada: while the federal government has made certain benefits available to those who are forced to miss work due to illness or the need to quarantine, many have called on the provincial government to create a separate fund for paid sick days. The idea is that since the former is a prolonged process – taking weeks or months for workers to receive money – it disincentivizes workers from taking days off when they may need to. This can result in more people going into work while sick, which is clearly something that should be minimized. The point, then, is that while recommendations for how you can exercise at your desk may be popular among employers, it would be much more effective to offer practical solutions to problems of employee well-being, e.g., allowing for more time off.

The question of what an employer owes its employees is, of course, a complex one. While there are clear cases in which corporations fail to meet even the most basic standard of appropriate treatment of their employees – e.g., the recent Amazon debacle – it is up for debate just how much is owed to those with comparatively more comfortable jobs working from home. Part of the frustration, however, no doubt stems from the fact that if employers are, in fact, concerned about employee well-being, then there are probably better ways of promoting it than offering yet another mindfulness webinar.

Art, Death, and Experience

photograph of rollercoaster at dusk

In 2010, Lithuanian artist Julijonas Urbonas created a work of conceptual art that posed, and continues to pose, a fascinating challenge to contemporary notions of death. The piece is called the “Euthanasia coaster,” represented materially by a small model of a roller coaster made of thin wire. Described on Urbonas’ website as “a hypothetic death machine in the form of a roller coaster,” the machine is “engineered to humanely–with elegance and euphoria–take the life of a human being,” or more accurately, twenty-four human beings at once.

Urbonas is right: the model does have an austere elegance. The tracks rise in a narrow but staggeringly tall parabola, then plummet back to the ground, finishing in a series of loops that grow smaller and smaller until the ride is over. The loops generate enough centrifugal force that the passengers are deprived of oxygen quickly enough to die before the car reaches the end of the line.

When we talk about medically-assisted suicide, we usually ask whether or not the terminally ill have the right to humane euthanasia. A far less common but no less interesting question, which takes for granted the notion that those who wish to die have a legal right to do so, is whether or not euthanasia in a clinical hospital setting is the best way to aid the suffering. Has the ritualistic and cultural significance of death been eroded by our efforts to be humane, and are there ways to restore that lost meaning to our final moments?

It’s easy to be put off by the idea of death by roller coaster. Urbonas was partly inspired by his stint working at a theme park, but to many, hopping on a coaster may seem a flippant or macabre way to spend one’s last moments. However, the roller coaster is a very polyvalent symbol, especially in relation to death. It isn’t a space for quiet reflection, like a hospice, but a visceral experience of power and speed. It’s also a communal experience, which reminds us of how often death is a solitary experience in the modern world. There also may be meaning in the form of the coaster; tall, sloping coasters often resemble mountains, which have long been associated with divinity (Mount Olympus is just one example), and the idea of ascending to one’s death has parallels in many world religions.

Perhaps most importantly, the ride creates an increased sense of bodily awareness in the participant, just as the body is about to die. You give up bodily control to the track, but you get something in return: terror and ecstasy. These words, terror and ecstasy, are often linked in the ancient Greek tradition of death. Through this piece, Urbonas seems to suggest that we have lost touch with that older tradition, and exchanged the vitality of full acceptance for sterility.

The idea of death as a machine is bound to make us flinch. The guillotine, which has inspired terror and revulsion for centuries, is a death machine born of Enlightenment values. It kills with cold, rational efficiency, and in many ways, Urbonas’ coaster is the anti-guillotine. His work is about joy rather than rationality, and though it may involve a kind of spectacle, like the guillotine, it’s less about efficiency and more about the bodily experience of the deceased. His work also reminds us that we live in a culture where death and mourning are relegated to private spaces. Roller coasters are hulking monuments that dominate the landscape, and their silhouettes of sprawling rib-like tracks are unmistakable from a distance. Death is not hidden in Urbonas’ work, but forced out into the open.

It is, of course, just a conceptual experiment. Urbonas has no plans to actually build his roller coaster of death, as the legality (and ethics) of the machine would be highly questionable. But his art still raises many questions about how we experience death, and whether or not we should expand the options available to those suffering from terminal illnesses. The solution to their plight isn’t to build a macabre theme park, but to question our understanding of death as unspeakable and unthinkable.

Gun Control and Constitutional Interpretation

photograph of gavel resting on Bill of Rights

Since the beginning of 2021, there have been 133 mass shootings in the United States. While some of these shootings find their roots in racism, misogyny, and xenophobia, they are also the result of the relatively easy access to firearms in the United States. Many Americans might attribute this prevalence of guns to the right guaranteed in the United States Constitution, enunciated in the Second Amendment’s phrase “the right of the people to keep and bear Arms.” However, the application of the Second Amendment as affording the individual liberty of gun ownership is a modern, highly disputed doctrine in constitutional law. Two contemporary landmark cases, District of Columbia v Heller (2008) and McDonald v City of Chicago (2010), fundamentally altered the power of state and federal governments to regulate gun ownership by recognizing an individual right to own and keep firearms. These cases were highly divisive, included multiple dissenting opinions, and were decided by bare 5-4 majorities. However, the majority and minority in both cases relied on a type of constitutional interpretation called originalism to bolster their conclusions. The difference lies in the majority’s use of original meaning and the minority’s use of original intent.

How were the different forms of originalism employed in both cases? What are the moral pitfalls of either type of originalist interpretation? And is relying on the ideas or desires of the Founders an ethical basis on which to decide cases?

In District of Columbia v Heller (2008), several individuals, including members of lobbying groups like the National Rifle Association and the Cato Institute, challenged the constitutionality of a D.C. law which imposed several restrictions on gun ownership, including the regulation that rifles and shotguns be kept “unloaded and disassembled or bound by trigger lock.” In an opinion written by Justice Antonin Scalia, the Supreme Court held that the Second Amendment granted individual citizens the right to keep and bear arms for the purpose of self-defense. Though the Court had previously defined the scope of the Second Amendment in United States v Miller (1939), the widely accepted application of the Second Amendment in constitutional law was that the right to bear arms applied only to the ownership of weapons in the context of service in an organized militia. For this reason, many decisions involving individual gun ownership in the 20th century refused to acknowledge that the Second Amendment bestowed an individual right restricting state or federal regulation.

The little room that was left open for gun regulation after District of Columbia v Heller was quickly closed two years later in McDonald v City of Chicago (2010). The majority opinion, this time penned by Justice Alito, held that Chicago’s regulation which essentially prevented firearm registration and ownership was a violation of rights bestowed to individuals by the Constitution. This case was significant because it granted individuals broad legal protection from gun regulation. Many constitutional scholars and theorists view both Heller and McDonald as landmark Supreme Court decisions which fundamentally altered the legal conception of the Second Amendment.

Originalism is a form of constitutional interpretation which understands the Constitution in terms of what it originally meant when it was written, the original intent of the Framers, or both. One consideration, which applies to both original meaning and original intent interpretation, is the reliance on the ideas and intentions of people long dead and gone when deciding what rules should govern society. Nobody alive today had a hand in writing the original Constitution. Sometimes known as the “dead hand problem,” a moral and philosophical paradox arises when considering whether one can consent to a government they had no hand in designing.

Fundamental to the reasoning employed in both Heller and McDonald was original meaning. Proponents of original meaning generally argue that it ultimately benefits society by limiting the discretion given to judges. By limiting the power of judges to use their own authority to decide what the Constitution says (or should say), originalism might be said to prevent an unequal application of the law and to leave political battles to the legislature where they can be more democratically controlled.

Those critical of originalism, however, might see it as giving far too much weight to the language, culture, and politics of a long-gone society, and as detrimentally rigid and inapplicable to a rapidly changing one. Upholding the original meaning of the Constitution might also force one to condone immoral provisions, such as those enabling slavery or only bestowing rights upon certain groups of people. While originalism might safeguard certain moral values present in the Constitution, it also prevents moral correction. For example, in the case of gun reform, even if the original meaning of the Constitution was meant to bestow individual gun rights, perhaps this liberty should be reexamined in the face of advanced technology and the prevalence of gun violence.

Turning not to the original meaning but instead to the original intentions of the Framers is another strategy used in originalist constitutional interpretation. In their dissents in Heller, both Justice Stevens and Justice Breyer point to historical evidence which implies that the Second Amendment was intended only to apply to militias and was not conceived to apply to individual self-defense. Originalism which focuses on the intentions of the Framers in terms of gun rights has also been a popular method among gun reform advocates, and late-night comedy hosts, who point out that firearm capabilities at the time of ratification were vastly different from firearm capabilities today. While many gun regulation advocates might agree with the application of the Framers’ intention in this instance, the pitfalls of applying such an interpretive strategy include reckoning with some of the potentially immoral intentions present in certain provisions of the Constitution. Article I, Section 9 of the Constitution condoned slavery, and experts have contended that the Framers did not intend for certain Constitutional rights, like the right to vote, to extend to poor men, women, or BIPOC folks.

Another potential problem with considering original intent is that the Founders were arguably themselves immoral. More than a few of the Founders owned slaves, and though some expressed moral misgivings over this fact, their decision to subjugate human beings is entirely reprehensible. Even relativist arguments that defend the Founders and advocate for original intent might fail on the basis that the point is exactly that we have changed as a society and should not be turning to the dead for guidance in solving society’s most pressing legal problems.

Since Heller, the rate of gun deaths in the United States has increased by 17%. The United States is one of the leading countries in the world when it comes to firearm deaths, and there is a clear political, cultural, and moral interest in reexamining the individual right to gun ownership. If one’s goal is to regulate firearms, the most effective form of constitutional interpretation for doing so might very well lie outside of originalism altogether. However, it is important to recognize how originalist application has greatly influenced gun control in the United States. By understanding the moral quandaries that underlie the forms of modern constitutional interpretation applied to firearms cases, we can better understand which types of arguments best support the gun regulation we seek.

Underrepresentation in Clinical Trials and COVID-19

photograph of vaccine waiting line

There have long been concerns about underrepresentation in the clinical trials of medical products, and there has been a large push for the testing of COVID-19 vaccines to be more inclusive due to the urgency of the problem and the danger the virus poses to all of us. The current situation presents an opportunity to reassess the previous model, which tended to emphasize efficiency — doing the most good for the most people as quickly as possible, often in the interests of speeding production — at the cost of underrepresented groups continually having to use medical products not specifically designed for them.

Historically, both medical professionals and clinical trial participants were primarily men. Making up more than half of the population, women were drastically underrepresented in medicine. This underrepresentation resulted in vast differences in healthcare for men and women. This is important because men and women present different symptoms of disease. Since men have largely been the sample for clinical trials of medical products, there is far more medical knowledge about how men present symptoms of various diseases. This makes diagnosis for women more difficult, and it makes their symptoms seem abnormal.

The underrepresentation of women in clinical trials also impacts the treatment of diseases. When male cells, male animals, and men are the basis of clinical trials, researchers are looking only at the effects on one-half of the population. Often, when women were included in these trials, their data was not analyzed separately or in any meaningful way, so differences in experiences were not addressed. After trials, women would take medicine that had not been optimized for them, which could result in negative health consequences. For example, Ambien, a sleep aid, was not tested for differences in effects on men and women. When Ambien became available for use, the slower metabolism of women meant that the drug stayed in their systems longer than it did in men, so the standard dose was too high for women.

One reason for the low representation of women in clinical trials is that researchers worried that women’s reproductive cycles and hormones would overcomplicate the study and provide confounding variables. However, these variables are part of the reason why men and women both need to be included in clinical trials. When hormonal fluctuations and reproductive cycles are not taken into account, it is uncertain exactly how a medical product will work.

Another reason women were often excluded from clinical trials was fear of harming fertility. In 1977, the FDA recommended that women of childbearing age not participate in clinical trials for fear that medical products might harm a fetus in the event that the woman became pregnant. This recommendation was put into place after exposure to some drugs caused birth defects. In the past, little research was done about women’s health that did not relate to reproductive health. Fertility seemed to be the primary concern about women’s health, so it was treated as more important than the potential benefits of including women in clinical trials.

The FDA reversed this recommendation in 1993 over ethical concerns about prioritizing the fetus over the potential benefits of including women in clinical trials. The recommendation’s reversal was also related to concerns about autonomy. When the FDA put forth the recommendation, they made it significantly more difficult for women who wanted to participate in clinical trials to do so. Despite some women not wanting children or simply valuing the advancement of women’s health more than any potential negative health effects they may face, women were excluded from trials. The FDA attempted to address safety concerns, but ultimately changed their recommendation because of the constraints on personal choice. While there remain safety concerns, women can now choose to participate in clinical trials.

Due to the urgent nature of the pandemic, ensuring that the vaccine is viable for as many people as possible is a priority. To achieve this, clinical trials have been more inclusive of many medically underrepresented groups, such as pregnant women, minorities, and people with certain medical conditions. Pregnant women can choose to participate in the clinical trials for the vaccine far earlier than they are able to for most medical products. This is partly due to the funding for vaccine development and the higher risk that pregnant women face with COVID-19.

In the future, it is uncertain whether medically underrepresented groups will continue to have the same participation they currently enjoy. While it is clear that these groups ought to have more representation in clinical trials, some medical professionals have concerns about funding and time. Most medical products do not have the funding that the COVID-19 vaccine has. Lower funding limits the number of people who can participate in a clinical trial. Additionally, there are time constraints. When a medical product is necessary, it is important that it is released as soon as possible. Knowledge of the potential health risks for different groups must be weighed against the benefits of releasing the medical product as soon as possible. Often, this means that healthy men and women with no other conditions are the primary subjects of human trials. Later, the effects for individuals with various health conditions may be tested as well, but this is rarely made a priority.

The history of discrimination in clinical trials forces us to consider whether efficiency in the production of a medical product designed for the greatest number of people is truly fair when it means that persistently underrepresented groups continue to suffer from a lack of viable medical products tailored to their particular needs.

The Ethics of Dark Tourism

photograph of neon Cecil Hotel sign

In February 2021, Netflix released a four-part docuseries called Crime Scene: The Vanishing at the Cecil Hotel. The series focuses on the death of Elisa Lam, but along the way it tells the story of the building. It was built in the 1920s with all of the glamour that is often associated with hotels of that age in that area. The hotel struggled during the Great Depression. It is located on skid row, and eventually it became a common resting point for the city’s poor. The Cecil is infamous for the deaths that have taken place there and for the fact that two famous serial killers, Richard Ramirez and Jack Unterweger, stayed there during the period in which they were actively killing people. A season of American Horror Story was based on the folklore that surrounds the Cecil Hotel.

Elisa Lam was a 21-year-old student at the University of British Columbia. She vacationed in California in the early months of 2013. Several days into her trip, she checked into the Cecil Hotel. It was frequented by international travelers because it was inexpensive and functioned as a hostel. These travelers were also largely unfamiliar with the hotel’s past and as a result they were undeterred by it. During her stay, Lam initially shared a room with some of the hotel’s other international travelers. She was moved to her own room when those travelers complained about her erratic behavior. Shortly thereafter, Lam disappeared. The last known images of her were captured on a hotel security tape, and her behavior in it is unusual. At times, she appears to be checking for something or someone outside the elevator door. She moves her hands in unusual ways and presses the buttons for all of the floors. Finally, she walks awkwardly out of the elevator and down the hall. The police released the tape and the video went viral, prompting internet sleuths across the globe to speculate about what happened to her. She was found weeks later, naked and dead, in a water tank on top of the hotel, which a maintenance worker checked after guests complained that their shower and tap water was coming out black.

After the series came out, there was renewed interest in staying at the Cecil Hotel. Crime aficionados and ghost hunters were eager to spend the night — preferably in a room in which Ramirez or Lam once stayed. The hotel has been closed for renovations since 2017, but this has not stopped “dark tourists” and social media personalities from sneaking in to take pictures and footage.

Many people would rather visit the home of a serial killer, the location where a famous murder happened, or the site of a natural disaster than a sandy beach or a world heritage site. Dark tourism isn’t new. People often feel powerful connections to some of the world’s most tragic events. This connection is so strong that thousands of people visit Gettysburg every year, not simply to observe a historical site or to pay their respects to the many human beings that died in that battle, but to actually take on roles and act out what occurred there.

People will engage in dark tourism even when there is risk that doing so might be dangerous to their health and safety. For instance, for years tourists have been visiting Chernobyl, the location of the nuclear disaster that led to agonizing death and long-term illness for so many people in the 1980s and beyond. The risk of exposure to radiation has been no source of concern for many tourists who just want to be close to tragedy.

One way of viewing this kind of behavior is as just one form that an interest in history can take, and there is no reason to be critical of anyone for taking an interest in history. Millions of people visit the Tower of London every year. The fact that terrible things happened there is part of what makes it an interesting place. Most cities and the buildings in them have a rich variety of stories to tell. The ability these destinations have to call up our sense of empathy and shared humanity is part of what makes many of us interested in traveling in the first place.

On the other hand, intentions may turn out to matter quite a bit. If a person gets a charge from visiting the home of a serial killer and their preferred vacation destination is a tour of death, that person may have some soul searching to do.

It also might matter whether it is “too soon” to treat the location in question as a place where tourists can get cheap thrills. Since the Battle of Bosworth happened in 1485, it may be the case that no one can be thought of as particularly perverse for experiencing excitement when visiting the location where it took place. If the event occurred in living memory, it may be wise to be more circumspect. There are actual living, breathing human beings that might be hurt by the decision to treat the location of their personal tragedy as if it is a great spot to grab an Instagram photo on spring break. In the case of Elisa Lam, there is good reason to believe that mental illness played a role in her death. When people visit the Cecil Hotel hoping to contact the ghost they believe killed her, it minimizes the real tragedy of what likely actually happened.

That said, it may be that some events were so inhumane that it is never appropriate to visit sites associated with them for kicks. For instance, over the years there has been much discussion about what to do with Hitler’s childhood home. There was discussion for a while of turning it into a museum dedicated to the memory of the victims of the Nazis. In recent years, Austria has decided to tear it down to reduce or eliminate the attraction the location has for neo-Nazis.

In Salem, Massachusetts, visitors can buy a ticket to the Salem Witch Dungeon, which is ostensibly a site to educate tourists about what the trials, imprisonment, and execution of people accused of witchcraft would have been like for those who experienced them. Unfortunately, at many turns the Witch Dungeon is more like a modern haunted house than it is a respectful educational opportunity. When people wearing spooky makeup are hired to generate screams, it can be easy to forget that everyone who was accused of witchcraft was innocent of that charge and that the events that are being reenacted in the dungeons are based on the last torturous days of the innocent.

Aristotle thought that part of what it is to be a virtuous person is to habituate the dispositions to have apt feelings and reactions to one’s circumstances. This requires practice and keeping a close eye on others who have well-developed characters. Having the right response to a location associated with tragedy may not be a matter of avoiding these locations, but, instead, visiting with the appropriate amount of respect and understanding.

Do Hashtags Make Political Discourse Worse?

image of hashtags on sticky note

Are hashtags ruining political discourse? On Twitter, the hashtag now serves little technical purpose following upgrades to the site’s search function, yet the use of hashtags in the political sphere remains incredibly popular as a means of bringing attention to, or giving one’s thoughts on, a subject of significant public interest. While some suggest that hashtags facilitate better public debate, there is good reason to believe that they also make political discussion less rational and more polarized. If expressing political ideas through hashtags makes political discourse worse off, then their continued use poses a morally significant decision for anyone choosing to use them.

Let’s begin by considering a hashtag like #defundthepolice. The past year has drawn more attention to the idea of police reform. While much of this attention centered on reduction of police violence following the death of George Floyd, greater focus has also been attached to reconsidering the institutions and meaning of policing. For example, questions have been raised about whether police should be responding to mental health crises. In the case of Daniel Prude, for instance, police responded to a man in a state of delirium, and the encounter resulted in his death. Walter Wallace Jr., who suffered from bipolar disorder, was killed by police responding to a domestic disturbance. Thirteen-year-old Linden Cameron is now paralyzed after being shot by police responding to a call that a juvenile was having a “violent psychological issue.”

These cases remind us that police are called to handle a wide variety of social disturbances and that the best way to handle such cases might vary, which can render traditional policing a bad fit. It is worth noting that the concept of policing and the means by which public order has been maintained have changed over time. For example, not that long ago, it was a novel idea to have a police force that went out and actively investigated crimes, or to have uniformed officers use military-style training and tactics. And yet, many concepts of policing and their institutions come from a time before any significant advances were made in understanding and treating mental health, and before contemporary methods for social work were devised. The question, therefore, is whether the concept of policing, and the means employed, still best suit the problems for which they were created, as we understand those problems today.

In terms of the debate about reform, it is important to note that this isn’t an issue of big or small government. Police already respond to these calls anyway; taxpayers already pay for these services (and the lawsuits that follow from them). Deciding what kinds of problems we think police should respond to, whether social workers should be involved, and how to assess effectiveness are all matters for careful community debate, evidence-gathering, and experimentation. But does a slogan like “#defundthepolice” actually make this deliberative process worse?

Part of the problem with a phrase like “defund the police” is that it is incredibly vague: Does defund mean to reduce in budget? Does it mean elimination of police? Does it mean reform to traditional policing? But the problem is even more complicated. The broader question facing the public is about redefining what “policing” even is (and should be) given our current understanding of the contemporary problems it is meant to address. Given this, we may choose to redefine social work and policing such that they blur. Police officers do not have to be the only people involved in ensuring safety and preventing disorder. So, what does “the police” mean? Are we talking about a specific police institution, such as in the case of Minneapolis, where the police department itself may be eliminated? Or does “the police” refer to the entire concept of civil protection in general?

Part of the problem with the popularization of a phrase like #defundthepolice is that it not only makes political discourse more ambiguous, but that it also has the potential to limit our thinking. We do a disservice to ourselves and our concepts by presuming a narrow definition of things like “policing” in our shared vocabulary and collective imagination. And the ambiguity introduced with such slogans may make it more difficult to achieve political consensus.

In a recent study, Eugenia Ha Rim Rho and Melissa Mazmanian of the University of California compared people who read news that includes hashtags with people who read news that does not. They found that when people were exposed to a politically charged hashtag like #BlackLivesMatter or #MeToo, they were more likely to use partisan language to discuss the news, and more likely to focus on assumed political biases rather than the social issue discussed in the news content. The study notes, “those shown news posts with hashtags exhibited more black-and-white and less emotionally temperate rhetorical patterns compared to the control group” and found that nuanced understandings of content get drowned out by the hashtag. Such findings reinforce the idea that hashtags are potentially harmful to political discourse.

On the other hand, some researchers have argued that political hashtags facilitate better and more meaningful conversations. Such hashtags are known to increase narrative agency by allowing for personal and communal storytelling. For example, the ability to share personal stories using the #MeToo hashtag contributes to political discourse on the issue of sexual harassment by offering perspective and by making the nature of the problem clearer. Hashtags can also make it easier to draw attention to important issues that might not receive attention otherwise, and allow for more opportunity for contribution by each participant in the discussion. As a recent paper argues, Twitter does offer the potential for non-elite actors to engage in content creation and framing in communities that form in response to an event or issue.

Hashtags can also be used as part of heuristic processing, making it easier to understand topics and events. This can facilitate communication, organization, and cooperation in response to social issues. As a form of “hashtag activism,” the use of hashtags may make people more likely to be engaged. According to a 2014 study, almost 60% of Americans felt that tweeting or posting about something is an effective form of advocacy. As Bev Gooden, a creator of the hashtag #whyIstayed, notes, “I think the beauty of hashtag activism is that it creates an opportunity for sustained engagement.”

So while there is a lot of potential promise to the idea of hashtags as a tool to rally and inform, hashtags also have the potential to rally to the point of obstinacy and to misinform. Of course, it is unlikely that all hashtags will always have the same effects on political discourse and so the choice of when to use them, and how, ultimately becomes an important individual moral question about how to best contribute to a public democratic discourse, demanding of us the need to carefully consider nuance and context.

“Stand Back and Stand By”: The Demands of Loyal Opposition

photograph of miniature US flag with blurred background

An incendiary essay is currently making the rounds. Glenn Ellmers’s “‘Conservatism’ is no Longer Enough” is a call to arms: “The United States has become two nations occupying the same country.” The essay details a kind of foreign occupation:

most people living in the United States today—certainly more than half—are not Americans in any meaningful sense of the term. […] They do not believe in, live by, or even like the principles, traditions, and ideals that until recently defined America as a nation and as a people. It is not obvious what we should call these citizen-aliens, these non-American Americans; but they are something else.

Given this dire situation where there is “almost nothing left to conserve,” “counter-revolution” represents “the only road forward.” Those brave enough to grasp this grave truth also possess the clarity of vision to see that “America, as an identity or political movement, might need to carry on without the United States.” For if true patriots fail to find the courage to mobilize and take action, “the victory of progressive tyranny will be assured. See you in the gulag.”

While it may seem irresponsible to grant such obvious propaganda additional attention, Ellmers’s essay is worthy of consideration for two reasons. First, this is not your run-of-the-mill internet debris. It bears the seal of a prominent conservative think tank. Published by The American Mind, an outlet with direct ties to the Claremont Institute (from which Ellmers graduated and where he serves as a fellow), the essay is endorsed by a body with not insignificant cachet. The Institute’s various fellows and graduates, for instance, have ties to major universities. It would be a mistake to see this as obscure preaching to a small flock; this narrative is emblematic. It’s an intellectualized hard-right manifesto serving as a mission statement for the Claremont Institute for the Study of Statesmanship and Political Philosophy whose name Ellmers invokes.

Second, the essay provides a compelling framework by which to understand the motivations behind a number of recent events — the various efforts to overturn the results of the presidential election, the January 6th Capitol riot, as well as voting legislation in Georgia (and elsewhere) attempting to restrict the franchise to “real” Americans. Like Michael Anton’s “The Flight 93 Election” (another Claremont fellow whose work was published by the same body), Ellmers’s essay paints the current political moment as a desperate choice: fight or face extinction, rush the cockpit or die.

Ellmers’s essay has received attention in no small part due to its eerie similarity to Weimar-era German political writings. Echoing the kind of language used by Carl Schmitt – the constitutional scholar and jurist who embraced National Socialism – Ellmers emphasizes the need to declare a state of emergency and purge those who have infiltrated the state and subverted American politics, all in an act of restoration and purification. “What is needed, of course,” Ellmers claims, “is a statesman who understands both the disease afflicting the nation, and the revolutionary medicine required for the cure” — a pronouncement strikingly similar to Schmitt’s explanation of the role of the sovereign to normalize the situation by embracing the responsibility to deliver the miracle of the decision – that is, the extra-legal authority to say whether everyday legal norms should apply.

Likewise, Ellmers’s essay seconds Schmitt’s conviction that the basis of politics rests on distinguishing friend from foe and treating them accordingly. For any state to continue to be, it must be willing and able to forcibly expel those who might undermine its fundamental homogeneity in order to save itself from corruption from within. Again, following Schmitt, Ellmers issues a dire warning on the supposed political virtue of tolerance and questions our blind faith in democracy’s ability to assimilate conflicting and antagonistic viewpoints and house them under the same roof.

Lost in all the fascist rhetoric is an important philosophical problem. The challenge is familiar to students of political obligation: how can citizens feel any tie to the law when it isn’t their team who’s making the rules? It is what David Estlund has called the “puzzle of the minority democrat”: how can those in the minority consider themselves self-governing if they are subject to laws they have not explicitly endorsed?

This is no small thing; resolving this tension is the key to the bloodless transition of power. Ensuring citizens can adequately identify with the law and see themselves sufficiently reflected in their government is a necessary component of the exercise of legitimate political authority. We need a compelling answer for how citizens might still see themselves as having had a hand in authoring these constraints even when their private preferences have failed to win the day. Why should those in the minority sacrifice their own sense of what is right simply because they lack numbers on their side on any particular occasion?

Our answers to this puzzle often begin by emphasizing that democratic decision-making is essentially about compromise. Majority rule acknowledges our basic equality by publicly affirming the worth of each citizen’s viewpoint. It privileges no single individual’s claim to knowledge or expertise. It grants each citizen the greatest share of political power possible that remains compatible with people’s basic parity. From there, explanations begin to diverge.

Some accounts emphasize the duty to live by the result of the game in which we’ve been a willing participant. Others highlight the opportunity to impact the decision, voice concerns, and engage in reason-giving. A few maintain faith in the majority’s ability to come to the “correct” decision.

Regardless of the particulars, each of these accounts makes a virtue of reciprocity; individual freedom must be balanced against the equally legitimate claims to liberty by one’s fellows. When we refuse to acknowledge this, we usurp others’ right to equal discretion in shaping our shared world and thus violate our moral commitment to the fundamental equality of people.

These considerations about how best to accommodate deep, and potentially incompatible, disagreement have important implications for our politics today. For example, the ongoing debate over reforming the filibuster is a conversation about, among other things, the appropriate portion of power those in the minority should wield. Different people articulate different visions of the part the opposition party needs to play. But we seemingly all agree that this role must be more robust than one wherein those in the minority simply bide their time until they can rewrite the law and install their own private political vision. Instead, we must continue to articulate the significant demands the concept of loyal opposition makes on all of us. Responsible statesmanship is not solely the burden of those who wear the crown.

True Crime and Empathy

photograph of coroner and officer hovering over body

True crime is a prism through which we understand a myriad of social concerns, including race, gender, class, and mental illness. It’s an arena where the political is made personal, where structural inequalities are boiled down to or made manifest through individual acts of stunning violence. It’s also infinitely versatile in terms of form (podcast, documentary, online forum) and tone (prestigious, comic, sensational). There are many obvious pitfalls for the investigative journalists and television producers who peddle true crime stories. They might influence public opinion about a case, or even change the course of an investigation, as happened with both the podcast Serial and the hit show Making a Murderer. But what does the popularity of this genre teach us about empathy, and what ethical dilemmas are faced by its adherents?

Those adherents, as Rachel Monroe says in her 2019 book Savage Appetites: Four True Stories of Crime, Women, and Obsession, are overwhelmingly women (one 2019 study says that women make up about 85% of true crime aficionados). These women, as Monroe explains, “reinvented themselves, finding personal meaning through other people’s tragedies. They used [true crime] as a way to live out other kinds of lives, ones that were otherwise unavailable to them.” Monroe argues that those with an active interest in true crime (men and women alike) tend to identify with an archetypal figure; they can be the analytical detective who seeks justice within the judicial system, the battered victim, the avenging vigilante, or even the murderer.

We can even identify with the archetypal true-crime reporter. The success of the podcast Serial, Alice Bolin says, depends on how relatable the host is: “Like the figure of the detective in many mystery novels, the reporter stands in for the audience, mirroring and orchestrating our shifts in perspective, our cynicism and credulity, our theories, prejudices, frustrations, and breakthroughs.” This process of identification is the driving thrust behind true crime fandom. It isn’t enough to see the facts laid out; we want a narrative to project ourselves into, and the form that projection takes is rooted in our deepest needs and desires. Though the stereotype of true-crime fans is that of the addict, the passive consumer, it’s virtually impossible to get invested in crime without some element of it speaking to you.

In the world of true crime, the line between creator and consumer is tenuous at best. This is most evident in the Elisa Lam case, which was the subject of a recent Netflix documentary. Lam died tragically on the roof of a hotel after going off her medication, and a surprising number of people (many of whom are featured in the documentary) became obsessed with “uncovering the truth” behind her death. Internet sleuths constructed an elaborate web of conspiracy, positioning themselves as authorities over her story, while also consuming it voyeuristically. Some claimed she was the victim of supernatural forces, others harassed members of her family. This case exemplifies that identifying with the victim is not the same as showing empathy for the victim.

Narratives, more broadly, give us a sense of justice and stability. Kevin Balfe, the founder of the wildly popular true crime convention CrimeCon, explained to Time that “most of these stories represent what all great stories have. There’s a hero. There’s a villain. There’s usually a mystery. There’s oftentimes a traumatic event. There’s usually a resolution.” The question is whether or not immersing ourselves in such narratives can make us blind to reality, as the almost fairy-tale narrative structure described by Balfe suggests.

At the same time, it’s difficult to prove concretely that true crime reinforces negative stereotypes or stirs up undue fear of violence. How do you draw a straight line between a true crime documentary and a person’s heightened anxiety about serial killers, or their blind support for the judicial system? And such concerns can easily veer into baseless moralizing, which is especially troubling given how many true crime fans are women. Women’s interests are so often trivialized and policed, and any critique of true crime should take this into account. Anyone with critical thinking skills can love true crime as mere entertainment, without over-investing their identity into their favorite stories, and there is nothing inherently wrong with an interest in the macabre.

Content creators have obligations to research their stories thoroughly, and present the story without sensationalizing or cheapening tragedy. But those who consume true crime also have obligations to remain empathetic, and not massage a real tragedy into a more cohesive or alluring narrative. When we do this, it is an injustice to ourselves, to the messiness of reality, and most importantly, to the victims true crime fans are meant to care about.

A Problem with Emotions

abstract acrylic painting of divided canvas

There is a certain challenge to the adequacy of our emotional reactions — especially those reactions, like grief and joy, which feel ‘called for’ at certain times. Suppose a family has a child who falls grievously ill. After many sleepless nights, the child stabilizes and eventually recovers. There are appropriate emotional responses to this sequence; the parents will, and should, feel relieved and joyed at the child’s recovery. Now suppose another family has a child who similarly falls grievously ill. Except this child does not recover and eventually dies. Again, there are appropriate emotional responses. The parents will, and should, feel grieved and heartbroken at the child’s death.

So far, there is no challenge. But now suppose that instead of two different families, it was one family with two children — one recovers, one dies. Here, what are the parents supposed to feel? There are a couple of options.

Perhaps they should feel a sort of moderated grief. After all, something wonderful has happened (a child has recovered) and something terrible has happened (a child has died). Do they partially cancel out (but maybe weighted in the direction of grief since ‘bad is stronger than good’)? The problem with this answer is that the grief is a response to the tragedy of the child’s death. And that child’s death is no less a tragedy just because the other child survived. Moderation would be appropriate if something happened to moderate the tragedy of the child’s death — such as the child being spared death and instead placed within an enchanted sleep — but it does not seem like the appropriate response to some other good thing occurring.

Perhaps, then, you just need to feel either emotion. Both grief and joy are appropriate — so long as you feel one, you are feeling well. But this won’t do either. There is something wrong with the parent who feels nothing for the recovery of their child, just as there is something wrong with the parent who feels nothing for the child’s death.

In fact, the only response that seems appropriate to the situation is to feel both grief and joy. You ought to be grieved at the one child’s death and joyed at the other child’s recovery.

But here is the issue. It doesn’t seem possible to fully feel both at once. Feelings, unlike some other mental states, compete with each other. When I feel happy about one thing, it pushes sadness about other things to the periphery. This is unlike, say, beliefs. The parents can fully believe that one child recovered while, at the same moment, fully believing that the other child died. This is because beliefs do not require active attention. Moments ago, you believed all sorts of things about your former elementary school, but I expect until you read this sentence you were not actively attending to any of those beliefs.

Emotions, however, do require attention. If I can become fully absorbed in my work, then for a time my grief will retreat. (Of course, one of the frustrating things about grief is the way that it maintains a ‘grip’ on your attention — forcing your thoughts to circle back and return again, and again, to the tragedy.)

So, to fully feel the grief at the one child’s death, and to fully feel the joy at the other child’s recovery, would require me to keep my full attention on both at the same time. But we can’t do that: attention is a limited resource. It can only be fully engaged in one direction.

The best we can do, then, is a sort of ping-ponging back and forth between grief and joy. Feeling complete grief when attending to the death, feeling thankful and relieved when attending to the recovery. But at no point, it seems, can my emotions be completely responsive to what is called for.

Berislav Marušić, in his essay “Do Reasons Expire?”, considers a related puzzle:

“Grief is, plausibly, a response to reasons; the reason for my grief was my mother’s death; her death does not change over time; but it is not wrong for me to grieve less over time. Yet how could the diminution of grief not be wrong, if my reason for grief stays the same?”

The reason the problem is similar is that there is a disconnect between the response demanded by the event (the tragedy of someone’s death) and the psychological realities of our capacity to have emotions. You just can’t indefinitely grieve, and in turn you don’t indefinitely grieve. But doesn’t it seem as if there is a sense in which you ought to?

There is a conflict, then, between the psychological realities that constrain our emotions and the appropriateness conditions surrounding what emotions we ‘ought’ to feel.

This is an important conflict to think about. One reason is that it helps us recognize exactly why we need to be so skeptical of grounding our moral decisions simply on emotions like anger or grief. Since we can only feel some emotions to an extent, our emotional responses, at a given time, are usually not responsive to the full range of relevant considerations. You can feel outrage about an injustice, or hopeful at political progress that has been made, but you can’t feel both at the same time to the appropriate extent. Given that psychological reality, basing policy recommendations on emotions of rage or optimistic hope is likely to be morally dangerous.

This does not mean that emotions should play no role in our moral decision-making. Emotions are important. What it means, instead, is that we need to be extremely cautious when acting on our emotional reactions. We should always bear in mind that our emotions are unlikely to reflect the full range of complexities in any given case.