
Informed Consent and the Joe Rogan Experience

photograph of microphone and headphones in recording studio



The Joe Rogan Experience (JRE) podcast was again the subject of controversy when a recent episode was criticized by scientific experts for spreading misinformation about COVID-19 vaccinations. It was not the first time this had happened: Rogan has frequently been on the hot seat for espousing views on COVID-19 that contradict the advice of scientific experts, and for entertaining guests who voice similar views. The latest incident involved Dr. Robert Malone, who relied on his medical credentials to make widely rejected views seem more reliable. Malone has himself been at the center of several recent controversies: he was kicked off of YouTube and Twitter for violating their respective policies on the spread of misinformation, and his appearance on the JRE podcast has prompted some to call for Spotify (where the podcast is hosted) to adopt a more rigorous misinformation policy.

While Malone made many dubious claims during his talk with Rogan – including that the public has been “hypnotized,” and that policies enforced by governments are comparable to policies enforced during the Holocaust – there was a specific ethical argument that perhaps passed under the radar. Malone made the case that it was, in fact, his moral duty (and presumably that of other doctors and healthcare workers) to tell those considering the COVID-19 vaccine about a wide range of potential detrimental effects. For instance, in the podcast he stated:

So, you know my position all the way through this comes off of the platform of bioethics and the importance of informed consent, so my position is that people should have the freedom of choice particularly for their children… so I’ve tried really hard to make sure that people have access to the information about those risks and potential benefits, the true unfiltered academic papers and raw data, etc., … People like me that do clinical research for a living, we get drummed into our head bioethics on a regular basis, it’s obligatory training, and we have to be retrained all the time… because there’s a long history of physicians doing bad stuff.

Here, then, is an argument that someone like Malone may be making, and that you’ve potentially heard at some point over the past two years: Doctors and healthcare workers have a moral obligation to provide patients who are receiving any kind of health care with adequate information in order for them to make an informed decision. Failing to provide the full extent of information about possible side-effects of the COVID-19 vaccine represents a failure to provide the full extent of information needed for patients to make informed decisions. It is therefore morally impermissible to refrain from informing patients about the full extent of possible consequences of receiving the COVID-19 vaccine.

Is this a good argument? Let’s think about how it might work.

The first thing to consider is the notion of informed consent. The general idea is that providing patients with adequate information is required for them to have agency in their decisions: patients should understand the nature of a procedure and its potential risks so that the decision they make really is their decision. Withholding relevant information would thus constitute a failure to respect the agency of the patient.

The extent and nature of the information that patients need to be informed of, however, is open for debate. Of course, doctors and healthcare workers have no obligation to provide false or misleading information to patients: being adequately informed means receiving the best possible information at the doctor’s disposal. Many of the worries surrounding the advice given by Malone, and others like him, pertain to exactly this point: the concerns they raise are overblown, or have been debunked, or are generally not accepted by the scientific community, and thus there is no obligation to present information of that kind to patients.

Regardless, one might still think that in order to have fully informed consent, one should be presented with the widest range of possible information, after which the patient can make up their own mind. Of course, Malone’s thinking is much closer to the realm of the conspiratorial – for example, he stated during his interview with Rogan that scientists manipulate data in order to appease drug companies, in addition to his aforementioned claims about mass hypnosis. Even so, if these views are genuinely held by a healthcare practitioner, should they present them to their patients?

While informed consent is important, there is also debate about how fully informed, exactly, one ought to be, or can be. For instance, while an ideal situation would be one in which patients had a complete, comprehensive understanding of the nature of a relevant procedure, treatment, etc., there is reason to think that many patients fail to achieve that degree of understanding even after being informed. This isn’t really surprising: most patients aren’t doctors, and so will be at a disadvantage when it comes to having a complete medical understanding, especially if the issue is complex. A consequence, then, may be that patients who are not experts could end up in a worse position when it comes to understanding the nature of a medical procedure when presented with too much information, or else information that could lead them astray.

Malone’s charge that doctors are failing to adhere to their moral duties by not fully informing patients of a full range of all possible consequences of the COVID-19 vaccination therefore seems misplaced. While people may disagree about what constitutes relevant information, a failure to disclose all possible information is not a violation of a patient’s right to be informed.

Correcting Bias in A.I.: Lessons from Philosophy of Science

image of screen covered in binary code

One of the major issues surrounding artificial intelligence is how to deal with bias. In October, for example, Uber drivers held a protest decrying as racist the algorithm the company uses to verify its drivers. Many Black drivers were unable to verify themselves because the software failed to recognize them, leaving them unable to work. In 2018, a study showed that a Microsoft algorithm failed to identify 1 in 5 darker-skinned females, and 1 in 17 darker-skinned males.

Instances like these prompt much strategizing about how we might stamp out bias once and for all. But can you completely eliminate bias? Is the solution to the problem a technical one? Why does bias occur in machine learning, and are there any lessons that we can pull from outside the science of AI to help us consider how to address such problems?

First, it is important to address a certain conception of science. Historically, scientists – mostly influenced by Francis Bacon – espoused the notion that science was purely about investigation into the nature of the world for its own sake in an effort to discover what the world is like from an Archimedean perspective, independent of human concerns. This is also sometimes called the “view from nowhere.” However, many philosophers who would defend the objectivity of science now accept that science is pursued according to our interests. As philosopher of science Philip Kitcher has observed, scientists don’t investigate any and all forms of true claims (many would be pointless), but rather they seek significant truth, where what counts as significant is often a function of the interests of epistemic communities of scientists.

Next, because scientific modeling is influenced by what we take to be significant, it is often influenced by assumptions we take to be significant, whether there is good evidence for them or not. As Cathy O’Neil notes in her book Weapons of Math Destruction, “a model…is nothing more than an abstract representation of some process…Whether it’s running in a computer program or in our head, the model takes what we know and uses it to predict responses to various situations.” Modeling requires that we understand the evidential relationships between inputs and predicted outputs. According to philosopher Helen Longino, evidential reasoning is driven by background assumptions because “states of affairs…do not carry labels indicating that for which they are or for which they can be taken as evidence.”

As Longino points out in her book, these background assumptions often cannot be completely empirically confirmed, and so our values frequently drive which background assumptions we adopt. For example, clinical depression involves a myriad of symptoms, but no single unifying biological cause has been identified. So, what justifies our grouping all of these symptoms into a single illness? According to Kristen Intemann, what allows us to infer the concept “clinical depression” from a group of symptoms are assumptions we have that these symptoms impair functions we consider essential to human flourishing, and it is only through such assumptions that we are justified in grouping symptoms under a condition like depression.

The point philosophers like Intemann and Longino are making is that such background assumptions are necessary for making predictions based on evidence, and also that these background assumptions can be value-laden. Algorithms and models developed in AI also involve such background assumptions. One of the bigger ethical issues involving bias in AI can be found in criminal justice applications.

Recidivism models are used to help judges assess the danger posed by each convict. But people do not carry labels saying they are recidivists, so what would you take as evidence that would lead you to conclude someone might become a repeat offender? One assumption might be that if a person has had prior involvement with the police, they are more likely to be a recidivist. But if you are Black or brown in America where stop-and-frisk exists, you are already disproportionately more likely to have had prior involvement with the police, even if you have done nothing wrong. So, because of this background assumption, a recidivist model would be more likely to predict that a Black person is going to be a recidivist than a white person who is less likely to have had prior run-ins with the police.
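To make that mechanism concrete, here is a minimal, purely illustrative sketch (the group labels, contact rates, and one-rule “model” are invented for the example and are not drawn from any real recidivism instrument). It shows how a model that treats prior police contact as evidence of future offending will flag members of a more heavily policed group far more often, even when the two groups’ underlying behavior is identical.

```python
import random

random.seed(0)

# Toy population: two groups with identical underlying behavior,
# but group B is policed far more heavily, so "prior contact" is
# recorded much more often for its members.
def simulate_person(group):
    prior_contact = random.random() < (0.6 if group == "B" else 0.2)
    return {"group": group, "prior_contact": prior_contact}

population = [simulate_person("A") for _ in range(10_000)] + \
             [simulate_person("B") for _ in range(10_000)]

# Toy "recidivism model": a single background assumption baked into a rule --
# prior police contact is treated as evidence of future offending.
def predicted_high_risk(person):
    return person["prior_contact"]

for group in ("A", "B"):
    members = [p for p in population if p["group"] == group]
    flagged = sum(predicted_high_risk(p) for p in members) / len(members)
    print(f"Group {group}: flagged as high risk {flagged:.0%} of the time")
# Despite identical behavior, group B ends up flagged roughly three times as often.
```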

But the background assumption that prior contact with the police is a good predictor of recidivism is questionable, and in the meantime that assumption creates biases in the application of the model. To further add to the problem, as O’Neil notes in her analysis of the issue, recidivism models used in sentencing involve “the unquestioned assumption…that locking away ‘high-risk’ prisoners for more time makes society safer,” adding “many poisonous assumptions are camouflaged by math and go largely untested and unquestioned.”

Many who have examined the issue of bias in AI suggest that the solutions to such biases are technical in nature. For example, if an algorithm produces biased results because it was trained on biased data, the solution is to use more data to eliminate the bias. In other cases, researchers attempt to define “fairness” technically, requiring, for instance, that models have equal predictive value across groups, or equal false positive and false negative rates across groups. Many corporations have also built AI frameworks and toolkits designed to recognize and eliminate bias. O’Neil notes how many responses to biases created by crime prediction models simply focus on gathering more data.
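To give a sense of what those technical definitions amount to in practice, here is a small illustrative sketch (the data, tolerance, and function are invented for the example, not taken from any particular fairness toolkit). It checks one common “error-rate parity” criterion: whether a model’s false positive and false negative rates diverge across two groups.

```python
def error_rates(predictions, outcomes):
    """Return (false_positive_rate, false_negative_rate) for one group."""
    fp = sum(1 for p, o in zip(predictions, outcomes) if p and not o)
    fn = sum(1 for p, o in zip(predictions, outcomes) if not p and o)
    negatives = sum(1 for o in outcomes if not o)
    positives = sum(1 for o in outcomes if o)
    return fp / negatives, fn / positives

# Invented example data: 1 = flagged / reoffended, 0 = not.
group_a_pred, group_a_true = [1, 0, 0, 1, 0, 0], [1, 0, 0, 0, 0, 1]
group_b_pred, group_b_true = [1, 1, 1, 0, 1, 0], [1, 0, 0, 0, 1, 1]

fpr_a, fnr_a = error_rates(group_a_pred, group_a_true)
fpr_b, fnr_b = error_rates(group_b_pred, group_b_true)

# "Error-rate parity" style check: flag the model if the groups'
# false positive or false negative rates diverge by more than some tolerance.
tolerance = 0.1
if abs(fpr_a - fpr_b) > tolerance or abs(fnr_a - fnr_b) > tolerance:
    print("Model fails the error-rate-parity check across groups")
```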

On the other hand, some argue that focusing on technical solutions to these problems misses the issue of how assumptions are formulated and used in modeling. It’s also not clear how well technical solutions may work in the face of new forms of bias that are discovered over time. Timnit Gebru argues that the scientific culture itself needs to change to reflect the fact that science is not pursued as a “view from nowhere.” Recognizing how seemingly innocuous assumptions can generate ethical problems will necessitate greater inclusion of people from marginalized groups.  This echoes the work of philosophers of science like Longino who assert that not only is scientific objectivity a matter of degree, but science can only be more objective by having a well-organized scientific community centered around the notion of “transformative criticism,” which requires a great diversity of input. Only through such diversity of criticism are we likely to reveal assumptions that are so widely shared and accepted that they become invisible to us. Certainly, focusing too heavily on technical solutions runs the risk of only exacerbating the current problem.

Why Don’t People Cheat at Wordle?

photograph of Wordle being played on phone

By now, you’ve probably encountered Wordle, the colorful daily brainteaser that gives you six attempts to guess a five-letter word. Created in 2020 by Josh Wardle, the minimalistic website has gone viral in recent weeks as players have peppered their social media feeds with the game’s green-and-yellow boxes. To some, the Wordle craze is but the latest passing fad capturing people’s attention mid-pandemic; to others, it’s a window into a more thoughtful conversation about the often social nature of art and play.

Philosopher of games C. Thi Nguyen has argued that a hallmark feature of games is their ability to crystallize players’ decision-making processes, making their willful (and reflexive) choices plain to others; to Nguyen, this makes games a “unique art form because they work in the medium of agency.” I can appreciate the tactical cleverness of a game of chess or football, the skillful execution of a basketball jump shot or video game speedrun, or the imaginative deployment of unusual forms of rationality towards disposable ends (as when we praise players for successfully deceiving their opponents in a game of poker or Mafia/Werewolf, despite generally thinking that deception is unethical) precisely because the game’s structure allows me to see how the players are successfully (and artistically) navigating the game’s artificial constraints on their agency. In the case of Wordle, the line-by-line, color-coded record of each guess offers a neatly packaged, easily interpretable transcript of a player’s engagement with the daily puzzle: as Nguyen explains, “When you glance at another player’s grid you can grasp the emotional journey they took, from struggle to likely victory, in one tiny bit of their day.”
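For readers who haven’t played, the color-coded grid Nguyen describes is generated by a simple feedback rule: a letter in the right position is marked green, a letter that appears elsewhere in the answer is marked yellow, and everything else is grey. Here is a minimal sketch of that rule (the function name and the handling of repeated letters are my own assumptions about how such games typically work, not Wordle’s actual code).

```python
from collections import Counter

def score_guess(guess: str, answer: str) -> str:
    """Return one letter per position: G (green), Y (yellow), B (grey)."""
    result = ["B"] * len(guess)
    unmatched = Counter()

    # First pass: letters in the right position are green; tally the
    # answer's remaining letters so yellows can be assigned fairly.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "G"
        else:
            unmatched[a] += 1

    # Second pass: a non-green letter is yellow only if the answer still
    # has an unmatched copy of it (this handles repeated letters).
    for i, g in enumerate(guess):
        if result[i] == "B" and unmatched[g] > 0:
            result[i] = "Y"
            unmatched[g] -= 1

    return "".join(result)

print(score_guess("TARES", "RATES"))  # -> "YGYGG"
```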

So, why don’t people cheat at Wordle?

Surely, the first response here is to simply reject the premise of the question: it is almost certainly the case that some people do cheat at Wordle in various ways or, for that matter, lie about or manipulate their grids before sharing them on social media. How common such misrepresentations are online is almost impossible to say.

But two facets of Wordle’s virality on social media suggest an important reason for thinking that many players have strong reasons to authentically engage with the vocabulary game; I have in mind here:

  1. the felt pressure against “spoiling” the daily puzzle’s solution, and
  2. the visceral disdain felt by non-players at the ubiquity of Wordle grids on their feeds.

In the first case, despite no formal warning presented by the game itself (and, presumably, no “official” statement from either Wordle’s creator or players), there exists a generally unspoken agreement online to avoid giving away puzzle answers. Clever sorts of innuendo and insinuation are frequent among players who have discovered the day’s word, as are meta-level commentaries on the mechanics or difficulty-level of the latest puzzle, but a natural taboo has arisen against straightforwardly announcing Wordle words to one’s followers (in a manner akin to the taboo against spoiling long-awaited movie or television show plots). In the second case, social media users not caught up in Wordle’s grid have frequently expressed their annoyance at the many posts filled with green-and-yellow boxes flying across their feeds.

Both of these features seem to be grounded in the social nature of Wordle’s phenomenology: it is one thing to simply play the game, but it is another thing entirely to share that play with others. While I could enjoy solving Wordle puzzles privately without discussing the experience with my friends, Wordle has become an online phenomenon precisely because people have fun doing the opposite: publicly sharing their grids and making what Nguyen calls a “steady stream of small communions” with other players via the colorful record of our agential experiences. It might well be that the most fun part of Wordle is not simply the experience of cleverly solving the vocab puzzle, but of commiserating with fellow players about their experiences as well; that is to say, Wordle might be more akin to fishing than to solving a Rubik’s cube — it’s the story and its sharing that we ultimately really care about. Spoiling the day’s word doesn’t simply solve the puzzle for somebody, but ruins their chance to engage with the story (and the community of players that day); similarly, the grids might frustrate non-players for the same reason that inside jokes annoy those not privy to the punchline — they underline the person’s status as an outsider.

So, this suggests one key reason why people might not want to cheat at Wordle: it would entail not simply fudging the arbitrary rule set of an agency-structuring word game, but would also require the player to violate the very participation conditions of the community that the player is seeking to enjoy in the first place. That is to say, if the fun of Wordle is sharing one’s real experiences with others, then cheating at Wordle is ultimately self-undermining — it gives you the right answer without any real story to share.

Notice one last point: I haven’t said anything here about whether or not it’s unethical to cheat at Wordle. In general, you’ll probably think that your obligations to tell the truth and avoid misrepresentation will apply to your Wordle habits in roughly the same way that they apply elsewhere (even if you’re not unfairly disadvantaging an opponent by cheating). But my broader point here is that cheating at Wordle doesn’t really make sense — at best, cheating might dishonestly win you some undeserved recognition as a skilled Wordle player, but it’s not really clear why you might care about that, particularly if the Wordle community revolves around communion moreso than competition.

Instead, swapping Wordle grids can offer a tantalizing bit of fun, authentic connection (something we might particularly crave as we enter Pandemic Year Three). So, pick your favorite starting word (mine’s “RATES,” if you want a suggestion) and give today’s puzzle your best shot; maybe we’ll both guess this one in just three tries!

The Curious Case of Evie Toombes: Alternative Realities and Non-Identity

photograph of elongated shadow of person on paved road

Evie Toombes just won a lawsuit against her mother’s doctor. She was born with spina bifida, a birth defect affecting her spine, which requires continual medical care. Taking folic acid before and during pregnancy can help reduce the risk of spina bifida, but Toombes says that the doctor told her mother that folic acid supplements weren’t necessary. The judge ruled that, had the doctor advised Toombes’ mother “about the relationship between folic acid supplementation and the prevention of spina bifida/neural tube defects,” she would have “delayed attempts to conceive” until she was sure her folic acid levels were adequate, and that “in the circumstances, there would have been a later conception, which would have resulted in a normal healthy child.” The judge therefore ruled that the doctor was liable for damages because of Toombes’ condition.

Let’s assume that Toombes is right about the facts. If so, the case may seem straightforward. But it actually raises an incredibly difficult philosophical conundrum noted by the philosopher Derek Parfit. Initially, it seems Toombes was harmed by the doctor’s failure to advise her mother about folic acid. But the suggestion is that, if he’d done so, her mother would have “delayed attempts to conceive,” resulting in the “later conception” of a “normal healthy child.” And, presumably, that child would not have been Evie Toombes. Had her mother waited, a different sperm would have fertilized a different egg, producing a different child. So had the doctor advised her mother to take folic acid and delay pregnancy, it’s not as though Toombes would have been born, just without spina bifida. A different child without spina bifida would have been born, and Toombes would not have existed at all.

It may be that some lives are so bad that non-existence would be better. And if your life is worse than non-existence, then it’s easy to see why you’d have a complaint against someone who’s responsible for your life. But Toombes’ life doesn’t seem to be like this: she is a successful equestrian. And anyway, she didn’t make that claim as part of her argument, and the court didn’t rely on it. However, if Toombes’ life is worth living, and if the doctor’s actions are responsible for her existing at all, it might seem puzzling how the doctor’s actions could have wronged her.

The non-identity problem arises in cases like this, where we can affect how well-off future people are, but only by also changing which future people come to exist. It’s a problem because causing future people to be less well-off seems wrong, but it’s also hard to see who is wronged in these cases, provided the people who come to exist have lives worth living. E.g., it seems that the doctor should have told Toombes’ mother about folic acid, but, assuming her life is worth living, it’s also hard to see how Toombes is wronged by his not doing so, since that’s why she exists.

The non-identity problem also has implications for many other real-world questions. For instance, if we enact sustainable environmental policies, perhaps future generations will be better-off. But these generations will also consist of different people: the butterfly effect of different policies means that different people will get married, will conceive at different times, etc. Provided the (different) people in the resource-depleted future have lives worth living, it may be hard to see why living unsustainably would be wrong.

(It might be plausible that the doctor wronged Toombes’ mother, whose existence doesn’t depend on his actions. But wrongs against currently-existing people may not be able to explain the wrong of the unsustainable environmental policy, provided the bad effects won’t show up for a long time. Some unsustainable policies might only help current people, by allowing them to live more comfortably. And anyway, the court thought Toombes was also wronged: she’s getting the damages.)

Because it is relevant to important questions like this, it would be very handy to know what the solution to the non-identity problem is. Unfortunately, all solutions have drawbacks.

An obvious possibility is to say that we should make the world as good as possible. Since well-being is good, then, all else equal, we would be obligated to make sure that better-off people exist in the future rather than worse-off ones. But the decision of the court was that the doctor wronged Toombes herself, not just that he failed to make the world as good as possible: if that was the problem, he should have been ordered to pay money to some charity that makes the world as good as possible, rather than paying money to Toombes. And anyway, it isn’t obvious that we’re obligated to make sure future generations contain as much well-being as possible. One way to do that is by having happy children. But most people don’t think we’re obligated to have children, even if, in some case, that would add the most happiness to the world on balance.

Another possibility is to say that we can wrong people without harming them. Perhaps telling comforting lies is like this: here, lying prevents a harm, but can still be wrong if the person has a right to know the painful truth. Perhaps individuals have a right against being caused to exist under certain sorts of difficult conditions. But notice that we can usually waive rights like this. If I have a right to the painful truth, I can waive this right and ask you not to tell me. People who haven’t been born yet can’t waive rights (or do anything else). But when people are not in a position to waive a right, we can permissibly act based on whether we think they would or should waive the right, or something like that. You have a right to refuse having your legs amputated. But if paramedics find you unconscious and must amputate your legs to save your life, they’ll probably do it, since they figure you would consent, if you could.  Why not think that, similarly, future people whose lives are worth living generally would or should consent to the only course of action that can bring them into being, even if their lives are difficult in some ways?

A third solution says that Toombes’ doctor didn’t act wrongly after all–and neither would we act wrongly by being environmentally unsustainable, etc. But that’s very hard to believe. It’s even harder to believe in other cases. Here’s a case inspired by the philosopher Gregory Kavka. Suppose I and my spouse sign a contract to sell our (not yet conceived) first child into slavery. Because of the deal, we conceive a child under slightly different circumstances than we otherwise would have, resulting in a different child. (Maybe the slaver gives us a special hotel room.) There’s no way to break the contract and keep our child from slavery. Suppose the child’s life is, though difficult, (barely) worth living. This solution appears to suggest that signing the slave contract is permissible: after all, the child has a life worth living, and wouldn’t exist otherwise. But that doesn’t seem right!

I wrote more about this in chapter eight of this book. There are other possible moves, but they have problems, too. So the non-identity problem is a real head-scratcher. Maybe someone reading this can make some progress on it.

The Morality of “Sharenting”

black-and-white photograph of embarrassed child

The cover of Nirvana’s Nevermind — featuring a naked baby diving after a dollar bill in a pool of brilliant, blue water — is one of the most iconic of the grunge era, and perhaps of the ‘90s. But not everyone looks back on that album with fond nostalgia. Just last week, Spencer Elden — the man pictured as the baby on that cover — renewed his lawsuit against Nirvana, citing claims of child pornography.

Cases like this are nothing new. Concerns regarding the exploitation of children in the entertainment industry have existed for, well, as long as the entertainment industry. What is new, however, is the way in which similar concerns might be raised for non-celebrity children. The advent of social media means that the public sharing of images and videos of children is no longer limited to Hollywood. Every parent with an Instagram account is capable of doing this. The practice even has a name: sharenting. Indeed, those currently entering adulthood are unique in that they are the first generation to have had their entire childhoods shared online — and some of them aren’t very happy about it. So it’s worth asking the question: is it morally acceptable to share imagery of children online before they can give their informed consent?

One common answer to this question is to say that it’s simply up to the parent or guardian. This might be summed up as the “my child, my choice” approach. Roughly, it relies on the idea that parents know what is in the best interests of their child, and therefore reserve the right to make all manner of decisions on their behalf. As long as parental consent is involved whenever an image or video of their child is shared, there’s nothing to be concerned about. It’s a tempting argument, but it doesn’t stand up to scrutiny. Being a parent doesn’t provide you with the prerogative to do whatever you want with your child. We wouldn’t, for example, allow parental consent as a justification for child labor or sex trafficking. If every parent did know what was best for their child, there wouldn’t be a need for institutions like the Child Protection Service. Child abuse and neglect wouldn’t exist. But they do. And that’s because sometimes parents get things wrong. The “my child, my choice” argument, then, is not a good one. So we must look for an alternative.

We might instead take a “consequentialist” approach — that is, to weigh up the good consequences and bad consequences of sharenting to see if it results in a net good. To be fair, there are many good things that come from the practice. For one, social media provides an opportunity for parents to share details of a very important part — perhaps the most important part — of their lives. In doing so, they are able to strengthen their relationships with family, friends, and other parents, bonding with — and learning from — each other along the way. Such sharing also enables geographically distant loved ones to be more involved in a child’s life. This is something that’s become even more important in a world that has undergone unprecedented travel restrictions as a result of the COVID-19 pandemic.

But the mere existence of these benefits is not enough to justify sharenting. They must be weighed against the actual and potential harms of the practice. And there are many. Sharing anything online — especially imagery of young children — is an enormously risky endeavor. Even images that are shared under supposedly private conditions can easily enter the public forum — either through irresponsible resharing by well-intentioned loved ones, or by the notoriously irresponsible management of our data by social media companies.

Once this imagery is in the public domain, it can be used for all kinds of nefarious purposes. But we needn’t explore such dark avenues. Many of us have a lively sense of our own privacy, and don’t want our information shared with the general public regardless of how it ends up being used. It makes sense to imagine that our children — once capable of giving informed consent — will feel the same way. Much of the imagery shared of them online involves private, personal moments intended only for themselves and those they care about. Any invasion of that privacy is a bad thing.

Which brings us to yet another way of analyzing this subject. Instead of focusing purely on the consequences of sharenting, we might instead apply what’s referred to as a “deontological” approach. One of the most famous proponents of deontology was Immanuel Kant. In its most straightforward formulation, Kant’s ethical theory tells us to always treat others as ends in themselves, never merely as means to some other end. This approach prizes respect for the autonomy of others, and abhors using people for your own purposes. Thus, even if there are goods to be gained from sharenting, these should be ignored if the child — upon developing their autonomy — would wish that their private lives had never been made public.

What both the consequentialist approach and the deontological approach seem to boil down to, then, is a question of what the child will want once they are capable of giving informed consent. And this is something we can never know. They may develop into a gregarious braggart who shares every detail of their life online. But they may just as likely turn into a fiercely private individual who wants no record of their childhood — awkward and embarrassing as these always tend to be — in the digital ether. Given this uncertainty, what should parents do? It’s difficult to say, but perhaps the safest approach might be to apply some kind of “precautionary principle.” This principle states that where an unnecessary action brings a significant risk of harm, we should refrain from acting. So, given the potential harm associated with sharenting and the largely unnecessary nature of the practice (especially when similar goods can be achieved in other ways; for example, by mailing photographs to loved ones the old-fashioned way), we should respect our children’s right to privacy — at least until they can give their informed consent to having their private lives shared publicly.

Boris Johnson and the Hypocrisy of Lawmakers

photograph of Boris Johnson making a face

There is something ridiculous about the idea that Boris Johnson might have to resign for hosting a few parties. You might think that it is his policies, or his saying he’d rather “let the bodies pile high” than institute further lockdowns, that should see him go. But parties?

The problem with these gatherings is that they violated COVID regulations, regulations set by Johnson and his party. And the fact that he violated his own decrees (nobody takes seriously his claim that the parties were, in fact, work events) raises an interesting question: what’s so wrong about lawmakers breaking the law?

The first obvious, but bland, answer is that – in a fair legal system – breaking the law simply is wrong, and it’s wrong for lawmakers to break the law in just the same way that it is wrong for anybody to break the law.

This might be a reasonable explanation for why it is wrong for lawmakers to break some laws. For instance, if a lawmaker breaks the speed limit, that seems bad in the same way as if an ordinary member of the public breaks the speed limit. This isn’t just because it is a minor offense. If a member of parliament went and murdered someone, it would be a grave moral wrong, but I don’t think there would be anything especially wrong about it over and above the wrong anyone commits by murdering.

In these cases, the wrongness involved is simply the (appalling or minor) wrongness of breaking the law. But there seems to be something especially bad about Johnson’s behavior.

What I think is key is that there is something more involved when a lawmaker breaks a law they have set. Gideon Yaffe has an interesting argument that could lead to this conclusion. He thinks that, since the law is created by citizens in communities, we are complicit in the creation of these laws. But some people are more complicit than others. For instance, kids aren’t very complicit at all in creating the law (since they can’t vote). Yaffe thinks that the more (or less) complicit one is in creating a law, the stronger (or weaker) that law’s reasons apply to you, and the more strongly (or weakly) you should be punished for violating it. And someone like Johnson was maximally complicit in setting England’s COVID laws.

But I’m not sure I’m persuaded. I simply do not buy Yaffe’s “complicity” argument: I don’t see why we need to suppose that the more say someone has over the law, the more it binds them. And I think there is something to be said for the idea that we are all equal before the law: politicians should be punished, but they shouldn’t face any harsher legal punishment than Joe Bloggs.

It’s also important to note that there isn’t really a push for Johnson to see legal punishment. Although some people want to see that, the real focus is on him facing a political punishment. They want him to resign in disgrace. And I think that what explains this pressure is that Johnson has shown that he cannot take his own laws seriously – and taking the law seriously is the point of being a politician.

We can get to this idea by thinking about hypocrisy. Hypocrisy is problematic in politics because it undermines how seriously we take someone. During the 1990s, John Major’s government had a campaign called “Back to Basics,” which aimed to underscore the importance of traditional values like “neighbourliness, decency, courtesy.” Inevitably, Major’s cabinet was then beset by scandal.

The behavior of Major’s cabinet suggested that they did not take these values very seriously. But that was a moral campaign; the difference that compounds Johnson’s case is that his hypocrisy involves the laws he himself set.

Johnson was not just a hypocrite, he was a hypocrite about the laws he set, laws which are supposed to protect the public. To return to an earlier example, there might not be anything especially wrong if an ordinary lawmaker speeds, but a lawmaker elected on a platform of making the roads safer might do something especially wrong because they are being a hypocrite. By being a hypocrite, this lawmaker shows that she does not – despite her claims – really take speeding laws seriously, she does not act as though they are important. Likewise, by attending parties, Johnson showed that he did not take these laws seriously, and – if the purpose of the laws is to protect the public – he showed that he did not care about protecting the public.

(Alternatively, he showed that he thinks he is special, different from the rest of us: that he can party whilst his laws stop grieving relatives from saying goodbye to their loved ones. I’ll set aside this possibility.)

Johnson (as well as the hypocritical speedster) demonstrated a lack of care about the underlying issues: protecting the public (or keeping to the speed limit) is not important to him. But it also strikes at the strength of this law. Our system of law is not supposed to be simply a matter of force, where the most powerful get the least powerful to comply with what they want. Rather, the law is supposed to provide us with genuine reasons to act that are somehow linked to the good of others in our community. Nowhere is this more clear than with attempts to curb the ravages of COVID-19.

Everywhere, there is skepticism about COVID-19 laws. They inherently curb our freedoms. By not taking COVID-19 laws seriously, Johnson suggested that the laws are not to be taken seriously. But it is only by taking good laws seriously that they remain good laws, laws which govern us as rational agents rather than as those merely fearful of greater power.

That is why Johnson is under political pressure to resign: Johnson has shown himself incapable of taking seriously the laws he creates, which is the entire point of being Prime Minister. His behavior undermined the justification of the laws he set.

Defining Death: One Size Fits All?

photograph of rose on tombstone

In 1844, Edgar Allan Poe published a short story titled The Premature Burial. The main trope at play in the story is the common Victorian fear of being buried alive. The protagonist suffers from a condition which causes him to fall into catatonic states in which it is difficult to detect breath. The body exhibits little to no motion. In response to his all-consuming fear, he designs a coffin that will allow him to alert the outside world by ringing a bell if he is mistaken for dead and accidentally buried alive.

Determining when death has occurred is not an easy matter, either historically or in the modern age. In some cultures, family members would wait until putrefaction began in order to bury or otherwise perform death ceremonies with the bodies of their loved ones, just to make sure that no one was being disposed of who was, in fact, still alive. As time progressed, we used the presence of circulatory and respiratory functioning to determine that someone was alive. The modern world presents a new set of puzzles: we are able to keep the circulatory and respiratory function going indefinitely with the help of medical technology. When, then, is a person dead?

The way that we answer this question has significant practical consequences. Hospitals are frequently low on beds, personnel, and other resources, especially during outbreaks of disease. Patients can only permanently vacate those beds when they are well enough to leave or when they are dead. We also need to be able to harvest certain organs from donors, which can occur only after the patient is dead.

What’s more, it would be troubling if the definition of death varied across the country. The result could be that a person is dead in one state and not in another. In response to this concern, in 1981, the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research arrived at the following definition, which it expressed in the Uniform Determination of Death Act: An individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions or (2) irreversible cessation of all functions of the entire brain, including the brain stem, is dead. A determination of death must be made in accordance with accepted medical standards.

Since the time that the Commission took up the issue, this definition has faced a range of objections from all sides. No one is particularly bothered by the first criterion, but the second is the source of much debate. Some object that the Commission requires too much for a person to be considered dead; they argue that it is not the case that the entire brain must cease to function, only that the higher brain has irreversibly stopped working. They reason that it is the higher brain that is responsible for the characteristics that make an entity a person: consciousness, personality, memory, a sense of psychological continuity, and so on.

Others argue that the Commission has not gone far enough in what it requires for death; in other words, they argue that a person who is kept alive on a ventilator is still alive, even when they have no brain function of any kind. It is possible for the body to do things while kept alive on a ventilator that only living bodies can do; among other things, bodies can go through puberty, carry a fetus to term, grow taller, grow hair, and so on. Some argue that to call such bodies “dead” is just demonstrably inaccurate.

It seems that our social conception of death has a crucial metaphysical component. Most cultures with advanced medical technology don’t tend to wait to declare a person dead until they start to decay. It’s worth asking — what, exactly, is it that we are trying to preserve when we categorize a being as “alive”? Are we being consistent in our standards? Under ordinary conditions, we wouldn’t hesitate to say that grass, clams, or coral reefs are alive (when they are). Would we use the same standards to determine when these organisms are no longer alive that we would use to determine whether a human person is no longer alive? Some argue that testing for the biological functions that give rise to personhood is the right approach when it comes to determining the status of a human being. Are humans the only organisms to whom we should apply that test?

Questions about death are philosophically compelling to reflect on in the abstract, but they are also practically important for everyone. The decision that someone is dead has significant consequences that will inevitably be devastating to some people. Consider the case of Jahi McMath and her family. In 2013, Jahi underwent a standard procedure to get her tonsils removed; she experienced severe blood loss which led to significant brain damage. She was declared brain dead on December 12th, 2013, three days after the procedure. Jahi’s family fought to overturn the diagnosis, but a judge agreed with the hospital that Jahi was brain dead. The family did not give up, but transferred Jahi to a private facility for care where she was connected to life-sustaining technology for almost four years. Jahi was declared dead in June, 2018; the cause of death listed on her death certificate was “complications due to liver failure.” Jahi never regained consciousness.

Jahi’s mother sold her house and spent all the money she had to pay for Jahi’s care, and she does not regret doing so. She appreciated the opportunity to watch Jahi change and grow, commenting to reporters, “She grew taller and her features started to change and she went through puberty and everything. And I know for sure, dead people don’t do that.”

According to the Uniform Determination of Death Act, Jahi died in 2013, not in 2018. Not everyone agrees with the standard established by the act. People have different religious, cultural, and philosophical understandings of when death occurs. That said, a person isn’t alive simply because there is someone willing to insist that they are — Julius Caesar and Elizabeth I are dead regardless of anyone’s protestations to the contrary. The time and resources of medical professionals are limited. When someone believes that the life of someone they love is at stake, they may be willing to pay any amount of money in order to keep hope alive. Liberal democracies allow for pluralism about many things. Should the definition of death be one of them?

‘Don’t Look Up’ and “Trust the Science”

photograph of "Evidence over Ignorance" protest sign

A fairly typical review of “Don’t Look Up” reads as follows: “The true power of this film, though, is in its ferocious, unrelenting lampooning of science deniers.” I disagree. This film exposes the unfortunate limits of the oft-repeated imperative of the coronavirus and climate-change era: “Trust the Science.” McKay and Co. probe a kind of epistemic dysfunction, one that underlies many of our fiercest moral and political disagreements. Contrary to how it’s been received, the film speaks to the lack of a generally agreed-upon method for arriving at our beliefs about how the world is and who we should trust.

As the film opens, we are treated to a warm introduction to our two astronomers and shown a montage of the scientific and mathematical processes they use to arrive at their horrific conclusion that a deadly comet will collide with Earth in six months. Surely, you might be thinking, this film tells us exactly whom to believe and trust from the outset! It tells us to “Trust the Scientists,” to “Trust the Science!”

Here’s a preliminary problem with trying to follow that advice. It’s not like we’re all doing scientific experiments ourselves whenever we accept scientific facts. Practically, we have to rely on the testimony of others to tell us what the science says — so who do we believe? Which scientists and which science?

In the film, this decision is straightforward for us. In fact, we’re not given much of a choice. But in real life, things are harder. Brilliantly, the complexity of real-life is (perhaps unintentionally) reflected in the film itself.

Imagine you’re a sensible person, a Science-Truster. You go to the CDC to get your coronavirus data, to the IPCC to get your climate change facts. If you’re worried about a comet smashing into Earth, you might think to yourself something like, “I’m going to go straight to the organization whose job it is to look at the scientific evidence, study it, and come to conclusions; I’ll trust what NASA says. The head of NASA certainly sounds like a reliable, expert source in such a scenario.” What does the head of NASA tell the public in “Don’t Look Up”? She reports that the comet is nothing to worry about.

Admittedly, McKay gives the audience a clear reason to ignore the NASA head’s misleading testimony about the comet. She is revealed to be a political hire and an anesthesiologist rather than an astronomer. “Trust the Science” has a friend, “Trust the Experts,” and the head of NASA doesn’t qualify as an expert on this topic. So far, so good, for the interpretation of the film as endorsing “Trust the Science” as an epistemic doctrine. It’s clear why so many critics misinterpret the film this way.

But, while it’s easy enough to miss amid the increasingly frantic plot, the plausibility of Trust the Science falls apart as the film progresses. Several Nobel-prize winning, Ivy-league scientists throw their support behind the (doomsday-causing) plan of a tech-billionaire to bring the wealth of the comet safely to Earth in manageable chunks. They assure the public that the plan is safe. Even one of our two scientific heroes repeats the false but reassuring line on a talk show, to the hosts’ delight.

Instead of being a member of the audience with privileged information about whom you should trust, imagine being an average Joe in the film’s world at this point. All you could possibly know is that some well-respected scientists claim we need to destroy or divert the comet at all costs. Meanwhile, other scientists, equally if not more well-respected, claim we can safely bring the mineral-rich comet to Earth in small chunks. What does “Trust the Science” advise “Don’t Look Up” average Joe? Nothing. The advice simply can’t be followed. It offers no guidance on what to believe or whom to listen to.

How could you decide what to believe in such a scenario? Assuming you, like most of us, lack the expertise to adjudicate the topic on the scientific merits, you might start investigating the incentives of the scientists on both sides of the debate. You might study who is getting paid by whom, who stands to gain from saying what. And this might even lead you to the truth — that the pro-comet-impact scientists are bought and paid for by the tech-billionaire and are incentivized to ignore, or at least minimize, the risk of mission failure. But this approach to belief-formation certainly doesn’t sound like Trusting the Science anymore. It sounds closer to conspiracy theorizing.

Speaking of conspiracy theories, in a particularly fascinating scene, rioters confront one of our two astronomers with the conspiracy theory that the elites have built bunkers because they don’t really believe the comet is going to be survivable (at least, not without a bunker). Our astronomer dismissively tells the mob this theory is false, that the elites are “not that competent.” This retort nicely captures the standard rationalistic, scientific response to conspiracy theories: everything can be explained by incompetence, so there’s no need to invoke conspiracy. But, as another reviewer has noticed, later on in the film “we learn that Tech CEO literally built a 2,000 person starship in less than six months so he and the other elites could escape.” It turns out the conspiracy theory was actually more or less correct, if not in the exact details. This rationalistic, scientific debunking and dismissal of conspiracy is actually proven entirely wrong. We would have done better trusting the conspiracy theorist than trusting the scientist.

Ultimately, the demand that we “Trust the Science” turns out to be both un-followable (as soon as scientific consensus breaks down, since we don’t know which science or scientists to listen to), and unreliable (as shown when the conspiracy theorist turns out to be correct). The message this film actually delivers about “Trust the Science” is this: it’s not good enough!

The Moral and Political Importance of “Trust the Science”

Let’s now look at why any of this matters, morally speaking.

Cultures have epistemologies. They have established ways for their members to form beliefs that are widely accepted as the right ways within those cultures. That might mean that people generally accept, for example, a holy text as the ultimate source of authority about what to believe. But in our own society, currently, we lack this. We don’t have a dominant, shared authority or a commonly accepted way to get the right beliefs. We don’t have a universally respected holy book to appeal to, not even a Walter Cronkite telling us “That’s the way it is.” We can’t seem to agree on what to believe or whom to listen to, or even what kinds of claims have weight. Enter “Trust the Science”: a candidate heuristic that just might be acceptable to members of a technologically developed, scientifically advanced, and (largely) secularized society like ours. If our society could collectively agree that, in cases of controversy, everyone should Trust the Science, we might expect the emergence of more of a consensus on the basic facts. And that consensus, in turn, may resolve many of our moral and political disagreements.

This final hope isn’t a crazy one. Many of our moral and political disagreements are based on disagreements about beliefs about the basic facts. Why do Democrats tend to agree with mandatory masks, vaccines, and other coronavirus-related restrictions, while Republicans tend to disagree with them? Much of it is probably explained by the fact that, as a survey of 35,000 Americans found, “Republicans consistently underestimate risks [of coronavirus], while Democrats consistently overestimate them.” In other words, the fact that both sides have false beliefs partly explains their moral and political disagreements. Clearly, none of us are doing well at figuring out whom we can trust to give truthful, undistorted information on our own. But perhaps, if we all just followed the  “Trust the Science” heuristic, then we would reach enough agreement about the basic facts to make some progress on these moral and political questions.

Perhaps unintentionally, “Don’t Look Up” presents a powerful case against this hopeful, utopian answer to the deep divisions in our society. Trusting the Science can’t play the unifying role we might want it to; it can’t form the basis of a new, generally agreed upon secular epistemic heuristic for our society. “Don’t Look Up” is not the simple “pro-science,” “anti-science-denier” film many have taken it to be. It’s far more complicated, ambivalent, and interesting.

On an Imperative to Educate People on the History of Race in America

photograph of Martin Luther King Jr. Statue profile at night

Many people don’t have much occasion to observe racism in the United States. This means that, for some, knowledge about the topic can only come in the form of testimony. Most of the things we know, we come to know not by investigating the matter personally, but instead on the basis of what we’ve been told by others. Human beings encounter all sorts of hurdles when it comes to attaining belief through testimony. Consider, for example, the challenges our country has faced when it comes to controlling the pandemic. The testimony and advice of experts in infectious disease are often tossed aside and even vilified in favor of instead accepting the viewpoints and advice from people on YouTube telling people what they want to hear.

This happens often when it comes to discussions of race. From the perspective of many, racism is the stuff of history books. Implementation of racist policies is the kind of thing that it would only be possible to observe in a black and white photograph; racism ended with the assassination of Martin Luther King Jr. There is already a strong tendency to engage in confirmation bias when it comes to this issue — people are inclined to believe that racism ended years ago, so they are resistant and often even offended when presented with testimonial evidence to the contrary. People are also inclined to seek out others who agree with their position, especially if those people are Black. As a result, even though the views of these individuals are not the consensus view, the fact that they are willing to articulate the idea that the country is not systemically racist makes these individuals tremendously popular with people who were inclined to believe them before they ever opened their mouths.

Listening to testimonial evidence can also be challenging for people because learning about our country’s racist past and about how that racism, present in all of our institutions, has not been completely eliminated in the course of fewer than 70 years, seems to conflict with their desire to be patriotic. For some, patriotism consists in loyalty, love, and pride for one’s country. If we are unwilling to accept American exceptionalism in all of its forms, how can we count ourselves as patriots?

In response to these concerns, many argue that blind patriotism is nothing more than the acceptance of propaganda. Defenders of such patriotism encourage people not to read books like Ibram X. Kendi’s How to be an Anti-racist or Ta-Nehisi Coates’ Between the World and Me, claiming that this work is “liberal brainwashing.” Book banning, either implemented by public policy or strongly encouraged by public sentiment, has occurred so often and so nefariously that if one finds oneself on that side of the issue, there is good inductive evidence that one is on the wrong side of history. Responsible members of a community, members who want their country to be the best place it can be, should be willing to think critically about various positions, to engage and respond to them rather than to simply avoid them because they’ve been told that they are “unpatriotic.” Our country has such a problematic history when it comes to listening to Black voices that, when we’re told we shouldn’t listen to Black accounts of Black history, our propaganda sensors should be on high alert.

Still others argue that projects that attempt to understand the full effects of racism, slavery, and segregation are counterproductive — they only lead to tribalism. We should relegate discussions of race to the past and move forward into a post-racial world with a commitment to unity and equality. In response to this, people argue that to tell a group of people that we should just abandon a thoroughgoing investigation into the history of their ancestors because engaging in such an inquiry causes too much division is itself a racist idea — one that defenders of the status quo have been articulating for centuries.

Dr. Martin Luther King Jr. beautifully articulates the value of understanding Black history in a passage from The Autobiography of Martin Luther King, Jr.:

Even the Negroes’ contribution to the music of America is sometimes overlooked in astonishing ways. In 1965 my oldest son and daughter entered an integrated school in Atlanta. A few months later my wife and I were invited to attend a program entitled “Music that has made America great.” As the evening unfolded, we listened to the folk songs and melodies of the various immigrant groups. We were certain that the program would end with the most original of all American music, the Negro spiritual. But we were mistaken. Instead, all the students, including our children, ended the program by singing “Dixie.” As we rose to leave the hall, my wife and I looked at each other with a combination of indignation and amazement. All the students, black and white, all the parents present that night, and all the faculty members had been victimized by just another expression of America’s penchant for ignoring the Negro, making him invisible and making his contributions insignificant. I wept within that night. I wept for my children and all black children who have been denied a knowledge of their heritage; I wept for all white children, who, through daily miseducation, are taught that the Negro is an irrelevant entity in American society; I wept for all the white parents and teachers who are forced to overlook the fact that the wealth of cultural and technological progress in America is a result of the commonwealth of inpouring contributions.

Understanding the history of our people, all of them, fully and truthfully, is valuable for its own sake. It is also valuable for our actions going forward. We can’t understand who we are without understanding who we’ve been, and without understanding who we’ve been, we can’t construct a blueprint for who we want to be as a nation.
Originally published on February 24th, 2021

The Colston Four and the Rule of Law

photograph of Edward Colston statue

George Floyd’s murder sparked protests around the world. While Floyd’s death acted as a catalyst for demonstrations, the longstanding systemic injustice faced by Black and indigenous people of color helped fuel the 2020 demonstrations. In the U.K. city of Bristol, this anger found itself a lightning rod in the form of a statue of the 17th-century slaver Edward Colston. During the second day of protests, demonstrators tore down the monument, dragged it several hundred meters, and dumped it in Bristol’s harbor – no small feat given that the bronze statue stood over 8 ft tall (2.64m). The statue’s symbolic and literal dethroning was big news, reported by outlets like the BBC, Fox News, CNN, and Time, amongst numerous others.

Last week, four people implicated in the statue’s removal – nicknamed the Colston Four – were cleared of criminal damage charges by Bristol Crown Court. The defendants claimed that the statue’s presence represented a hate crime, so its removal was lawful. The prosecutors argued that it didn’t matter who the figure was; it was a simple case of criminal damage. The prosecution failed to convince the jury.

The Colston Four’s acquittal has been both celebrated and panned. Some, such as Professor David Olusoga, assert that the statue’s removal helps rectify an injustice started at its erection in 1895 (some 174 years after Colston’s death). Others, including several U.K. MPs like Tom Hunt, have criticized the decision, saying it highlights a weakness in the legal system itself and calls the role of juries into question. A quick perusal of the comment section of any news article about the case, and especially its outcome, shows just how divisive this topic has become.

Attributing some of this division to confusion, Suella Braverman, the U.K.’s Attorney General, suggested that she’s considering referring the case to the Court of Appeal. This referral would not look to overturn the outcome. Rather, as Braverman tweeted, it would ensure that “senior judges have the opportunity to clarify the law for future cases.” While she has given no indication as to whether she will refer the case upwards, given the British government’s recent attitudes towards the law, I wouldn’t be surprised if the matter is escalated.

The confusion here seems to come from the jury’s decision being so at odds with the law as previously applied – that people who have damaged public property have been punished. The Colston Four verdict may strike some as so contradictory to this principle that they feel justified in arguing that the jury’s verdict was wrong. But what does it mean to be wrong in the context of the law? Is it even possible?

Philosopher Ronald Dworkin thought the answer lay in the concept of integrity (he focuses on judges, but the work can also be applied to juries). Just as a person or a building can have integrity, Dworkin thought, so too can the law. He argued that when a judge decides a case’s outcome, that decision must have both ‘fit’ and ‘appeal’. Fit means the decision must, in some sense, follow the decisions of similar cases that have come before. Appeal means that such decisions must adhere to the moral evaluations related to justice and rights. So a decision balancing the demands of precedent with broader jurisprudential concerns would be a, if not the, correct decision. If it fails in this regard, we’d be right in saying that the decision was incorrect. Dworkin uses the example of a chain novel – a single narrative where each section is written by a different author – as illustration, writing:

The novelist’s interpretation need not fit every bit of the text. It is not disqualified simply because he claims that some lines or tropes are accidental, or even that some elements of plot are mistakes because they work against the literary ambitions the interpretation states. But the interpretation he takes up must nevertheless flow throughout the text; it must have general explanatory power. [… Each contributor] may find that no single interpretation fits the bulk of the text, but that more than one does. The second dimension of interpretation then requires him to judge which of these eligible readings makes the work progress best, all things considered.

So, sentencing the Colston Four to 25 years in jail would be wrong as it would have neither fit nor appeal – it would not fit in with the chain novel of law as previously written.

But what happens when the demands of fit and appeal are opposed? How does one decide which should be emphasized? This seems to be the source of the tension in the Colston Four case. It has two opposing needs. Fit seems to require punishment for causing public property damage; appeal requires acquittal because removing racist memorials appears just.

For Dworkin, it is still possible to have a correct answer, and not just in a subjective sense (i.e., it’s wrong because I don’t like it). Instead, theoretically, at least, results can be right or wrong in an absolute sense. Much like how one figures out the correct answer to a crossword puzzle by interpreting clues and seeing how various solutions fit with those already given, the correct answer to hard cases can be revealed by considering existing case law and seeing how these fit with ethical concerns. In short, legal outcomes can be judged better or worse not according to external standards but by appealing to the nature of law itself.

What does this mean for the Colston Four, then? A black-letter reading of the law – looking only at case law and statutes – could conceivably indicate that, in the majority of cases, the four should have been convicted. Most people who commit similar acts are sentenced – you break a public fountain, you get punished. This even fits in with the principles of justice and fairness (to a degree). People who damage public property should make amends through fines, community service, or time served.

Yet, to apply such reasoning here seems to overlook what that statue represented and why it was torn down. This wasn’t a phonebox or park bench that was damaged, but the effigy of someone responsible for the enslavement of over 84,000 men, women, and children. While one might argue that the case law required the Colston Four to be found guilty, the principles of justice and fairness seemingly demand the opposite. Such a rebellious act cohered with those principles because the statue’s presence amounted to a grave injustice, one partially rectified by its removal.

One cannot begrudge the U.K.’s Attorney General for voicing concerns about confusion regarding the case’s outcome. Indeed, she echoes the concerns of many others. However, this confusion can be somewhat clarified when one remembers that law is not simply a series of rules about what one should(n’t) do. Instead, when one looks to Dworkin’s work and acknowledges the essential role principles play in law’s execution, the matter becomes at least a little clearer.

The Ethics of “Let’s Go, Brandon”

photograph of Biden on phone in Oval Office

On Christmas Eve, Joe and Jill Biden were taking holiday phone calls as a part of an annual tradition of the North American Aerospace Defense Command (NORAD) to celebrate Christmas by “tracking” Santa Claus on his trip around the globe; at the end of one conversation, the Bidens were wished “Merry Christmas and ‘Let’s Go, Brandon’” by Jared Schmeck, a father calling with his children from Oregon. In recent months, after a crowd at a NASCAR event chanting “F**k Joe Biden” was described by a reporter as saying “Let’s Go, Brandon” (seemingly referring to the winner of the race), the sanitized slogan has been wholeheartedly adopted by people seeking to (among other things) express dissatisfaction with the current president. Plenty of others have offered explanations about the linguistic mechanics of the coded phrase (for what it’s worth, I think it’s an interesting example of a conventional implicature), but this article seeks to consider a different question: did Schmeck do something unethical when he uttered the phrase directly to Joe Biden?

There are at least two factors here that we need to keep distinct:

  1. Did Schmeck do something unethical by uttering the phrase “Let’s Go, Brandon”?
  2. Did Schmeck do something unethical by uttering “Let’s Go, Brandon” directly to Joe Biden?

The first point is an interesting question for philosophers of so-called “bad” language, often categorized as profanity, obscenity, vulgarity, or other kinds of “swear” words. It’s worth considering why such pejoratives are treated as taboo or offensive in various contexts (but, depending on various factors, not all contexts) and scores of philosophers have weighed in on such debates. But, arguably, even if you think that there is something unethical about using the word ‘f**k,’ the utterance “Let’s Go, Brandon” side-steps many relevant concerns: regardless of what Schmeck meant by his utterance, he technically didn’t say a word that would, for example, get him fined by the FCC. After all, there’s nothing innately offensive about the terms ‘let’s,’ ‘go,’ or ‘Brandon.’ In much the same way that a child who mutters “gosh darn it,” “what the snot,” or “oh my heck” might expect to dodge a punishment for inappropriate speech, saying “Let’s Go, Brandon” might be a tricky way to blatantly communicate something often considered to be offensive (the dreaded “f-word”) while technically abiding by the social prohibition of the term’s utterance.

This move — of replacing some offensive term with a vaguely similar-sounding counterpart — is sometimes referred to as “denaturing” profanity with a euphemism (including even with emoji): for example, the phrases “what the frick?” and “what the f**k?” are not clearly different in their semantic substance, but only the latter will typically be censored. However, over time, this kind of “minced oath” often ends up taking on the conventional meaning of the original, offensive term (in a process that is itself sometimes described as a “euphemism treadmill”): that is to say, at some point, society might well decide to bleep “frick” just as much as its counterpart (although, actually, social trends largely seem to be moving in the opposite direction). Nevertheless, although “Let’s Go, Brandon” is only a few months old, its notoriety might be enough to suggest that it’s already taken on some of the same offensive qualities as the phrase it’s meant to call to mind. If you think that there’s something unethical about uttering the phrase “F**k Joe Biden,” then you might also have a reason to think that “Let’s Go, Brandon” is likewise problematic.

Notably, the widespread use of “Let’s Go, Brandon” in many places typically opposed to profanity — such as churches, airplanes, and the floor of the House of Representatives — suggests that people are not treating the phrase as being directly vulgar, despite its clear connection to the generally-offensive ‘f**k.’

Which brings us to the second point: was Schmeck wrong to utter “Let’s Go, Brandon” directly to Biden on Christmas Eve?

Again, it seems like there are at least two factors to consider here: firstly, we might wonder whether or not Schmeck was being (something like) rude to Biden by speaking the anti-Biden slogan in that context. If you think that profanity use is simply offensive and that “Let’s Go, Brandon” is a denatured form of profanity, then you might have a reason to chastise Schmeck (because he almost said a “bad word” in an inappropriate context). If Schmeck had instead directly uttered “Merry Christmas and ‘F**k Joe Biden,’” then we might at least criticize the self-described Christian father (whose small children were with him on the call) as being impolite. But if, as described above, the meaning of “Let’s Go, Brandon” is less important than the technical words appearing in the spoken sentence, then you might think that Schmeck’s actual utterance is more complicated. Initially, Schmeck suggested that he simply intended to make a harmless, spur-of-the-moment joke (a claim made less credible by the fact that Schmeck recorded the conversation for his YouTube page, and by his later comments on Steve Bannon’s podcast) — without additional context, interpreting the ethical status of the initial utterance might be difficult.

But, secondly, we would do well to remember that Joe Biden is the President of the United States and some might suppose that uttering offensive speech (whether overtly or covertly) insufficiently shows the office of the POTUS the respect that it deserves. Conversely, we might easily deny that the office “deserves” respect simpliciter at all: the fact that Biden is an elected politician, and that the United States boasts a long tradition of openly and freely criticizing our political leaders — including in notable, public displays — absolves Schmeck from ethical criticism in this case. You might still think that it is a silly, disingenuous, or overly-complicated way to make an anti-Biden jab, but these are aesthetic (not ethical) critiques of Schmeck’s utterance.

In a way, Schmeck seems to have evoked something like this last point after he started receiving criticisms for his Christmas Eve call, arguing that he was well within his First Amendment rights to freely speak to Biden as he did. Indeed, this claim (unlike his initial characterization of the comment as having “meant no disrespect”) seems correct — even as it also fails to touch our earlier question of whether or not Schmeck’s actions were still impolite (and therefore subject to social reactions). It is fully possible to think that Schmeck did nothing illegal by spiking the NORAD Santa Tracker with a political pseudo-slur, even while also thinking that he did something that, all things considered, he probably shouldn’t have done (at least not in the way that he did it). It bears repeating: the First Amendment protects one’s ability to say what they generally want to say; it does not prevent potential social backlash from those who disagree (and also enjoy similar free-speech protections).

All things considered, though he’s reportedly considering running for office, Jared Schmeck’s fifteen minutes of fame have likely passed. Still, his Santa-based stunt offers an interesting look at a developing piece of applied philosophy of language: regardless of the ethical questions related to “Let’s Go, Brandon,” the phrase is certainly not going anywhere anytime soon.

On Anxiety and Activism

"The End Is Nigh" poster featuring a COVID spore and gasmask

The Plough Quarterly recently released a new essay collection called Breaking Ground: Charting Our Future in a Pandemic Year. In his contribution, “Be Not Afraid,” Joseph Keegin details some of his memories of his father’s final days and the looming role that “outrage media” played in their interactions. He writes,

My dad had neither a firearm to his name, nor a college degree. What he did have, however, was a deep, foundation-rattling anxiety about the world ubiquitous among boomers that made him—and countless others like him—easily exploitable by media conglomerates whose business model relies on sowing hysteria and reaping the reward of advertising revenue.

Keegin’s essay is aimed at a predominantly religious audience. He ends his essay by arguing that Christians bear a specifically religious obligation to fight off the fear and anxiety that makes humans easy prey to outrage media and other forms of news-centered absorption. He argues this partly on Christian theological grounds — namely, that God’s historical communications with humans are almost always preceded by the command to “be not afraid,” as a lack of anxiety is necessary for recognizing and following truth.

But if Keegin is right about the effects of this “deep, foundation-rattling anxiety” on our epistemic agency, then it is not unreasonable to wonder if everyone has, and should recognize, some kind of obligation to avoid such anxiety, and to avoid causing it in others. And it seems as though he is right. Numerous studies have shown a strong correlation between feeling dangerously out-of-control and the tendency to believe conspiracy theories, especially when it comes to COVID-19 conspiracies. The more frightening media we consume, the more anxious we become. The more anxious we become, the more media we consume. And as this cycle repeats, the media we are consuming tends to become more frightening, and less veridical.

Of course, nobody wants to be the proverbial “sucker,” lining the pocketbooks of every website owner who knows how to write a sensational headline. We are all aware of the technological tactics used to manipulate our personal insecurities for the sake of selling products and, for the most part, I would imagine we strive to avoid this kind of vulnerability. But there is a tension here. While avoiding this kind of epistemically-damaging anxiety sounds important in the abstract, this idea does not line up neatly with the ways we often talk about, and seek to advance, social change.

Each era has been beset by its own set of deep anxieties: the Great Depression, the Red Scare, the Satanic Panic, and election fears (on both sides of the aisle) are all examples of relatively recent social anxieties that led to identifiable epistemic vulnerabilities. Conspiracies about Russian spies, gripping terror over nuclear war, and unending grassroots ballot recount movements are just a few of the signs of the epistemic vulnerability that resulted from these anxieties. The solution may at first seem obvious: be clear-headed and resist getting caught up in baseless media-driven fear-mongering. But, importantly, not all of these anxieties are baseless or the result of purposeless fear-mongering.

People who grew up during the Depression often worked hard to instill an attitude of rationing in their own children, prompted by their concern for their kids’ well-being; if another economic downturn hit, they wanted their offspring to be prepared. Likewise, the very real threat of nuclear war loomed large throughout the 1950s-1980s, and many people understandably feared that the Cold War would soon turn hot. Even elementary schools held atom bomb drills, for whatever protection they might have offered students in the case of an attack. One can be sure that journalists took advantage of this anxiety as a way to increase readership, but concerned citizens and social activists also tried to drum up worry because worry motivates. If we think something merits concern, we often try to make others feel this same concern, both for their own sake and for the sake of those they may have influence over. But if such deep-seated cultural anxieties make it easier for others to take advantage of us through outrage media, conspiracy theories, and other forms of anxiety-confirming narratives, is such an approach to social activism worth the future consequences?

To take a more contemporary example, let’s look at the issue of climate change. According to a recent study, out of 10,000 “young people” (between the ages of 16 and 25) surveyed, almost 60% claimed to be “very” or “extremely” worried about climate change. 45% of respondents said their feelings about climate change affected their daily life and functioning in negative ways. If these findings are representative, surely this counts as the Generation Z version of the kind of “foundation-rattling anxiety” that Keegin observed in his late father.

There is little doubt where this anxiety comes from: news stories and articles routinely point out record-breaking temperatures, numbers of species that go extinct year to year, and the climate-based causes of extreme weather patterns. Pop culture has embraced the theme, with movies like “The Day After Tomorrow,” “Snowpiercer,” and “Reminiscence,” among many others, painting a bleak picture of what human life might look like once we pass the point of no return. Unlike any other time in U.S. history, politicians are proposing extremely radical, lifestyle-altering policies in order to combat the growing climate disaster. If such anxieties leave people epistemically vulnerable to the kinds of outrage media and conspiracy theory rabbit holes that Keegin worries about, are these fear-inducing tactics to combat climate change worth it?

On the surface, it seems very plausible that the answer here is “yes!” After all, if the planet is not habitable for human life-forms, it makes very little difference whether or not the humans that would have inhabited the planet would have been more prone to being consumed by the mid-day news. If inducing public anxiety over the climate crisis (or any other high-stakes social challenge or danger) is effective at spurring action, then the good would likely outweigh the bad. And surely genuine fear does cause such behavioral effects. Right?

But again, the data is unclear. While people are more likely to change their behavior or engage in activism when they believe some issue is actually a concern, too much concern, anxiety, or dread soon seems to produce the opposite (sometimes tragic) effect. For example, while public belief in, and concern over, climate change is higher than ever, actual climate change legislation has not been adopted in decades, and more and more elected officials deny or downplay the issue. Additionally, the latest surge of the Omicron variant of COVID-19 has renewed the social phenomenon of pandemic fatigue, the condition of giving up on health and safety measures due to exhaustion and hopelessness regarding their efficacy.

In an essay discussing the pandemic, climate change, and the threat of the end of humanity, the philosopher Agnes Callard analyzes this phenomenon as follows:

Just as the thought that other people might be about to stockpile food leads to food shortages, so too the prospect of a depressed, disaffected and de-energized distant future deprives that future of its capacity to give meaning to the less distant future, and so on, in a kind of reverse-snowball effect, until we arrive at a depressed, disaffected and de-energized present.

So, if cultural anxieties increase epistemic vulnerability, in addition to, very plausibly, leading to a kind of hopelessness-induced apathy toward the urgent issues, should we abandon the culture of panic? Should we learn how to rally interest for social change while simultaneously urging others to “be not afraid”? It seems so. But doing this well will involve a significant shift from our current strategies and an openness to adopting entirely new ones. What might these new strategies look like? I have no idea.

Ethics and Job Apps: Goodhart’s Law and the Temptation Towards Dishonesty

photograph of candidates waiting for a job interview

In the first post in this series, I discussed a moral issue I ran into as someone running a job search. In this post, I want to explore a moral issue that arose when applying to jobs, namely that the application process encourages a subtle sort of dishonesty.

My goal, when working on job applications, is to get the job. But to get the job, I need to appear to be the best candidate. And here is where the problem arises. I don’t need to be the best candidate; I just need to appear to be the best candidate. And there are things that I can do that help me appear to be the best candidate, whether I’m actually the best candidate or not.

To understand this issue, it will be useful to first look at Goodhart’s law, and then see how it applies to the application process.

Goodhart’s Law

My favorite formulation of Goodhart’s law comes from Marilyn Strathern:

When a measure becomes a target, it ceases to be a good measure. 

To understand what this means, we need to understand what a measure is. Here, we can think about a ‘measure’ as something you use as a proxy to assess how well a process is going. For example, if I go to the doctor they cannot directly test my health. Rather, they can test a bunch of things that act as measures of my health. They can check my weight, my temperature, my blood pressure, my reflexes, etc. If I have a fever, then that is good evidence I’m sick. If I don’t have a fever, that is good evidence I’m healthy. My temperature, then, is a measure of my health.

My temperature is not the same thing as my health. But it is a way to test whether or not I’m healthy.

So what Goodhart’s law says is that when the measure (in this case temperature) becomes a target, it ceases to be a good measure. What would it mean for it to become a target? Well, my temperature would be a target if I started to take steps to make sure my temperature remains normal.

Suppose that I don’t want to have a fever, since I don’t want to appear sick, and so, whenever I start to feel sick I take some acetaminophen to stop a fever. Now my temperature has become a target. So what Goodhart’s law says is that now that I’m taking steps to keep my temperature low, my temperature is no longer a good measure of whether I’m sick.

This is similar to the worry that people have about standardized tests. In a world where no one knew about standardized tests, standardized tests would actually be a pretty good measure of how much kids are learning. Students who are better at school will, generally, do better on standardized tests.

But, of course, that is not what happens. Instead, teachers begin to ‘teach to the test.’ If I spend hours and hours teaching my students tricks to pass standardized tests, then of course my students will do better on the test. But that does not mean they have actually learned more useful academic skills.

If teachers are trying to give students the best education possible, then standardized tests are a good measure of that education. But if teachers are instead trying to teach their kids to do well on standardized tests, then standardized tests are no longer a good measure of academic ability.

When standardized tests become a target (i.e., when we teach to the test) then they cease to be a good measure (i.e., a good way to tell how much teachers are teaching).

We can put the point more generally. There are various things we use as ‘proxies’ to assess a process (e.g., temperature to assess if someone is sick). We use these proxies, even though they are not perfect (e.g., you can be sick and not have a fever, or have a fever and not be sick), because they are generally reliable. But because the proxies are not perfect, there are often steps you can take to change the proxy without changing the underlying thing that you are trying to measure (e.g., you can lower people’s temperature directly, without actually curing the disease). And so the stronger the incentive people have to manipulate the proxy, the more likely they are to take steps that change the proxy without changing what the proxy was measuring (e.g., if you had to make it to a meeting where they were doing temperature checks to eliminate sick people, you’d be strongly tempted to take medicine to lower your temperature even if you really are sick). And because people are taking steps to directly change the proxy, the proxy is no longer a good way to test what you are trying to measure (e.g., you won’t be able to screen out sick people from the meeting by taking their temperature).

The thing is, Goodhart’s law explains a common moral temptation: the temptation to prioritize appearances.

Take, as an example, an issue that comes up in bioethics. Hospitals have a huge financial incentive to do well on various metrics and ratings. One measure is what percentage of patients die in the hospital. In general, the more a hospital contributes to public health, the lower the percentage of its patients who will die there. And indeed, there are all sorts of ways a hospital might improve its care, which would also mean more people survive (it might adopt better cleaning norms, increase the number of doctors on shift, speed up the triage process in the emergency room, etc.). But there are also ways that a hospital could improve its numbers that would not involve improving care. For example, hospitals might refuse to admit really sick patients (who are more likely to die). Here the hospital would increase the percentage of its patients who survive, but would do so by actually giving worse overall care. The problem is, this actually seems to happen.

Student Evaluations?

So how does Goodhart’s law apply to my job applications?

Well, there is an ever-present temptation to do things that make me appear to be a better job candidate, irrespective of whether I am the better candidate.

The most well-known example of this is student course evaluations. One of the ways that academic search committees assess how good a teacher I will be is by looking at my student evaluations. At the end of each semester, students rate how good the class was and how good I was as a teacher.

Now, there are two ways to improve my student evaluations. First, I can actually improve my teaching. I can make my class better, so that students get more out of it. Or…, I can make students like my class more in ways that have nothing to do with how much they actually learn. For example, students who do well in a class tend to rate it more highly. So by lowering my grading standards, I can improve my student evaluations.

Similarly, there are various teaching techniques (such as interleaving) which studies show are more effective at teaching. But studies also show that students rate them as less effective. Why? Because the techniques force the students to put in a lot of effort. Because the techniques make learning difficult, the students ‘feel like’ they are not learning as much.

One particularly disturbing study drives this point home. At the U.S. Air Force Academy, students are required to take Calculus I and Calculus II. They are required to take Calculus II even if they do very poorly in the first class (you can’t get out of it by becoming a humanities major). The cool thing about this data is that all students take the same exams, which are independently graded (so there is no chance of lenient professors artificially boosting grades).

So what did the researchers find when they compared student evaluations and student performance? Well, if you just look at Calculus I, the results are what you’d naturally expect. Some professors were rated highly by students, and students in those classes outperformed the students of other teachers on the final exam. It seems, then, that the top-rated teachers did the best job teaching students.

However, you get a very different result if you then look at Calculus II. There, what the researchers found is that the students who did the best in Calculus I (and who had the top-rated teachers) did the worst in Calculus II.

The researchers conclude that “our results show that student evaluations reward professors who increase achievement in the contemporaneous course being taught, not those who increase deep learning.” Popular teachers are those who ‘teach to the test,’ who give students tricks to help them answer the immediate questions they will face. Teachers who actually force students to do the hard work of understanding the material receive worse evaluations because students find the teaching more difficult and less intuitive. And because difficult, unintuitive learning is what is actually required to learn material deeply, there is an inverse correlation between student ratings and student learning.

Student evaluations are intended to be a measure of teaching competence. However, because I know they are used as a measure of teaching competence, there is constant temptation to treat them as a target – to do things that I know will improve my evaluations, but not actually improve my teaching.

Generalizing the Problem

Student evaluations are one example of this, but they are not the only one. There are tons of ways that measures become targets for job applicants. Take, for example, my cover letter.

For each job I apply for, I write a customized cover letter in which I explain why I’d be a good fit for the job. This cover letter is supposed to be a measure of ‘fit’. The search committee looks at the letter to see if my priorities line up with the priorities of the job.

The problem, however, is that I change around parts of my cover letter to fit what I think the search committee is looking for. My interests are wide-ranging, and they are likely to remain so. But in my cover letters I don’t emphasize all of these interests in proportion to how much they actually matter to me. In applications for jobs in normative ethics, I focus my cover letter on my work in normative ethics. For teaching jobs, I focus on my teaching. In other words, I write my cover letter to try to make it look like the concentration of my interests matches up with what the search committee is looking for.

My cover letters become a target. But because they become a target, they cease to be a good measure.

Another example is anything people do just so that they can reference it in their applications. If a school wants a teacher who cares about diversity, then they may want to hire someone involved in their local Minorities and Philosophy chapter. But, of course, they don’t want to hire someone involved in that chapter just so that they appear to care about diversity.

Similarly, if a school wants to hire someone interested in the ethics of technology, they don’t want to hire someone who wrote a paper on AI ethics just so that they can appear competitive for technology ethics jobs.

Anytime someone does something just for appearances, they are targeting the measure. And by targeting the measure, they damage the measure itself.

Is This a Moral Problem?

As a job applicant, I face a strong temptation to ‘target the measure.’ I am tempted to improve how good an applicant I appear to be, and not how good an applicant I am.

When I give in to that temptation, am I doing something morally wrong? I think so: pursuing appearances is a form of dishonesty. It’s not exactly a lie. I might really think I’m the best applicant for the job. In fact, I might be the best applicant for the job. So it’s not that by pursuing appearances I’m trying to give the other person a false belief. But even when I’m trying to get them to believe something true, I’m presenting the wrong evidence for that true conclusion.

Let’s consider three versions of our temperature example.

Case 1: Suppose as a kid I wanted to stay home from school. Thus, I ‘fake’ a fever by sticking the thermometer in 100ºF water before showing it to my mom. My mom is using ‘what the thermometer says’ as a measure of whether I’m sick. I take advantage of that by targeting the measure, and thus create a misleading result.

Clearly what I did was dishonest. I was creating misleading evidence to get my mom to falsely believe I was sick. But the dishonesty does not depend on the fact that I’m healthy.

Case 2: Suppose I really think I’m sick (and that I really am sick), but I know my mom won’t believe me unless I have a fever. And again I stick the thermometer in 100ºF water before showing it to my mom.

Here I’m trying to get my mom to believe something true, namely that I’m sick (just as in the application where I’m trying to get the search committee to reach the true conclusion that I’m the best person for the job). But still it’s dishonest. One way to see this is that the evidence (what the thermometer says) only leads to the conclusion through a false belief (namely that I have a fever). But the dishonesty does not depend on that false belief.

Case 3: Suppose I know both that I am sick and that my mom won’t believe me unless I have a fever. I don’t want to trick her with the false thermometer result, and so instead I take a pill that will raise my temperature by a few degrees, thereby giving myself a fever.

Here my mom will look at the evidence (what the thermometer says), conclude I have a fever (which is true), and so conclude I am sick (which is also true). But still what I did was dishonest. It was dishonest not because it brought about a false belief, but because it targeted the measure. It’s dishonest because I’m giving ‘bad evidence’ for my true conclusion. I’m getting my mom to believe something true, but doing so by manipulation. I’m weaponizing her own ‘epistemic processes’ against her.

Now, this third case seems structurally similar to all the various steps people take to improve their ‘appearance’ as a job applicant. Those steps all ‘target the measure’ in a way that damages the sort of evidential support the measure is supposed to provide.

It seems clear that honesty requires that I not take steps to target my student evaluations directly. Similarly, it would be dishonest to put extra effort into classes that will be observed by my letter writers. I recognize that it is morally important to avoid this sort of manipulation. For example, if I’m going to give end-of-semester extra credit, I will wait till after evaluations are done, just to make sure I’m not tempted to give that extra credit as a way to boost my evaluations.

But those are the (comparatively) easy temptations to avoid. It’s easy not to do something just to make yourself appear better. What is much harder is being equally willing to do something even knowing it will make me appear worse. For example, there are times when I’ve avoided giving certain really difficult assignments or covering certain controversial topics which I think probably would have been educationally best, because I thought there was a chance they might negatively affect my teaching evaluations. It is much easier to avoid doing something for a bad reason than it is to avoid refraining from something for a bad reason.

Conclusion

Once you start noticing this temptation to ‘play to appearances,’ you start to notice it everywhere. In this way, it’s like the vice of vainglory.

In fact, you start to notice that it might be at play in the very posts you write about the problem. If a potential employer is reading this piece, I expect it reflects well on me. I think it gives the (I hope, true) impression that I try to be unusually scrupulous about my application materials. And that is not necessarily dishonest, but it is dishonest if I would not have written this piece except to give that impression. So is that the real reason I wrote it?

Honestly, I’m not sure. I don’t think so, but self-knowledge is hard for us ordinary non-saintly people. (Though I’ll leave that topic for a future post.)

Meaning-as-Use and the Punisher’s New Logo

photograph of man in dark parking garage with Punisher jacket

Recently, Marvel Comics announced plans to publish a new limited-series story about Frank “The Punisher” Castle, the infamous anti-villain who regularly guns down “bad guys” in his ultra-violent vigilante war on crime; early looks at the first issue of the comic, set to premiere in March 2022, have revealed that the story will see Castle adopt a new logo, trading in the iconic (and controversial) skull that he’s sported since his introduction in the mid-70s. While some Marvel properties (like Spider-Man or the X-Men) could fill a catalog with their periodic redesigns, the Punisher’s look has remained roughly unchanged for almost fifty years.

From a business perspective, rebranding is always a risky move: while savvy designers can capture the benefits of adopting a trendy new symbol or replacing an out-of-date slogan, such opportunities must be balanced against the potential loss of a product’s identifiability in the marketplace. Sometimes this is intentional, as in so-called “rehabilitative” rebrands that seek to wash negative publicity from a company’s image: possible examples might include Facebook’s recent adoption of the name “Meta” and Google’s shift to “Alphabet, Inc.” But consider how, when The Gap changed its simple blue logo after twenty years, the company faced such a powerful backlash that it ditched the attempted rebrand after just one week; when Tropicana traded pictures of oranges (the fruit) for simple orange patches (of the color) on its orange juice boxes, it saw a 20% drop in sales within just one month. Similar stories abound from a wide variety of industries: British Airways removing the Union flag, Pizza Hut removing the word ‘pizza,’ and Radio Shack removing the word ‘radio’ from their logos were all expensive, failed attempts to re-present these companies to consumers in new ways. (As an intentional contrast, IHOP’s temporary over-the-top rebrand to “the International House of Burgers” was a clever, and effective, marketing gimmick.)

So, why is Marvel changing the Punisher’s iconic skull logo (one of its most well-known character emblems)?

Although it looks like the new series will offer an in-universe explanation for Castle’s rebrand, the wider answer has more to do with how the Punisher’s logo has been adopted in our non-fictional universe. For years, the Punisher’s skull emblem has, like the character himself, been sported by numerous groups associated with the violent use of firearms: most notably, police officers and military servicemembers (Chris Kyle, the Navy SEAL whose biography was adapted into the Oscar-winning film American Sniper, was a Punisher fan who frequently wore the logo). Recent years have seen a variety of alt-right groups deploy variations of the skull symbol in their messaging and iconography, including sometimes in specific opposition to the Black Lives Matter movement, and multiple protests and riots (including the attempted insurrection in Washington D.C. last January) saw participants wearing Frank Castle’s emblem. In short, the simple long-toothed skull has taken on new meaning in the 21st century — a meaning that Marvel Comics might understandably want to separate from their character.

Plenty of philosophers of logic and language have explored the ways in which symbols mean things in different contexts, and the field of semiotics is specifically devoted to exploring the mechanics of signification — a field that can, at times, grow dizzyingly complex as theorists attempt to capture the many different ways that symbols and signs arise in daily life. But the case of the Punisher’s skull shows at least one crucial element of symbolization: the meaning of some sign is inextricably bound up in how that symbol is used. Famously codified by the Austrian philosopher Ludwig Wittgenstein (in §43 of his Philosophical Investigations), the meaning-as-use theory grounds the proper interpretation of a symbol firmly within the “form of life” in which it appears. So, while the skull logo might have initially been intended by its creator to symbolize the fictional vigilante Frank Castle, it now identifies violent militia groups and other real-world political ideologies far more frequently and publicly — its use has changed and so, too, has its meaning. Marvel has attempted to bring legal action against producers of unauthorized merchandise using the skull symbol, and Gerry Conway, the Punisher’s creator, has explicitly attempted to wrest the symbol’s meaning back from control of the alt-right, but the social nature of a symbol’s meaning has all but prevented such attempts at re-re-definition. Consequently, Marvel might have little choice but to give Frank Castle a new logo.

For another example of how symbols change over time, consider the relatively recent shift in meaning for the hand gesture made by touching the pointer finger and thumb of one hand together while stretching out the other three fingers simultaneously: whereas, for many years, the movement has been a handy way to mean the word “okay,” white supremacists have recently been using the same gesture to symbolize the racist idea of “white power.” To the degree that the racist usage has become more common, the meaning of the symbol has become far more ambiguous — leaving many people reluctant to flash the hand gesture, lest they unintentionally communicate some white supremacist idea. The point here is that the individual person’s intentions are not the only thing that matters for understanding a symbol: the cultural context (or, to Wittgenstein, “form of life”) of the sign is at least as, if not more, important.

So, amid calls to stop selling Punisher-related merchandise (and with speculation abounding that the character might be re-introduced to the wildly lucrative Marvel Cinematic Universe), it makes sense that Marvel would want to avoid further political controversy and simply give the Punisher a fresh look. But what a vaguely-Wittgensteinian look at the skull logo suggests is that it’s been years since it was simply “the Punisher’s look” at all.

‘Don’t Look Up’: Willful Ignorance of a Democracy in Crisis

image of meteor headed toward city skyline

“Don’t Look Up spends over two hours making the same mistake. In its efforts to champion its cause, the film only alienates those who most need to be moved by its message.”

Holly Thomas, CNN

“it’s hard to escape the feeling of the film jabbing its pointer finger into your eye, yelling, Why aren’t you paying attention! … The thing is, if you’re watching Don’t Look Up, you probably are paying attention, not just to the news about the climate and the pandemic but to a half-dozen other things that feel like reasonable causes for panic. … So when the credits rolled — after an ending that was, admittedly, quite moving — I just sat there thinking, Who, exactly, is this for?”

Alissa Wilkinson, Vox

“[The film’s] worst parts are when it stops to show people on their phones. They tweet inanity, they participate in dumb viral challenges, they tune into propaganda and formulate conspiracy theory. At no point does Don’t Look Up’s script demonstrate an interest in why these people do these things, or what causes these online phenomena. Despite this being a central aspect of his story, McKay doesn’t seem to think it worthy of consideration. There’s a word for that: contempt.”

Joshua Rivera, Polygon

And so on, and so on. Critics of Adam McKay’s climate change satire all point to the same basic defect: “Don’t Look Up” is nothing more than an inside joke; it isn’t growing the congregation, it’s merely preaching to the choir. Worse, the movie flaunts its moral superiority over the deplorables and unwashed masses instead of shaking hands, kissing babies, and doing all the other politicking necessary for changing hearts and minds. When given the opportunity to speak to its audience, it speaks down to them. In the end, this collection of Hollywood holier-than-thou A-listers sneers at its audience and is left performing only for itself.

But what if the critics have it all wrong? What if the movie’s makers have no intention of wrestling with the various political obstacles to democratic consensus? Indeed, they seem to have absolutely zero interest in playing the political game at all. Critics of “Don’t Look Up” see only a failed attempt at coalition-building, but what if the film is doing precisely what it set out to do – showing us that there are some existential threats so great that they transcend democratic politics?

“Don’t Look Up” takes a hard look at the prospects of meaningful collective action (from COVID to the climate and beyond) when democratic institutions are so thoroughly corrupted by elite capture. (Spoiler: They’re grim.) Gone is any illusion that the government fears its people. In this not-so-unfamiliar political reality, to echo Joseph Schumpeter, democracy has become nothing more than an empty institutional arrangement whereby elites acquire the power to decide by way of a hollow competition for the people’s vote. This political landscape cannot support anything as grand as Rousseau’s general will – a collection of citizens’ beliefs, convictions, and commitments all articulating a shared vision of the common good. Instead, political will is manufactured and disseminated from the top down, rather than being organically generated from the ground up.

The pressing question “Don’t Look Up” poses (but does not address) is what to do when democracy becomes part of the problem. If our democratic processes can’t be fixed, can they at least be laid aside? With consequences as grave as these, surely truth shouldn’t be left to a vote. When it comes to the fate of the planet, surely we shouldn’t be content to go on making sausage.

Misgivings about democracy are hardly new. Plato advised lying to the rabble so as to ensure they fall in line. Mill proposed assigning more weight to certain people’s votes. And Rousseau concluded that democracy was only rightly suited for a society composed entirely of gods.

Like these critical voices, Carl Schmitt challenged our blind faith in democratic processes. He remained adamant that the indecisiveness that plagued republics would be their downfall. Schmitt insisted on the fundamental necessity of a sovereign to address emergency situations (like, say, the inevitable impact of a planet-killing comet). There has to be someone, Schmitt claimed, capable of suspending everyday political norms in order to normalize a state of exception – to declare martial law, mobilize the state’s resources, and organize the public. Democracies that failed to grasp this basic truth would not last. The inability to move beyond unceasing deliberation, infinite bureaucratic red tape, and unending political gridlock, Schmitt was convinced, would spell their doom. In the end, all governments must sometimes rely on dictatorial rule, just as in ancient Rome, where time-limited powers were extended to an absolute authority tasked with saving the republic from an immediate existential threat.

This is the savior that never appears. The tragedy of the movie is that our protagonists know the truth, but cannot share it. There remain no suitable democratic channels to deliver their apocalyptic message and spur political action. They must sit with their despair, alone. Much like John Dewey, Kate Dibiasky and Dr. Mindy come to recognize that while today we possess means of communication like never before – the internet, the iPhone, Twitter, The Daily Rip – these forces have (so far) only further fractured the public rather than being harnessed to bring it together.

By the end, when the credits roll, the film leaves us in an uncomfortable place. In documenting the hopelessness of our heroes’ plight, is “Don’t Look Up” merely highlighting the various ways our democracy needs to be repaired? Or is it making the case that the rot runs so deep, democratic norms must be abandoned?

Whatever the answer, it’s a mistake to think “Don’t Look Up” fails to take the problem of political consensus seriously. It simply treats division as immovable – as inescapable as the comet. The question is: what then?