
Should Clinicians Have Soapboxes?

blurred photograph of busy hospital hallway

Despite the tendency to talk about the pandemic in the past tense, COVID-19 hasn’t gone away. Infection rates in multiple countries are swelling, prompting some – like Kenya, Austria, the Netherlands, and Belgium – to employ increasingly stringent measures. Unsurprisingly, alongside increasing infection rates comes an increase in hospital admissions. Yet, there’s one trait that most of those requiring COVID-19 treatment share – they’re unvaccinated.

This trend isn’t surprising given that one of the points of vaccination is to reduce the seriousness of the infection, thus reducing the need for serious medical interventions. Simply put, vaccinated people aren’t ending up in hospitals as often because they’re vaccinated. The people who haven’t been vaccinated, for whatever reason, are more likely to have severe complications if infected, thus needing clinical care. So far, so simple.

This tendency for hospital beds to be occupied by the unvaccinated invites questions regarding the burden on healthcare systems. After all, emergency care services are better placed to respond to emergencies – like bus crashes, heart attacks, or complicated births – when their wards, ambulances, and hallways aren’t already filled with patients. If those patients are there because they chose not to be vaccinated, it’s only natural to wonder whether they are equally deserving of those resources.

But is it appropriate for those working in the medical profession to voice such concerns? If you’re in the hospital seriously ill, does it help to know that your nurse, doctor, consultant, or porter may resent your being there?

This question’s been brought to the forefront of the COVID-19 discussion because of a recent Guardian article entitled “ICU is full of the unvaccinated – my patience with them is wearing thin.” In it, an anonymous NHS respiratory consultant writes, “I am now beaten back, exhausted, worn down by the continuous stream of people that we battle to treat when they have consciously passed up the opportunity to save themselves. It does make me angry.” Similar sentiments appear in “Treating the unvaccinated” in The New Yorker, where critical care physician Scott Aberegg recounts:

There’s a big internal conflict… On the one hand, there’s this sense of ‘Play stupid games, win stupid prizes.’ There’s a natural inclination to think not that they got what they deserved, because no one deserves this, but that they have some culpability because of the choices they made… When you have that intuition, you have to try to push it aside. You have to say, [t]hat’s a moral judgment which is outside my role as a doctor. And because it’s a pejorative moral judgment, I need to do everything I can to fight against it. But I’d be lying if I said it didn’t remain somewhere in the recesses of my mind. This sense of, Boy, it doesn’t have to be this way.

It’s not surprising that clinicians feel this way. They’ve seen the very worst this pandemic has to offer. The prospect that any of it was avoidable will undoubtedly stir up feelings of anger, betrayal, or even injustice; clinicians are, after all, only human. While expecting clinicians not to have such opinions seems like an impossible demand, should they be voicing them on platforms with such a broad reach?

On the one hand, the answer is yes. Entering the medical profession in no way invalidates one’s right to free speech, be that in person or in print. Much like any other member of the public invited to pen an article in an internationally respected newspaper, clinicians have the right to share their views. If that view concerns their increasing inability to accept the preventable loss of life, then, at least in terms of that clinician’s rights, there is very little to stop them ethically. To try would be to revoke a privilege which many of us would likely consider fundamental and, without a robust justification, unassailable.

However, those experiencing the pandemic’s horrors may have more than just a right to share their opinions; they might have a duty. Those working on the frontlines in the battle against the pandemic know better than most the state of the healthcare services, the experience of watching people die from the illness, and the frustration of coping with suffering that seems largely preventable. Given that they have this unique knowledge, from both a medical and a personal standpoint, it would seem that clinicians have a responsibility to be as honest with the general public as possible. If that means sharing their woes and frustrations about the reluctance of people to take even the most basic steps to save themselves, then so be it. After all, if they don’t tell us this information, it seems unlikely that anyone else will.

But such a principled stance may detrimentally affect trust in the healthcare system, and subsequently, that system’s effectiveness.

As The Prindle Post has recently explored, shame is a complex phenomenon. Its use in trying to shape people’s behaviors is far from simple. This complexity has been seen in several previous public health concerns where shame has had the opposite of its intended effect. As both The Wall Street Journal and NPR have recently reported, shame makes for a terrible public health tool because it deters engagement with clinicians. If you believe that you’re going to be shamed by your doctor, you’re probably less likely to go. For smokers and alcoholics, this deterrence chiefly harms a single person’s health. During a global pandemic, however, it means there’s one more potentially infectious person not receiving medical care. Scaled up, this can easily result in countless people refusing to visit hospitals when they need to – increasing infection rates and preventing medical assistance from reaching those who need it.

All this is not to say that doctors, nurses, surgeons, and countless others involved in the care of the vulnerable should be automatons, devoid of emotion and opinion about the unvaccinated. Again, they’re human, and they’re going to have thoughts about what they see during the course of their professional careers. But whether those opinions should be broadcast for the entire world to read and see is an entirely different question.

Niti, Nyaya, and Kyle Rittenhouse: One Kind of Justice Is in the Details, but the Other Isn’t

photograph of courthouse columns and sky

On November 19th, roughly two-and-a-half weeks after his trial began, Kyle Rittenhouse was acquitted of all charges levied against him for killing two people; after twenty-seven hours of deliberation across four days, the jury decided that Rittenhouse’s claims to self-defense were justified and thereby legally absolved him of his responsibility for the deaths. While some might have been surprised by the verdict, legal experts generally were not; as The Prindle Post’s own Benjamin Rossi explains, the facts of the case — when set apart from the many, many partisan performances flashing around this trial — led to a relatively plain judgment, given the relevant laws in question: “…certainly in a procedural sense, and at least partially in a substantive sense, the verdict itself was not a miscarriage of justice.”

But, if I can briefly evoke Socrates before Thrasymachus here, what is ‘justice,’ anyway?

To listen to much of the commentary in the wake of the Rittenhouse verdict, ‘justice’ is a matter of careful adherence to the regulations of the justice system, with the understanding that said institution has been carefully crafted in a way that produces just results. This is, I take it, what we mean when we refer to justice in a “procedural sense” — it indicates that the processes and procedures undertaken to render the verdict were proper, so we can therefore be confident that the defendant’s rights and interests were protected throughout the trial. Insofar as those defendant-protecting processes constitute the “due process” owed and doled out fairly to all accused people in the system, then, this view of justice focuses on the arrangement of the institution and the technical application of its mechanisms to determine whether or not justice has been done.

This is markedly different, though, from the broader, perhaps more philosophical (or at least less-technical), sense of ‘justice’ as the realization of a just society or world filled with people who behave and are treated well, all things considered. To be concerned about ‘justice’ as a matter of promoting a flourishing community filled with well-respected individuals is far more complicated than merely maintaining a focus on the operations of particular social institutions (like the legal system), but it is, arguably, what we actually care about at the end of the day.

This distinction between an “arrangement-focused” and a “realization-focused” view of justice plays a key role in the work of philosopher, economist, and Nobel-prize winner Amartya Sen; calling the former niti and the latter nyaya, Sen points out that both of these are key concepts for socio-political theorists to consider, but that the latter should hold a priority. Both niti and nyaya are classical Sanskrit terms for ‘justice,’ but niti focuses primarily on technical applications of “organizational propriety” while nyaya is the more comprehensive concept upon which a “just society” can be recognized; so, in The Idea of Justice, Sen argues that “the roles of institutions, rules and organization, important as they are, have to be assessed in the broader and more inclusive perspective of nyaya, which is inescapably linked with the world that actually emerges, not just the institutions or rules we happen to have.”

Consider, for example, another imaginary case of Brian the 17-year-old who has been forced to regularly steal from his local grocery store to provide food for himself and his younger sister after his parents were hospitalized in a car accident. Knowing that thievery is illegal, we might simply conclude that Brian is a criminal and charge him accordingly; the hunger felt by Brian and his sister is regrettable, but it is well outside the scope of what niti is designed to care about. If you would hesitate to charge Brian with a crime, or even harbor a desire to see that the court system treat him with leniency, given the mitigating circumstances, then this likely stems from your desire to see nyaya (rather than just niti) upheld: we moral agents can recognize the difference between malicious or self-centered embezzlement (of the sort that anti-theft laws are typically designed to prevent) and Brian’s desperate attempt to care for his sister, even though Brian’s actions still violate the letter of the law. In a similar fashion, Sen illustrates the niti-nyaya distinction with a reference to the Holy Roman emperor Ferdinand I who (in)famously declared “Let justice be done, though the world perish”; says Sen, “if indeed the world does perish, there would be nothing much to celebrate in that accomplishment,” no matter how much niti might have been respected along the way.

What, then, of the Rittenhouse verdict?

It seems clear that, in terms of niti, justice might well have been upheld regarding the specific question of the killings for which Rittenhouse was charged. The morality of killing in self-defense is a fraught notion, but the legal precedents regarding its allowability are well-established and, by most accounts, the actual proceedings of the Rittenhouse trial centered almost entirely on these concerns.

But it is not at all clear that the Rittenhouse verdict protects justice in the sense of nyaya — indeed, the problem for many is that it can’t.

According to Rittenhouse, he drove to Kenosha, Wisconsin (from his home about 20 minutes away in Illinois) to serve as a “medic” for people trying to “protect businesses” from protestors after police officer Rusten Sheskey repeatedly shot Jacob Blake in the back a few days earlier. Although misrepresentations of the details have abounded, as Rossi points out, those details matter: although Rittenhouse did cross state lines to get to Kenosha, his rifle was already in Wisconsin waiting for him. Although the relevant regulations are complicated, if the gun was purchased illegally (as it apparently was), the blame falls on the purchaser, not on Rittenhouse; moreover (despite the intention of the law in question probably pertaining to hunting contexts) Rittenhouse was, technically, apparently acting legally by wielding it in public as he did.

Or rather, those details matter for procedural reasons. Although questions of the gun’s legality might not pertain directly to the charges Rittenhouse faced regarding the deaths he caused, we might still wonder why he chose to arm himself heavily and insert himself into the situation in Kenosha in the first place. We could, for example, doubt that the protestors in Kenosha ever posed an actual threat to businesses or anything else (indeed, with the exception of Rittenhouse’s shootings, the Kenosha protests — like most protests — were quite peaceful); it’s not clear why anyone on the streets of Kenosha would have actually needed either rifles or medics in August 2020. That is to say, it seems perfectly reasonable to think that Rittenhouse was breaking no laws by being in Kenosha on August 25th, 2020, and yet he still had no good reason to be in Kenosha.

And, of course, if he hadn’t chosen to go to Kenosha, then Rittenhouse wouldn’t have been in the situation where he feared for his life and was forced to act, according to the jury, in justifiable self-defense. But this focus extends far beyond the niti-based concerns of the legal system to broader questions about how we want society to operate, how we hope people will freely behave, and how we desire for virtuous individuals to flourish and help those around them to flourish likewise. On its own, “traveling to Kenosha” seems morally neutral, but “traveling to Kenosha because I should protect it — possibly even with violence — from the rioters threatening it” is a perspective loaded with serious moral assumptions and judgments that are not clearly virtuous. To criticize Rittenhouse’s actions in this broader sense (beyond simply asking “was he legally allowed to pull the trigger at the moment he chose to do so”) depends on one’s much richer perspective about what constitutes nyaya, or justice fully realized, altogether.

Consider this from a different perspective: the marches over the shooting of Jacob Blake were not protesting niti-related procedural infelicities that merely treated Blake inappropriately: they were outcries about the nyaya-based injustice of yet another black man (Blake) being attacked by a white police officer (Sheskey) and, in this case, left paralyzed (Sheskey faced no charges for shooting Blake). As many have pointed out, if Rittenhouse himself were not white, his trial — to say nothing of his arrest — would likely not have proceeded exactly as it did (never mind the multiple literal job offers Rittenhouse has received since). So, although the niti-based details of the Rittenhouse trial might not have substantively included race, the nyaya-based context of the broader conversation certainly does: Rittenhouse’s experience is just one more example of the deference shown institutionally to specifically white bodies: a clear violation of nyaya, no matter how much it comports with niti.

In short, it seems clear and uncontroversial that people can commit injustices without technically breaking laws (consider how folks might escape just punishment on some “legal technicality” or other). Sen’s distinction between niti and nyaya can help us to speak more clearly about the dissatisfaction we feel at those times, even if technical procedures are perfectly honored. The problem might well lie in the broader, unjust context altogether.

The Vice of Ingratitude; or, How I’m Bad at Thanksgiving

photograph of set table for autumn harvest

While I was growing up, my family took part in a fairly standard Thanksgiving tradition. We would all go around the table, each saying something we were thankful for.

I was bad at this tradition. Partly, that was because I was bad at vulnerability. To deflect from saying anything too deep, I’d normally give half-joking answers. For example, one year I said I was grateful for ‘literacy.’ Now, in one sense ‘literacy’ is actually a good answer. It’s a wonderful thing that we often don’t stop to notice and appreciate. But even if it’s a ‘clever’ answer, it was also a dodge from revealing any real emotional depth.

Over the years I’ve gotten better at vulnerability. But I still struggle with gratitude more broadly. Indeed, the three vices I struggle with most are vainglory, ingratitude, and cowardice. I’ve written before on vainglory, and — with Thanksgiving fast approaching — it’s time to wrestle with ingratitude.

To home in on my struggles with gratitude, we first need to understand that there was a second problem with my answer of ‘literacy.’ To see the problem, we need to understand what gratitude is.

Defining Gratitude

The word ‘gratitude’ is used somewhat ambiguously in modern English. Sometimes we use words like ‘grateful’ and ‘thankful’ when we mean something like the word ‘glad.’ If I say:

“I’m thankful it didn’t rain during my wedding.”

I’m really saying something like:

“I’m glad it didn’t rain during my wedding.”

I’m basically saying that I’m pleased by the course of events. But gladness is different from gratitude. I am glad FOR something, but I’m grateful TO someone FOR something.

In gladness there are two parts of the relationship. (1) The person who is glad, and (2) what they are glad for. In contrast, in gratitude there are three parts of the relationship. (1) The person who is grateful, (2) the person (or persons) they are grateful to, (3) and what they are grateful for. Philosophers say that gladness is a “dyadic relation” (a relation between two elements) whereas gratitude is a “triadic relation” (a relation between three elements).

And this was the deep problem with my answer of ‘literacy.’ I was not grateful that I could read and write, rather I was glad that I could read and write. People would ask me the question “what are you thankful for?” but I would instead answer the question “what are you glad for?”

If I’d really wanted to express gratitude, I should have said something like:

“I’m grateful to my teachers and parents for helping me learn to read.”

Saying “I’m grateful for my job” is an expression of gladness. Saying “I’m grateful to my boss for keeping me on even after that mistake I made last December” is an expression of gratitude.

Gratitude Looks Outward

Often, when people try to list the things that they are thankful for, they instead list things about which they are glad. When I googled ‘things to be thankful for’ the first list to come up included: good health, weekends, pets, laughter, sunshine, books, indoor plumbing, modern medicine, and freedom of speech.

Now, it’s possible to be grateful for these things, but I expect that for the most part we are glad of these things rather than grateful for them. I certainly am glad for modern medicine, but I don’t exactly feel ‘grateful’ to medical researchers. The truth is, I barely think about medical researchers at all, and certainly they do not leap to mind when I reflect on the wonders of modern medicine.

Similarly, I’m glad that I’m healthy and that there is laughter in the world. But I’m not grateful for such things.

The thing is, given my own philosophical commitments, I should be grateful. I shouldn’t just be glad that it didn’t rain during my wedding; I should be thankful to God that it did not rain during my wedding. This is one of the distinctive features of most theistic traditions: anytime it’s appropriate to feel glad, it’s also appropriate to feel gratitude, since all good things ultimately come from God (see Ephesians 5:20, Colossians 3:17, 1 Thessalonians 5:18, James 1:17, The Qur’an 16:53).

I could tell myself that I’m grateful to God for good health and laughter. But the truth is I’m not that grateful a person. I’m glad of those things, and I’m philosophically committed to those good things coming from God, but I struggle to feel gratitude.

I’m glad for modern medicine, and not grateful to medical researchers for modern medicine. Similarly, I’m glad for my health, and not grateful to God for my health. In both cases, I can tell that I’m predominantly glad — instead of predominantly grateful — because of how my attention gets directed. When I think about my health, my attention turns inward. I attend to my own life because I’m content with my own life.

Were I predominantly grateful — instead of predominantly glad — then my attention would be disposed to move outward. When I think about my health, my attention would naturally redirect to God and to all the people who have worked hard to help develop modern medicine. The person who is glad for their political freedoms thinks predominantly about what the freedoms mean for their own life. The person who is grateful for their political freedoms is disposed to also think about the sacrifices that others have made to bring political reform.

Gratitude as a Prosocial Emotion

Gratitude, by its very nature, draws one’s attention out of one’s self. The grateful person does not rest content in their own life but is led to think well of other people.

It is this feature of gratitude — that it directs our attention outward — that makes gratitude such an important virtue. The more you see your own goods as gifts, the more you recognize the fittingness of giving good things to others in return (cf. Colossians 3:1-17). I expect you’ve noticed this in your own life. When someone does something kind for you, you are often inclined to do something kind for others. If you find twenty dollars on the street, that is just good fortune. But if the person in front of you pays for your twenty-dollar meal, you are much more likely to pay for the person behind you in turn.

This is why gratitude leads to prosocial behavior. Studies have found that gratitude increases charitable giving, strengthens relationships, and improves the quality of work. Of course, happiness and gladness also lead to these things, but the evidence seems to suggest that directed gratitude is an especially powerful prosocial influence.

My Struggles with Gratitude

When I said I was thankful for literacy, I was not thinking about the other people who have helped me learn to read. And even now, aware of the moral importance of gratitude, I find myself more often glad than grateful.

What is the cause of this ingratitude?

Seneca, in his book On Benefits, suggests that there are three primary causes of ingratitude:

Now we must consider what it is that most makes people ungrateful: it is either an excessive regard for oneself—the deeply ingrained human failing of being impressed by oneself and one’s accomplishments—or greed or envy.

In my own case, I suspect that it’s mostly the first cause. I struggle with gratitude because I fail to appreciate just how deeply the goods of my life are owed to the free gifts of others. Intellectually, I can recognize that I would not have the good life I have were it not for the generosity of others. But that recognition is all on the surface; it has not seeped down ‘into my bones.’

If I’m not careful, I fall into the assumption “that I earned all that I have” or at least that what others have given me is only “what they owed me in the first place” (Adapted Quote, Seneca 26.II). To inculcate a virtue of gratitude, I need a clearer moral vision. I need to learn to more clearly and reflexively recognize all the many positive influences that others have been in my life.

The hope is that if I spend enough time noting things to be thankful for (not just noting things about which I’m glad, but actually noting the people who have done good things for me), then I’ll eventually develop the virtue of gratitude. Perhaps I’ll be able to recognize, down in my bones, the wonderful gift I have in the life I get to live.

In the Rittenhouse Case, Justice Is (to Some Extent) in the Details

photograph of Lady Justice figurine

The reactions to the Rittenhouse verdict from the right and left have been depressingly predictable. The right — at least, the MAGA right — has chosen to elevate Rittenhouse to the status of folk hero, an everyman vigilante in the grand tradition of real-life subway shooter Bernhard Goetz or the fictional Dr. Paul Kersey of the Death Wish film series. (These references to the ‘70s and ‘80s should remind us that we’ve been here before). On this telling, Kyle Rittenhouse did nothing wrong by toting an assault rifle to a riot in order to protect other people’s property — in fact, his behavior was commendable because it helped bring law and order to a chaotic situation. Among other things, this story overlooks the fact that Wisconsin law categorically forbids the use of lethal force for the sole purpose of defending property. So, if Rittenhouse had used his weapon for that purpose, he would have broken the law. How threatening to break the law helps impose law and order is, frankly, beyond my powers to explain.

Meanwhile, the left has portrayed the Rittenhouse case as another example of how our justice system is broken. From this perspective, whether or not the jury’s application of Wisconsin self-defense law was reasonable is really irrelevant; what matters is that it was legal for Rittenhouse and others to traipse around Kenosha playing soldier, and that two men ended up dead without anyone being held legally accountable. On top of this, there is the fact that the verdict may very well have been different if Rittenhouse had been a person of color.

These perspectives on the case have something in common: they both minimize the importance of the narrow question, did Rittenhouse act reasonably in self-defense? For the right, the morally important thing is that Rittenhouse killed some scumbags for a good cause; whether he did it in self-defense is almost beside the point. For the left, the morally important thing is that various structures — legal (open carry laws, defendant-friendly self-defense laws), political (Republican domination of state legislatures through gerrymandering allowing passage of those laws), and social (white privilege causing police to treat white shooters differently from Black) — determined the verdict. Again, the granular details of the case, the kinds of details that would help us determine whether or not Rittenhouse acted reasonably in self-defense, fade into the background.

Without question, the left’s view has more truth in it than the right’s. It should go without saying that it is morally reprehensible to shoot a person for destroying property, and that it is morally irrelevant that Joseph Rosenbaum, Rittenhouse’s first victim, was (perhaps) a registered sex offender. Vigilantism might slake a primal thirst for vengeance, but it is among the slipperiest of moral slopes.

Still, I think the left misses something important by omitting examination of the case details. To see this, consider the following (slightly involved) counterfactual. Suppose that Trump is assassinated by a Chinese-American, touching off protests and then riots in parts of cities where many businesses are owned by Asians. Some left-wing folk from the surrounding area bring weapons to protect Asian-owned property. Among them is Brian, a 17-year-old gun enthusiast and lifeguard.

At some point during the night, Brian wanders from the store he came to guard and approaches some rioters. At the subsequent trial, Brian’s counsel presents eyewitness evidence that one rioter shouted at Brian that he was going to kill him; others testify that this rioter was hyper-aggressive and belligerent that night, although a prosecution witness, another armed left-wing guardian of property, testifies that he was a “clown.” In turn, the prosecution presents blurry video footage and stills that it claims show that Brian pointed his weapon at one of the rioters, but the footage is difficult to interpret. There is no dispute about what happens next: the rioter takes off after Brian, and Brian runs away. He flees into a parking lot where — and this is disputed at trial — he may be boxed in by some parked cars. He wheels around. The rioter screams “fuck you!” and charges toward Brian, while a nearby rioter fires a pistol into the air. Prosecution witnesses testify that a moment later, the rioter tried to grab Brian’s gun. In any case, autopsy evidence later shows that the rioter’s hands were at least very close to the barrel of the gun. Brian fires four times in less than three-quarters of a second — the first shot shatters the rioter’s pelvis, but it is the third shot into his back that kills him.

Brian flees the scene, pursued by a group of rioters, many of whom quite possibly believe that Brian is an active shooter. Some throw rocks at Brian. One rioter wielding a skateboard catches up to Brian, and whether because the rioter strikes him with the skateboard or because he just trips — this is a disputed point — Brian falls down. Fortunately, this scene is captured by cell phone cameras at very close range, so there is much less dispute about what happens next. Another rioter attempts to kick Brian in the face; Brian fires two shots at him as he rolls on the ground, but both miss. The rioter with the skateboard hits him with it, then tries to grab Brian’s gun. Brian shoots, hitting the rioter in the chest. A third individual, who is holding a pistol in his right hand, is at that moment raising his arms. Then the armed man lowers his arms and walks around Brian. At that moment, it is unclear whether the armed man points his pistol at Brian. Under examination by defense counsel, the man himself testifies that he did, but he later recants that statement. In any case, Brian then shoots the armed man, blowing off his right bicep. Brian gets to his feet and flees.

Suppose that a jury acquits Brian, accepting that the prosecution had failed to prove beyond a reasonable doubt that Brian had not acted in self-defense. Frankly, I find it hard to imagine that the left-leaning commentariat would celebrate Brian as an everyman vigilante (although perhaps some would lionize him as a warrior for racial justice). Perhaps it would concede that the 17-year-old shouldn’t have been there, and that open carry laws encourage such vigilantism. Still, I think the left would focus on the details of the case and of the relevant law. In other words, the left would take Brian’s self-defense claim seriously.

Sometimes, considering these kinds of counterfactuals can help us see things with greater moral clarity. On the other hand, sometimes they just reveal a bias. I do not, however, think that’s the case here. The left would be right to focus on the details of Brian’s self-defense case. By the same token, despite the many structural factors that shaped the outcome of the Rittenhouse trial, whether or not Rittenhouse reasonably feared for his life in the moments leading up to the shootings is morally relevant, because it determines the fairness of punishing him. As long as we continue to believe that people are fairly punished only when their action reveals some fault, their state of mind when acting will be of paramount importance.

This brings me to the question whether the Rittenhouse verdict shows that the justice system is broken. I think it’s a mixed picture. On the one hand, Wisconsin’s permissive open carry regime is a travesty. One could also argue that the Wisconsin self-defense law should require the defendant to retreat before using lethal force, even if he has not provoked the attack. I’m not sure that would have made a difference in the Rittenhouse case, but it’s a reasonable proposal. One could also argue that the burden of proof should be on the defense to prove that the defendant acted in self-defense, not on the prosecution to prove that he didn’t. But changing the law in this way is the job of the legislature, not the courts. Moreover, the jury members in the Rittenhouse case clearly took their job very seriously, deliberating for almost four days, reviewing many hours of video, and taking copious notes throughout the trial. This was not an easy case for a jury: it required making highly fact-intensive judgments about the reasonability of a person’s fear in a very chaotic situation. Given the law and the facts, the verdict was at least reasonable. Finally, the possibility that had Rittenhouse been Black, the verdict would have been different does point up the failures of the justice system, but it does not make the Rittenhouse verdict itself unjust.

Undeniably, the Rittenhouse case should prompt a reexamination of state firearms and self-defense laws. But certainly in a procedural sense, and at least partially in a substantive sense, the verdict itself was not a miscarriage of justice. The trial elicited a wealth of relevant factual detail through adversarial presentation, and the jury applied the law to that body of facts in a careful, reasonable way, focusing on the undeniably morally relevant issue of the reasonability of Rittenhouse’s fear. The Rittenhouse verdict is not all bad news.

Fair Shares and COVID-19 Booster Shots

photograph of COVID vaccination in outdoor tent

Arguments abound regarding the moral importance of receiving the COVID-19 vaccine. Beyond the obvious health benefits for the vaccinated individual, herd immunity remains the most effective way to stop the spread of the virus, limit the development of more deadly variants, and – most importantly – save lives. In fact, it may very well be the case that these reasons go so far as to provide us with a moral duty to get vaccinated so as not to treat others unfairly and, therefore, immorally. Given all of this, it would seem, then, that the morality of receiving a third ‘booster’ dose of the vaccine is simple. Unfortunately, ethics is rarely that straightforward.

Currently, 7.54 billion doses of the COVID-19 vaccine have been administered globally, with 52.2% of the world’s population having now received at least one dose. In the U.S., close to 60% of the population have been fortunate enough to receive two doses of the vaccine, with the CDC now recommending a third dose for certain vulnerable portions of the population. Colorado, California, New Mexico, New York, and Arkansas have gone further than this by approving booster doses for all residents over the age of 18.

Yet, at the same time, only 4.6% of people in low-income countries have received their first dose of the vaccine, with this number dropping to less than one percent in countries such as Chad and Haiti. The reasons for this are many, but one of the largest contributing factors has been affluent countries pre-ordering more doses than they require to fully vaccinate their population. The U.S., for example, has pre-ordered twice as many vaccines as they need, the U.K. has purchased four times as many, and Canada has secured a whopping five times as many doses as would be required to provide a double dose of the vaccine to every one of their residents. These orders are still being filled, and – until they are – many poorer nations are left to wait to receive even their first dose of the vaccine. As a result, the World Health Organization has called on countries to issue a moratorium on providing COVID-19 booster shots until every country is able to vaccinate at least 10% of its population.

Essentially, this matter boils down to the unjust distribution of limited resources – with some countries taking far more than their ‘fair share’ of the vaccine, and leaving others without nearly enough. This has become a fairly common moral issue lately – underpinning problems surrounding everything from toilet paper, to gasoline, to carbon emissions.

There are many reasons why it’s wrong to take more than your fair share of a limited resource. On top of these more general concerns with just allocations, there are ethical issues specific to the case of vaccines. For one, we might claim that we have strong moral reasons to maximize the good. While an initial vaccine dose will grant around 90% immunity to the recipient, using that same dose as a booster will instead grant only a 10% increase in protection. Put simply, a single COVID-19 vaccine dose will do far more good given to an unvaccinated individual than to someone who has already received two previous doses. There are pragmatic concerns too. Unvaccinated populations provide opportunities for the virus to mutate into more virulent strains – strains that undercut vaccination efforts everywhere else in the world.

So let’s suppose that there’s a good case to be made for the fact that countries have done something wrong by taking far more than their fair share of the COVID-19 vaccine, and that the vaccine stock used by affluent nations to provide third booster shots is what we might call an “ill-gotten gain.” What does this mean for us, as individuals? Do we have a moral obligation to refrain from receiving a booster shot until more people – especially those in poorer nations – have managed to at least receive their first dose?

If we think that our resources should go where they’ll do the most good, then the answer may very well be “yes.” This approach is precisely the same as a very famous argument for our moral obligation to donate money to the poor. While buying that Starbucks Double Chocolaty Chip Crème Frappuccino might bring me a modicum of joy, donating that same amount of money could do far more for someone living in absolute destitution. In the same way, while an additional COVID-19 vaccine – used as a booster – will bring me a small benefit, it could do far more for someone else if used as an initial vaccine.

Of course, this argument assumes that by refusing a booster shot, my vaccine dose will instead be sent where it’s more needed. But it turns out it’s notoriously difficult to donate unused COVID vaccines, with some U.S. states already throwing away tens of thousands of unused doses. Suppose, then, that these booster shots are bought-and-paid-for, and that refusing these boosters will not see them go to those who are more in need. What, then, are our obligations regarding these ill-gotten gains?

A thought experiment may help in this situation. Suppose that we were currently suffering through a severe water shortage, and that the government sent out a limited supply of water tankers to alleviate people’s suffering. Your town’s tanker arrives, and everyone receives a reasonable allowance of water. In a shockingly unscrupulous turn of events, however, your town’s local officials hijack and claim the tanker destined for the next town over, parking it on the main street and telling residents to come and help themselves. Whatever water isn’t taken, they claim, will merely be dumped. What should you do? You don’t agree with how this water was obtained, but you also know that if you don’t use it, it’ll only go to waste anyway. You already have enough water to survive, but your plants are looking a little brown and your car could really use a good wash. It seems that, in a circumstance like this, you have every reason to make use of this ill-gotten gain. We have an obligation to maximize the good, and since the harm (depriving others of this vital resource) has already been done, some good might as well come of it, no?

Perhaps. But it is in cases like this that it becomes important to distinguish between maximizing the good in a particular case, and maximizing the good over the long run. While I may have everything to gain from enjoying this stolen water, I don’t stand to benefit from a society in which one town steals vital resources from another. And the same may be true of vaccine booster shots. A global society in which affluent nations overbuy and hoard life-saving resources is one that, in the long-run, will create more harm than good – particularly where this kind of behavior only serves to prolong and worsen a crisis (like the pandemic) for the entire global population. By refraining from taking the COVID-19 booster – at least until those in poorer nations have had the opportunity to receive their initial vaccine – we send a clear message to our governments that we will not partake in ill-gotten gains.

The Politics of Earth’s Climate

photograph of COP26 banner

This past weekend marked the end of COP26, an annual event started in 1995 to bring countries together to discuss climate change. All eyes fell on the leaders of the world’s highest-carbon emitting countries. With each passing year, the future looks more and more dire as the planet continues warming.

Shortly before the commencement of COP26, a summit was held in Rome involving many of those same world leaders. The topic of climate change was merely glossed over. A photo of the leaders tossing a coin into the Trevi Fountain quickly went viral on social media. The smiling faces and picturesque background made it seem as if these leaders were mere tourists partaking in a common ritual, rather than meeting to discuss the future of life on Earth. With so little progress made and so little attention paid to climate change at the summit, the photo suggests a carefree attitude: devastating climate disasters happen in other, far less wealthy countries.

The COP26 conference in Glasgow, however, offered hope that global warming would be treated like the crisis it is, with serious and extensive discussions resulting in real and measurable action. One of the warmest years in history, within the warmest decade in history, 2020 was also the most expensive year of weather disasters ever, carrying a $50 billion price tag. With all the money spent on disasters like earthquakes, wildfires, and flooding, you’d think world leaders would make addressing the climate crisis a priority.

The conference brought some encouraging news: COP26 represented the biggest climate meeting in history. Almost 200 world leaders managed to agree upon the Glasgow Climate Pact, which is meant to keep the Earth’s warming below 1.5 degrees Celsius through multiple strategies, such as decreasing carbon emissions. The commitment to this new, lower threshold is encouraging: the previous Paris Accord from 2015 had settled on a 2-degree target, which would mean the complete submersion of coastal countries and cities that are home to millions of people. Currently, the world is on track to reach 2.7 degrees Celsius of warming within this century, which all but ensures catastrophic climate disasters for every country on the globe. In order to reach the 1.5 goal, the world needs to cut greenhouse gas emissions in half by 2030.

Leaders at COP26 discussed decreasing some of the most polluting activities in the world, such as fossil fuel production, deforestation, and methane emissions, in order to work towards the 2030 goal. The world may have just witnessed an amount of global cooperation and delegation that hasn’t been seen in decades, and certainly never in the context of climate change. This does not mean, however, that world leaders were truly able to set aside politics, even in the face of a worldwide threat indifferent to human conditions.

One of the biggest weaknesses of these arrangements is that they depend solely on the word of dozens of world leaders. The agreements lack any enforcement mechanism to ensure that countries will actually act on the pledges they agree to on paper. Given the grave stakes and the necessity of cooperation in achieving our goals, having no penalty for defection or inaction may spell disaster. The countries that signed on to the 2015 Paris Agreement are not even close to hitting its targets. The global coordination needed to take meaningful action on climate change – with consequences that are life-changing for every person on Earth – has never been witnessed before, yet world leaders have refused to hold each other accountable.

This lack of enforcement sheds light on one of the biggest disparities in climate change: the gap between the countries that contribute the most pollution and the countries that have so far felt its worst effects. Just 12% of the global population (living in wealthy countries) is responsible for 50% of the global greenhouse gas emissions since the beginning of the industrial revolution. This fact has long been discussed, and in 2009 wealthy countries even pledged to help shoulder the costs of climate disasters in countries that struggle financially with the impacts. Unsurprisingly, this funding had no enforcement either, so the Glasgow Agreement noted

“with deep regret that the goal of developed country Parties to mobilize jointly USD 100 billion per year by 2020 in the context of meaningful mitigation actions and transparency on implementation has not yet been met.”

This time around, the pact encourages wealthy countries to voluntarily help fund less-developed countries facing the high costs of climate change that they have barely contributed to, in comparison to countries like the U.S., England, China, and Russia. While COP26 certainly saw world leaders set firm climate goals in a way we have never seen them commit to before, there is plenty of both historical and scientific evidence to suggest that these goals were made in blind optimism.

Another glaring issue at COP26 was who was actually in attendance, or at least who was able to get there. For decades, oil company executives have had plenty of seats at the table of climate change discussions, knowing that it is their business that would take a hit if the world ever transitioned away from fossil fuels. This conference was no different, with over 500 attendees from countries with major oil and gas companies or from lobbying organizations in support of the fossil fuel industry. This gives the very industries that helped bring the climate to catastrophic warming, all the while denying the impact of climate change for decades, a significant say in the future of a world that has no need for their business. At the same time, young activists whose homelands are directly threatened by climate change struggled to afford the costs of attending the conference. These activists bring first-hand knowledge of the impacts of climate change on their lands. Unfortunately, they’ve found that their experience and perspective are not welcome at a conference specifically committed to helping these very lands and people.

Yet another issue of access was revealed when Israel’s energy minister, Karine Elharrar, was unable to attend the conference because she could not find transportation that was wheelchair-accessible. Disability and climate activist Jason Boberg, who could not get in because the accessibility entrance was closed, pointed out that the exclusion was bigger than the conference: “We know that disabled people are left behind in climate disasters, floods and fires, and now we are left out of the conference that is supposedly meant to address that.” The field of attendees illustrates that there are ethical issues not just in what decisions are being made at the conference, but in who is able to influence, discuss, and witness those decisions. The conference was meant to be full of diverse conversations across the topic of climate change, but even those conversations were limited.

Ultimately, COP26 was illustrative of just how complicated the issue of climate change really is. In a world that is more globally connected than ever before, climate disasters will affect not just one country, but spread beyond national borders. Additionally, just as greenhouse gas emissions are not being released equally, they are not impacting countries equally. There are very serious ethical concerns about how countries that have the capability and finances to assume responsibility for their own pollution refuse to do so. Despite the various pledges they make, nations continue to allow captains of industry, actively working against progress on climate change, to sit at the head of the table. Politicians may have been able to agree upon extensive plans for climate mitigation under the watchful eye of activists and millions of onlookers, but only time will tell whether these new pronouncements will be anything more than empty promises.

On the Appropriateness of Shame

photograph of the Statue of Cain in Paris

Shame has taken up a prominent role in the public discourse recently. For instance, The Atlantic’s Conor Friedersdorf recently tweeted, arguing that Americans have an obligation to right past wrongs but not to feel shame over “wrongs perpetrated before our births.” Shame also plays a role in discourse about the pandemic. Earlier on, people might have felt shame over getting COVID-19: “If someone who thought they were being careful got the virus, well…maybe they weren’t being so careful.” And now the issue of vaccine shaming arises, with debates over whether people should be shamed for not getting the vaccine.

But shame is a nuanced thing. It is an emotion we feel, but it is also something we do to other people. I might feel shame, but I might also try to get you to feel shame: I shame you. This leads to two different questions: When is it appropriate to feel shame? When is it appropriate to shame somebody?

One mistake, a mistake that Friedersdorf makes, is to tie shame too tightly to wrongdoing. Some emotions are linked to wrongdoing. For instance, guilt tends to be linked to having done something morally wrong. And you certainly can be ashamed of your own wrongdoing. But there are more things in heaven and earth than moral rightness and wrongness. Some things are ugly, pitiful, or bad in non-moral ways. You might also be ashamed that you have a large nose, or you might be ashamed that you were too cowardly to take an exciting opportunity.

If shame were tied only to your own wrongdoing, then shame over wrongs perpetrated before your birth would be nonsensical. But shame isn’t even just tied to what you have done, hence the possibility of being ashamed of your nose. Shame is instead based on who we are. And shame is distinctly interpersonal: much of the time we feel shame because we know others think poorly of us (perhaps because of our looks or our inability to better ourselves). Further, who we are is based on our broader connections to other people: being in a family, being a fan of a certain sports team, or being a citizen of someplace or other.

So, you might be ashamed not of your own wrongdoing, but of the wrongdoing of your father. And you might be ashamed of your country, too. Nikole Hannah-Jones said that she was ashamed of America’s bombing of Hiroshima.

Now, you might question whether we should feel ashamed of things we haven’t done, of things we are merely associated with. But there are reasons to think we should. For one, it seems perfectly reasonable to care about our non-moral qualities and to care about what others think of us. Secondly, shame and pride go hand-in-hand. Parents are proud of what their kids have done, and people are proud of their country’s achievements. Hannah-Jones was right when, responding to Friedersdorf, she pointed out that if you want to feel proud of your country – for what it does well now, and what it has done well through its history – you had better be willing to be ashamed of it, too, for what it does badly and what it did badly in the past.

So, we can be ashamed of many things, including things we haven’t done. What about shaming somebody else? When should we shame people? Perhaps the obvious answer is: when they have done something shameful.

Though there might be a variety of forms of shaming, how shaming works should be fairly obvious: if you fail to meet certain standards, other people – remember, shame is interpersonal – can point out that they think less of you. For this to be effective, you need to then reflect on your failures, and this can involve feeling shame: you see why they think less of you, and you think less of yourself for it. Perhaps this process even must involve shame: to fully appreciate your failure might require that you do in fact feel ashamed of it.

So, when should we shame people? Again, the obvious answer is “when they do something shameful,” but that is too simple. It can depend on the relationship between people. You – a serial card cheat – might have no right to tell me that it’s wrong to count cards. You – a stranger on the street – might have no right to tell me not to be so rude to my wife (whereas our friends can step in and say something). So, shaming might be inappropriate if you are a hypocrite or if you have no business in judging me, whereas if you are a respected member of my community and my actions negatively affect my community, you might be well placed to shame me.

We must also keep in mind that some forms of shaming might carry costs: rather than making somebody feel mildly ashamed for a past misdeed, you might make them feel awful. And we need to be careful, as Kenneth Boyd noted in this venue, because shaming can be unfair, singling out individuals for something that was more acceptable at the time, and it can be a tool of bigotry, shaming people for being a minority and perpetuating harmful systems of oppression.

So, should we shame people for not getting vaccinated? Firstly, not all the unvaccinated have acted shamefully. In places where it can be hard to get time off of work to get the jab (or where people are not aware that they are entitled to time off), or in places where misinformation is rife, perhaps they are meeting or exceeding the standards we should expect of them as fellow members of the public. Or they may have genuine, conscientious objections.

But it is more likely that opposition to “vaccine shaming” turns on the idea that shaming is ineffective. Somebody might be acting shamefully: they might be failing to protect others, relying upon an overly individualized notion of rights (and failing to recognize how they interact with others in a society), and failing to evaluate the evidence properly because – though they should know better – they have been captured by petty, angry politics. It can be frustrating to be told not to shame these people. But if our aim is to get them to take the vaccine, we need to find an alternative strategy that doesn’t prompt a retreat into deeper skepticism.

Or, so the argument goes. But maybe that argument is wrong: there is some evidence that appealing to the sense of shame or embarrassment someone would feel if they spread COVID to a loved one is somewhat effective at increasing the vaccination rate. Ultimately, I don’t know when Americans should feel shame for what happened in the past. And I don’t know when we should shame people for their behavior in this pandemic. I do know that to have a well-informed public discussion, we need to understand the many facets of shame.

The Ethical Risks of Ad-Hoc Bilingualism

photograph of Air Canada plane in air

A rather strange episode in Canadian language relations occurred at the beginning of November when the CEO of Canada’s flagship airline Air Canada had to apologize for not being able to speak French, despite living in Montreal for 14 years. Quebec politicians and journalists quickly labelled the remarks as “insulting,” prompting a wave of criticism in his direction, including from the Deputy Prime Minister herself, who wrote a letter to the airline telling the CEO that improvement to his language ability “should be incorporated as one of his key performance goals.” The affair has prompted yet another debate about bilingualism in Canada, but this particular instance highlights a growing ethical problem regarding the way that bilingual policies are understood in practice.

The affair began on November 3rd when Michael Rousseau, the CEO of Air Canada, made his first major speech after taking on the role in February to the Montreal Chamber of Commerce. Rousseau spoke limited French during his speech, and when asked afterwards by a journalist how he could live in Montreal for so long without speaking French, he replied, “I’ve been able to live in Montreal without speaking French, and I think that’s a testament to the city of Montreal.” These remarks were labelled as contemptuous of Quebec and its culture, “appalling and disrespectful,” “insensitive,” and indicative of a “lack of respect for francophones” by various officials at the federal level and from the Quebec government. In response, Rousseau has pledged to take French lessons.

This affair has prompted a counter-response that this is simply Quebec “fragility,” with each charge against Rousseau more absurd than the last. But, putting the culture war aside for a second, it’s worth considering what bilingualism is supposed to mean in contemporary Canadian society. The role of the French language has been a hot issue in Quebec since the government recently introduced new legislation to strengthen the French language in Quebec and crack down on English use in public. Initially, the policy of official bilingualism in Canada began with the Official Languages Act of 1969, passed by Pierre Trudeau’s government. The intention behind it was to ensure that French and English would be given equal status and that French or English Canadians would be able to access services from the federal government in their own language. In addition, Canada’s constitution guarantees equal language and education rights. But these inclusive policies seem meant only to guide government services, not to encourage all members of Canadian society to be bilingual.

The dispute playing out stems from an ambiguity. One conception of bilingualism would hold that any Canadian should be able to work in their language of choice. Another conception of bilingualism may specifically promote the idea of speaking both French and English, and as a social policy Canada should become more bilingual in this way. These are very different goals and would require very different resources, carrying with them very different ethical concerns.

For starters, science tells us that learning a new language in adulthood is very difficult because our neural connections have stabilized by that point. This means that a number of social factors will likely determine success in picking up a new language. The science of languages suggests that learning a second language really requires one “to be immersed in it,” “to be around native speakers as much as possible.” Yet, the 2016 Census found that 86% of bilingual Canadians live in Quebec, Ontario, and New Brunswick, meaning that if you live outside those regions, immersion is hard to come by and learning a second language is far more difficult. In addition, civil servants have argued that current bilingual policies are racist because they effectively exclude immigrants and new Canadians, who in many cases must learn not one but two new languages if they wish to work at the federal level.

There are also concerns that the policy of official bilingualism is exclusionary for aboriginals as well. Former MP Roméo Saganash opposed forcing Supreme Court judges to be bilingual, for example, because it would effectively prevent aboriginals from reaching it. Even recently, there was a controversy over the fact that Canada’s newest and first Aboriginal Governor General, Mary Simon, a bilingual person speaking English and Inuktitut and born in Quebec, was unable to speak French. This raised hundreds of complaints from francophones despite Simon noting how as a child she was denied the chance to learn French.

Finally, even once one has attained bilingual status, ethical concerns remain. The federal government’s own website notes how tension and insecurities between second language speakers and native speakers can lead to exclusion. These insecurities have made it difficult to have a bilingual civil service, so why would we expect these factors not to be a problem if bilingualism were promoted more broadly?

This brings us back to the case of Michael Rousseau. Critics argue that since Air Canada, a former crown corporation, is the only national airline legally subject to the Official Languages Act and required to be headquartered in Montreal, it follows that the CEO should speak French. But while this is a reason for Air Canada to offer bilingual services, it is not an argument that everyone in the company should speak multiple languages. As Sabrina Maddeaux recently noted,

If no one at Air Canada headquarters spoke French, that’d be a problem to discuss, but that’s certainly not the case. In fact, the airline has a multi-million dollar internal official languages program and employees dedicated to any complaints related to the Official Languages Act. Functionally, this is a non-issue. Rather, it’s a PR problem and optics issue the government has no business sticking its nose in.

Instead, most of the arguments against Rousseau seem to dwell on the symbolism involved. Some argue that Rousseau didn’t show enough “humility,” while others argue that Air Canada is “not just any company” because of its status in Canadian culture. They insist that CEOs should set an example and that “Official languages obligations should be seen as a duty owed to the nation”: “If the CEO is not bilingual, why should a flight attendant?” Of course, the obvious answer is that a flight attendant directly offers the services mandated to be bilingual, while the CEO does not.

But even if it can’t be legally required for a CEO to speak French, should we regard the expectation as more of a social requirement? Should everyone in the company be bilingual? What about other national corporations and institutions? If the argument is that, either for symbolic reasons or because we actively wish to promote a bilingual society, you must be bilingual if you want to operate at the national level, then we cannot ignore the larger moral issues and potential inequities.

Moving from a model that mandates service in both languages to a model that promotes bilingualism as a social policy carries important ethical concerns. Determining who should be bilingual and which national roles should be bilingual is not something that should be handled by a mob of journalists and politicians based on ad-hoc reasoning about which roles rise to a level of sufficient importance. This isn’t an issue that affects only a single CEO who could easily afford French lessons; it could conceivably apply to any job field within federal jurisdiction. Such moves in official language policy have the potential to exclude many sections of Canadian society. Policies that could ruin whole careers or exacerbate social inequalities should be rigorously debated and voted on. Given the moral challenges of bilingualism, it is morally irresponsible for a government to proceed in such an ad-hoc and arbitrary way.

Death Row Inmates, Execution, and Choice

photograph of drug vials and vintage syringe

On October 28th, 2021, the state of Oklahoma executed John Marion Grant. This was the first execution the state had carried out for six years, after placing a moratorium on executions following a case where a prisoner was given the wrong drugs (and another prisoner narrowly escaped the same fate). The drug of choice in nearly all modern federal executions had, until that point, been sodium thiopental. But issues with batch contamination shut down the sole factory producing and supplying it to prisons. Additionally, shortly after discovering a therapeutic alternative to sodium thiopental — pentobarbital — the drug company’s CEO imposed distribution restrictions on the product, prohibiting prisons from buying it.

Since then, most states have lost access to their first and second execution drugs of choice, nearly bringing federal executions to a stop. In the last couple of years, states have managed to re-access both sodium thiopental and pentobarbital, but the humaneness — and, therefore, constitutionality — of their use remains a matter of dispute, with several lawsuits across many states protesting their use in federal executions. Prisoners and their lawyers frequently use these lawsuits to try to achieve stays of execution. After all, if the jury is literally still out on whether a certain drug is a cruel and unusual means of execution, that seems a good reason to delay its use. However, since Justice Alito’s 2014 Supreme Court opinion arguing that “because capital punishment is constitutional, there must be a constitutional means of carrying it out,” states have been forced to come up with some way to perform executions. Oklahoma devised a compromise: take all of the contested methods of execution, and let the prisoners choose their preferred method.

The prisoners could choose between several different methods: pentobarbital, sodium thiopental, non-FDA-approved compounded versions of either of those two drugs, firing squad, or a three-injection method that utilizes a benzodiazepine, a paralytic, and potassium chloride to end lives. But there was a deceptive aspect to the choice prisoners were given: several of the execution methods had active lawsuits against them. If a prisoner chose one of those drugs, their execution would be put on hold indefinitely, at least until the lawsuit was resolved.

But there were some prisoners who believed that selecting the method of their execution would be akin to participating in their own death (i.e., suicide). John Marion Grant was one of those prisoners.

Grant’s refusal to choose an execution method, on the grounds that doing so would violate his deeply-held belief that the act would be morally wrong, meant that he was, by default, given the three-injection execution – a method not without controversy, given that dosing for these drugs represents a sort of guessing game. That is: nobody really knows how much of each drug any particular inmate will require for complete sedation, so executioners tend to give prisoners massive amounts rather than risk not giving them enough. Grant’s execution did not, however, go as planned. The aftermath of the injections saw him gagging, convulsing, and vomiting for at least 12 minutes before he was officially declared dead. In an op-ed for The Atlantic, Elizabeth Bruenig characterized Oklahoma’s conduct in the following headline: “Oklahoma Tortured John Grant to Death Because He Wouldn’t Commit Suicide.”

But is this a fair characterization of Oklahoma’s law? Is allowing inmates to choose their preferred method of execution really on a par with forcing them to commit suicide? Initially, the answer seems to be no. Merely having some active role in one’s own execution is surely not sufficient to render one’s actions “suicidal.” As far as John Marion Grant knew, he was going to die no matter what. All the state was offering him was a chance to choose what he would experience in his final moments.

But rhetoric aside, we may still wonder whether having prisoners take this active role presents an ethical problem. Elizabeth Anderson, in her Tanner Lectures entitled “Private Government,” argues that there are many instances in which a choice only superficially increases someone’s autonomy. She uses the example of laws regarding marriage, specifically the changes in the law when divorce became legal. This newly granted “freedom” of entry into (and exit from) a marriage, which, on its surface, appeared to grant more autonomy to women, actually did the opposite. Because women still lost all property rights upon entering into a marriage contract with their husband, choosing to divorce would, for most women, result in almost certain destitution. It was an “option” that was not really an option at all. Such a choice did little to improve the overall situation of married women. Anderson argues that “Consent to an option within a set cannot justify the option set itself.” That is, a woman who consents to stay in the marriage, because her other option is homelessness, does not, by that acquiescence, justify the situation. Similarly, one might argue that the Oklahoma law gives only a superficial appearance of prisoner choice and autonomy, and does nothing to make the bare set of execution options permissible.

From a consequentialist perspective, however, an argument could be made that allowing prisoners to choose their method of execution maximizes net good. One may argue that this choice improves the lives of prisoners by alleviating some anxiety they may have otherwise experienced in the lead-up to execution, and that it does this without making anyone else worse-off. For example, if a prisoner had a particular fear of sodium thiopental, they may be relieved to have the option to avoid the drug entirely. Of course, this net gain in utility is not guaranteed — choosing their means of death could exacerbate prisoners’ anxieties, allowing them to imagine their death in vivid detail in the days before their execution. It may also, as in the case of John Marion Grant, weigh on their conscience as a morally impermissible act of self-harm.

From a Kantian perspective, there may be entirely different reasons to avoid offering this choice to inmates. Kant’s theory of punishment is commonly held to involve a view of justice as primarily retributive — that is, justice is realized when people are rewarded for doing good and punished for doing wrong. Kantian retributivists like the philosopher Igor Primoratz hold that the punishment has to fit the crime in magnitude. A crime of murder, therefore, requires for justice that the murderer’s life be taken. The longer we wait to end the life of the murderer, the longer justice waits to be served.

One can, then, imagine a retributivist objection to the Oklahoma law on the grounds that it sometimes results in unnecessary stays of execution. Additionally, one could argue that granting this autonomy of choice to people convicted of brutally taking innocent lives renders their punishment too light to actually serve justice. After all, the murder victims certainly were not allowed to choose their own means of death.

And so it seems that, from a range of normative perspectives, the Oklahoma law regarding choice of means of execution appears morally questionable, at best. We can hope that the law will be replaced with one that is more just — whatever that may look like.

The Poland-Belarus Border Crisis: The Ethics of Blackmail

photograph of barbed wire fence at border

“A company of men in dark uniforms and balaclavas, all carrying clubs. They are battering a group of people, repeatedly clubbing them on their arms, legs and backs. They push them into a river that marks the boundary of the European Union. ‘Go,’ they yell. ‘Go.’”

So begins Kenan Malik’s report, published on Sunday, of the ongoing crisis on the Polish-Belarusian border. Let me briefly explain how we got here.

Last year, the president (or dictator, depending on who you ask) of Belarus, Alexander Lukashenko, jailed his political opposition along with tens of thousands of those protesting his controversial and contested re-election. More recently, Roman Protasevich, an exiled critic of Lukashenko, was detained after his flight was “hijacked” and forced to land in Minsk. The European Union has responded to such provocations with sanctions on Lukashenko’s regime.

Now, thousands of migrants are attempting to cross from Belarus into Poland and thereby into the EU. In his latest bid for revenge on the EU for its sanctions, Belarus’s Lukashenko has reportedly sold “tourist visas” and scheduled sold-out flights to Belarus from Turkey, Dubai, and elsewhere in the Middle East. The U.S. and EU delegations to the U.N. Security Council claim the crisis has been “orchestrated for political purposes.” France has condemned these actions as “migrant trafficking.” Meanwhile, Belarusian border guards have started to force migrants across the border.

With the Belarusian authorities refusing to let the migrants leave a forest on the border, nighttime temperatures dropping below freezing, and the first death being reported, there is a clear humanitarian case for letting the migrants through. But to do so would be to hand Lukashenko a victory, to reward the strategy of using human lives as political pawns, and to further politically destabilize the EU (which is still widely seen as having lost control over migration flows). The tension is, at heart, a moral one: To achieve a morally important humanitarian end, should you ever give in to blackmail?

The immoral actions and threats of others can sometimes force us to make extremely difficult moral choices. The philosopher Bernard Williams presents the following case. You’re a botanist who has ended up in a small South American town in a politically unstable region. Tied up against the trees are twenty locals. An army captain in charge tells you these locals were protesting against the government, and he wants to execute them as a warning to others in the town. Since you are an honored citizen from a foreign land, the captain gives you an offer. You can shoot one of the locals yourself, in which case the others will be set free. Or you can refuse, keep your hands clean, and the captain will kill all twenty. What do you do?

Moral blackmail of this kind is not merely a theoretical problem. In 2009, four Europeans, Edwin Dyer, Gabriella Braco Greiner, Werner Greiner, and Mariane Petzold, were seized by members of al-Qaeda in North Africa. Ransom demands were made for each of the four. The Greiners, who are Swiss, survived, as did Petzold, who is German. Dyer, a British citizen, was killed. The Swiss government paid $3.2m for the lives of its two citizens. The German government has never revealed whether it paid Petzold’s ransom, though security experts believe it did. Only the U.K. government refused to pay, costing Dyer his life and his family their loved one.

Most European countries pay terrorist ransoms, though few admit it publicly and most find an indirect method to pay to keep the practice theoretically deniable. Between 2008 and 2014, it was estimated that (primarily) European governments paid $125m in ransoms to terrorist groups. The U.S. and U.K., by contrast, have a strict policy of non-cooperation regarding ransoms. The British government fact sheet on the topic states that “The UK’s position on payment of terrorist ransoms is very clear: we do not pay. Payment of terrorist ransoms is illegal under the Terrorism Act.” The U.S. journalist James Foley is perhaps the most famous victim of this uncompromising policy.

Most European countries clearly think saving the lives of their citizens is worth the moral cost of rewarding terrorists. Many of us would think the same way if it were one of our own family who was kidnapped and we had the financial resources to pay for their release. Some principles seem worth bending if the stakes are high enough.

The U.K. government, on the other hand, justifies its opposing stance “on the basis that providing money or property to a terrorist group fuels terrorist activity; and encourages further kidnaps.” There are two interesting features of this argument worth mentioning. The first is that the argument is consequentialist; it is an argument based on what actions will produce better outcomes. The argument implies that greater harm will result from paying (in the long term) than from refusing, and that it follows we ought not to pay. The second interesting point about this argument is that it relies on an empirical premise: that paying ransoms encourages further kidnaps.

It certainly seems logical that paying ransoms would encourage more kidnapping and terrorist activity. Certainly, some have argued this is the case. David S. Cohen, former Treasury Department under secretary for terrorism and financial intelligence, claims that “Kidnapping for ransom has become today’s most significant source of terrorist financing. Each transaction encourages another transaction.” Of course, reality is messier than this. Do terrorists know the policies of Western governments before they kidnap? Do they know the citizenship of those they are kidnapping before they kidnap them? At least in some cases, it’s plausible they won’t. Perhaps the U.S. and U.K. are morally posturing at the cost of their citizens’ lives. (Interestingly, however, a Wikileaks cable revealed that at least one hostage broker, Abdousalam Ag Assalat, had “specified that the group was not interested in American hostages, presumably because USG [U.S. government] does not make ransom payments.”)

Applying this kind of consequentialist argument to the current border crisis, the relevant empirical question seems to be: would giving in to Lukashenko’s blackmail (and allowing the migrants across the border for obvious humanitarian reasons) encourage future “orchestrated” humanitarian crises, resulting in more total human suffering? If the answer to this question is “yes,” then Poland could be morally justified in refusing to give in to Lukashenko’s moral blackmail. The answer to this complex empirical question is far from obvious. As such, as well as facing a complex political problem, Poland’s leaders face a difficult moral dilemma.

Aaron Rodgers, “Critical Thinking,” and Intellectual Humility

photograph of Aaron Rodgers in football uniform with helmet

NFL quarterback Aaron Rodgers made headlines recently when he was sidelined for having contracted COVID-19 and it became public knowledge that he had not been vaccinated. While Rodgers is far from the only unvaccinated NFL player, controversy ensued when it came out that he had misled reporters and the public into thinking that he had, in fact, been vaccinated. Rodgers stated that he had been “immunized,” something which many took to mean that he had been vaccinated, but which really meant that Rodgers had sought alternative treatments, including the thoroughly-debunked ivermectin, and defied the advice of trusted experts in favor of that of Joe Rogan.

While there is plenty to be worried about when it comes to Rodgers’ situation – he is actively spreading misinformation about the safety of vaccines and the efficacy of alternative treatments, he is a public figure and role model and thus has a greater responsibility that comes with having greater influence, etc. – something stood out when he was explaining why he had chosen to mislead reporters about his vaccination status. While Rodgers claimed that he was worried about the repercussions of the “woke mob” and “cancel culture,” he also justified his actions by stating that, “I’m not, you know, some sort of anti-vax flat-earther. I am somebody who is a critical thinker.”

In labeling himself a “critical thinker,” Rodgers and those like him are attempting to avoid being targets of criticism, while at the same time presenting themselves as rational inquirers who have happened to have reached conclusions that diverge from the scientific consensus. Given that rational inquiry and independent thinking seem like generally good things, self-proclaimed critical thinkers might then feel persecuted for having their views rejected and mocked.

After all, you’re supposed to think critically! Shouldn’t we encourage critical thinking? And doesn’t the “woke mob’s” refusal to even engage with divergent views like Rodgers’ represent some kind of failing as rational thinkers and inquirers?

In thinking about these questions, we need to get clearer on what it means to be a “critical thinker.” When we think about being a “critical thinker,” we might also think about being intellectually virtuous: possessing character traits or dispositions that lead someone to effectively pursue the truth, acquire knowledge, and gain understanding. In other words, just as there are traits that are typically representative of morally admirable people – for example, being generous, kind, empathetic, etc. – so, too, are there traits that are representative of being intellectually admirable. These might include traits like being open-minded, curious, and honest, among others. There’s no definitive list of all the virtues out there, but a good place to start when thinking about virtues is to think about smart people we really admire, and to see what kinds of traits they possess.

One such trait that we might associate with our intellectual idols is being a critical thinker. Indeed, some have come out in support of Rodgers, and have expressed admiration of the way he has inquired into issues surrounding COVID-19 vaccines. However, many of those using the term seem to be conflating two senses of “critical thinker,” one which is intellectually virtuous and worthy of guiding our inquiries, and one which is not.

The sense in which Rodgers, and many of those he has associated with, use the term seems to be one in which “critical thinking” means thinking independently: one pursues the truth on one’s own (or else in conjunction with a small group of other “critical thinkers”), often in such a way as to challenge a dominant view. When thinking about which intellectual traits are good ones, these kinds of critical thinkers might look to admirable intellectual figures throughout history, perhaps ones who made significant scientific progress by rejecting the intellectual authorities of their day. In this sense, “critical thinking” is really a kind of contrarian thinking, insofar as one looks primarily to criticize consensus views.

The problem with being a critical thinker of this variety, however, is that it can come at the expense of other intellectual virtues. For instance, one important intellectual virtue is that of humility: one needs to be able to recognize what one knows and is capable of finding out, and not try to tackle problems one does not have the training or capacity to meaningfully contribute to. While it is, of course, worthwhile to learn new things, part of being intellectually humble means recognizing when one needs to listen to others.

For example, I have a passing interest in cosmology, but have no formal training in the physics of black holes. It would not, then, be intellectually humble of me to challenge trained scientists on their views just because they don’t align with my pet theories: it wouldn’t help make any progress, and I wouldn’t be any closer to gaining any new knowledge or understanding. What I should do in such a case is listen and learn.

There are certain kinds of critical thinking, then, which may very well be bad for one’s intellectual character. This is not to say that we always need to simply accept what we are told by people who apparently know better. Rather, it means that we need to be able to evaluate the areas in which we could help make a contribution and the ones in which we simply need to listen to what people who know better are saying. It is not always easy to do this. Regardless, while those like Rodgers might want to distance themselves from conspiracy theories and claim that their dissent from the recommendations of doctors, scientists, and the NFL is the result of rational inquiry, the kind of critical thinking they are engaged in is not the kind of intellectual trait that one should admire.

Civility, Testimonial Injustice, and Commitment to Philosophy

black-and-white photograph of man and woman yelling into megaphones

The American people are extremely politically polarized. Polling shows that this divide is only increasing, particularly on issues of race and gender. Recent revelations that have come out as a result of whistleblowing about the practices of Facebook confirm what many of us probably already expected based on our own personal experiences — social media makes these chasms even wider by contributing to the spread of false information and creating echo chambers for groups of like-minded extremists to speak to one another at the exclusion of any dissenting voices or disconfirming evidence.

The state of politics today has many people longing for an imaginary past in which those who disagreed did so respectfully. In this utopia, we focus exclusively on the merits of arguments (reasoned cases, that is, not shouting matches) rather than simply attacking people. We recognize that dissent is healthy, and we appreciate the insight of John Stuart Mill in On Liberty when he said,

the peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error. 

Here, Mill illustrates a certain kind of learning process — one that is employed by Socrates in his conversations with the citizens of Athens. To understand which conclusions we ought to adopt, we ought to listen to the arguments that people make. If we identify an error in reasoning, we can calmly point it out and everyone involved will be the better for it, as it might bring us all that much closer to truth. Perhaps, like Socrates, we will find the finer points of our arguments met, even by the staunchest dissenter from our position, with a “that is undeniable” or a “that is perfectly true” for good measure.

So, is it “philosophy to the rescue!”? One way of responding to our current predicament is to insist that everyone needs a strong education in logic and critical thinking. People need to develop the ability not only to recognize the commission of a fallacy when they see it, but also to frequently (and in good faith) reflect on their own body of beliefs and attitudes. We need to collectively get better at checking for cognitive bias and errors in reasoning in both ourselves and others.

On the other hand, we might ask ourselves whether the above account of Plato and Mill is an accurate description of the circumstances in which we are likely to find ourselves. A more compelling insight might be one from the 18th-century philosopher David Hume, who famously said, “reason is, and ought only to be the slave of the passions.” Hume argues that reason alone does not and cannot motivate us to act — it is our passions that do that. If this is the case, then if we want to arrive at a common understanding or come together in motivation toward a common cause, we need to understand the complexities of one another’s psychologies; we need to recognize the common forces that might potentially move us to action. We might have arguments for our positions, but is it really those arguments that motivate us to act in the ways that we do?

Moreover, to insist that what’s needed now in contemporary culture is more civil discourse may be to fail to recognize certain obvious facts about the way that the world works. In an ideal world, it might be the case that we could all offer arguments, and expect to be heard and understood. However, the non-ideal world in which we find ourselves is a world characterized by power dynamics and replete with testimonial injustice. Groups with power are more likely to be listened to and believed than groups without it. The claims of the rich, for instance, are often given a considerably larger platform than the claims of the poor. What’s more, those on the desirable side of the power dynamic are more likely to describe themselves and to be described by others as “rational.” Often, these descriptions confuse the category of the “rational” with the category of “positions held by the powerful.”

Philosophers from antiquity have identified the capacity to reason as the essence of a human being, but, just as reliably, the concept of rationality has been weaponized to create “us” and “them” groups which are subsequently called upon to insist on “rights for me but not for thee.” Consider, for instance, the Cartesian philosopher Nicolas Malebranche’s description of the way women’s minds work:

…normally they are incapable of penetrating to truths that are slightly difficult to discover. Everything abstract is incomprehensible to them. They cannot use their imagination for working out tangled and complex questions. They consider only the surface of things, and their imagination has insufficient strength and insight to pierce it to the heart, comparing all the parts, without being distracted. A trifle is enough to distract them, the slightest cry frightens them, the least motion fascinates them. Finally, the style and not the reality of things suffices to occupy their minds to capacity; because insignificant things produce great motions in the delicate fibers of their brains, and these things necessarily excite great and vivid feelings in their souls, completely occupying it.

Indeed, many figures in the history of philosophy who argue that rationality is the essential human function are also quick to insist that not all human beings participate in this essence. For Aristotle, for example, groups that are not capable of engaging in the kinds of practical deliberations requisite for virtue, namely women and “natural slaves,” are the kinds of beings that are rightly ruled over.

In light of the weaponized history of the very concept of rationality, it is no surprise that there might be barriers to genuine rational discourse and debate — people may not recognize the biases they bring to the discussion and they may not be self-reflective enough to understand that there may be voices to which they are less likely to listen or to treat as credible. If this is the case, we run into another problem for civil discourse. When people have been the recipients of testimonial injustice often enough, they may no longer be calm about it. They may be angry, and that anger may be justified. Demands, then, for “rationality” may just be tone-policing by the group to which people have always listened.

What lessons should lovers of philosophy learn from all of this? Evaluation of arguments is, after all, what we do. Should these considerations encourage us to give up our most deeply-held convictions as philosophers? Probably not. But it should prompt us to be more reflective about the broader social and political landscapes in which we make and, perhaps more importantly, listen to arguments.

Education and Parental Control

photograph of parent walking son to schoolbus

In the United States, parents have a curious sort of authority over the education of their children. On the one hand, parents have quite extensive legal authority over whether to send their children to public schools. On the other hand, parents have extremely limited legal authority over what happens within a public school. For an extremely accessible introduction to the state of current law, see this overview, written by the conservative lawyer David French.

As a parent, then, the primary way you can direct what your child is taught is through the choice of school. Your three primary options are:

  1. Send your child to the public school.
  2. Send your child to a private school.
  3. Homeschool your child.

But there is an important complication. Options 2 and 3 are not widely available to all parents. Private school is extremely expensive, and homeschooling, generally, requires a parent be able to stay home from work. Both options, then, are restricted to the relatively privileged.

Should we care? Well, that depends on whether we think it’s important that parents be able to exercise substantial legal control over their children’s education. If we think that it is important, then it’s a serious problem that that control is restricted to parents who are wealthy or well-off.

So is it important? This is a surprisingly difficult question to answer. But before we look at possible answers, let’s try to clarify the question.

Clarifying the Question

It is easy to confuse the object-level question “should a child be taught x?” with the meta-level question “who should get to decide whether a child will be taught x?”.

Consider two cases.

The first case occurs in present-day Virginia. A school includes in its English curriculum Toni Morrison’s Beloved. A parent opposes their child being required to read sexually explicit material.

The second case occurs in the Jim Crow South. A school includes in its biology curriculum a pseudo-scientific article claiming that heritable genetic differences explain why the Black population is poorer than the white population in the United States. A parent opposes their child being required to read racist propaganda.

The object-level question is: should the child have to read the material? And here the object-level question is easy to answer. In the first case, yes. It is totally reasonable to require that high school students read Beloved. In the second case, no. It is totally unreasonable to teach students racist propaganda as fact.

But what we face is not an object-level question: what does an ideal curriculum look like? Rather, we face a meta-level question: when a parent and school disagree, who should get final say over what the child is taught?

If you answer “the school” in the first case and “the parent” in the second, you are not actually answering the meta-level question. Rather, what you are saying is that you yourself should just be the final arbiter in all such conflicts. Or perhaps that ‘whoever is right’ should get the final say. But unfortunately, neither of those is a principle that we are able to implement.

So we are still left with a meta-level question. This is not the question of which answer is right, but of what procedure we should use to reach an answer. In this case, the meta-level question is: should parents be able to overrule a school’s curriculum when they have moral objections to the content of what is taught?

This is not a choice between authority and no authority. Rather, it is a question of which authority, where the two main options are parents or voters (where voters act indirectly through electing the school board or members of government).

This meta-level question is what we need to answer. I don’t have my own answer, but here are three approaches I find plausible.

Approach 1: Whatever is Most Likely to Help Children

Tucker Sechrest, in a separate Prindle Post, suggests that we should generally decide the meta-level question against parents. He suggests that “the belief that parents are best-positioned to judge the educational merits of scholastic programs disrespects the expertise of educators.”

He reaches this position because he thinks that the question of who should get to decide for children ultimately comes down to who is most likely to promote the interests of the child.

Parents merely act as stewards of a child’s future until that youth herself becomes capable of exercising an autonomous will. While we often defer to parents because we think that mother or father might know best, this should not be confused with universal permission; there are very real limits to freedom parents can exercise in determining what’s best for their child. … The state is obligated to protect vulnerable populations who cannot defend (or even express) their interests. It must oppose parental choices which threaten to constrain children’s future options or inhibit their growth and development.

This still leaves an important role for parents. School boards cannot micromanage policy for each student since they don’t know the particular details of each student’s life. Parents do know those details, and thus parents are often better positioned to decide questions like:

  • Should a child skip second grade?
  • What extracurriculars should the child do?
  • Would this child thrive in a Montessori environment?

However, on this approach, parents should only get to make decisions about the sorts of questions where they are more likely to get the right answer. Because parents are not education experts, they should not decide what students read in an English or biology class.

Sure, sometimes a parent will be right and the school system wrong (as in the imagined racist pseudo-science case), but since in most cases of conflict parents will be wrong, Tucker thinks we should decide the meta-level question in favor of schools.

This first approach has a lot going for it. But it is important to recognize that it likely entails a massive increase in the power of government over the raising of children.

Tucker illustrates his approach with examples like these:

We limit parental authority to refuse medical treatment for their child, school children are required to have MMR vaccinations, and most states set homeschooling requirements regarding assessment, parents’ qualifications, and curriculum approval.

But these are generally minimal requirements. Parents cannot refuse life-saving medication for their children, but they need not follow doctors’ other recommendations (even though doctors surely make more reliable medical decisions than parents do).

Similarly, do I really need to believe that parents are more likely to discern the true religion in order to think that they should be able to decide on the religious education of their children? In Medieval Christendom, states did not require Jewish parents to baptize their children Catholic. This was not because the state thought that individual parents were more likely than the state to be correct about religion. Rather, it was because people thought that parents had the right to set the religious education of their children.

Medieval law did not say parents could do whatever they wished to their children. Abandoning or killing a child was still murder. But these were limits on the natural rights of parents, not the result of thinking that the most qualified person always gets final say.

Similarly, our current law does not say that parents can only make decisions that they are more likely to get right. Rather, parents have authority over their children about most questions but there are limits to how far that authority goes. These limits are not placed where we think parents become less reliable than the state, but rather where we think parental mistakes would reach the point of child abuse (denying lifesaving treatment, not providing a minimally adequate education, etc.).

Approach 2: Family and Natural Rights

Our first approach treated the family as a construct, a useful organizational scheme for ensuring that children are treated well. Because of that, the rights of parents extend only as far as those rights are useful to the maximal well-being of children.

Another answer holds that parents have natural rights over their children. These rights have limits (such as concerning life-saving treatment), but parents have a default claim to raise their children as they think best.

The idea that parents have some natural rights over children is an old one. A version of this view was held by both John Locke and by Immanuel Kant. My favorite formulation, however, comes in Catholic Social Theology.

In the Charter of the Rights of the Family, the Catholic Church claims that “the family, a natural society, exists prior to the State or any other community, and possesses inherent rights which are inalienable.” Parental authority is not a construction for the public good, but is a natural institution that sets limits on the authority of the state.

Given such a view, it is unsurprising that the Charter goes on to argue that parents should have extensive control over education:

Since they have conferred life on their children, parents have the original, primary and inalienable right to educate them; hence they must be acknowledged as the first and foremost educators of their children.

… Parents have the right to freely choose schools or other means necessary to educate their children in keeping with their convictions. … Parents have the right to ensure that their children are not compelled to attend classes which are not in agreement with their own moral and religious convictions. In particular, sex education is a basic right of the parents and must always be carried out under their close supervision, whether at home or in educational centers chosen and controlled by them.

Of course, many people are skeptical of these sorts of natural rights. And it’s perhaps unsurprising that their best articulation comes from the Catholic Church; natural rights fit more comfortably within a religious moral universe.

So let’s consider one last secular perspective that might also answer the meta-level question in favor of parents.

Approach 3: Pluralism and Liberal Limits on the State

In political philosophy, there is an old conflict between democracy and liberalism. Democracy is simply rule by the people, but that can easily be a tyrannical rule. If we all vote for a state religion, that vote is democratic but it is not liberal. It is illiberal for the majority to enforce their religious values on the minority, even if doing so is dictated by the popular will.

Part of what we want in a liberal order, then, is space for moral and religious pluralism. We don’t want the government to impose a set of values on people; rather, we want the government to construct a fair, shared arena in which people are able to live their own lives in light of their own values.

Within this perspective, however, anytime the government takes on the role of moral education, we have reason to worry. It is one thing for a government to try to educate the citizenry in what it believes to be the right values, but it is another thing entirely if there is no way to opt out of that education. The moment that moral education becomes compulsory, liberals worry that the state is well on its way to compulsory propaganda.

Even if it turns out that the government’s values are right, we don’t necessarily want the government to be able to use coercive power to impose those values on future generations. Doing so risks destroying the very pluralism and disagreement that secures many of the benefits of a political order.

Of course it might also be bad when parents impose mistaken values on children; but at least different parents impose different values. The government poses a unique danger of imposing a homogenized moral education through the power of legal coercion.

On the liberal view, the meta-level question is decided in favor of parents, not because parents are more likely to have the correct values, nor because parents have a natural right over children, but because broadly distributing educational authority provides a check against the state.

What About our Problem?

If we accept an answer to the meta-level question that says parental authority is important, then as a society we face a real moral problem.

Right now, parental authority over education is largely restricted to those who are well-off. And thus an important social good (maybe a natural right, maybe a check against the state) is denied to parents without many resources.

There are two possible solutions to this problem.

First, you could change the legal status quo. Right now parents have authority over whether their kids go to public school, but very little authority within public school. We could change that by creating more parental transparency over school curricula and giving parents the power to opt their children out of some portions.

This, however, could be logistically infeasible.

Second, you might keep the legal status quo but empower parents with other schooling options. For example, you might increase the use of charter schools and educational vouchers. If it is an important public good that parents can direct the religious education of their children, then perhaps parents should be able to use state resources to send their children to religious private or charter schools.

(There are serious worries about educational vouchers, however, though working through them would require a much longer piece.)


Right now, parents have power over their children’s education, but most of that power is reserved to the privileged. If that parental power is important, then the current system is unjust. But it’s unclear what the best solution moving forward might be.

Parents’ Rights and Public Education

There’s been no shortage of post-mortems detailing Terry McAuliffe’s defeat at the hands of Glenn Youngkin in Tuesday night’s gubernatorial contest. Most accounts target one issue in particular: education. They point to school lockdowns and curriculum complaints as having sealed McAuliffe’s political fate. More specifically, it was McAuliffe’s declaration that “I don’t think parents should be telling schools what they should teach” that was responsible for driving away white, suburban moms and flipping the state from blue to red. In the end, the governor’s race in Virginia was decided by a single concern: parents’ rights.

However true this tale may or may not be, it seems vital to investigate our convictions regarding paternalism, autonomy, and the conflict between state interests and parents’ rights. How far should these rights extend? What function does the state serve in providing public education? And who should get the final say over school curricula? While the story’s already been written, we should take a moment to consider whether McAuliffe’s remark really was the obvious gaffe it’s been made out to be.

Certainly there’s always been a presumption in favor of familial discretion; it’s commonly held that households should be free to handle their own affairs. Consider, for example, Wisconsin v. Yoder. State law mandated that all children attend public school until age 16. But three different Amish families challenged the state’s right to compel attendance. They argued that compulsory education precluded their children from meaningful participation in their religious faith. High school instruction only interfered with these young adults’ religious development and integration into communal culture. Ultimately, exposure to the alternative values and worldviews preached in secondary school constituted a threat to the Amish way of life. Whatever worldly knowledge they might be taught at school paled in comparison to the vocational training they would receive at home.

In a unanimous decision (7-0), the Supreme Court found that these families’ right to freedom of religion outweighed the state’s interest in seeing their children educated. While “some degree of education is necessary to prepare citizens to participate effectively and intelligently in our open political system,” that imperative cannot justify trampling on other constitutional rights and liberties. This is true especially when a different, but commensurate, education remains on offer. As Chief Justice Burger explained,

The State’s claim that it is empowered, as parens patriae (parent of the people), to extend the benefit of secondary education to children regardless of the wishes of their parents cannot be sustained […], for the Amish have introduced convincing evidence that […] forgoing one or two additional years of compulsory education will not impair the physical or mental health of the child, or result in an inability to be self-supporting or to discharge the duties and responsibilities of citizenship, or in any other way materially detract from the welfare of society.

On its face, the court’s decision in Yoder might seem to provide absolute license for parents to dictate all aspects of their child’s education. Contrary to McAuliffe’s insistence that parents butt out, Yoder seems to suggest that the buck stops with them.

In reality, however, the ruling is much more limited than it might first appear. First, it only applies to cases where public education runs up against very specific First Amendment religious protections. Second, much of the ruling hinges on the Amish’s unique way of life. As Burger notes,

It is one thing to say that compulsory education for a year or two beyond the eighth grade may be necessary when its goal is the preparation of the child for life in modern society as the majority live, but it is quite another if the goal of education be viewed as the preparation of the child for life in the separated agrarian community that is the keystone of the Amish faith.

Given the unique (and especially isolated) way of life the Amish practice, it’s easier to explain how these students’ withdrawal from public school wouldn’t “materially detract from the welfare of society.”

Still, we shouldn’t make assumptions about the inevitable shape a developing child’s life will take. Justice White was quick to point out that while it’s more than likely that an Amish child “will wish to continue living the rural life of their parents,” others “may wish to become nuclear physicists, ballet dancers, computer programmers, or historians, and for these occupations, formal training will be necessary.” As such, the state has a crucial role to play in helping “prepare them for the lifestyle that they may later choose, or at least to provide them with an option other than the life they have led in the past.”

But if this is so central to the purpose of public schooling, why let these students opt out? The court decided that the difference between an eighth grade education and a tenth grade education was so slight that the state couldn’t justify interfering with the families’ (but, more importantly, the children’s) religiously-informed convictions.

This finding, then, is much more limited than what parents’ rights advocates would have us believe. There is no broad license granted. There exists no basis for parents’ expansive and inviolable rights. There is no precedent in favor of parents picking and choosing which educational line items to approve and which to discard. Growth and development are stunted in insular settings; learning is about confronting the unfamiliar. Our commitment to consumer choice and rugged individualism blinds us to the role the village plays in raising our child, but that doesn’t make the proverb any less true.

Apart from the obvious practical problems with imbuing every parent with a veto right over school curricula, the belief that parents are best-positioned to judge the educational merits of scholastic programs disrespects the expertise of educators. There’s little reason to think that parents understand the intellectual and psychological needs of students better than trained teachers do.

Ultimately, the battle cry of “parents’ rights!” fails to appreciate the person whose interests are actually at stake in these confrontations: children. Parents merely act as stewards of a child’s future until that youth herself becomes capable of exercising an autonomous will. While we often defer to parents because we think that mother or father might know best, this should not be confused with universal permission; there are very real limits to freedom parents can exercise in determining what’s best for their child. Despite often acting as guardians of their child’s individual interests, there are all sorts of issues where we do not grant parents final say. We limit parental authority to refuse medical treatment for their child, school children are required to have MMR vaccinations, and most states set homeschooling requirements regarding assessment, parents’ qualifications, and curriculum approval. Why? Because the state is obligated to protect vulnerable populations who cannot defend (or even express) their interests. It must oppose parental choices which threaten to constrain children’s future options or inhibit their growth and development. State intervention isn’t about reducing parents’ freedom; it’s about securing the conditions of the child’s future freedom. (Consider the battle being waged over sex education in schools.)

In the end, we must recognize that public schools don’t operate to serve parents’ whims; they exist to ensure that children have the necessary tools to develop into fully autonomous adults. Certainly, parents enjoy the ability to impact their child’s education through electing school board representatives, voicing their opinions, and supplementing their child’s education, but they don’t have the right to demand that school curricula accommodate their personal worldviews and private desires. While there are legitimate causes and complaints, a ban on Toni Morrison’s Beloved cannot qualify.

COP26: What’s the Point?

image of Sisyphus rolling boulder uphill

The 2021 United Nations Climate Change Conference is taking place in Glasgow, Scotland. COP26 – as it’s otherwise known – has been touted by many as one of the last chances, if not the last chance, to avert the existential threat brought on by man-made climate change, biodiversity collapse, and deforestation.

In an impassioned speech, the naturalist and broadcaster Sir David Attenborough urged delegates to set aside their differences, stop chasing short-term gains, and see the bigger picture. He highlighted that actions, not promises, are necessary to reduce carbon in the atmosphere and save the global ecosystem alongside countless human and non-human lives. Sir David stressed that when discussing the people impacted by climate change, we no longer think about those yet to be born. Indeed, gone are the days when we talk about our grandchildren or great-grandchildren feeling climate change’s impacts. Instead, the effects are being felt by people alive today, and these impacts will affect the next generation in ways almost unthinkable.

Despite the seriousness of the topic (and indeed, the task at hand), Attenborough struck a hopeful tone, concluding:

If working apart we are a force powerful enough to destabilize our planet, surely working together, we are powerful enough to save it. In my lifetime, I’ve witnessed a terrible decline. In yours, you could, and should, witness a wonderful recovery. That desperate hope, ladies and gentlemen, delegates, excellencies, is why the world is looking to you and why you are here.

COP26 may be the turning point that so many of us hope it will be. While the effects of the increased levels of carbon already in the atmosphere will be felt for decades to come, some claim it is not too late to reverse course in the long term. Apocalyptic climate change might be avoided if every industry, country, company, and conglomerate bands together and acts not in self-interest but in the interest of others. Technologies and policies need to be created, not to improve things now, but to benefit the planet and those living on it for decades or even centuries to come.

But, history’s shown that humanity’s terrible at thinking long-term and long-distance. After all, we’ve suspected that climate change would cause global devastation for over a century. Yet, when called upon to act, we’ve collectively shrugged and said it was someone else’s problem – that someone else being future generations. As British Prime Minister Boris Johnson noted, “[h]umanity has long since run down the clock on climate change. It is one minute to midnight on that doomsday clock, and we need to act now.” If our past is any indication, while we may need to act now, we probably won’t.

So, one has to ask: what is the point of COP26? If the chances of us doing anything to avoid sleepwalking (or, more accurately, apathetically stumbling while distracted by our own brilliance) into a man-made climate oblivion are minute, why should we expend time, effort, and collateral worrying about the inevitable? How can we carry on knowing it’s all going to come crashing down and that our efforts are ultimately pointless?

The latter question was a central focus for the French philosopher Albert Camus, whose philosophical essays and fictional works both address life’s meaninglessness, or, more accurately, how to grasp meaning when all our worldly achievements amount to nothing. As he illustrates in the opening line of his work The Myth of Sisyphus, “[t]here is but one truly serious philosophical problem and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.” His reasoning is simple: everything rests upon answering this question. Without knowing why life matters, we can’t be certain anything we do in our lives has a point, including morality, knowledge, passion, justice, etc. Without a clear answer to why life matters, we are left with no reason not to end it all right now. As a quote often misattributed to Camus captures nicely, “[s]hould I kill myself, or have a cup of coffee?” Without meaning, both are valid options.

Unlike other philosophers, Camus thought the correct response to existence’s triviality was not ignorance, evasion, or despair, but embrace. Camus proposes we should accept that our efforts will amount to naught and that our lives and accomplishments will quickly be forgotten. These things are simply part and parcel of living in a universe lacking a benevolent god’s preordained plan. It is just the way things are, and to try to do anything other than embrace these facts is to deny life itself. To illustrate this, Camus draws upon the titular myth of Sisyphus.

In most versions, Sisyphus is a figure of ancient Greek mythology punished by the gods for twice tricking Thanatos, the literal embodiment of death. By way of punishment, Sisyphus is forced, for all eternity, to roll a boulder up a hill all day, only to have it roll back down whenever he nears the top. He never sees any progress or benefit from his actions: a purposeless task occupying a pointless existence.

Like us, Sisyphus is trapped by circumstances beyond his control and forced to perform a meaningless task. The difference is that our pointless task is life itself. Much like Sisyphus, when we die, nothing ultimately changes. No universal plan will have been advanced, and none of our impacts on the earth will last for more than a couple of millennia.

But, Camus sees Sisyphus not as a depressive cautionary tale but as an inspiration. For him, despite being faced with an existence devoid of a grand meaning, we should simply do what we can. We must triumph over the hopelessness of life by seeking out meaning where we can find it and acknowledge that while this is absurd, so is the universe. When concluding The Myth of Sisyphus, Camus writes, “[o]ne must imagine Sisyphus happy.”

Yes, acknowledging this leads one to oblivion’s edge, but it also forces one to engage with life more fully. Embracing existence’s absurdity gives one license to disregard the naysayers, enjoy life’s pleasures (something Camus did with enthusiasm), and “to live and to create, in the very midst of the desert.” Thus, Camus provides a road map for dealing with life’s meaninglessness.

This approach to existence provides us with more than mere despair when considering the likelihood of addressing climate change. It might be the case that our efforts to address the impending man-made climate crisis fail; that our nature as finite beings, devoid of divine guidance, prevents us from looking beyond our shortsightedness. But this doesn’t mean that the task of trying to make a difference is itself meaningless. On the contrary, this task can be given meaning if we imbue it with such. Fighting against the inevitable may seem absurd, but so is everything else we do. Indeed, the universe itself, with its gravity, black holes, hummingbirds, earthquakes, x-men, and suntan lotion, is nothing but absurdity. So, why should we think that our lives should be any different? Why should the absurd task of saving the world from climate change be any more outrageous than everything else we undertake?

Amazon’s Influence on Literature

photograph of Amazon package with Smile logo upside down

According to Amazon lore, Jeff Bezos abandoned a cushy hedge fund job after reading The Remains of the Day, Kazuo Ishiguro’s melancholy tale of wasted energy and missed opportunities. The young entrepreneur was so moved by the novel that he committed to a life of “regret minimization” and struck out on his own to start a small online bookstore. Bezos’ then-wife claims that he didn’t pick up Ishiguro until after he started Amazon, but it still makes for a potent founding myth. Though Amazon’s virtual marketplace now offers far more than books, the nascent mega-corporation was influenced by literature on a fundamental level, and perhaps, for better or for worse, it has come to influence literature in turn.

Mark McGurl, a literary critic who teaches at Stanford, traces the influence of Amazon on the fiction marketplace in his new book, Everything and Less: The Novel in the Age of Amazon. McGurl sees Amazon as a black hole with an inescapable gravitational pull, sucking everything from highbrow metafiction to niche erotica into its dark maw. He makes the bold claim that “The rise of Amazon is the most significant novelty in recent literary history, representing an attempt to reforge contemporary literary life as an adjunct to online retail.” Every book is neatly codified by genre (often incorrectly) and plugged into Amazon’s labyrinthine algorithm, where it becomes a commodity rather than a text. “As a literary institution,” he writes, Amazon “is the obverse of the writing program, facilitating commerce in the raw.” In other words, online retailers nakedly prioritize the market over artistic individuality, which ultimately homogenizes the literary landscape. McGurl acknowledges that Amazon has democratized self-publishing, insofar as it’s possible for Amazon to democratize anything. Through Kindle Direct Publishing, Amazon pays hundreds of millions of dollars a year to authors around the globe, and though few writers make enough to support themselves through the platform, we have never been more inundated with things to read.

McGurl’s argument does have some weak spots. For example, Kyle Chayka objects that McGurl “doesn’t present any evidence that Amazon’s algorithm incentivizes novelists like Knausgaard or Ben Lerner to write in a certain style, or that it even accounts for their popularity, relative to other, lesser-known contemporary novelists.” If the argument is that Amazon has changed every aspect of literary production, shouldn’t we be able to see that impact in the style and form of bestselling novels? McGurl also oversimplifies the broad range of stories that Amazon promotes. He believes that all commodified fiction can be categorized in one of two ways. A story is an “epic” if it takes a cosmic perspective on life, uplifts the human spirit, and creates (in McGurl’s words) a sense of “cultural integration.” Alternatively, “romance” stories involve interpersonal drama and intimate worlds, and soothe us rather than puff us up. These categories flatten the diverse array of fiction published by Amazon and its subsidiaries, and how can we be sure that the company created our desire to be soothed or aggrandized? It has certainly profited off such desires, but there isn’t evidence that we’re increasingly relying on these narrative models or that Amazon alone is driving that change.

Amazon certainly is a problem for literature, but as Chayka points out, the problem has more to do with business than genre trends. Amazon’s low prices endanger small bookstores and the traditional publishing houses that stock their shelves. When negotiating contracts with independent publishers in the early 2000s, Bezos advised his company to “approach these small publishers the way a cheetah would pursue a sickly gazelle.” This predatory attitude towards traditional marketplaces has hardly changed, and it affects the literary landscape in tangible ways. Whether the democratization of online publishing has come at the expense of traditional publishing is difficult to say, and it’s even more difficult to determine whether this is necessarily a bad thing, given how homogeneous the publishing industry remains.

Beyond the publishing industry, it might be said that Amazon poses an existential threat to writers. When so much content is available online, and millions of writers are forced to compete for the public’s attention and money, is there a point in writing at all? Parul Sehgal notes “a certain miasma of shame that emanates from much contemporary fiction,” even fiction produced by successful and well-known authors, and wonders if this despair arises from the online marketplace Amazon has created. Given the general mood of self-doubt and ennui, it’s worth celebrating how many people continue to be creative without hope of material reward. Amazon can hardly take credit for the vast output of such writers, but if Amazon has altered the way we approach and consume fiction, they certainly haven’t crushed the creative impulse.

Leaf Blowers Suck. Here’s Why.

photograph of heavy duty leaf blowers in operation

As I sit writing this, I’m surrounded by the sounds of fall: the gentle whisper of the autumnal wind, the ominous cawing of crows, and the incessant drone of leaf blowers emanating from at least three adjacent properties. Sure, leaf blowers might be among the most irritating inventions ever created by man, but could we really go so far as to argue that it’s morally wrong to use them? I’m certainly willing to try.

One common starting point for assessing the morality of an action is to ask whether the benefits of that action outweigh its costs. There are, of course, benefits to using a leaf blower. For one, they help homeowners maintain the appearance of their properties. And there’s a lot of fun to be had in wielding a loud, powerful piece of machinery. But there are also costs.

First up, they’re a neighborly nuisance – and not just because of the noise pollution they create. The blasts of air generated by leaf blowers exceed the wind speeds of a Category 5 hurricane and, as such, are able to kick up large amounts of particulate matter (about 5 pounds per hour). According to the Department of Environmental Conservation, this matter often includes pollen, mold, animal feces, heavy metals, and chemicals from herbicides and pesticides. It can take many hours to settle, traveling surprisingly large distances and blanketing surrounding homes in the process.

But the environmental cost of leaf blowers goes even further than this. Due to the terrible fuel efficiency of their two-stroke engines, thirty minutes of using a consumer-grade leaf blower will result in roughly the same amount of carbon emissions as driving a Ford F-150 from Texas to Alaska. In fact, in California, gas-powered leaf blowers are now a larger source of smog-forming emissions than the state’s 14.4 million passenger vehicles. But while vehicles (arguably) serve an essential purpose, leaf blowers are a mere frivolity. All of this has resulted in an outright ban on leaf blowers in a number of cities in California, Colorado, Illinois, Massachusetts, New Jersey, New York, Texas, and Vermont.

But what does this mean for the rest of us? Should we return to those pre-leaf blower days and pick up our trusty rakes instead? Not necessarily. Removing leaves entirely from your yard – even with a rake – may still do more harm than good. According to the Environmental Protection Agency, more than 10 million tonnes of yard debris went into landfills in 2018 (the last year for which data is available). When this debris breaks down in the anaerobic (that is, zero-oxygen) environment of a landfill it creates methane – a greenhouse gas that’s twenty-five times worse than carbon dioxide. If, on the other hand, those leaves are left on your property, they can do a world of good for your garden. They provide a vital habitat for all kinds of creatures including butterflies, salamanders, insects, chipmunks, toads, earthworms, and even box turtles. Beyond their obvious value for biodiversity, many of these creatures also make up a highly beneficial (if not vital) part of a garden’s biome. If the desire to clean up those leaves is overwhelming, then there are better ways of doing this too. Mulching or composting leaves on-site allows for this debris to break down in an aerobic (that is, oxygen-rich) environment, avoiding the production of methane. This also allows for nutrients to be cycled back into the ground, minimizing the need to purchase expensive fertilizers.

It’s important to note that, while the focus here has so far been on leaf blowing, many of the same arguments also apply to our obsession with lawn maintenance more generally. A large portion of that 10 million tonnes of lawn debris mentioned above is merely lawn clippings. Further, like leaf blowers, lawn mowers also come with an enormous carbon cost.

Close-cut grass lawns first emerged in 17th-century England as a mark of wealth and status. Traditionally, landowners sought to squeeze as much productivity out of their properties as possible by dedicating every last square foot to crops or grazing pasture. The rich, however, had such affluence that they could devote large tracts of land to an entirely wasteful endeavor – the growing of a lawn. While the original purpose behind lawns might have been forgotten, their largely wasteful nature remains. There’s an estimated 40-50 million acres of lawn in the continental United States – almost as much as all of the country’s national parks combined. In 2020, Americans spent $105 billion maintaining those lawns, and consumed 3 trillion gallons of water, 59 million pounds of pesticide, and 3 billion gallons of gasoline (the equivalent of around 6 million passenger cars running for a year) in the process. All of this to merely sustain what entomologist Doug Tallamy describes as an ecologically “dead space.”

Lawn maintenance – like leaf blowing – does far more harm than good. What’s worse, it’s unnecessary harm. There are many alternatives to the traditional lawn that come with lower waste, lower carbon emissions, and lower maintenance costs. So throw out your leaf blower and ditch that patch of wasteful sod in favor of something else. The environment – and your wallet – will thank you for it.