
A Post-Christian America and the Foundations of Morality

painted photograph of Cades Cove church isolated in Smoky Mountains

The trend of American secularization continues. A recent report by the Pew Research Center notes the rising number of “nones” – those without formal religious affiliation – in the U.S., and finds that under several different scenarios America will be a Christian-minority nation by 2070. The report estimates that as of 2020 approximately 64% of Americans identified as Christian. While explanations for the shift away from Christianity are multiple and complicated, it echoes patterns of secularization in Europe.

There are reasons to be wary of overinterpretation, as a lack of formal religious affiliation does not mean that someone is not informally religious, spiritual, or otherwise wedded to a guiding belief system. Similarly, Christianity is no monolith and encompasses a wide array of sects with varying religious commitments (and varying levels of commitment to those commitments). But, big picture, the church as an institution is in demographic decline.

Religious practices organize people socially and culturally, not just theologically. And whatever its demographic woes, Christianity continues to have a powerful role in American politics, as reflected by recent Supreme Court decisions on abortion and school prayer.

Of particular philosophical interest, religion provides a (plausibly) objective basis for morality, and many believers worry about the metaphysical foundations of ethics absent something like a god.

Put differently, what underpins morality in an irreligious society? And relatedly, what is the worry of having a moral system without foundations?

Questions of morality are familiar. Is killing wrong? What about in self-defense? Is it okay to break a promise? To tell a white lie? To collect and sell data from the users of an app? However, there are also questions we can have about morality itself. This is the domain of metaethics. One of the most prominent debates within metaethics concerns the objectivity and reality of ethical claims.

Consider a claim like: “murder is wrong.” A natural interpretation is that this statement is making a factual claim about the moral wrongness of murder, and the claim is either true or false. (This is the claim of moral realists. Though in another tradition in metaethics, called noncognitivism, ethical claims are not treated as being true or false at all.) Assuming the ethical claim is true (or false), the next issue is explaining what makes it that way.

One answer is that something in the world makes the ethical claim true. For a religious or spiritual person, this something is often that a god commands it.

But even within religious thought, such a move is not without difficulties. Plato’s famous Euthyphro dilemma asks whether something is good because it is beloved by the gods, or beloved by the gods because it is good. Nonetheless, religious traditions have additional resources to draw upon when it comes to the truth of ethical claims.

Absent religion, things get trickier. The Australian ethicist J.L. Mackie influentially argued that if morality is something in the world, it is an awfully strange thing. We know of no bits or bobs of the world that seem to constitute moral wrongness, we don’t know how to measure “ethicalness” or move it around, and there seems to be nothing physically different between lying to the police about what’s in your basement to save a refugee’s life or to save your heroin operation. What’s more, we might question what morality really explains. If asked to account for the actions of Adolf Hitler, one can appeal to his psychology, his politics, and the historical context. It is not obvious what additional information is provided by asserting that Hitler is also “evil.” This line has been argued most prominently by the philosopher Gilbert Harman.

Broadly speaking, an account of morality that places moral facts – the wrongness of eating meat, for example – out in the world appears somewhat out of step with our best current scientific accounts.

Evolution makes this concern more acute. Philosophers Sharon Street and Richard Joyce have both argued that evolutionary theory “debunks” morality, where debunking arguments are a specific kind of objection that attempts to show that the causal origin of something undermines its justification. In particular, evolution selects for fitness, not truth, so the concern is that nothing in our evolutionary history gives us reason to expect that we would have evolved even an approximately correct set of moral beliefs. The idea is that we evolved our general moral commitments because cooperative humans who did not constantly kill and steal from each other were reproductively successful, not because they were perceiving the moral structure of the universe.

This argument is especially powerful because it undermines the evidence on which one would build a case for the metaphysical foundations of ethics. Our everyday moral talk often treats morality as if it is true – we refer to murder as wrong and helping the needy as good. Ethicists take this seriously as (at least initial) evidence for the objective reality of morality, absent compelling reasons to think otherwise. However, if our moral intuitions can be effectively explained by evolution, then the evidentiary basis on which moral realism derives its plausibility evaporates.

These arguments have not gone unaddressed and the debate continues. For example, we presumably did not evolve to learn particle physics, and yet no one considers it “debunked” by evolution.

Other philosophers take completely different approaches. Immanuel Kant famously argued that our morality is rooted in our nature as rational beings that can act in accordance with reason. His work suggests that the truth of moral claims is not written in the stars. Instead, as free-willed rational creatures, it is our duty to recognize the force of moral law. The appeal of approaches like Kant’s is that there can be objective answers to moral questions, even if the foundation of morality lies in our own nature rather than some thing out in the world.

Still, we might ask why all these questions of moral foundations matter at all, and whether religion actually solves the problem. For those concerned by Christianity’s decline, the ultimate fear is likely an amoral world where nothing is right or wrong. Let us grant for the moment that if god/gods exist, then murder is really wrong (here using the time-honored philosophical practice of strategically italicizing words). It is not a glorified opinion. It is not wrong on the basis of reasons or political commitments. It is not only wrong for a given person, or a given society. It is not wrong because we are a specific species with a specific evolutionary history of cooperation that has given us a hard to shake set of psychological intuitions about morality. It is really, truly, against-the-divine-structure-of-the-universe Wrong.

Note that this moral fact does not depend on whether people are religious. It instead depends on the truth of some religious tenets. The popularity of religion is simply unrelated to questions of the existence of moral foundations.

Alternatively, our overriding concern might not be a philosophical one related to whether or not there is an objective basis to morality, but a social one regarding people’s belief in moral foundations. If no one believes that anything is really wrong, so the worry goes, then what is to stop absolute hooliganism? We need the belief in moral foundations for their salutary effect on behavior. This, however, is ultimately a scientific question about, first, whether the religiously unaffiliated are less likely to believe in objective morality, and second, if those who do not believe in objective morality behave less ethically (by conventional standards).

Some research suggests that religious people and secular people have slightly different ethical commitments and behaviors, but there is no evidence of general amorality. If anything, the rise of the “nones” spurs objections to some religiously motivated practices – like abortion bans – on explicitly ethical grounds. Changes in America’s religious landscape will result in changes in its moral landscape, but this does not entail Americans being generally less concerned with morality. And while philosophers and others may be fascinated by the (possible) foundations of morality as an intellectual project, it remains to be seen whether this project is genuinely socially motivated. We simply are, descriptively, organisms that care about ethics. Most of us anyway.

Queuing for the Queen and the Moral Limits of Markets

photograph of queue in front of Buckingham Palace

In the days before her funeral last week, more than 250,000 former subjects joined the 10-mile queue (line) to see Queen Elizabeth II lying in state in Westminster Hall. Some queued for 24 hours. Many slept on the street. More than 400 fainted. All this led The Economist to ask, “is queuing the best way to do things?” The problem, the magazine claims, is how to allocate scarce resources (limited slots to walk by the coffin). “An ideal system,” they write, “would give spots to those who value them the most.”

The free-for-all queue falls short by this metric. Why? It “effectively rations out the spots to those who turn up first—and who are willing to wait.” In other words, it allocates the scarce resource to those with more time, rather than those who value it the most. A devoted royalist with an inflexible job to go to is less likely to see the Queen than a tourist who feels like taking part in the experience. It also led to a lot of wasted time that could have been put to better use. This is, in economic terms, an inefficient system.

So what other options are there, besides the mega-queue? The Economist article suggests a couple, including “some kind of market, with prices for each time slot set high enough to balance supply and demand.” They note that there is some precedent: “To visit Buckingham Palace,” for example, “one must buy a ticket.” Now, this system would also, admittedly, have disadvantages. It obviously benefits those with more ability to pay. And those who can pay most aren’t necessarily the same people who would value the experience most. In any case, the reporter’s suggestion of market allocation of tickets went down pretty poorly in the queue. Of course, there may be a selection effect at play. You wouldn’t expect to find people who dislike the queue system in a 10-mile queue.

But I think there are some reasons besides economic inefficiency to think that a market-based allocation would be a bad idea. “Suppose, on your wedding day,” writes the philosopher Michael Sandel, “your best man delivers a heartwarming toast, a speech so moving it brings tears to your eyes. You later learn that he bought it online. Would you care? Would the toast mean less than it did at first, before you knew it was written by a paid professional? For most of us, it probably would.” For some goods, their being bought or sold seems to affect their value to us.

Or take gifts. Economists have long railed against the economic inefficiency of gift-giving. In Scroogenomics: Why You Shouldn’t Buy Presents for the Holidays, economist Joel Waldfogel writes:

The bottom line is that when other people do our shopping, for clothes or music or whatever, it’s pretty unlikely that they’ll choose as well as we would have chosen for ourselves… Relative to how much satisfaction their expenditures could have given us, their choices destroy value.

Waldfogel acknowledges that cash is generally seen as a bad gift. But why is it? Are we simply being irrational, or is the narrow economic lens of analysis missing something important about gift-giving? Sandel suspects the latter. “Gifts aren’t only about utility,” he writes. “Some gifts are expressive of relationships that engage, challenge, and reinterpret our identities.” In other words, going out and choosing a gift that you think will be meaningful to the recipient says something about how you understand them. It can also say something about your relationship.

A scene from Seinfeld illustrates Sandel’s point. Jerry, not knowing where his relationship stands with Elaine now that they are sleeping together but not in a relationship, is struggling to choose a birthday present that sends the appropriate message. A music box is “too relationshippy,” candleholders “too romantic,” lingerie “too sexual,” waffle-maker “too domestic.” Jerry’s ultimate choice is revealed as Elaine opens her present.

Elaine: Cash?!

Jerry: What do you think?

Elaine: You got me cash?!

Jerry: Well this way I figured you could go out and get yourself whatever you want. No good?

Elaine: What are you, my uncle?

Elaine’s complaint is that the gift fails to reflect what is meaningful in their relationship. It may be economically efficient, as Jerry protests, but it is impersonal — something a distant relative would give. Despite the efficiency of cash as a gift, it has less value to her.

There is a similar case to be made for the rituals surrounding death, and, in the case of the British monarchy, political rituals. We may not always want the most efficient, market-based solution. Introducing market norms to some areas of life seems to devalue the very things we find precious and meaningful. So what, to put it in Sandel’s terms, does the inefficient queue “express?” How does it “engage” or “reinterpret” our identities?

In a deeply wealth-divided society (and perhaps somewhat ironically for a monarchical event) a queue, open to all and free for all, embodies a sense of moral equality. Theoretically, everybody has twenty-four hours in a day, even if practically, the demands on that time vary greatly from person to person. The queue does a better job of expressing this equality, the sense that we’re all in this together, than a system of paid time slots, even if it does benefit the time-rich.

Part of what many found moving about the queue was the degree of personal sacrifice it involved, of both time and comfort. Of course, a market-based system would also involve sacrifice — the money for the tickets. But spending money communicates less personal investment than spending one’s time, or enduring discomfort. Perhaps this communicates something: respect for the Queen, for the dead, or simply the historical significance of the moment.

Finally, we also can’t ignore that there is something very British about this way of mourning a monarch. Stephen Reicher, a social psychologist, writes in The Guardian that:

Time and again, queues, and this one in particular, have been described as quintessentially and uniquely British: polite, restrained and orderly, reflecting the timeless characteristics of our national identity. 

For a nation just stripped of its most powerful symbol of continuity, the longest-reigning monarch in British history, perhaps the familiar inefficiency of a long queue is precisely what we needed.

The Meaning of Monarchy

photograph of Queen Victoria statue at Kensington Palace

A prominent figure for nearly a century, Queen Elizabeth II leaves a tremendous void behind, and many are deeply affected by the loss. For some, however, the death of the Queen also breaks a spell. As perhaps the most famous representation of monarchy, her image seems to have put some questions about the nature of monarchy on hold for some time. Almost immediately after her death, however, questions about the future of monarchy – generally, or in the U.K. specifically – began to swirl. While some have started to map out the succession line, others have voiced criticism of their historical (colonial) ties to the monarchy. Now, there is a strong call to take a moral stance regarding monarchy in general. Questions concerning the role of monarchy, coupled with the financial, diplomatic, and moral burden of the royal families, are coming to the fore.

The problem with taking a moral stance, however, is that the monarchy today is quite different from the monarchy in the past: Not many “Royalists” in the traditional sense remain standing in the Western world. These domesticated and tranquilized monarchies hold almost no political power; they are merely symbolic.

But what’s not clear is what exactly these “Symbolic Monarchies” symbolize, and whether one is morally obligated to support or oppose what they represent. Do they pose some threat of oppression like in the past or are they now somehow “redeemed”?

From my regional point of view, the idea of monarchy is still a very dangerous one. The Middle East, in general, is quite accustomed to a very strong central figure, and developing democracy or civil society is always under threat from an autocratic one-man regime. As the West of “The East” or the East of “the West,” Turkiye is an interesting boundary case where the idea of monarchy is both very weak and yet still somehow scary at the same time. From time to time, the possibility of a symbolic Ottoman Emperor is jokingly suggested. Most react strongly to even the mere mention, whereas some think a powerless monarchy has some kind of emotional and historical nostalgic value – mainly in a cultural, diplomatic, and touristic sense. One year after the establishment of the Turkish Republic, in 1924, the Osmanoglu Family was sent into exile since it was thought they would pose a threat to the newly founded republic. In the first half of the 1970s, this exile was lifted for all members, as any dream of resurrecting the Ottoman Empire seemed unrealistic. (I believe it is still unrealistic.)

However, some events involving members of the Osmanoglu Family are worrying. Some, claiming to be the rightful inheritors of the Ottoman Sultan Abdulhamid II, demand lands on the grounds that these were the personal property of the Sultan. Meanwhile, Abdulhamid Kayıhan Osmanoglu, claiming to be a royal family member, has entered politics with the “New Welfare Party” – a revamp of a radically conservative party – and usually goes around in Sehzade/Prince clothes. Additionally, the 21st century has brought a “Neo-Ottoman” political and cultural wave in Turkiye. From the very beginning, some have pointed out a relation between the governing “Justice and Development Party” (AKP) and this “Ottomania” or “Neo-Ottomanism.”

In one sense, no one – including AKP – appears to be seriously considering abolishing democracy and bringing the monarchy back. However, in another sense, interest in monarchy appears to be very much revived.

This situation is enough to make any citizen of a country with a similar history uneasy.

From a North American or European point of view, these events may not seem relevant since the threat – especially in the Middle Eastern region – is not about Symbolic Monarchy, but the possibility of reinstating Traditional Monarchy. The belief that a Symbolic Monarchy is safe, harmless, or powerless is generally accepted unquestioningly in the West. Its assumed lack of political power is so overemphasized that its message – what a monarchy represents – is generally overlooked.

Some treat Symbolic Monarchy the way they treat fictional entities like Santa Claus. On this view, Symbolic Monarchy, though of great cultural importance, is not really a “monarchy” so much as a glamorous imitation put on for show. These declawed figureheads are like the Santa Clauses giving out gifts in malls and ringing bells on street corners. Perhaps the Queen was not a “Queen” after all. Maybe she represented the fantasy of an ideal benevolent ruler that we know doesn’t exist. It was simply an unforgettable role played by the actor Elizabeth Alexandra Mary Windsor, whose fans now feel they’ve lost their heroine.

But for those more familiar with the monarchies of old, this preoccupation with pageantry is naïve.

Even if it is merely a role to be played, we must still ask: What does this role represent? Even though we may appreciate the actors who play them, the public performance of roles like “Queen,” “Tzar,” and “Sultan” evokes real historical associations.

This is where these roles get their power, and contrary to the nostalgic reception, some of these associations are negative. Reminiscing often comes with memories of colonization, abuse, and torture. Many people are reminded of how they or their ancestors were oppressed. The very existence of Symbolic Monarchy and its global glorification seems capable of hurting many or being used as a tool to numb people to historical harms. Even in its lightest form – where we assume the characters are benevolent and we acknowledge and condemn past transgressions – monarchy represents inequality, as Nicholas Kreuder has recently argued; it contradicts the natural or essential equality of all human beings.

Is this necessarily true? I’m sympathetic to Benjamin Rossi’s critique that suggests that as long as people can in some way voluntarily embrace it, monarchy can be morally legitimate. But, from another point of view, it’s difficult to judge whether the adoption is voluntary or not.

In historically oppressed cultures, it is common to observe the adoption of their oppressors’ values, language, and religion, which were forced on them in the past. For such people, an idea like “Queen” can be very damaging and, controversially, soothing at the same time.

Ultimately, Symbolic Monarchy is thought to exercise some influence on the public but little political influence: royals seem to play only a supporting role in major events, with no decision-making power. But what is “real” power these days? Members of a royal family have many powerful symbols at their disposal that wield great influence, including casting a shadow over politics. Though they are generally not allowed to endorse an ideology, party, or politician, the line between supporting a moral cause and supporting a policy is incredibly blurry. As “influencers,” their discretion in addressing particular issues rather than others, their preferences on charity and patronage, or the moral positions they adopt in royal dramas are not easily separable from political issues. While royal families’ political influence is difficult to quantify, there is no doubt – from their fashion sense to their diplomatic missions – that they possess “power.”

Today, monarchy is under increasing scrutiny following the Queen’s death. Some admire these royal families. Some remember the yoke of oppression. Some fear past monsters may rise again. What power the idea has left – be it Traditional Monarchy or Symbolic – remains an open question.

National Debt and Longtermism

photograph of father and son silhouette working tiller

On September 23rd, the U.K. Chancellor Kwasi Kwarteng outlined an array of tax cuts and other economic measures to jumpstart the economy and tackle the growing cost of living. His hope was that an increase in economic growth would result in a less turbulent recession and, ultimately, an easing of the pressure on household and business finances. However, the markets met the measures with less than favorable responses. Immediately after the announcement, the value of the pound dropped to a near 40-year low, and former U.S. Treasury Secretary Larry Summers remarked that “[t]he UK is behaving a bit like an emerging market turning itself into a submerging market” and that “Britain will be remembered for having pursued the worst macroeconomic policies of any major country in a long time.” Unfortunately, the bad news kept coming as, early on September 26th, the pound plummeted to a record low against the dollar.

Despite the inherent complexities in national and global finances, one of the most significant criticisms against this dramatic shift in financial policy has been a relatively simple question – who will pay for all this? After all, if you want to cut taxes but still provide public services, the money must come from somewhere. For Kwarteng and the U.K. government, the answer is to borrow. But we’re not talking about a small loan. The U.K. government will borrow £72 billion over the next six months alone. This is in addition to the borrowing it had already planned to do.

Of course, if you want to avoid going bankrupt and defaulting on your loans, borrowed money needs to be repaid at some point, alongside interest. This fact is something which the Chancellor has been hesitant to acknowledge, sidestepping the question when he’s asked. Nevertheless, the matter remains – who will have to pay back this money? The answer is future generations.

Money borrowed today will be paid back, in the form of taxes, by those yet to be born. In other words, the U.K. government is trying to solve today’s financial problems by pushing them onto tomorrow’s generation.

To some, this seems exceedingly unfair. After all, those future generations, who are yet to be born, weren’t the ones who borrowed that money. So, why should they be the ones to pay it back? Indeed, those making decisions about national borrowing – the ones in government positions – face no personal repercussions for their borrowing decisions beyond those shared with everyone else. And while they borrow on their nation’s behalf rather than their own, there’s seemingly a disconnect between financial decision-making and the consequences.

So, ultimately, the benefits of borrowing – increased economic activity, better public services, more generous subsidies – are enjoyed by those alive today, while future generations will be stuck with the bill. And, of course, the prospect of borrowing money to make our lives easier and not having to pay back that debt, as it will be someone else’s problem, makes for a mighty tempting offer. The economist James M. Buchanan summarizes this nicely:

the institution of public debt introduces a unique problem that is usually absent with private debt; persons who are decision makers in one period are allowed to impose possible financial losses on persons in future generations. It follows that the institution is liable to abuse this and overextend its borrowing practices.

So far, I’ve painted a picture in which the arguably reckless borrowing exhibited by the U.K.’s newest government does, at minimum, a disservice and, at most, an injustice to future generations. While an intuitively appealing stance, it rests upon a proposition that might be less than well-founded. Do we, or can we, actually owe anything to future generations? If a government or I take action that, theoretically, harms the interests of someone who doesn’t exist yet, have I done something wrong?

According to Oxford philosopher William MacAskill, the answer is not only that we can owe things to future generations but that our inclination to think in the short term is one of our most significant moral failings. As he argues in What We Owe the Future, if we reduce ethics to a simple utilitarian numbers game, where the most ethical actions are those bringing about the greatest good for the greatest number, then not only should we take into account the interests of future generations, but their interests outweigh those of people alive today. This is because the number of potential persons yet to be born far exceeds the number of people currently alive.

As such, if we’re committed to a utilitarian ethical framework, then it follows that we should sacrifice our well-being and hamper our interests if doing so could bring about a better life for the multitude of future potential people – you are singular, but your descendants are possibly innumerable.

When made explicit, this idea of acting in the best interest of the yet-to-exist at our expense can sound counterintuitive. But, we’re typically intuitively inclined toward such thinking. As MacAskill writes:

Concern for future generations is common sense across diverse intellectual traditions […] When we dispose of radioactive waste, we don’t say, “Who cares if this poisons people centuries from now?”

Similarly, few of us who care about climate change or pollution do so solely for the sake of people alive today. We build museums and parks and bridges that we hope will last for generations; we invest in schools and longterm scientific projects; we preserve paintings, traditions, languages; we protect beautiful places.

As such, it seems, much like with the motivation to curb climate change or refrain from radioactive waste dumping, that making things harder for ourselves today can be justified, even ethically required, if doing so has significant and material benefits for future generations.

The question, then, is whether the extreme measures Kwarteng has taken will eventually make things better for the U.K. economy and, ideally, benefit not only those alive today but the country’s future overall. According to his supporters, the answer is a resounding yes. Unfortunately, it’s coming from too few voices. Ultimately, only time will tell whether the Chancellor’s gamble will pay off. But if it doesn’t, any short-term gains he may have just secured would almost certainly pale compared to the long-term harms he’s potentially unleashed.

Is It Wrong to Say the Pandemic Is Over?

photograph of President Biden at podium

President Biden’s recent statement that the pandemic is “over” sparked a flurry of debate, with many experts arguing that such remarks are premature and unhelpful. Biden’s own officials have attempted to walk back the remarks, with Anthony Fauci suggesting that Biden simply meant that the country is in a better place now compared to when the pandemic first began. Some have even suggested that Biden is simply wrong in his assertion. But was it really wrong to say that the pandemic is over? Does the existence of a pandemic depend on what experts might say? Who should get to say if a pandemic is over? Are there moral risks to either declaring victory too soon or admitting achievements too late?

Following Biden’s statement, many of his own COVID advisors seemed surprised. A spokesperson for the Department of Health and Human Services reiterated that the public health emergency remains in effect and that there would be 60 days’ notice before ending it. Fauci suggested that Biden meant that the worst stage of the pandemic is over, but noted, “We are not where we need to be if we are going to quote ‘live with the virus’ because we know we are not going to eradicate it.” He also added, “Four hundred deaths per day is not an acceptable number as far as I’m concerned.” Biden’s Press Secretary Karine Jean-Pierre has conceded that the pandemic isn’t “over,” but that “it is now more manageable” with case numbers down dramatically from when Biden came to office.

The World Health Organization also weighed in on Biden’s assertion with WHO Director-General Tedros Adhanom Ghebreyesus stating that the end “is still a long way off…We all need hope that we can—and we will—get to the end of the tunnel and put the pandemic behind us. But we’re not there yet.” When asked whether there are criteria in place for the WHO to revoke the declaration of a public health emergency, WHO representative Maria Van Kerkhove said that it “is under active discussion.”

With nearly 400 deaths in America per day from COVID, and over one million dead in the U.S. alone, many have been critical of the president’s remarks.

Two million new COVID infections were confirmed last month, and there is still concern among many about the effects of long COVID, which can involve persistent and debilitating symptoms for months after infection. Some estimates suggest that as many as 10 million Americans may suffer from this condition. The virus has also become more infectious as mutations produce new variants, and there is a concern that the situation could become worse.

Critics also suggest that saying that the pandemic is over sends the wrong message. As Dr. Emily Langdon of the University of Chicago noted, “The problem with Biden’s message is that it doubles down on this idea that we don’t need to worry about COVID anymore.” Saying that the pandemic is over will discourage people from getting vaccinated or getting boosters while less than 70% of Americans are fully vaccinated. Declaring the pandemic over also means an end to the emergency funds provided during the pandemic, perhaps even including the forgiveness of student debt.

On the other hand, there are those who defend the president’s assertion. Dr. Ashwin Vasan notes that “We are no longer in the emergency phase of the pandemic…we haven’t yet defined what endemicity looks like.”

This is an important point because there is no single simple answer to what a pandemic even is.

Classically, a pandemic is defined as “an epidemic occurring worldwide, or over a very wide area, crossing international boundaries and usually affecting a large number of people.” However, this definition does not mention severity or population immunity. In fact, the definition of “pandemic” has been modified several times over the last few decades, and currently the WHO doesn’t even use the concept as an official category. Most definitions are aimed at defining when the problem begins and not when it ends.

This reminds us that while there is an epidemiological definition of “pandemic,” the concept is not purely a scientific term. To the extent that public policy is shaped by pandemic concerns, then a pandemic is also a political concept. The declaration that the pandemic is “over” is, therefore, not purely a matter for experts. As I have discussed previously, there needs to be democratic input in areas of science where expert advice affects public policy precisely because there are also many issues involved that require judgments about values.

Some might suggest that the decision should be entirely up to scientists. As Bruce Y. Lee of Forbes writes, “there was the President of the U.S., who’s not a scientist or medical expert, at the Detroit Auto Show, which is not a medical setting, making a statement about something that should have been left for real science and real scientists to decide.” But this is simply wrong.

Yes, people don’t get a say about what the case numbers are, but to whatever extent there is a “pandemic” recognized by governments with specific government policies to address these concerns, then people should get a say. It is not a matter for scientists to decide on their own.

Many experts have suggested that saying the pandemic is over will lead people to think we don’t need to care about COVID anymore. David Dosner from Columbia University’s Mailman School of Public Health has expressed the concern that Biden’s comments will give a “kind of social legitimacy to the idea of going into crowds, and it just makes some people feel awkward not doing that.” But ironically, the same experts who profess the need to follow the science seem to have no problem speculating without evidence. How does anyone know that Biden’s statements would discourage people from getting vaccinated? Is anyone really suggesting that after all this time, the remaining 30% of the country that isn’t vaccinated is suddenly going to drop their plans to get vaccinated because of what Joe Biden said?

There is no good reason why saying the pandemic is over would mean giving up our efforts to fight COVID. As noted, the term has no official use. The emergency declarations by the WHO and the Department of Health would carry on regardless. On the other hand, despite the case rates, people around the world are returning to their lives. Even Canada recently announced an end to its border vaccine mandates. While Fauci may not be comfortable with 400 deaths per day, maybe the American people are. As governments and the public lose interest in treating the pandemic as a “pandemic,” scientists risk straining their own credibility by focusing on what is important to them rather than gauging what the public is prepared to entertain policy-wise.

In an age of polarization and climate change, scientists need to be conscious about public reactions to their warnings. There is a risk that if the public construes the experts’ insistence on the pandemic mindset – despite the worst-case scenarios seeming to be increasingly remote – as ridiculous, then they will be less likely to find such voices credible in the future. When the next crisis comes along, the experts may very well be ignored. Yes, there are moral risks to declaring the pandemic over prematurely, but there are also very real moral risks to continuing to insist that it isn’t.

Victim-blaming and Guilty Victims

photograph of two hands pointing fingers at one another

Since the heinous attack on The Satanic Verses author Salman Rushdie, there has been a lively debate in The Prindle Post about free speech, victim blaming, and self-censorship. Giles Howdle argued that authors have no obligation to censor themselves, even if they are aware that publishing certain material might incite violence against themselves or others. Benjamin Rossi replied that although Rushdie might be blameless for the retribution leveled against him, we need a stronger principle to determine the cases in which victims might be, at least partially, responsible for their own misfortune.

Rossi’s argument hinges on a fascinating comparison between the actions of Salman Rushdie and those of Terry Jones, an American pastor who organized Quran burnings in an ironic protest against the intolerance supposedly inherent in Islam. Both men’s actions led to violent reprisals and widespread, deadly protests. Whereas we are inclined to excuse Rushdie for the furor his publication caused, we are less sympathetic to Jones, who was widely condemned, and even arrested, for his actions.

I agree with both Howdle and Rossi on important points. Like Howdle, I think that Rushdie should not be blamed for the response to The Satanic Verses, and certainly not for his own stabbing. Like Rossi, I don’t think that all victims are blameless, or that it is always wrong to blame the victim.

But there are a couple of important clarifications which I think ought to be made in the debate.

The first clarification has to do with the idea of victimhood. If we are focusing on ‘victim-blaming’, we ought to have a firm idea of what constitutes a victim. The second clarification centers on intent, which I think can be a useful measure in deciding when and where to apportion blame.

Victim-blaming

‘Victim-blaming’ itself is a charged term. In the media, it most often crops up in misogynistic backlash to the stories of victims of sexual assault. “She was asking for it,” “she shouldn’t have worn that dress,” “she led him on”: these are paradigm examples of victim-blaming that claim women are responsible for the crimes men commit against them. These statements indicate that victims should be held at fault for the reprehensible actions of others.

Victim-blaming in sexual assault might be a form of the just-world delusion, where people think that because the world is inherently fair, people must deserve the things that happen to them. Or it might be a result or reflection of patriarchal rape culture. The important thing, for now, is that in these cases, the victims are genuinely, completely, utterly innocent. No fashion decisions or quirks of personality can make you culpable for your own sexual assault because there is nothing morally wrong in dressing or acting the way you like. But if these victims are innocent, it raises the question: can victims ever be guilty?

Guilty Victims

Imagine your friend goes out for the night and comes home bloody and bruised, the victim of an assault. How terrible(!), you might think. As you tease the story out of your friend, however, something becomes clear: they had been roaming the streets, intentionally offending everybody in sight, until somebody reacted violently. Now your sympathy might start to subside – you might even think that on some level, your friend got what they deserved.

In this case, you might want to dismiss your friend’s victimhood entirely: you might think that because they wanted to offend people, they’re not really a victim at all. But this doesn’t quite capture the reality of the situation. Sure, your friend was being insufferable. But they probably didn’t deserve the beating they received. In that sense, they are still a victim.

A better approach is to say that although your friend is a victim, they are not an innocent victim: they are culpable in their own misfortune. They set out to cause offense and ended up the victim of physical harm. In a sense, they reaped what they sowed. They are a guilty victim and because they are guilty, they are worthy of blame. True, they might not have deserved the level of retribution they received. But that does not make them innocent. In the same way, we might feel sorry for somebody who receives an excessively harsh prison sentence for a relatively minor crime. Although we ought to have sympathy for their plight, we shouldn’t deceive ourselves into thinking that they are entirely innocent.

So as well as innocent victims, we can have guilty victims: those who, in doing something morally wrong, set in motion a chain of events in which they themselves are harmed. The next question is: how do we tell the difference between innocent and guilty victims?

Careless Victims

You might think that guilty victims are just those who play a causal role in their own misfortune. But this approach would cast too wide a net. Consider somebody walking down the footpath, daydreaming, who suddenly gets hit by a drunk driver mounting the curb. Their carelessness is a factor in the misery that befalls them: had they been paying attention to the road instead of singing along to Harry Styles, they would have been able to jump out of the way and escape unharmed. Although this victim is careless, it would be a stretch to say that they are guilty. After all, they haven’t done anything morally wrong. So careless victims are probably best considered a special type of innocent victim.

Intent and Guilt

A more promising method of determining whether a victim is innocent or guilty is to consider their intent in authoring the action that leads to their misfortune. If you intend to cause harm to others but are unlucky enough that the harm boomerangs around to hit you, you are probably a guilty victim.

If you make an offhanded, inoffensive comment to somebody who promptly wallops you, you can’t – or shouldn’t – be blamed. You are an innocent victim. But if your intent is to offend somebody, and you goad them into throwing a punch, then you can be held morally accountable for your actions.

Your accountability doesn’t absolve the puncher of theirs – but neither does their action absolve you of the role you played.

If your intent is to offend everybody, and you walk around hurling insults until a fight breaks out, then once again, you are morally responsible. Although your intent isn’t to personally cause physical harm, you are not only indifferent about the potential for physical harm to occur, but also perpetrating the exact morally dubious actions that make that harm more likely. For this, you ought to be held morally accountable.

So the basic rule is this: innocent victims are never worthy of blame, but guilty victims can be. Guilty victims are those who intend to cause harm (or offense), and, in doing so, set in motion a chain of events resulting in harm to themselves.

Rushdie and Jones

Finally, we can apply our notion of guilty victimhood to the cases of Salman Rushdie and Terry Jones. Both were victims of retribution for their non-violent expression. To decide whether they are blameworthy, we ought to consider whether they are innocent or guilty victims. To do that, we must consider the intent behind their inflammatory actions.

If Jones’s intent was to inflame and cause offense, then he is worthy of criticism, and we might be inclined to say that he is not just causally, but also morally, responsible for the backlash to his actions. He is a guilty victim. On the other hand, if Rushdie’s goal was to entertain and engage, and inflammation and insult were mere by-products of that intention, then we ought to be more forgiving. He is an innocent victim (or, at worst, a careless one). He remains causally responsible – as does Jones – but is less morally culpable for the ultimate outcomes of the chain of events in which he was only one link. Both actors played a role in a causal chain which led to violence and death. But to be equally morally responsible, they would have to have had the same (or similar) intentions.

Now this is not a defense of violence as a response to being offended.

No matter the intent behind Jones’s Quran burning, he doesn’t deserve violent retribution. Nor does his intent to offend excuse the actions of those who would seek to harm him.

But I think that the difference between guilty and innocent victims can help explain our different reactions to the Jones and Rushdie cases.

I suggest that it is not just that Rushdie seems to otherwise be a more sympathetic character than Jones. The difference is that Rushdie’s intent in writing and publishing The Satanic Verses might have been fundamentally different to Jones’s intent in burning Qurans. Of course, intent is incredibly hard to judge, and we may never be certain of the intent of either man. And there is certainly the counter that I might only think Rushdie’s intent was better because I am already more positively inclined towards him. But it might equally be the case that I am more positively inclined towards Rushdie because I think his intent was purer. So, we can call this a wash.

Nonetheless, intent ought to make a difference in assessing the morality of actions. If Rushdie was an innocent victim, and Jones a guilty one, then it makes sense that we are more sympathetic to Rushdie. This sympathy has nothing to do with the value of Rushdie’s work (I wouldn’t know, I haven’t read it!), and everything to do with the intention behind it. Rossi is right to say that there is no ‘consistent commitment’ in terms of apportioning blame to people whose non-violent acts contribute to violence. But my point is: there shouldn’t be a consistent commitment. We should praise or condemn actions not just on whether they contribute to a causal chain leading to violence, but on the intentions behind those actions. Not all victims are created equal.

Dungeons & Dragons & Oppression

photograph of game dice and figurine

On September 2nd, Wizards of the Coast, the company that produces official Dungeons & Dragons (D&D) materials, apologized for offensive content in its lore concerning the Hadozee race released in the most recent Spelljammer: Adventures in Space boxed set. The Hadozee are a monkey-like humanoid race known for their sailing abilities and love of exploring. They have been included in the Spelljammer series since 1990, but recent updates to the lore, as well as some of the Hadozee artwork, prompted criticism that the Hadozee evoked anti-Black stereotypes.

The updates restructured the Hadozee’s origins, stating that the race was created after a wizard captured wild Hadozee and gave them an experimental elixir that made them intelligent, more human-like in appearance, and, as a byproduct, more resilient when harmed. The wizard’s plan was to sell the enhanced Hadozee as slaves for military use; however, the wizard’s apprentices helped the Hadozee escape with the rest of the elixir. The Hadozee returned to their homelands to use the rest of the elixir on other wild Hadozee.

As reddit user u/Rexli178 explained, the main issues players had with this lore were that “the Hadozee were enslaved and through their enslavement were transformed from animals to thinking feeling people,” and that “the Hadozee had no agency in their own liberation.” This, coupled with the fact that anti-Black stereotypes often compare Black people to monkeys, plus the well-known historical racist sentiment that enslavement was necessary for the improvement of the Black race, plus the idea that Black people don’t feel as much pain when harmed, plus Hadozee artwork that seemed to evoke the imagery of minstrel shows, plus the fact that the Hadozee were characterized in other places as happy servants of the Elves, all came together to paint a picture that many players found damning.

It is worth noting that the critique of the Hadozee lore was not that the Hadozee reminded players of Black people. The critique was that the Hadozee echo anti-Black stereotypes and narratives that have been used to oppress Black people.

Stating that something is similar to a stereotype of a group is not the same as stating that that thing is similar to the group itself; this is especially clear when the stereotype in question is plainly false and dehumanizing.

The recent Hadozee controversy is not the only misstep Wizards of the Coast has made in the past few years — the 2016 adventure Curse of Strahd contained a people called the Vistani who evoked negative stereotypes associated with the Roma people. At the same time, Wizards of the Coast is slowly trying to change how race functions in D&D, removing alignment traits (good vs. evil, chaotic vs. lawful) and other passages of lore text to allow for greater freedom when constructing a character.

What went wrong with the Hadozee storytelling?

Whether or not the parallels with real-life oppression and negative stereotypes were intentional, it seems clear that this lore pulled players out of the fantasy world and recreated negative tropes associated with anti-Black racism.

How strongly it pulled on those tropes might be a matter for debate; however, I think the more interesting philosophical question here is: How should D&D include stories of oppression in its game materials, if at all? The answer to this question will likely depend upon particular histories of oppression and the details of how a given narrative of oppression is woven into the story, but I think we can say a few things to answer the general version of the question.

The first observation to make is that D&D is a fantasy series. When playing, we want to be able to escape our mundane world and experience the excitement of casting spells, fighting monsters, and just joking around with our friends. Because D&D is a fantasy series, it seems that the stories about oppression that the game facilitates should be sufficiently removed from actual histories of oppression. Even more so it seems that they should avoid reifying oppressive stereotypes in worldbuilding.

Some historical themes can be drawn upon, but Wizards of the Coast should be careful not to let too many of those themes overlap.

If a story maps too closely onto the experiences of oppressed groups that still exist and are still oppressed in some ways, D&D has left the realm of fantasy, doing a disservice to its storytelling and potential for healthy escapism.

And, if a story maps too closely onto oppressive stereotypes that have been used to denigrate certain groups of people historically, that can also set off alarm bells.

It is worth noting, too, that just because a piece of lore does not bring certain players out of the game – because they do not see the parallels to real-life oppressive tropes or narratives – that alone does not mean the lore is passable. The point here is for players of all different backgrounds, including from different marginalized groups, to be able to suspend disbelief and enjoy the fantasy world of D&D. Storytelling that makes it difficult for members of certain marginalized groups to equally enjoy and participate in the game is unjust and can practically exclude people from the game.

Now, this isn’t to say that Dungeon Masters (DMs, or the person who runs the D&D game) should not be allowed to modify or reinvent D&D materials to tell stories that more closely map onto real-life instances of oppression. Some members of oppressed groups might appreciate being able to navigate oppressive frameworks in a world in which they have power and can become heroes. Whether these homebrew stories are exclusionary is another tricky question.

The point is, while players should have the freedom to adapt and change stories, the basic blueprint put out through official D&D materials should be maximally inclusive.

This brings me to my second point — Dungeons & Dragons should not shy away from storytelling that allows players to explore oppressive societies, complex social issues, and other uncomfortable situations that they may face in their real lives. The trick is that the players need to be able to make their own choices about how they want to engage in these issues. And, if the storytelling is sufficiently removed from real-life histories of oppression, that will make it a safer space for players to explore how their characters might respond in those scenarios. In order to include these elements and tell these stories well, it would be good for Wizards of the Coast to hire writers who are familiar with common oppressive tropes and narratives and who could redirect problematic stories in a different direction.

It’s interesting to note that the people who have reacted against the Hadozee controversy and claimed that the Spelljammer material was just fine seem to agree with these two principles. One common reaction appears to be something like “fantasy is different than reality, and we shouldn’t confuse the two.” While the bulk of those reactions insinuate that people are seeing connections between Black stereotypes and the Hadozee that aren’t there, I think that they are roughly in line with the idea that fantasy is something that is distinct from reality and should be kept that way. I take it that those who are critiquing the Hadozee lore are critiquing it for this same reason.

Another common reaction seems to be along the lines of “we shouldn’t get rid of conflict and difficult themes just to make some politically correct folks happy,” which lines up with the idea that D&D should be a space where those scenarios can be explored by all players. I imagine that those who are unhappy with the Hadozee lore would also agree with this principle, so long as players can be active in shaping their characters and experiences and the game does not exclude certain groups of players.

To be inclusive to all D&D players, however, Wizards of the Coast needs to have better representation of people with different life experiences and understandings of the world in their writing rooms. This will not only make for better storytelling, but it will also facilitate gameplay that does not alienate certain players in the room. Let’s hope that Wizards of the Coast as well as the larger D&D community start to head more in that direction.

What ‘The Rings of Power’ Criticism Really Shows

photograph of The Rings of Power TV series on TV with remote control in hand

The Rings of Power, a prequel to Tolkien’s The Lord of the Rings, has come in for a barrage of criticism. Much of this is not simply about, say, the content of the series taken in isolation, but how it relates to Tolkien and – more nebulously – how it relates to current social issues.

Concerning Tolkien, Alexander Larman, writing in The Spectator, called this series “artistic necrophilia.” He seems to worry that it’s expensive and lacks star power, while also suggesting that Tolkien’s Silmarillion, on which the series is based, is not coherent enough. His worry, which he expresses more clearly elsewhere, is that Tolkien’s work is being diluted and we should avoid that.

Perhaps there is something to this: we might worry that too much Tolkien is a bit like producing new versions of Monet by using some AI tool; at some point, this wouldn’t have much to do with Monet’s vision and would lack something that his originals possess.

Though I disagree we have reached this point, I can see his concerns.

Ben Reinhard, writing in Crisis, thinks that, in the hands of these writers, “Tolkien’s moral and imaginative universe is simply gutted.” His concern is that the plot lines and characters are new – perhaps, supposedly, based on Tolkien, but failing to capture the true meaning of Tolkien. It is, he thinks, stripped of the values Tolkien cared about.

The evidence for this, though, is mixed at best. He has a problem with Nori, the Harfoot (a proto-Hobbit), transgressing boundaries and showing a disdain for her staid and conservative society. (Well, he might want to meet some of the Hobbits in Tolkien’s trilogy.) And is Galadriel just some modern Girl Boss for those whose political engagement goes about as far as having a Ruth Bader Ginsburg bobblehead? Maybe. But we can’t judge that off of a few episodes. He complains that she isn’t the serene vision she is in the Lord of the Rings, but it shouldn’t surprise us that a character has to age into such grace (the show is, after all, set five thousand years earlier).

Perhaps the most contentious criticism concerns race and other social justice issues – and how these should relate to Tolkien’s original work.

Brandon Morse, in a couple of pieces, alleges that this show is just another example of something being “ripped out from the past in order to be revamped and remade for modern times, and this always includes an injection of woke culture and social justice values.” He wrote this based on the trailer, which appears to be “woke” simply because it features a female warrior and people of color.

Morse’s claim – that when diversity is the focus, the storyline suffers – amounts to sheer speculation: three episodes in, and there is certainly a story developing. And I have no idea how anyone could determine how good the story might be from a few minutes of trailer.

But these complaints haven’t been taking place just in the pages of magazines on the right of the political spectrum. Plenty of mainstream ink has been spilled about the relationship between this show and social justice issues – some of it more worthy of discussion than Morse’s screed. At CNN, John Blake has documented the culture wars breaking out over the show, surveying many of the opinions I discuss here. But even his framing of the debate is more contentious than it need be: “Does casting non-White actors enhance the new series, or is it a betrayal of Tolkien’s original vision?”

Why does enhancement need to be the issue? Why can’t we just cast non-White actors and expect them to be no more or less enhancing than White actors?

Here are some other ways of putting the question. Ismael Cruz Córdova plays an elf in the new adaptation. He said he wanted to be an elf, but people told him “elves don’t look like you.” But is there any reason why elves shouldn’t look like him? They should be tall, they should be elegant and enchanting, but why would they need to be white? Even if they are white in the books, does that whiteness play any particularly important role?

Some think so. Louis Markos thinks we lose our ability to suspend disbelief when we see a non-white elf. It somehow jolts us out of the story. But I’m not sure why this should be true, beyond a personal view that this is what elves should look like.

We all face issues about what characters should look like – we read a book and have an image in our mind, then we see the character on screen and they look very different. For many of us, most of the time, we can easily adapt.

(More pointedly, Mark Burrows, also cited in the CNN article, is confused by people who can accept walking tree-people but who think “darker skinned dwarves are a bit far-fetched.”) It seems to me that if we don’t think whiteness is essential to elves being elves, then we shouldn’t have any problem with non-white actors playing elves. Add to this that representation is important – a kid who looks like Córdova, too, can dream of being an elf – and the argument against such casting doesn’t get us far.

And if we do think elves are essentially white, we might face bigger issues: is Tolkien, in presenting elves as superior, a racist? There is certainly an argument to be made here, but we would like to hope not – and to hope that, even if this were the case, his art needn’t be bound to those attitudes.

Part of my concern here is with knee-jerk responses to a show that’s just getting started. As Adam Serwer of The Atlantic notes, we’re beginning to see “reflexive conservative criticism of any art that includes even weakly perceptible progressive elements.” And our own A.G. Holdier has demonstrated how this conservative nostalgia – for a whiter media – can lead to moral risks.

Reinhard admits that his more “paranoid and conspiratorial” tendencies – which he does his best to keep down – show him “Luciferian images and undercurrents.” I wonder whether, if he could keep those thoughts at bay, he and other critics might try to watch the show in a slightly more generous mood. When all you have is a hammer, everything might look like a nail – which is why those who go into this show expecting to see wokeness everywhere might not have all that much fun. Better to suspend both belief and your commitment to the culture wars; you might enjoy watching it a bit more.

Fascism, Book Banning, and Non-Violent Direct Action

photograph of book burning in flames

Throughout the summer, armed Idaho citizens showed up at library board meetings at a small library in Bonners Ferry to demand that a list of 400 books be taken off the shelves. The books in question were not, in fact, books that this particular library carried. In response to the ongoing threats against the library, its insurance company declined to continue to cover it, citing the increased risk of violence or harm that might take place in the building. The director of the library, Kimber Glidden, resigned her position, citing personal threats and angry armed protestors showing up at her private home to demand that she remove the “pornography” from the shelves of her library.

This behavior is far from limited to the state of Idaho. In Oklahoma, Summer Boismier, an English teacher at Norman High School, was put on leave because she told her students about UnBanned — a program out of the Brooklyn Public Library which allows people from anywhere in the country to access e-book versions of books that have been banned. The program was designed to fight back against censorship and to advocate for the “rights of teens nationwide to read what they like, discover themselves, and form their own opinions.” The leave came after a parent protested that she had violated state law HB1775, which, among other things, prohibits the teaching of books or other material that might make one race feel that they are worse than another race. Despite the egalitarian-sounding language, the legislation was passed in part to defend a conception of a mythic past and to prevent students from reading about the ways in which the United States’ history of racism continues to have significant consequences to this day. Boismier resigned in protest.

Many states have passed laws banning books with certain content, and that content often involves race, feminism, sexual orientation, and gender identity. And prosecutors in states like Wyoming have considered bringing criminal charges against librarians who continue to carry books that their legislatures have outlawed.

Laws like these capitalize on in-group/out-group dynamics and xenophobia, often putting marginalized groups at further risk of violence, anxiety, depression, and suicide.

In heated board meetings across the country, there appear to be at least two sides to this issue. First, there are the parents and community members who are opposed to censorship or who believe that noise over “pornographic” literature targeting children in libraries is tilting at windmills; in other words, the content the protestors are concerned about simply doesn’t exist. On the other side of the debate, there are parents who are concerned that their children are being exposed to material that is developmentally inappropriate and might actually significantly harm them.

Granting that some of these books contain material that genuinely concerns parents, it simply doesn’t follow that the material to which they object really is bad for the children involved.

The fact that a person is a parent does not make that person an expert on what is best for developing minds and parents do not own the minds of their children.

For instance, one of the most commonly challenged books in the country is The Hate U Give, which is, in part, a book about a teenage girl’s response to the racially motivated killing of her friend by a police officer. Some want this book banned because they don’t want their children internalizing anti-police sentiment. However, reading this kind of a book might increase a child’s critical thinking skills when it comes to how they perceive authority, while also contributing to compassion for historically marginalized groups.

Consider also perhaps the most controversial books — books that have some sexual content.

These books may not be appropriate for children under a certain age, but reading stories in which teenagers are going through common teenage struggles and experiences has the potential to help young readers understand that their thoughts and experiences are totally normal and even healthy.

Parents who want this content banned might be wrong about what is best for their children. Some of these parents might instead be trying to control their children. In any case, the fact that some parents don’t want their children to have access to a particular book does not mean that all young people should be prohibited from reading the books in question. Some parents don’t believe in censorship and trust their children to be discerning and reflective when they read.

That said, the apparent two-sidedness of this debate may well be illusory. After all, there is a simple way to determine whether the library carries books with the kind of content that parents are concerned about — simply check the database. The librarians in question claim that the books about which these parents are complaining are not books that the library carries. What’s more, these “debates” – though sometimes well-intentioned – are more troubling than they may initially appear. More and more people across the country appear to be succumbing to the kind of conspiratorial thinking that tills fertile ground for fascism. These are trends that are reminiscent of moral panics over comic books in the ’50s and ’60s and video games in the ’80s and ’90s.

Perhaps the most pressing question confronting our culture today is not whether libraries should continue to carry pornographic or racist materials (since they don’t) but, instead, what we should do about the looming threat of fascism.

Philosopher Hannah Arendt wrote about her concern that fascist demagogues, who behave as if facts themselves are up for debate, destroy the social fabric of reason on which we all rely. This creates communities of “people for whom the distinction between fact and fiction…and the distinction between true and false…no longer exist.” Novelists have explored these themes countless times, and the restriction of reading material is a common feature of dystopian fiction. Readers see these themes explored in 1984, Fahrenheit 451, and The Handmaid’s Tale. In 1984, Orwell describes a “Ministry of Truth” that is responsible for changing history books so that they say all and only what the authoritarian regime wants them to say. In Fahrenheit 451, Bradbury describes an authoritarian regime that disallows reading altogether. In The Handmaid’s Tale, reading is permitted, but only by the powerful, and in this case the only powerful community members are men.

All of these dystopian tales emphasize the importance of language, writing, reading, and freedom of expression both for healthy societies and for healthy individuals.

Ironically, all three of these books are frequently found on banned or challenged books lists.

Rising aggression on the part of those who call for the banning of books has motivated some to respond in purposeful but peaceful ways. In Idaho, for instance, a small group of a couple of dozen concerned citizens – composed of both liberals and conservatives – met in a grove of apple trees to hold a read-in in support of public libraries and against censorship.

These approaches respond to violence with non-violence, an activist approach favored by Martin Luther King Jr. That said, King was also explicit about the fact that the movement he facilitated was a movement of non-violent direct action, which meant more than simply showing up and being peaceful. The strategies King employed involved disrupting society, but non-violently. The Montgomery Bus Boycott, for instance, involved refusing to contribute to the economy of the city by paying bus fare until it abandoned its policy of forcing Black riders to give up their seats to white passengers and to sit at the back of the bus. The non-violent protests in Birmingham in 1963 were intended to affect the profits of merchants in the area so that there would be palpable motivation to end segregation. Non-violent direct action was never intended to be “polite” in the sense of giving no one cause for frustration. In his “Letter from a Birmingham Jail,” King says:

My citing the creation of tension as part of the work of the nonviolent resister may sound rather shocking. But I must confess that I am not afraid of the word “tension.” I have earnestly opposed violent tension, but there is a type of constructive, nonviolent tension which is necessary for growth.

So, while there is certainly nothing wrong with reading a book under an apple tree, if people want to roll back the wave of censorship and anti-intellectualism – both trends that are part and parcel of fascism – action that is more than “polite” but less than violent may be warranted.

Sex Differences or Sexism? On the Long Wait for a Female President

photograph of oval office

In her recent New York Times piece, “Britain 3, America 0,” Gail Collins laments the failure of the United States to elect a female president. Meanwhile, traditional old Britain has recently acquired its third (Conservative) female prime minister, Liz Truss.

Collins’ frustration will strike many readers as obviously correct, perhaps even boringly correct. If, counterfactually, we lived in a society without sexism, then surely we would have had at least one female president by now. If that’s right, then sexist stereotyping plays some role in explaining the fact that we haven’t. But are there explanations other than sexism?

According to Debbie Walsh of the Center for American Women and Politics, the problem is “that women are seen as good at getting along with other people, but not necessarily at running things.” In other words, inaccurate sexist stereotypes explain the nation’s failure to elect a female president.

The trouble with this explanation is that psychological research partly confirms certain clichés about men and women.

While both sexes have near-identical average scores for some traits, e.g., openness/intellect (the ability and interest in attending to and processing complex stimuli), studies repeatedly find significant differences between the sexes for other traits. For instance, on average, women are more “agreeable” than men, meaning they have a higher tendency toward “cooperation, maintenance of social harmony, and consideration of the concerns of others.” But men are, on average, more assertive, which many – rightly or wrongly – consider a key leadership trait.

There are competing explanations for these sex differences. Some think they are explained by human biology. Others contend that they are cultural artifacts, ways of being that men and women learn from society as they grow up. Most psychologists, however, now think that a complex mix of nature and nurture explains these average differences between men and women. But whatever the cause(s), there is little dispute that these measurable differences exist. On this basis, some have concluded that our society is not sexist because “Women really are different from men!” And they think that this fact – and not sexism – explains things like why we’ve not had a female president.

But this conclusion would be premature. There’s still plenty of opportunity for sexism to be doing some explanatory work.

First, it might be that our assumptions about what makes a good leader are sexist. We might tend to overvalue the “masculine” traits and undervalue “feminine” traits. Perhaps women’s greater tendency to agreeableness (“cooperation, maintenance of social harmony, and consideration of others”) is precisely what we need in a leader in these divisive times, but we keep electing less suitable but more assertive “strong men.” If we systematically overvalue the traits more common in men and undervalue those more common in women, we would be putting the pool of female candidates at an unfair disadvantage. This is one way in which sexism can operate even if there are sex differences.

Second, and I think far more significantly, individual men and women are not the averages of their sex. An average is just an average, and nothing more. Within any group as large as half the population, there is obviously a huge amount of individual variation. There are many highly agreeable and unassertive men and many highly assertive and disagreeable women. Think about it — you probably know people who fall at all different points on this scale.

Even if, hypothetically, traits more common in men did make them more suitable for the role of president, then, given great individual variation, we should still expect some presidents to have been women, even if not half.

This brings me to another way that sexism can operate on top of sex differences. Imagine a voter who sincerely thinks that assertiveness is a key trait for a president. After all, a president must make difficult and important decisions each day, often under terrible pressure. Imagine this person could never vote for someone who they didn’t think was highly assertive. That seems like a reasonable view.

But sexism could still be at play. If the average woman is less assertive than the average man, then we might tend to overlook the leadership potential of highly assertive women because we assume, given their sex, that they are less assertive than they actually are.

Given that female politicians have, presumably, had to overcome certain challenges their male counterparts have not, you might even expect female politicians to be particularly assertive, perhaps even more than their male counterparts. Britain’s controversial first female Prime Minister, Margaret Thatcher, certainly lends some credence to that possibility.

To explain why we’ve not had a female president, then, we can’t simply appeal to either sexism or sex differences. The relationship between sexism, sex differences, and political outcomes is more nuanced. We don’t need to dispute the psychological evidence of sex differences in order to maintain that sexism is a real problem with damaging political consequences. It might be that our assumptions about the traits needed for leadership are sexist and biased. Or it might be that our awareness of group averages blinds us to individual differences, preventing us from fairly judging the merit of individual female political candidates. Yes, there is significant psychological evidence for sex differences in traits that are commonly regarded as important for leadership. But sexism could still be unjustly distorting political outcomes. As is so often the case, the partial explanations of political events and outcomes that we are provided are excellent at getting us riled up and reinforcing our loyalty to our political tribe, but they’re much worse at helping us to understand the society of which we are a part.

Monarchy and Moral Equality

photograph of Queen's Guard in formation at Buckingham Palace

In a recent column, Nicholas Kreuder argues that the very idea of monarchy is incompatible with the moral equality of persons. His argument is straightforward. He claims that to be compatible with moral equality, a hierarchy of esteem must meet two conditions. First, the person esteemed must be estimable — in other words, esteem for her must be earned, or at least deserved. Second, deferential conduct toward the esteemed person must not be coerced or otherwise involuntary. But the deference demanded by a monarch is neither warranted, nor voluntarily given: monarchs are esteemed only for their royal pedigree, and their subjects are expected to show esteem even though they are not, at least in the typical case, subjects by choice. Therefore, the hierarchy of esteem between monarch and subject is fundamentally incompatible with moral equality.

This argument is compelling, and as a confirmed republican, I confess bewilderment at the practice of paying a woman to live in fabulous wealth for a century so that she can christen the nation’s boats.

Nevertheless, for the sake of argument, I would like to critically examine Kreuder’s premises to see whether they really establish his sweeping conclusion.

The first question to consider is a simple one: what is a monarch? The argument against monarchy from moral equality appears to assume that monarchies are by definition hereditary, and that they are never elective. In fact, elective or non-hereditary monarchies are not unusual in human history. In Ancient Greece, the kings of Macedon and Epirus were elected by the army. Alexander Hamilton argued for an elective monarchy in a speech before the Constitutional Convention of 1787; he thought the American monarch should have life tenure and extensive powers.

In truth, authoritative sources seem confused about just what a monarchy is. For instance, the Encyclopedia Britannica defines “monarchy” as “a political system based upon the undivided sovereignty or rule of a single person.” Yet the accompanying article acknowledges that in constitutional monarchies, the monarch has “transfer[red] [her] authority to various societal groups . . . political authority is exercised by elected politicians.” That does not sound like undivided sovereignty to me.

My conclusion is that “monarch” is a label promiscuously affixed to wildly different kinds of regimes, leaving the concept monarch without much determinate content. A monarchy can be limited or absolute, elective or hereditary.

It’s difficult to argue that a concept with little determinate content is incompatible with moral equality. However, if we limit the argument to hereditary monarchies, then the argument against monarchy from moral equality appears to get back on track. If the monarch is not elected, then the deference she demands is not voluntary. And if her claim to esteem is inherited, then it is certainly not deserved.

Yet even when restricted to hereditary monarchies, the argument does not seem entirely plausible. The problem is that in some cases, the hierarchy of esteem between a particular hereditary monarch and her subjects seems voluntary. Consider the United Kingdom’s hereditary but constitutional monarchy. The citizens of that country appear to have widely divergent views about both their monarchs and their monarchy. Some people detest the newly-crowned King Charles III, yet have no qualms with the monarchical institution. Some liked Queen Elizabeth on a personal level but are staunch republicans. Moreover, Britons do not keep their opinions on this score a secret, and they are not generally thrown in jail for publicly criticizing the monarch in the harshest terms. (Although, having said this, reports of anti-royal protestors being arrested on bogus charges of breaching the peace give me pause.)

No one is forced to sing “God Save the Queen” who doesn’t wish to do so. In short, in the U.K., deference to the monarch may be encouraged, but it is certainly not required. A Briton can thrive in her society without ever showing the slightest deference to her monarch.

With respect to the hierarchy of esteem between the U.K.’s monarch and her subjects — as opposed to her constitutional functions or public prominence — the situation seems somewhat akin to the relationship between Catholic priests and the rest of society in the United States. Even non-Catholics regularly refer to priests as “father,” a gesture of deference that is less required than customary. (That said, I refrain from this practice if at all possible; it mildly affronts my democratic temperament. This did not go over particularly well at Notre Dame.)

It might also be doubted that no hereditary monarch deserves esteem. Many Britons seem to think that with her stoicism and quiet dignity, Queen Elizabeth provided stability over the course of a turbulent twentieth century. I take no stance on that proposition, but it certainly seems conceivable that a monarch could come to earn esteem through her exemplary conduct either before she ascends to the throne or while she serves as monarch.

Thus, the extent to which monarchy cuts against moral equality really depends on the conditions of the society in which a particular monarchical institution exists.

The concept of a hereditary monarchy might seem incompatible with moral equality at a very high level of abstraction, but some of its instantiations may be perfectly consonant with it.

This does, however, lead me to a more philosophical point. In the argument against monarchy from moral equality, the test for whether a hierarchy of esteem is morally legitimate requires it to meet both the conditions of deservedness and voluntariness. But it appears that voluntariness alone is sufficient to make such a hierarchy compatible with moral equality. If I routinely genuflect before my girlfriend because I believe her gorgeous auburn hair possesses mystical powers, that does not seem particularly demeaning to my dignity so long as my delusional belief cannot be said to undermine the voluntariness of my deferential act — even though the deference is wholly undeserved. Likewise, so long as a Briton is not forced to pay obeisance to King Charles III, her acts of deference seem to be compatible with her dignity even if the king doesn’t deserve them.

Indeed, voluntariness seems not only sufficient to legitimize a hierarchy of esteem, but also necessary. Martin Luther King, Jr. and Malcolm X are both, in my view, figures richly deserving of esteem. Yet if I were forced to regularly kiss their feet, that hierarchy of esteem would be an insult to my dignity as a moral equal.

Philosophers love abstractions, and I am no exception. Sometimes, however, what appears to be a strong argument at a high level of abstraction loses some of its luster once the messy reality of human existence is brought into view. Such is the case, I think, with the claim that monarchy is per se incompatible with human equality.

Should Monarchies Be Abolished?

photograph of British monarchy crown

On September 8th, 2022, Queen Elizabeth II of England died at the age of 96. She held the crown for 70 years, making her the longest-reigning monarch in the history of Britain. Her son, now King Charles III, will likely be crowned in mid-2023.

The death of the British monarch has drawn a number of reactions. Most public officials and organizations have expressed respect for the former monarch and sympathy towards her family. However, others have offered criticism of both the Queen and the monarchy itself. Multiple people have been arrested in the U.K. for anti-royal protests. Negative sentiment has been particularly strong in nations that were previously British colonies – many have taken to social media to critique the Crown’s role in colonialism: the Economic Freedom Fighters, a minority party in South Africa’s parliament, released a statement saying they will “not mourn the death of Elizabeth,” and Irish soccer fans chanted that “Lizzy’s in a box.” Professor Maya Jasanoff bridged the two positions, writing that, while Queen Elizabeth II was committed to her duties and ought to be mourned as a person, she “helped obscure a bloody history of decolonization whose proportions and legacies have yet to be adequately acknowledged.”

My goal in this article is to reflect on monarchies, and their role in contemporary societies. I will not focus on any specific monarch. So, my claims here will be compatible with “good” and “bad” monarchs. Further, I will not consider any particular nation’s monarchy. Rather, I want to focus on the idea of monarchy. Thus, my analysis does not rely on historical events. I argue that monarchies, even in concept, are incompatible with the moral tenets of democratic societies and ought to be abolished as a result.

Democratic societies accept as fundamentally true that all people are moral equals. It is this equality that grounds the right to equal participation in government.

Equal relations stand in contrast to hierarchical relationships. Hierarchies occur when one individual is considered “above” some other(s) in at least one respect. In Private Government, Elizabeth Anderson distinguishes between multiple varieties of hierarchy. Particularly relevant here are hierarchies of esteem. A hierarchy of esteem occurs when some individuals are required to show deference to (an) other(s). This deference may take various forms, such as referring to others through titles or engaging in gestures like bowing or prostration that show inferiority.

Generally, hierarchies of esteem are not automatically impermissible. One might opt into some. For instance, you might have to call your boss “Mrs. Last-Name,” athletes may have to use the title “coach” rather than a first name, etc. Yet, provided that one freely enters into these relationships, such hierarchies need not be troubling. Further, hierarchies of esteem may be part of some relationships that one does not voluntarily enter but are nonetheless morally justifiable – children, generally, are required to show some level of deference to their parents (provided that the parents are caring, have their child’s best interests in mind, etc.), for instance.

The problem with the monarchy is not that it establishes a hierarchy of esteem, but rather that it establishes a mandatory, unearned hierarchy between otherwise equal citizens.

To live in a country with a monarch is to have an individual person and family deemed your social superiors, a group to whom you are expected to show deference, despite your moral equality. This is not a relationship you choose, but rather, one that is thrust upon you. Further, the deference we are said to owe to monarchs, and their higher status, are not earned. Rather, these are things they are claimed to deserve simply by virtue of who their parents are, who in turn owe their elevated status to their lineage. Finally, beyond merely commanding deference, monarchs are born into a life of luxury; they live in castles, they travel the world meeting foreign dignitaries, and their deaths may grind a country to a halt as part of a period of mourning.

So, in sum, monarchies undermine the moral foundation of our democracies. We value democratic regimes (in part) because they recognize our equivalent moral standing. By picking out some, labeling them as the superiors in a hierarchy of deference due to nothing but their ancestry, monarchies are incompatible with the idea that all people are equal.

However, there are some obvious ways one might try to respond. One could object on economic grounds. There is room to argue that monarchies could potentially produce economic benefits. Royals may serve as a tourist attraction or, if internationally popular, might raise the profile and favorability of the nation, thus increasing the desirability of its products and culture. So perhaps monarchies are justified because they are on the whole beneficial.

The problem with this argument is that it compares the incommensurable. It responds to a moral concern by pointing out economic benefits.

My claim is not that monarchy is bad in every respect. Indeed, we can take it for granted that having a monarchy produces economic benefits. However, my claim is that it undermines the moral justification of democracy.

Without a larger argument, it does not follow that economic benefits are sufficient to outweigh moral concerns. This would be like arguing that we should legalize vote-selling due to its economic benefits – it seems to miss the moral reason why we structure public institutions the ways that we do.

Another objection may be grounded in culture. Perhaps monarchies are woven into the cultural fabric of the societies in which they exist; they are part of proud traditions that extend back hundreds or even thousands of years. To abolish a monarchy would be to erase part of a people’s culture.

While it’s true that monarchies are long traditions in many nations, this argument only gets one so far. A practice being part of a people’s culture does not make it immune to critique. Had the Roman practice of gladiatorial combat to the death for the sake of entertainment survived to this day, we would (hopefully) think it ought to be eliminated, despite thousands of years of cultural history.

When a practice violates our society’s foundational moral principles, it ought to be abolished no matter how attached to it we have become.

Finally, one might argue that abolition is unnecessary. Compared to their status throughout history, monarchies have fallen from grace in the 20th and 21st centuries. Of the nations with monarchies, few have a monarch who wields anything but symbolic power (although some exceptions are notable). This argument relies on a distinction between what we might call monarchs-as-sovereigns and monarchs-as-figureheads. Monarchs-as-sovereigns violate the fundamental tenets of democracy by denying citizens the right to participate in government, while monarchs-as-figureheads, wielding only symbolic power, do not, or so the argument goes.

The issue with this argument is that it underappreciates the full extent of what democracy demands. It does get things right by recognizing that the commitment to democracy arises from the belief that people deserve a say in a government that rules over them. However, it is not just that all citizens deserve some say, but rather that all citizens deserve an equal say. One person, one vote.

Part of the justification for democracy is that individuals ought to be able to shape their lives, and thus deserve a say in the institutions that affect us all.

Although individuals may vary in their knowledge or other capabilities, to give some greater say in our decision making is to give them disproportionate power to shape the lives of others. No one individual should automatically be someone to whom we all must defer. We might collectively agree to, say, regard someone as an expert in a particular matter relevant to the public good and thus defer to her. However, this only occurs after we collectively agree to it in a process where we all have equal say, either by voting directly for her, or voting for the person who appoints her. Unless we have a parity of power in this process, then we diminish the ability of some to shape their own lives.

On these grounds, perhaps a monarchy could be justified if the citizens of a nation voted the monarch into power. This would simply be another means of collective deferment. But since electorates are constantly changing, there would need to be regular votes on this to ensure the voters still want to defer to this monarch. Yet current monarchies, by elevating the monarch (and family) above others while leaving this outside the realm of collective decision-making, violate the moral justification of democracy – some are made superior by default in the hierarchy of esteem. The establishment of democracy and abolition of all monarchy are proverbial branches that stem from the same tree. Our recognition of human equality should lead us to reject monarchy in even innocuous, purely symbolic forms.

Can Human-Grown Organs De-Liver?

photograph of surgeons conducting procedure on operating table

Despite the majority of the U.K. shifting from an opt-in to an opt-out donation system, there is still a vast shortage of viable organs. According to NHS figures, there are currently over 7,000 people on the organ transplant waiting list, and just last year 420 died because a suitable organ was not available. And this is not just a U.K. problem. Similar shortages occur across the EU, the U.S., and Canada. In fact, the organ shortage problem is global, with most countries reporting a deficit between the number of donors and the number of hopeful recipients.

Writers at The Prindle Post have already explored and critiqued some solutions to this problem, including harvesting organs from the imprisoned, 3D printing the necessary body parts, xenotransplantation, and paying people to donate non-vital organs. Last month, I wrote a piece about OrganEx, a novel technology that might reverse posthumous cellular damage, thereby making more organs viable for transplant.

But hot on the heels of that innovation came news of an upcoming trial by biotech company LyGenesis, a trial which might have truly radical implications for the organ transplant landscape. In short, the company will try to grow new livers inside the bodies of people with end-stage liver disease.

This is wild in and of itself. But it gets better: twelve volunteers will be given increasingly potent doses of the treatment over the trial period, with the final study participants potentially growing not just one but five mini livers throughout their bodies.

While still highly speculative, the potential to grow livers, or even other organs, within the body of the hopeful transplantee could dramatically reduce global demand. People would no longer have to wait for a whole organ to be available. Instead, they could grow a new one. This would improve health outcomes, ease pressure on healthcare systems, and ultimately save lives.

The procedure involves taking healthy liver cells from an organ donor and injecting them into the lymph nodes of the sick recipient. As the nodes provide an excellent environment for cellular division and growth, the team at LyGenesis believe that the transplanted liver cells would start to divide and grow within the node, eventually replacing it.

Over time, the transplanted cells would develop into one or several miniature livers and start compensating for that person’s damaged original.

The team has already conducted animal trials over the last ten years, growing mini livers in mice, pigs, and dogs, and now believes it is time for human trials. Each trial participant will receive regular check-ups over the following year to ensure doctors pick up on any adverse side effects as soon as possible. In total, the study should take just over two years to complete. If the trial goes well and the results prove promising, LyGenesis plans to implant other cells and grow other types of organs.

But the donor cells have to come from somewhere – the team does not magic them into existence. The source of these cells will be, unsurprisingly, donated livers. So, organ donation will still be needed even if the trial proves to be 100% effective. However, the proposed technique could vastly increase the number of sick people a single donated liver could help. Currently, when a perfectly healthy liver is donated posthumously, it is split into two, and surgeons implant each half into a different sick person. As such, one liver can help up to two people.

However, the LyGenesis researchers believe that, because only a (relatively) limited number of cells is needed to start the organ growing process, they could get up to seventy-five treatments out of a single donated liver.

Arguably, the LyGenesis cell transplant technique should be the first port of call when processing donated organs, as helping more people rather than fewer is ethically required.

Now, the prospect of growing organs rather than harvesting them from altruistic donors has been around for several centuries in the dream of xenotransplantation – taking tissues and organs from animals and putting them in people. In fact, the first recorded attempt to use animal material in a human’s body was in the 17th century, when Jean Baptiste Denis transfused lamb’s blood into a patient with surprisingly little harm (i.e., the person did not die). Since then, harvesting animal tissues and organs for human transplantation has become more sophisticated, with scientists employing genetic modification techniques to improve the compatibility of organs and recipients.

However, xenotransplantation comes with a whole host of potentially intractable ethical issues. These include the potential dangers of zoonotic disease transmission (which caused David Bennett Sr.’s death), animal welfare concerns, and reflections upon our ever-increasing capacity to alter the natural world around us for our needs.

When compared to these ethical objections, LyGenesis’ human-grown liver technique seems justifiable on human-interest grounds and on a broader range of bioethical considerations.

Not only does it seemingly have the potential to maximize the net benefit each donated liver can provide, but it also helps avoid many of the issues that come part-and-parcel with growing human organs within non-human animals.

For example, there is no worry about cross-species disease transmission, as all the genetic material involved is human. Similarly, beyond the animal’s use in the research, there is no worry about organ farming producing suffering on a scale comparable to the industrial farming complex. Questions regarding our ability to alter the world around us and, in essence, play God remain. However, such criticisms can be leveled against practically every medical procedure used today and, as such, fail to be specific to the topic of organ farming. Indeed, complete devotion to such a stance would seemingly paralyze an individual into complete inaction, as everything we do can be interpreted as playing God in one form or another.

Ultimately, while still very experimental, LyGenesis might be on the right track to tackling the organ donation shortage, at least in liver disease cases. Time will tell whether growing organs within one’s body is the way forward. However, compared to the potential issues xenotransplantation raises, human-grown livers certainly seem to have a distinct ethical advantage.

When Do We Begin to Exist?

photograph of baby looking at self in mirror

The question of when we begin to exist is of central importance to abortion ethics. When we begin to exist depends on what we are. On one popular view, animalism, we are organisms. A human organism begins to exist at conception, or shortly afterwards. So animalism entails that we come to exist very early in pregnancy. This means that abortion kills a being like us – a being who, in most cases, will grow to develop the capacities we have if allowed to live. Still, it may not immediately follow that abortion is wrong. Perhaps we do not yet possess full moral status when we begin to exist, or perhaps, even if we do, abortion might still be justified by the pregnant person’s right to bodily autonomy. But it would be an important consideration against abortion.

On another view, the embodied mind view, we are our minds. We are embodied within organisms, but we are not ourselves the organisms. (Perhaps your mind is an immaterial soul, or perhaps it is an object composed of the parts of your brain responsible for your mental life.) On this view, we will not begin to exist until our minds begin to exist. Presumably, this is when our brains develop the capacity for consciousness and a mental life. Scientists are not totally sure when this is: most seem to think it’s during the mid-to-late second trimester, but others suggest earlier points, such as twelve weeks into pregnancy. In any event, even basic brain activity does not exist until six to ten weeks into pregnancy, so it couldn’t be before that. On the embodied mind view, rather than killing one of us, early abortion would prevent one of us from coming into existence. In an important way, it would be analogous to contraception. It may not immediately follow that abortion is okay. Perhaps all human life has some sort of intrinsic value, including early human organisms which don’t yet host a mind. But it would follow that early abortion could not be murder, and could not otherwise wrong the fetus or violate its rights.

Many people assume animalism is true without thinking much about it. But there are some interesting philosophical arguments against animalism. Here is one from the philosopher Jeff McMahan, who developed the embodied mind view and is its main defender.

Suppose your brain is healthy but your body is dying, while your twin is in a permanent vegetative state but has a healthy body. Doctors decide they will save you in the following way. You lie down on Operating Table A, and your twin is laid on Operating Table B. They remove the part of your brain responsible for your mental life. We can suppose this is your cerebrum, the top part of your brain. They leave your lower brain intact. The lower brain is what regulates your vital functions, so your old body stays alive. They then transplant your cerebrum into your twin’s body. Your mind, along with your first-person perspective, memories, personality, etc., goes along with your cerebrum, and someone with all these things wakes up in your twin’s body.

Where are you at the end of this operation? Do you go along with your mind, and are you now inhabiting your twin’s body over on Table B? Or did you instead remain as the irreversibly comatose being on Table A, with your mind leaving you behind?

Almost everyone is inclined to think you went with your mind over to Table B. But this is incompatible with animalism, since your organism did not go with your mind over to Table B. If your organism had gone with your cerebrum, then while your cerebrum was being transplanted from one body to another, the entire organism would have consisted of nothing but your detached cerebrum. But a detached cerebrum is not an organism: it does not meet the scientific criteria for being an organism (maintaining homeostasis, etc.) any more than a severed hand does. And further, keep in mind that there is still a living, comatose human organism on Table A. Presumably you did not create a new human organism when you removed the cerebrum, so that must be the same human organism that laid down on Table A to begin with, i.e., your organism. But if your organism is still on Table A, it cannot have gone with your cerebrum over to Table B.

So, to sum up: it seems possible for you and your organism to part ways. But you cannot part ways from yourself. So you must not be your organism. Furthermore, in such cases, you seem always to go with your mind. This suggests that, rather than your organism, you are your mind.

Here is another argument for the same conclusion. This is also inspired by one from McMahan, but modified a bit by me. Suppose we encountered members of a naturally two-headed alien species, like the podracing announcer in Star Wars. Here, it seems right to say that each two-headed being is a single organism. But if each head contains its own functioning brain with its own mind, it seems there are two different people present. (We can imagine the minds in the two different heads getting into an argument.) But if there are two people and only one organism, these alien people cannot be their organisms. Since we count them by counting minds, it seems they must be their minds. And if they would be their minds, it seems most natural and simple to suppose that we are our minds, too.

While I find these arguments very powerful, not all philosophers are convinced by them, and animalism does have able defenders, such as Eric Olson. And further, as I noted above, finding the right theory of personal identity would not immediately answer all the ethical questions surrounding abortion. So I do not expect this to completely resolve the abortion debate. But the question of when we begin to exist is very important to the abortion debate, and deserves far more attention than it usually gets.

Reasons and Elephants (and Persons)

photograph of elephant at zoo painting

In ordinary language, the term ‘person’ typically refers to an individual human being (and, sometimes, their physical body): signs reading “one person per table” or “$10 per person,” comments about preferences like “I’m not a cat person” or location descriptions such as “I always keep it on my person,” and references to strangers or people with unclear identities like “I spoke to a person in customer support yesterday” are all mundane examples. But technical uses of the term ‘person’ abound: linguists use it to describe the intended audience of a speech-act; Christian theologians have developed the term in complicated directions to buttress the concept of a trinitarian deity; the law (roughly) defines it as something that possesses legal standing to bring complaints to the court, a category which includes individuals, but also corporations, churches, colleges, and other legally-protected entities.

It does not include elephants.

Over the last few years, The Prindle Post has periodically discussed the legal case of Happy, a 51-year-old Asian elephant who has lived in the Bronx Zoo since 1977. In 2018, the Nonhuman Rights Project filed a case on Happy’s behalf claiming that she is a legal person who has a fundamental right to liberty which is violated by her solitary confinement in her zoo pen; after several judgments and appeals, the New York Court of Appeals ruled in June that Happy is not a person in the relevant sense and, therefore, cannot request the court system to protect her. Writing for the majority, Chief Judge Janet DiFiore explained that the legal principle of habeas corpus — which prevents someone from being imprisoned indefinitely without criminal charges — is irrelevant to Happy because “Habeas corpus is a procedural vehicle intended to secure the liberty rights of human beings who are unlawfully restrained, not nonhuman animals.”

In short, the court’s decision is squarely and explicitly speciesist: it treats Happy differently than other creatures because of her species.

The five judges who ruled against her carefully avoided making a claim about whether or not Happy actually has a right to liberty; instead, they concluded that the structure of the law simply cannot, in principle, apply to Happy because she is not human. By their own reasoning, Happy might well possess a right to liberty that is being violated by the Bronx Zoo, but New York law is not designed to protect such a right (if, they would say, it exists).

This should seem strange. Normally, people who talk about “rights” tend to treat them as a relatively simple category: if Susan has a right to “not be abused” and Calvin has a right to “not be abused,” then we would typically say that both of them possess the same right to the same thing. If we were to learn that Susan is a hamster, it seems plainly immoral to just suddenly accept that Calvin could abuse her without acting improperly. Presumably, if you think that Calvin should not abuse hamsters (or cats, dogs, red pandas, or whatever your favorite animal happens to be), then you might well think that Calvin has a duty to not abuse them (which also means that they have a right to not be abused). There is no need on this model to differentiate between “the human right to ‘not be abused’” and “the nonhuman right to ‘not be abused’” — but this alleged distinction is roughly the only reason why Happy’s right to liberty was ignored by the court system. According to the judges, habeas corpus is only about “the human right to liberty” alone.

This means that the five judges who ruled against Happy on these procedural grounds were effectively saying that “Happy must lose the case because creatures like her cannot use habeas corpus to win cases.”

But this seems like an example of a rudimentary logical fallacy: petitio principii, better known as begging the question or an argument that is circular. If I try to argue that “abortion is murder” because “all abortions intentionally kill an innocent person,” then I’m assuming (among other things) that a fetus is an innocent person — but this is what my argument was supposed to prove in the first place! For my argument to not be circular, I must first give some reason to think that fetuses are people, at which point I could say that an abortion kills a person (I’ll leave the ‘intentionally’ and ‘innocent’ claims as an exercise to the reader).

In a similar way, the court was asked by Happy’s lawyers to determine whether her rights were violated; for the courts to instead say “Happy’s rights were not violated because she is not human” assumes that “only humans have rights that can be violated” — but this is precisely what the court was asked to consider from the start!

Sadly, it seems like little else can be done for Happy: there is no further recourse available in New York’s court system. But two small silver linings are left on this cloud: firstly, the fact that the courts considered Happy’s case at all is surprising — she is only the third nonhuman animal to be given a legal hearing in this fashion (two chimpanzees named Tommy and Kiko were the first and second in a similarly unsuccessful case in 2018). But, even more encouraging is the fact that Happy’s case was not unanimous: two of the seven judges voted in her favor. According to Judge Rowan D. Wilson, the legal system should

recognize Happy’s right to petition for her liberty not just because she is a wild animal who is not meant to be caged and displayed, but because the rights we confer on others define who we are as a society.

It remains to be seen how long it might take for society to recognize the rights of nonhumans (we’re still struggling to legally recognize many human rights); until we do, we should expect the court system to continue spinning in logical circles.

Fantastic Beasts and How to Categorize Them

photograph of Niffler statue for movie display

Fantastic Beasts and Where to Find Them is both a film franchise and a book. But the book doesn’t have a narrative; it is formatted like a textbook assigned in the Care of Magical Creatures course at Hogwarts. It’s ‘written’ by Newt Scamander and comes with scribbles from Harry and Ron commenting on its contents.

Before the creature entries begin there is a multipart introduction. One part, entitled “What is a Beast?” seeks to articulate a distinction between creatures who are ‘beasts’ and those that are ‘beings.’ The text notes that a being is “a creature worthy of legal rights and a voice in the governance of magical world.” But how do we distinguish between beasts and beings? This is one of the main questions central to the topic of moral status.

So, the intro asks two questions: who is worthy and how do we know? The first question seeks to determine who is in the moral community and thus deserving of rights and a voice. This is a question concerning whether an entity has the property of ‘moral standing’ or ‘moral considerability.’ The second question seeks to identify what properties an entity must have to be a member of the moral community. In other words, how does one ground a claim that a particular entity is morally considerable? We can call this a question about the grounds of moral considerability. It is the main question of the short introduction to Fantastic Beasts:

What are the properties that a creature has to have in order to be in the category ’beast’ (outside the moral community) or ‘being’ (inside the moral community)?

Attempts to resolve a question of moral considerability confront a particular problem. Call it the Goldilocks Problem. Goldilocks wants porridge that is just right, neither too hot nor too cold. We want definitions of the moral community to be just right and avoid leaving out entities that should be in (under-inclusion) and avoid including entities that should be left out (over-inclusion). When it comes to porridge it is hard to imagine one bowl being both too hot and too cold at the same time. But in the case of definitions of the grounds of moral considerability, this happens often. We can see this in the attempts to define ‘being’ in the text of Fantastic Beasts.

Fantastic Beasts looks at three definitions of the grounds of being a ‘being.’ According to the text, “Burdock Muldoon, Chief of the Wizard Council in the fourteenth century, decreed that any member of the magical community that walked on two legs would henceforth be granted the status of ‘being,’ all others to remain ‘beasts.’” This resulted in a clear case of over-inclusion. Diriclaws, Augureys, Pixies and other creatures were included in the moral community of beings, but should not have been. The text states that “the mere possession of two legs was no guarantee that a magical creature could or would take an interest in the affairs of wizard government.”

What really mattered was not the physical characteristic of being bipedal but the psychological characteristic of having interests. By focusing on the wrong property this definition accidentally included entities that did not belong.

This recalls the famous anecdote in which Plato offered the definition of a human as a featherless biped, only to have Diogenes show up the next day with a plucked chicken, declaring “Behold! A man.”

At the same time, however, this definition is under-inclusive. Centaurs are entities that could take an interest in the affairs of wizards, but they have four legs and thus are left out. Merpeople also could take an interest in the affairs of wizards, but have no legs and thus are left out. Clearly, this definition will not do.

And it is not surprising that the definition fails. Using a physical characteristic to determine whether an entity will have the right psychological characteristics is not likely to work.
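
To make this dual failure concrete, here is a toy sketch (in Python, with invented creature data standing in for the text's examples) of how a definition keyed to a physical proxy can be over- and under-inclusive at once:

```python
# Toy illustration only: a "beast vs. being" test that, like Muldoon's decree,
# keys on a physical property (leg count) instead of the psychological property
# the text says actually matters (being able to take an interest in governance).
# All creature data below is invented for the example.

creatures = [
    # (name,        legs, can_take_interest_in_governance)
    ("Pixie",        2,   False),
    ("Augurey",      2,   False),
    ("Centaur",      4,   True),
    ("Merperson",    0,   True),
    ("Wizard",       2,   True),
]

def muldoon_is_being(name, legs, has_interests):
    """Muldoon's fourteenth-century test: two legs make a 'being'."""
    return legs == 2

def ideal_is_being(name, legs, has_interests):
    """The property that actually matters on the text's account."""
    return has_interests

over_included  = [c[0] for c in creatures if muldoon_is_being(*c) and not ideal_is_being(*c)]
under_included = [c[0] for c in creatures if ideal_is_being(*c) and not muldoon_is_being(*c)]

print("Over-included (counted as beings, shouldn't be):", over_included)
print("Under-included (excluded, should be beings):", under_included)
```

Run as written, the sketch flags the pixies and Augureys as wrongly included and the centaurs and merpeople as wrongly excluded, the same Goldilocks failure described above.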

So what is a wizard to do but try to find a property more closely linked to the relevant psychological characteristic? Interests — for example, wants and needs — are often expressed linguistically: “I want chocolate chip cookies”; “I need more vegetables.” This apparently led Madame Elfrida Clagg to define a being as “those who could speak with the human tongue.” But, again, the definition is both over- and under-inclusive. Trolls could be taught to say, but not understand, a few human sentences, and so were included in the community when they should have been excluded. And once again the merpeople, who could only speak Mermish, a non-human language, were left out when they should have been included.

In our own world, the focus on language and other activities as proxies for cognitive traits has been used to discuss the moral status of animals (also, here). Attempts to exclude animals from the moral community did, in fact, use speech-use and tool-use as reasons for exclusion. Descartes famously claimed in part V of the Discourse on Method that animals did not use language but were mere automatons. But apes can use sign language, and crows, elephants, otters, and other animals can use tools. So, for those who want to include only humans in the category of ‘being,’ these activity-based definitions turn out to be over-inclusive. At the same time, given the incapacity of newborn humans to use language or tools, they would also leave out some humans and be under-inclusive. So, using a non-psychological property (an activity) to identify a psychological property is unsurprisingly problematic.

Apparently, the wizarding world got the memo regarding the problem of these definitions by the 19th century. In 1811, Minister of Magic Grogan Stump defined a being as “any creature that has sufficient intelligence to understand the laws of the magical community and to bear part of the responsibility in shaping those laws.” The philosophical term for this set of capabilities is autonomy, at least in the way Immanuel Kant defined the term.

One way to express Kant’s view is that the morally considerable beings, the beings that could be called ‘persons,’ are those that have the capacity to rationally identify their interests and the will to execute plans to see those interests realized.

Persons are also capable of seeing that others have this capacity and thus rationally adopt rules that limit what we can do to other persons. These are the moral rules that guide our interactions, ground our rights, legal and moral, and give us a voice in self- and communal governance. In other words, the term ‘being’ in Fantastic Beasts is just the text’s term for ‘moral person,’ and the relevant psychological characteristic of persons is autonomy as Kant defined it.

There is something questionable about this Kantian view of being-hood or person-hood. On this view, persons need sophisticated cognitive abilities to count as persons. Any entity that lacks the cognitive abilities needed for moral judgment is a non-person and thus wholly outside the moral community. In other words, non-persons are things, have only instrumental value, and can be equated with tools: you can own them and dispose of them without morally harming them. Yet this definition also excludes human infants and humans with diminished cognitive abilities, and we do not think of them as outside the moral community.

Surely these implications for humans are unacceptable. They would probably be unacceptable to the fictional Newt Scamander as well as to people who fight for animal rights. But the Kantian view is binary: you are a person/being or a beast/thing. Those who find such a stark choice unappealing can and do recognize another category between persons and things. This would be something that has interests, but not an interest in having a voice in governance. These entities are often vulnerable to the damaging impacts of persons’ behavior and have an interest in not suffering those impacts, even if they cannot directly communicate it.

So, we need a new set of terms to describe the new possible categories of moral considerability. Instead of just the categories being/person and beast/thing, we can discuss the categories of moral agent, moral patient, and thing.

A moral agent is an entity that meets the Kantian definition of a person. It is an entity who is in the moral community and also shapes it. A thing is something that does not have interests and is thus outside the moral community. But a moral patient is an entity that has interests, specifically interests against harm and for beneficence, that should be morally protected. Thus, moral patients are members of the moral community, just not governing members. So, Centaurs and Merpeople and Muggles can all be considered moral agents and thus can, if they so desire, contribute to the governance of the magical community. But even if they don’t want to participate in governance, the magical community should still recognize them as moral patients, as beings who can be affected by what the community does and whose interests should therefore be included in discussions of governance. The giants, trolls, werewolves in werewolf form, and pixies should at least fall into this category of patient as well. In the human world, infants, young children, and those with cognitive impairments would also fall into this category.
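
For readers who like things schematic, this three-way scheme can be put as a simple decision rule. The Python sketch below is my own toy illustration; the hard philosophical work, of course, lies in deciding who actually has interests or autonomy, which the code simply takes as given.

```python
# Toy sketch of the agent / patient / thing scheme proposed above.
# "autonomous" and "has_interests" are stand-in attributes invented for
# illustration; determining who has them is the substantive question.
from enum import Enum

class MoralStatus(Enum):
    AGENT = "moral agent"      # has interests and Kantian autonomy
    PATIENT = "moral patient"  # has interests, but not autonomy
    THING = "thing"            # no interests at all

def classify(autonomous: bool, has_interests: bool) -> MoralStatus:
    if has_interests and autonomous:
        return MoralStatus.AGENT
    if has_interests:
        return MoralStatus.PATIENT
    return MoralStatus.THING

print(classify(autonomous=True,  has_interests=True))   # e.g., centaurs, wizards
print(classify(autonomous=False, has_interests=True))   # e.g., trolls, human infants
print(classify(autonomous=False, has_interests=False))  # e.g., a broomstick
```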

To sum up, then, the text of Fantastic Beasts presents a broadly Kantian view of the grounds of moral status, one that can be improved upon by recognizing the category of moral patients. Furthermore, Fantastic Beasts clearly supports psychological accounts of the grounds of moral status over physical accounts. In other words, what matters to many questions of identity and morality are psychological properties, not physical properties or behavioral capacities. This is consistent with a theme of the Harry Potter novels, where the main villains focus on the physical characteristic of whether an entity has the right blood-status to be part of the wizarding community. Put bluntly, only a villain would focus solely on physical characteristics as a source of moral value.

Were Parts of Your Mind Made in a Factory?

photograph of women using smartphone and wearing an Apple watch

You, dear reader, are a wonderfully unique thing.

Humor me for a moment, and think of your mother. Now, think of your most significant achievement, a long-unfulfilled desire, your favorite movie, and something you are ashamed of.

If I were to ask every other intelligent being that will ever exist to think of these and other such things, not a single one would think of all the same things you did. You possess a uniqueness that sets you apart. And your uniqueness – your particular experiences, relationships, projects, predilections, desires – have accumulated over time to give your life its distinctive, ongoing character. They configure your particular perspective on the world. They make you who you are.

One of the great obscenities of human life is that this personal uniqueness is not yours to keep. There will come a time when you will be unable to perform my exercise. The details of your life will cease to configure a unified perspective that can be called yours. For we are organisms that decay and die.

In particular, the organ of the mind, the brain, deteriorates, one way or another. The lucky among us will hold on until we are annihilated. But, if we don’t die prematurely, half of us, perhaps more, will be gradually dispossessed before that.

We have a name for this dispossession. Dementia is that condition characterized by the deterioration of cognitive functions relating to memory, reasoning, and planning. It is the main cause of disability in old age. New medical treatments, the discovery of modifiable risk factors, and greater understanding of the disorder and its causes may allow some of us to hold on longer than would otherwise be possible. But so long as we are fleshy things, our minds are vulnerable.

*****

The idea that our minds are made of such delicate stuff as brain matter is odious.

Many people simply refuse to believe the idea. Descartes could not be moved by his formidable reason (or his formidable critics) to relinquish the idea that the mind is a non-physical substance. We are in no position to laugh at his intransigence. The conviction that a person’s brain and a person’s mind are separate entities survived disenchantment and neuroscience. It has the enviable durability we can only aspire to.

Many other people believe the idea but desperately wish it weren’t so. We fantasize incessantly about leaving our squishy bodies behind and transferring our minds to a more resilient medium. How could we not? Even the most undignified thing in the virtual world (which, of course, is increasingly our world) has enviable advantages over us. It’s unrottable. It’s copyable. If we could only step into that world, we could become like gods. But we are stuck. The technology doesn’t exist.

And yet, although we can’t escape our squishy bodies, something curious is happening.

Some people whose brains have lost significant functioning as a result of neurodegenerative disorders are able to do things, all on their own, that go well beyond what their brain state suggests they are capable of, things that would have been infeasible for someone with the same condition a few decades ago.

Edith has mild dementia but arrives at appointments, returns phone calls, and pays bills on time; Henry has moderate dementia but can recall the names and likenesses of his family members; Maya has severe dementia but is able to visualize her grandchildren’s faces and contact them when she wants to. These capacities are not fluky or localized. Edith shows up to her appointments purposefully and reliably; Henry doesn’t have to be at home with his leatherbound photo album to recall his family.

The capacities I’m speaking of are not the result of new medical treatments. They are achieved through ordinary information and communication technologies like smartphones, smartwatches, and smart speakers. Edith uses Google Maps and a calendar app with dynamic notifications to encode and utilize the information needed to effectively navigate day-to-day life; Henry uses a special app designed for people with memory problems to catalog details of his loved ones; Maya possesses a simple phone with pictures of her grandchildren that she can press to call them. These technologies are reliable and available to them virtually all the time, strapped to a wrist or snug in a pocket.

Each person has regained something lost to dementia not by leaving behind their squishy body and its attendant vulnerabilities but by transferring something crucial, which was once based in the brain, to a more resilient medium. They haven’t uploaded their minds. But they’ve done something that produces some of the same effects.

*****

What is your mind made of?

This question is ambiguous. Suppose I ask what your car is made of. You might answer: metal, rubber, glass (etc.). Or you might answer: engine, tires, windows (etc.). Both answers are accurate. They differ because they presuppose different descriptive frameworks. The former answer describes your car’s makeup in terms of its underlying materials; the latter in terms of the components that contribute to the car’s functioning.

Your mind is in this way like your car. We can describe your mind’s makeup at a lower level, in terms of underlying matter (squishy stuff, i.e., brain matter), or at a higher level, in terms of functional components such as mental states (like beliefs, desires, and hopes) and mental processes (like perception, deliberation, and reflection).

Consider beliefs. Just as the engine is that part of your car that makes it go, so your beliefs are, very roughly, those parts of your mind that represent what the world is like and enable you to think about and navigate it effectively.

Earlier, you thought about your mother and so forth by accessing beliefs in your brain. Now, imagine that due to dementia your brain can’t encode such information anymore. Fortunately, you have some technology, say, a smartphone with a special app tailored to your needs, that encodes all sorts of relevant biographical information for you, which you can access whenever you need to. In this scenario, your phone, rather than your brain, contains the information you access to think about your mother and so forth. Your phone plays roughly the same role as certain brain parts do in real life. It seems to have become a functional component, or in other words an integrated part, of your mind. True, it’s outside of your skin. It’s not made of squishy stuff. But it’s doing the same basic thing that the squishy stuff usually does. And that’s what makes it part of your mind.
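
One way to picture the functional point is with a small sketch of my own (an illustration, not anything drawn from the extended-mind literature): what makes something a belief store is the role it plays for the thinker, not the material that plays it.

```python
# Toy functionalist sketch: the "thinking" routine doesn't care which medium
# does the storing, only that something plays the storage-and-recall role.
from abc import ABC, abstractmethod

class BeliefStore(ABC):
    """Anything that encodes biographical information and serves it up on demand."""
    @abstractmethod
    def recall(self, topic: str) -> str: ...

class BiologicalMemory(BeliefStore):
    def __init__(self):
        self._engrams = {"mother": "her face, her voice, a remembered kitchen"}
    def recall(self, topic):
        return self._engrams.get(topic, "nothing comes to mind")

class PhoneApp(BeliefStore):
    def __init__(self):
        self._records = {"mother": "photo, name, and shared memories logged in the app"}
    def recall(self, topic):
        return self._records.get(topic, "no entry found")

def think_about(store: BeliefStore, topic: str) -> str:
    # The same cognitive routine, regardless of substrate.
    return store.recall(topic)

print(think_about(BiologicalMemory(), "mother"))
print(think_about(PhoneApp(), "mother"))
```

In the sketch, the thinking routine is indifferent to whether the biological memory or the phone app supplies the information, which is the sense in which the phone can come to function as part of the mind.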

Think of it this way. If you take the engine out of your ‘67 Camaro and strap a functional electric motor to the roof, you’ve got something weird. But you don’t have a motorless car. True, the motor is outside of your car. But it’s doing basically the same things that an engine under the hood would do (we’re assuming it’s hooked up correctly). And that’s what makes it the car’s motor.

The idea that parts of your mind might be made up of things located outside of your skin is called the extended mind thesis. As the philosophers who formulated it point out, the thesis suggests that when people like Edith, Henry, and Maya utilize external technology to make up for deficiencies in endogenous cognitive functioning, they thereby incorporate that technology (or processes involving that technology) into themselves. The technology literally becomes part of them by reliably playing a role in their cognition.

It’s not quite as dramatic as our fantasies. But it’s something, which, if looked at in the right light, appears extraordinary. These people’s minds are made, in part, of technology.

*****

The extended mind thesis would seem to have some rather profound ethical implications. Suppose you steal Henry’s phone, which contains biographical data that isn’t backed up anywhere else. What have you done? Well, you haven’t simply stolen something expensive from Henry. You’ve deprived him of part of his mind, much as if you had excised part of his brain. If you look through his phone, you are looking through his mind. You’ve done something qualitatively different from stealing some other possession, like a fancy hat.

Now, the extended mind thesis is controversial for various reasons. You might reasonably be skeptical of the claim that the phone is literally part of Henry’s mind. But it’s not obvious this matters from an ethical point of view. What’s most important is that the phone is on some level functioning as if it’s part of his mind.

This is especially clear in extreme cases, like the imaginary case where many of your own important biographical details are encoded into your phone. If your grip on who you are, your access to your past and your uniqueness, is significantly mediated by a piece of technology, then that technology is as integral to your mind and identity as many parts of your brain are. And this should be reflected in our judgments about what other people can do to that technology without your permission. It’s more sacrosanct than mere property. Perhaps it should be protected by bodily autonomy rights.

*****

I know a lot of phone numbers. But if you ask me while I’m swimming what they are, I won’t be able to tell you immediately. That’s because they’re stored in my phone, not my brain.

This highlights something you might have been thinking all along. It’s not only people with dementia who offload information and cognitive tasks to their phones. People with impairments might do it more extensively (biographical details rather than just phone numbers, calendar appointments, and recipes). They might have more trouble adjusting if they suddenly couldn’t do it.

Nevertheless, we all extend our minds into these little gadgets we carry around with us. We’re all made up, in part, of silicon and metal and plastic. Of stuff made in a factory.

This suggests something pretty important. The rules about what other people can do to our phones (and other gadgets) without our permission should probably be pretty strict, far stricter than rules governing most other stuff. One might advocate in favor of something like the following (admittedly rough and exception-riddled) principle: if it’s wrong to do such-and-such to someone’s brain, then it’s prima facie wrong to do such-and-such to their phone.

I’ll end with a suggestive example.

Surely we can all agree that it would be wrong for the state to use data from a mind-reading machine designed to scan the brains of females in order to figure out when they believe their last period happened. That’s too invasive; it violates bodily autonomy. Well, our rough principle would seem to suggest that it’s prima facie wrong to use data from a machine designed to scan someone’s phone to get the same information. The fact that the phone happens to be outside the person’s skin is, well, immaterial.

What Should Disabled Representation Look Like?

photograph of steps leading to office building

Over the course of the last two years, the COVID-19 pandemic has left millions infected, with long-haul symptoms of COVID permanently impacting the health of up to 23 million Americans. These long-haul symptoms are expected to have significant consequences for public health as a whole as more and more citizens become disabled. This will likely have significant impacts on the workforce — after all, it is much more difficult to engage in employment when workplace communities tend to be relatively inaccessible.

In light of this problem, we should ask ourselves the following question:

Should we prioritize disabled representation and accommodation in the corporate and political workforce, or should we focus on making local communities more accessible for disabled residents?

The answers to this question will determine the systematic way we go about supporting those with disabilities as well as how, and to what degree, disabled people are integrated into abled societies.

The burdens of ableism — the intentional or unintentional discrimination against, or lack of accommodation for, people with non-normative bodies — often fall on individuals whose conditions prevent them from meeting preconceived notions of normalcy, intelligence, and productivity. For example, those with long COVID might find themselves unable to work and with little access to financial and social support.

Conversely, accessibility represents the reversal of these burdens, both physically and mentally, specifically to the benefit of the disabled individual, rather than the benefit of a corporation or political organization.

Adding more disabled people to a work team to meet diversity and inclusion standards is not the same as accessibility, especially if nothing about the work environment is adjusted for that employee.

On average, disabled individuals earn roughly two-thirds the pay of their able-bodied counterparts in nearly every profession, assuming they can do their job at all under their working conditions. Pushing for better pay would be a good step towards combating ableism, but, unfortunately, the federal minimum wage has not increased since 2009. On top of this, the average annual cost of healthcare for a person with a disability is significantly higher ($13,492) than that for a person without ($2,835). Higher wages alone are not enough to overcome this gap.

It is our norm, societally, to push the economic burden of disability onto the disabled, all while reinventing the accessibility wheel, often just to make able-bodied citizens feel like they have done a good thing. As a result, we have inventions such as $33,000 stair-climbing wheelchairs being pushed — inventions that are rarely affordable for the working disabled citizen, let alone someone who cannot work — in instances where we could have just built a ramp.

In order for tangible, sustainable progress to be made and for the requirements of justice to be met, we must begin with consistent, local changes to accessibility.

It can be powerful to see such representation in political and business environments, and it’s vital to provide disabled individuals with resources for healthcare, housing, and other basic needs. But change is difficult at the large, systemic level. People often fall through the cracks of bureaucratic guidelines. Given this, small-scale local changes to accessibility might be a better target for achieving change for the disabled community on a national scale.

Of course, whatever changes are made should be done in conversation with disabled members of the community, who will best understand their own experiences and needs. People with disabilities need to be included in the conversation, not made out as some kind of problem for abled people to solve.

This solution aligns morally with Rawls’ theory of justice as fairness, which emphasizes justice for all members of society, regardless of gender, race, ability level, or any other significant difference. The theory articulates this through two principles. The first holds that everyone has “the same indefeasible claim to a fully adequate scheme of equal basic liberties.” This principle takes precedence over the second, which states that “social and economic inequalities… are to be attached to offices and positions open to all… to the greatest benefit of the least-advantaged.”

By Rawls’ standards, because of the order of precedence, we should prioritize ensuring disabled citizens’ basic liberties before securing their opportunities for positions of economic and social power.

But wouldn’t access to these positions of power provide a more practical path for guaranteeing basic liberties for all disabled members of society? Shouldn’t the knowledge and representation that disabled individuals bring lead us towards making better policy decisions? According to Enzo Rossi and Olúfémi O. Táíwò in their article on woke capitalism, the main problem with an emphasis on diverse representation is that, while diversification of the upper class is likely under capitalism, the majority of oppressive systems for lower classes are likely to stay the same. In instances like this, where the system has been built against the wishes of such a large minority of people for so long, it may be easier to effect change by working from the bottom up, bringing neighbors together to make their communities more accessible for the people who live there.

Oftentimes, disabled people simply want to indulge in the same small-scale pleasures that their nondisabled counterparts do. When I talk to other disabled individuals about their desires, many are as simple as the things able-bodied people take for granted every day: cooking in their own apartment, navigating public spaces with ease, or even just being able to go to the bank or grocery store. These things become unaffordable luxuries for disabled people in inaccessible areas.

In my own experience with certain disabilities, particularly in my worst flare-ups that necessitated the use of a wheelchair, I just wanted to be able to do very simple things again. Getting to class comfortably, keeping up with peers, or getting to places independently became very hard to achieve, or simply impossible.

Financial independence and some kind of say in societal decisions would certainly have been meaningful and significant, but I really just needed the basics before I could worry about career advancement or systemic change.

Accessibility at this simple scale improves independence not only for disabled people but for nondisabled people as well. Any change for disabled people at a local scale would also benefit the larger community. Building better ramps, sidewalks, and doors in homes, educational environments, and recreational areas not only eases the burden of disability, but also improves quality of life for children, the temporarily disabled, and the elderly in the same community.

Obviously, there is something important to be said about securing basic needs — especially housing, healthcare, food, and clean drinking water — but these, too, would be best handled by consulting local disabled community members to meet their specific requirements.

From here, we could focus on making further investments in walkable community areas and providing adequate physical and social support like housing, basic income, and recreation. We can also make proper changes to our current social support systems, which tend to be dated and ineffective.

The more disabled people’s quality of life improves, the more likely they are to feel supported enough to make large-scale change. What matters at the end of the day is that disabled people are represented in real-life contexts, not just in positions of power.

Representation isn’t just being featured in TV shows or making it into the C-suite; it’s being able to order a coffee at Starbucks, get inside a leasing office to pay rent, or swim at the local pool.

This is not an end-all, be-all solution to ableism, nor is it guaranteed to fix larger structural and political issues around disability, like stigma and economic mobility. But by focusing on ableism at a local scale, in a non-business-oriented fashion, we can improve the quality of life of our neighbors, whether they are experiencing long COVID or living with another disability. Once we have secured basic liberties for disabled folks, then we can worry about corporate pay and representation.

The Ethics of Quiet Quitting

photograph of worker alone in empty office

If you don’t hop to answer emails after hours, volunteer for new initiatives, or help your employer out by taking on overtime work on short notice, then you might be a quiet quitter.

The term refers not to quitting as such, but to fulfilling the terms of one’s contract and no more; quiet quitting entered popular discourse via TikTok. (Wikipedia asserts – without citation – that the neologism has its origins in a 2009 symposium at Texas A&M. The author has been unable to confirm this claim.)

Public reactions have been mixed, partly due to uncertainty about just what the term means. First, though, some clarification about what it is not.

Quiet quitting is not working-to-rule: a longstanding union practice in which employees follow rules and regulations to the letter, often substantially interfering with company productivity.

A work-to-rule action is collective, involving many workers engaging in uncivil obedience simultaneously, and foregrounds the importance of good faith employee labor to a productive company. The intent of a work-to-rule action is not to assert personal boundaries, but to change the workplace. Quiet quitting by contrast is something far more individual.

In 17 seconds, an early TikTok video pushed “quiet quitting” as both resistance to going above and beyond at work, and as resistance to the centrality of paid labor in our lives. These are independent claims and do not need to be taken together.

Recent economic and cultural movements, most prominently the great resignation and the surging popularity of the “antiwork” community on the social media site Reddit, have brought critiques of work to the fore. There is a deep well of philosophy to draw from. Aristotle highlighted the pursuit of leisure, during which moral and intellectual virtues could be cultivated. Zhuangzi, a classical Chinese philosopher in the Daoist tradition, felt no need for the industriousness and drive to master the world. Bertrand Russell is famous for his 1932 paean to leisure and idleness, and held it should not be the privilege of the rich alone. More recent examples exist as well. Sunaura Taylor contends our obsession with work and productivity effaces the inherent value of people, and especially of disabled people who are easily marginalized if judged purely on their market contribution. David Graeber resists the value of work for work’s sake and the proliferation of “bullshit” jobs.

More narrowly though, as something a worker does, quiet quitting is fundamentally about setting specific boundaries at work – perhaps especially in the United States with its weak unions, permissive labor laws, and culture of striving.

This too has received pushback.

Arianna Huffington (co-founder of the HuffPost) expressed sympathy for avoiding burnout, but argued instead for “joyful joining”: “rather than go through the motions in a job you’ve effectively quit on, why not find one that inspires you, engages you, and brings you joy?”

Huffington’s implication here is that “quiet quitting” means doing the absolute minimum, but that need not be the case. Workers can be committed, diligent employees and nonetheless refuse to answer work emails off the clock or show up on short notice. Others, of course, may be withdrawing more fully, presumably out of deeper dissatisfaction with the workplace.

As for “joyful joining,” it is only partially responsive. Many workers may not be in the personal or economic position to easily switch jobs or pursue the perfect job for themselves. Additionally, it lets the job itself off the hook.

Toxic work culture, unreasonable expectations, low pay, rare raises, and other management-side problems that lead to quiet quitting are not addressed by employee fit – these workplaces need to improve.

As the Harvard Business Review puts it, “Quiet Quitting is about Bad Bosses, not Bad Employees.”

What about the ethical core of the matter? Is it always the right thing to give 110% at work? Is this kind of diligence a virtue? There is little doubt that quiet quitting stands in opposition to hustle or rise and grind culture that demands ceaseless commitment to productivity and the pursuit of career-oriented goals. But, that’s kind of the point. The ethical contention at play concerns whether there is anything morally wrong about fulfilling our contractual expectations but not seeking to rise above them.

While not one of the classical virtues, hard work could be considered a moral virtue – an excellence of character – generally worth cultivating.

That answer might appear somewhat unsatisfying, as it seems simply to stipulate that exceeding expectations at work is a good thing. But on closer examination, even positing that hard work is a virtue says little about quiet quitting.

Virtues need to be balanced. A person must set boundaries on their paid work in order to invest time in family, volunteering, or other pursuits. Likewise, the virtue of hard work presumably does not demand suffering or extensive sacrifice, and this places limits on how much one owes their job even if they believe in the value of hard work as such.

Immanuel Kant, for example, held that we have a duty of self-improvement and the cultivation of our talents. However, he did not believe we needed to be fanatical about this; we can be flexible about which talents we cultivate and how extensively. Additionally, for Kant, part of the aim of such cultivation is to put us in a position to offer help and assistance to others (just as we each require help and assistance during our lives).

For Kant then, if the work is meaningful and provides us an opportunity for self-improvement, there may be good reason to exceed expectations. It does not, however, provide a general obligation to exert extra effort. And such a duty would certainly not apply to work of drudgery or morally compromised work. If anything, Kant would more likely object to some modern business practices, such as treating workers strictly according to their contribution to the bottom line rather than as ends unto themselves.

In the Marxist tradition, for a worker to derive satisfaction from labor they must be substantively connected with the products of their labor. Marx himself would hold that the entire idea of wage labor, in which we work for others to ensure our own subsistence, is necessarily alienating and problematic.

But we do not need to follow him to that conclusion to appreciate that if someone is not benefiting from their additional labor, either in terms of the product of their labor or compensation, then there is very little reason to expend additional effort.

Finally, a lot of disagreement with quiet quitting is more strategic than ethical. Kevin O’Leary, a businessman turned television personality, has harshly criticized quiet quitting. His main contention is that it will be the go-getters, not the quiet quitters, that get ahead. This is contestable and depends a great deal on the commitment to meritocracy (it may well be the sons of golfing buddies that get ahead). It also relies on finding a successful balance between boundaries and burnout as a long-term strategy. Nonetheless, for those in upwardly mobile career paths giving 110% is plausibly an effective tactic for advancement. But here, quiet quitting centers a different question: is that what matters to the worker?

Climate Change and the Defense of Ignorance

photograph of factory air pollution silhouette

Although first uncovered some years ago, a New Zealand newspaper article from 1912 warning of the environmental dangers of carbon emissions has again been making the rounds. But why is information like this morally relevant? And what does it mean for the responsibility of particular parties?

Successfully combating the climate crisis will involve huge burdens for certain countries, corporations, and individuals. Some of these burdens will be in the form of mitigation – that is, taking action to do all we can to reduce the effects of climate change. In 2011, nearly all countries agreed to limit the global average temperature rise to no more than 2°C compared to preindustrial levels – the maximum global temperature rise we can tolerate while avoiding the most catastrophic effects of climate change. According to the Intergovernmental Panel on Climate Change, achieving this with a probability of >66% will require us to keep our global carbon expenditure below 2,900 GtCO2. As of the time of writing, only 562 GtCO2 remains. Note that this is already 2 GtCO2 less than when I wrote another article on climate harms only three weeks ago. In order to ensure we don’t go over budget, certain parties will have to severely reduce their consumption: forgoing the cheap and easily accessible fossil fuels we’ve been exploiting for hundreds of years, and investing heavily in new, cleaner sources of energy.
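
As a rough back-of-the-envelope check (my own arithmetic, using only the two figures quoted above and assuming the pace of roughly 2 GtCO2 every three weeks holds steady), the remaining budget would be exhausted in roughly sixteen years:

\[
\frac{562\ \mathrm{GtCO_2}}{2\ \mathrm{GtCO_2} / 3\ \mathrm{weeks}} \approx 843\ \mathrm{weeks} \approx 16\ \mathrm{years}
\]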

But there will also be adaptation burdens – that is, costs associated with dealing with the effects of climate change that already exist. Examples of these burdens include building seawalls, fighting floods and fires, and potentially rehoming those who find themselves displaced by extreme weather events and abandoned by their insurance companies.

Usually when a problem creates costs, we look to pass those costs on to the person/s who caused the problem.

Suppose I find a large, deep hole on what I believe to be an empty plot of land adjacent to my property. I then begin to use this hole as a dumping ground for organic waste – grass clippings, tree trimmings, and the like. It seems to be a fortuitous arrangement. I no longer have to pay for the expensive disposal of large amounts of green waste, while at the same time filling in a potential hazard to others. Suppose, however, that a few weeks later I’m approached by an angry neighbor who claims that I’m responsible for going onto their property and filling in their newly dug well. Our intuition would most likely be that if anyone needs to compensate the neighbor for this wrong, it’s me – the one who created the problem. This approach is commonly referred to as the “Polluter Pays Principle.”

In some cases, however, this principle doesn’t apply so well. Suppose that I’m particularly lazy, and instead pay someone to dispose of my green waste in that same hole. In that case it seems less appropriate to place responsibility on the one who is technically doing the polluting (the person I employ). Instead, it still seems apt to make me responsible. Why? Well, even though I’m not the one putting the refuse in the hole, I am the one benefiting from the outcome – disposing of my waste and saving money. This approach is referred to as the “Beneficiary Pays Principle.”

Both of these principles play a huge role in establishing – at the global level – who should take on the mitigation and adaptation burdens required to combat the climate crisis. But they also rely heavily on something we’ve not yet discussed: knowledge.

Consider the application of the Polluter Pays Principle to the well example above. Arguably, we might say that even if I’m responsible for filling the hole, it wouldn’t be right to hold me responsible so long as I had no reasonable idea that it was, in fact, somebody’s well. It seems that I should only be responsible for the actions I take after I’m informed that what I’m doing is wrong. The same is true of the Beneficiary Pays Principle. Suppose that I pay someone to remove the green waste from my property – but have no idea that they are, in fact, dumping it down someone’s well. Once again, this lack of knowledge would seem to make it inappropriate to hold me responsible. Ignorance would be an excuse.

Nineteen-ninety is often held up as the watershed moment for the climate crisis. This is when the IPCC issued its first assessment report, and when the world officially came to learn of “climate change” and the existential risk it posed to us.

Countries and corporations often attempt to avoid responsibility for any contribution to the crisis (i.e., carbon emissions) made prior to 1990 – citing ignorance. But it’s a lot more complicated than that.

The Center for International Environmental Law has outlined how Humble Oil (now ExxonMobil) was aware of the impending climate crisis as early as 1957, with the American Petroleum Institute coming into this same information only a year later. By 1968, the U.S. oil industry was receiving warnings from its own scientists about the environmental risks posed by the climate crisis, such that – by the 1980s – these companies were spending millions of dollars to protect their own assets, such as by modifying oil rig designs to account for rising sea levels.

And then there’s that little New Zealand article from 1912. In fact, this is predated by an even earlier warning, with Swedish scientist Svante Arrhenius publishing a paper in 1896 predicting a global increase in temperature as a result of increasing carbon emissions. All of this means that while ignorance might sometimes be an excuse when attributing responsibility, no such ignorance can be claimed by those who have created – and continue to contribute to – the global climate crisis.

Is It Always Wrong to Blame the Victim?

photograph of burning match near a bunch of unlit matches

In July 2010, Terry Jones, the pastor of a small church in Florida, announced he would burn 200 Qurans on the ninth anniversary of the 9/11 attacks — an event he dubbed “International Burn the Quran Day.” The pastor blamed the Quran for the attacks and other terrorist violence. When September came, Jones was temporarily dissuaded from acting through the personal intervention of religious leaders and government officials, including a phone call from Defense Secretary Robert Gates. Nevertheless, in March 2011, Jones livestreamed a “trial” of the holy book, complete with Arabic subtitles. After a brief recitation of the “charges,” the pastor condemned a copy of the Quran to be torched in a portable fire pit. A few weeks later an Afghan mob, whipped into a frenzy by sermons and speeches denouncing the act, attacked a U.N. compound, killing seven U.N. employees. Subsequent riots left nine dead and more than ninety injured. Days later, two U.S. soldiers were shot and killed by an Afghan policeman in an attack that was later attributed to his anger over the burning.

Condemnation of Jones was nearly universal. A frequent theme in the chorus of opprobrium was the argument that Jones was responsible for putting American lives at risk overseas.

Prior to the burning, President Obama said that “I just want [Jones] to understand that this stunt that he is talking about pulling could greatly endanger our young men and women in uniform who are in Iraq, who are in Afghanistan.” After the riots, a Pentagon spokesman said the violence showed that “irresponsible words and actions do have consequences.” Some commentators also blamed the U.S. media for “recklessly” amplifying the story. Only a few, mostly conservative writers focused attention on the “eternal flame of Muslim outrage” that made Quran-burning such an explosive act.

This incident came to mind as I read Giles Howdle’s recent column on the assassination attempt against Salman Rushdie. Howdle argues that Rushdie is not responsible for any of the violence provoked by his novel, The Satanic Verses — including, but not limited to, violence directed at his own person.

To support his claim, Howdle points out that Rushdie’s actions, while part of a causal chain that predictably produced violence, were themselves non-violent, and that Rushdie never encouraged or desired violence.

According to Howdle, blaming Rushdie is akin to blaming the victim of sexual assault for having worn “provocative” clothing. Moreover, Howdle contends that placing responsibility for violence on Rushdie instead of the Muslim perpetrators treats the latter as “lacking full moral agency.”

These arguments are compelling, but I wonder if they derive some of their plausibility from the fact that Rushdie is an immensely sympathetic character: a brilliant writer and man of the left, persecuted for nothing more than a witty novel. Jones is a much less appealing figure; and yet, in its essentials, his act and Rushdie’s seem comparable. Jones’ act was non-violent, albeit part of a causal chain that predictably caused violence. While it is debatable whether Jones set out to incite violence, assume arguendo that his act expressed his sincerely held, if deeply bigoted beliefs, and that he merely foresaw the possibility of violence resulting from his act rather than wanting or intending it to occur. Doubtless, Rushdie’s novel is more valuable than Jones’ political stunt; but Howdle’s case does not turn on the value, aesthetic or otherwise, of Rushdie’s work.

If your intuitions about these cases still differ, I suggest it has something to do with your sympathy for Rushdie and aversion to Jones, rather than any consistent commitment to the proposition that those who, through their non-violent acts, provoke others to commit acts of violence as a foreseen or foreseeable but unwanted side effect are not responsible for that outcome.

Consider this thought experiment. Smith is walking briskly to a job interview for which he is already five minutes late. Suddenly, out of an alley appears a man holding a woman at gunpoint, blocking Smith’s path. The man warns Smith that if he takes one step closer, he will shoot the woman. Unfortunately, Smith has to move in the man’s direction if he wants to make his interview. Resolving to set up a college fund for any children the woman might have, Smith takes a step toward the man, who promptly shoots the woman. Here, Smith’s act is non-violent, though it has predictably violent consequences given the man’s credible threat. In addition, Smith does not want any misfortune to befall the victim: if, say, the man’s gun jammed and the woman were able to escape his clutches, Smith would be delighted. Yet surely he bears some responsibility for her death, and in the scenario in which the gun jams, he is still responsible for risking her life. It might be argued that by taking the step, Smith somehow encouraged or incited the man. But if simply doing what will predictably trigger the execution of another person’s threat constitutes incitement or encouragement, then writing, publishing, or not recalling a book in the face of credible threats that these acts will cause violence is also encouragement or incitement.

My point is not that the Smith case is analogous in every respect to the Rushdie case.

Rather, my argument is that we are sometimes partially responsible for other people’s violent acts and the harm that results, even if we don’t encourage or welcome them in any way.

If that’s true, then any argument for Rushdie’s lack of responsibility for the violence that occurred as the result of his novel’s publication needs to be more nuanced. It is not sufficient that Rushdie’s own acts were non-violent and that he did not encourage or incite violence or want it to occur.

What we need, in other words, is a more sophisticated theory of when we are morally responsible for causing others to harm third parties — notably including, but not limited to, situations in which we trigger the execution of another person’s credible threat to harm another. The range of cases is immense.

For example, when a government decides to abide by its policy never to pay a ransom in the face of a credible threat to a hostage’s life, and that decision leads to the hostage’s death, that is not generally considered an outcome for which the government is blameworthy. On the other hand, the media has sometimes been blamed for causing “copycat” acts of violence by publicizing the names or manifestos of mass shooters.

What distinguishes these cases? By carefully examining the differences between cases like these, we can start to build a theory that hopefully better explains our moral intuitions.

There is, of course, an obvious distinction between the Smith and Jones cases on the one hand, and Rushdie’s case on the other: Rushdie himself was a victim. Even if we grant that we are sometimes responsible for harm that others cause third parties, that is not the same as blaming the victim. The question, then, is whether we are ever responsible for self-harm that occurs as a foreseen or foreseeable but unwanted result of our actions’ influence on others.

There are actually two things we might mean when we say that we are “responsible” for this kind of self-harm. The first is that by knowingly running the risk of provoking harm to ourselves, we tacitly consent to the risk, thereby waiving our right against the perpetrator that she not harm us: the “he asked for it” defense. The second interpretation is that by knowingly running the risk of provoking harm to ourselves, we are blameworthy for the perpetrator’s acts and resulting self-harm. Space constraints prevent me from analyzing these interpretations in depth here, so a few general points must suffice.

As with responsibility for provoking others to harm third parties, it seems unlikely that we are either never or always responsible for self-harm in either of these senses.

The idea of holding sexual assault victims responsible for their perpetrators’ actions is morally repugnant, but this may be best explained in light of our attitudes and expectations related to sexual violence, rather than some general moral principle barring liability for self-harm in all cases. Again, it seems that we need a more nuanced theory than “the victim is never responsible.”

Despite the foregoing, I am confident that blaming Rushdie for the violence his novel provoked is morally perverse. However, as I hope to have shown, we need better arguments for why this is so.