
Roe v. Wade and the Meaning of a Right

image of United States map divided into blue and red polygonal shapes

On May 2nd a draft of a Supreme Court decision written by Samuel Alito was leaked. It challenges the core holding of Roe v. Wade – that there exists, unenumerated but implicit, a constitutional right to an abortion.

If something like the draft became law, it would represent a drastic overhaul of the legal landscape for abortion in the United States.

Thirteen states have currently unenforced “trigger” laws on the books that will take effect and ban abortion, even during the first trimester, should Roe fall. And yet, in other ways, even the elimination of a constitutional right to abortion is not a cataclysmic shift, but instead a continuation of the slow erosion of access to abortion that has characterized the past several decades.

The case currently under review at the Supreme Court is Dobbs v. Jackson Women’s Health Organization. Notably, Jackson Women’s Health Organization is the only licensed abortion clinic in Mississippi; it provides abortions only up to 16 weeks, state law requires patients to have an ultrasound and make two separate trips at least 24 hours apart, and underage patients need parental consent. Moreover, Mississippi provides public funding only in cases of life endangerment, rape, or incest, and health insurance sold on state exchanges does not cover most abortions. Such a highly restrictive environment for abortion access is not unique to Mississippi, but characterizes many states. All of this is with Roe v. Wade intact.

In the initial 1973 decision, the now-famous “trimester” framework of Roe v. Wade was set out based largely on balancing an unenumerated constitutional right to privacy, various health and safety considerations, and a state interest in protecting potential life. It specified a federal-level framework within which state laws had to operate. During the first trimester (roughly the first three months) of pregnancy, abortion had to be legal everywhere and could be subject only to basic medical safety regulation. During the second trimester, abortion could not be banned, but it could be subject to reasonable regulation promoting the health and safety of the parent. During the third trimester, abortion could be banned by state law.

Under Roe v. Wade, proposed regulations on abortion would be subject to the highest standard of judicial review – the strict scrutiny standard.

To evaluate the constitutionality of a proposed regulation under this standard, a court first checks whether the regulation advances a compelling state interest, and then whether the regulation is narrowly tailored to that interest or whether the interest could be advanced in a less restrictive way.

Regulations like the current Mississippi requirement for a clinically unnecessary ultrasound prior to abortion would almost certainly fail this standard. However, this is no longer the standard of judicial review that is in use.

While Roe v. Wade is the most famous case concerning abortion, and clarified that it is a constitutional right, the details of abortion law in the United States have been superseded by a later Supreme Court case, the 1992 Planned Parenthood v. Casey. This decision changed the legal landscape in two fundamental ways. First, it ended the trimester framework, replacing it with a pre-viability/post-viability analysis. (Viability is when the fetus can live outside of the womb, albeit with medical support, and generally occurs around the 24-week mark.) Second, it changed the standard of judicial review from strict scrutiny to the weaker and less common “undue burden” standard. Under this approach, regulations of abortion could be implemented even pre-viability as long as they did not impose an undue burden on those seeking access to abortion.

However, what constitutes an undue burden is contentious and highly dependent upon the resources of the person seeking an abortion. Intentionally or otherwise, this new standard opened the legal floodgates to state-level regulations that often had an explicitly anti-abortion intent, e.g., requirements that abortion clinics meet the same architectural guidelines as full surgical centers at hospitals despite there being no clinical need for this policy. Some of the most onerous regulations were deemed to in fact be undue burdens in the 2016 Supreme Court decision Whole Woman’s Health v. Hellerstedt, but many remain.

But beneath this legal dispute is a larger question of what it means to have a right at all.

Is a right to an abortion constituted simply by a prohibition on explicitly banning abortion, or does it require that people, regardless of income, actually be able to travel a reasonable distance, enter a safe and clean facility, and get an abortion? Does someone in Texas still have a federally protected right to an abortion if they have to travel to New Mexico to get one? Similar considerations are at play with other rights. Is a right to free speech secure if people must get free speech permits and can only protest in designated free speech zones? More generally, what legal, political, and social setup is required such that rights exist not merely as abstract metaphysical entitlements or legal stipulations but as meaningful parts of our lives? For many women, substantive access to abortion does not hinge on a looming Supreme Court decision but was lost decades ago.

Some reproductive rights advocates, like the SisterSong Collective, have criticized the mainstream pro-choice movement for being too narrowly focused on abortion as opposed to reproductive rights more generally, and abortion law as opposed to abortion access. They seek a broader movement around reproductive justice which they define as “the human right to maintain personal bodily autonomy, have children, not have children, and parent the children we have in safe and sustainable communities.” The understanding of rights at play is not a narrow legal one, but rather demands the commitment of resources such that reproductive rights are socially and materially supported. A hospitable legal landscape for abortion is part of this, but only part.

The Alito draft overturns even a minimal understanding of the constitutional right to an abortion, and would permit individual states to ban abortion from conception onward. What this means is going to depend on where people live and their ability to travel. People in California need not worry about their state banning abortion; people who want access to safe abortion in Jackson, Mississippi should be more concerned. It could also start a national-level legislative discussion about abortion – something a very risk-averse Congress has been loath to take on as long as Roe v. Wade stood. (Although, of course, potential national legislation may not be in the interest of abortion rights.) More interestingly, a legislative conversation about abortion would not necessarily concern rights at all, and could bring in aspects of the broader abortion debate, such as public health and questions of fetal personhood, that have been left out of often arcane judicial decisions concerning substantive due process, stare decisis (respect for precedent), and constitutional interpretation.

A (Spoiler-Free) Discussion of the Classism and Ableism of Spoilers

photograph of Star Wars robots on film set

On Friday, the first two episodes of Obi-Wan Kenobi — the latest installment in the ever-growing Star Wars franchise — were released on Disney+. The episodes went live at midnight Pacific Time – yet within minutes of their release, YouTube was rife with reaction and review videos featuring thumbnails spoiling all kinds of details from the show.

This kind of behavior isn’t the sole realm of malicious internet trolls.

Many otherwise reputable entertainment sites do the same thing, posting spoilerific headlines and thumbnails only days — or sometimes even hours — after a movie or television episode premieres. Sometimes, even the content creators themselves are guilty of this behavior. Last year’s Spider-Man: No Way Home featured many surprising cameos from the last two decades of Spider-Man films. Some of these cameos were clearly advertised in trailers preceding the film’s cinematic release, but others (arguably, the best) were preserved for theatergoers to discover on opening night. Sadly, however, Sony Pictures decided to spoil these very same cameos in the marketing for the home video release of the film, preventing anyone waiting to watch the movie at home from experiencing the same sense of surprise and wonder as theatergoers.

These spoilers are certainly annoying, but are they morally wrong? This is a question taken up by Richard Greene in his recent book Spoiler Alert!, and previously touched upon by fellow Prindle Post author A.G. Holdier. Here, however, I want to argue not only that spoilers are morally wrong, but that the reason for this is that they are inherently classist and ableist.

Spoilers are classist because certain barriers exist to immediately consuming entertainment upon release, and these barriers are more easily overcome by those of a higher socio-economic status.

Take, for example, the premiere episodes of Obi-Wan Kenobi. If you wanted to completely remove the risk of being spoiled for these episodes — and lived on the East Coast of the USA — you’d need to be up at 3am on Friday morning to watch them. Many people — including lower- to middle-income earners working a standard 9-to-5 job — are simply unable to do this. There are financial barriers, too. Going to the cinema isn’t cheap. The average cost of a movie ticket is $9.16, meaning that a family of four will pay more than $35.00 to see the latest release on the big screen (ridiculously expensive popcorn not included). This means that for many families, waiting for the home video release (where a movie can be rented for less than five dollars) is the only financially viable way of enjoying new movies.

Spoilers are ableist for similar reasons. While cinemas strive to provide better accessibility for those with mobility issues and audio and visual impairments, there are still many people for whom the theatergoing experience is unattainable. Those who are neurodiverse, have an intellectual disability, are immunocompromised, or suffer from ADHD are often unable to enjoy films during their theatrical run, and must wait for these movies to finally come to home video. Spoilers strip these less-able individuals of their ability to enjoy the very same surprises as those who can attend theaters.

The current pandemic provides yet another reason why someone may avoid the theatre. Released on December 17th, 2021, Spider-Man: No Way Home arrived just as the Omicron variant was beginning to spread through the U.S. — ultimately leading to the highest-ever COVID daily case count just a few weeks later. For many people, seeing a movie in the cinema simply wasn’t worth the risk of spreading an infection that could greatly harm — and possibly even kill — their fellow attendees. Yet these individuals — those who sacrificed their own enjoyment in order to keep others safe — are the ones who suffer the most when a company like Sony Pictures releases home video trailers spoiling some of the biggest cameos of the film.

As we’ve seen, spoilers disproportionately affect those who are less well-off, less-able, and those who are simply trying to do what’s right in the midst of a global pandemic.

But are spoilers really all that harmful? It would seem so. Studios clearly understand the entertainment value of surprise. It’s why they fiercely guard plot details and issue watertight non-disclosure agreements to cast and crew. And we can appreciate the reasons for this. There’s nothing quite like the unanticipated return of a favorite character, or a delicious plot-twist that — despite your countless speculations — you never saw coming. Further, as Holdier previously noted, spoilers prevent us from taking part in a shared community experience — and may cause us to feel socially excluded as a result.

We might justify this harm on Consequentialist grounds if there was some greater good to be achieved. But there isn’t. It’s not entirely clear why entertainment sites or YouTube reviewers feel the need to wantonly spoil details of a new show or movie. While there’s obviously a financial motive in gaining clicks and views, it’s unclear how sharing spoilerific details in a headline or thumbnail furthers this end (especially since burying such details in the middle of an article or video would surely force people to click or view more).

Some might claim that they prefer to know plot details in advance — and there’s even evidence suggesting that spoilers might cause certain people to enjoy some stories more. But here’s the thing: you only get one chance to enjoy a story spoiler-free, and we should let people make this choice for themselves. The kinds of spoilers discussed here — those thrust to the top of a newsfeed, or to the main page of YouTube, or aired on network television — are unavoidable. They don’t give people a choice. What’s more, these spoilers disproportionately harm the underprivileged — and it’s the inherent classism and ableism of these spoilers that makes them so morally wrong.

Wanda Maximoff and the Metaphysics of Responsibility

photograph of Dr. Strange movie display

This article contains spoilers for the Disney+ series WandaVision and the films Avengers: Infinity War, Avengers: Endgame, and Doctor Strange in the Multiverse of Madness.

In the latest entry in the Marvel Cinematic Universe, Doctor Strange in the Multiverse of Madness, the titular hero squares off against a former ally in a race across universes. After losing the love of her life (twice) at the end of Avengers: Infinity War and watching almost everyone else miraculously resurrected at the climax of Avengers: Endgame, Wanda Maximoff retreated to a small town in New Jersey to mourn. As shown in the Disney+ series WandaVision, she instead ends up (mostly accidentally) trapping the town inside a painful illusion wherein she can pretend that her beloved Vision is still alive; her powerful magic even creates two children (Billy and Tommy) to complete the couple’s happy life of domestic bliss — until everything unravels, that is, and Wanda is again forced to say goodbye to the people she loves.

Last March, I wrote about Wanda’s journey through grief and love for the Post;

at that point, MCU fans had a number of reasons to be hopeful for a genuine Maximoff family reunion. Now, the newest Doctor Strange film has buried those chances firmly under the rubble of Mount Wundagore.

In brief, WandaVision ends by revealing Wanda as a being of immense (and ominous) power known as the “Scarlet Witch” — she frees the town of her illusion, apologizes for the harm she caused, and escapes with a mysterious spellbook called the Darkhold, seemingly intending to somehow use it to reconnect with Billy and Tommy. But from her first scene in Multiverse of Madness, it’s clear that Wanda Maximoff is no longer sorry for what she plans to do: namely, absorb an innocent teenager’s soul and travel to a different universe (where Billy and Tommy are still alive) to kill and replace her counterpart, then live out her days as a mother to the alternate versions of her children. Moreover, Wanda is fully comfortable with killing anyone who tries to stop her — something she does in spades before the story’s end (including to most of the film’s celebrity cameos). Ultimately, it turns out that the Darkhold is a thoroughly evil book which taints whoever reads it with darkness and madness — by searching its pages for a spell to save her children, Wanda was also unknowingly corrupting her once-heroic soul. After Doctor Strange and his allies manage to cut through the Darkhold’s influence, Wanda sacrifices her own life to destroy the demonic book and spare the multiverse from the threat of the Scarlet Witch.

So, here’s where we can ask a more philosophical question:

Wanda brutally murders dozens of people in her quest to save her children, but — if she was under the influence of the Darkhold’s power — was she responsible for her actions?

One common idea (connected to the philosophical idea of “libertarian free will”) is that for an agent to be fully responsible for some action, they must be fully free or in control of the choice to perform the action — as it is often put, the responsible person must have been “able to do otherwise than they actually did” (more technically, they must satisfy the “Principle of Alternative Possibilities,” or PAP). If I were to cast a spell that hypnotically forces you to transfer your life savings into my bank account, you would not have the power to do otherwise, so you would not be free and I would be responsible for the money transfer.

On the other hand, some philosophers believe that a strong commitment to PAP is scientifically untenable: if our actions are ultimately rooted in the material interactions of molecules in our brains (as opposed to something like an immaterial soul), and if those material conditions necessarily obey regular laws of physics, then it seems like no one can ever satisfy PAP (because you will only ever do what the material conditions of the universe dictate). On this view (typically called “hard determinism”), notions like “free will” and “moral responsibility” are often written off as mere intuitions or illusions that, though sometimes useful in certain conversations, shouldn’t ultimately be taken too seriously.

The middle ground between these views is an interesting position called “compatibilism” which argues that determinism (as described in the preceding paragraph) actually is compatible with a robust sense of freedom and moral responsibility, but not one that requires PAP.

Instead, compatibilists argue that a person is free (and therefore responsible) for a choice if that choice aligns with their dispositions (like wanting or believing certain things). Often, compatibilists will frame responsibility for determined-but-free choices as a matter of “getting what you want” (even if you couldn’t have “gotten” anything else).

For example, suppose that you want to sit in a particular chair and read a book, so you enter a room, close the door, sit in your chair, and read the book — unbeknownst to you, the door locks after you close it, but that doesn’t matter, because you just want to sit and read — are you responsible for the choice to stay in the room? The compatibilist will easily say yes: you’re satisfying your desire, so the fact that you couldn’t have chosen otherwise (violating PAP, thanks to the locked door) is unimportant.

So, what does this mean for Wanda?

Admittedly, the MCU has given only sparse explanations of the metaphysical nature of the Darkhold (so we have to engage in a bit of speculation here), but the film does make clear that the demonic book exerts some kind of influence on (and extracts a price from) its readers. This means that we can ask two questions:

1. Was Wanda “able to do otherwise than she actually did” while under the Darkhold’s influence?

2. Regardless of the Darkhold’s influence, did Wanda want to do what she did?

If the answer to (1) is “No,” then Wanda’s condition fails to satisfy PAP — just like how Wanda-838 (the actual mother to Billy and Tommy from the Illuminati’s universe) isn’t responsible for the actions that Wanda-616 (from the standard MCU reality) performs while dreamwalking across the multiverse, Wanda-616 would be similarly at the mercy of the Darkhold. If the answer to (2) is also “No,” the compatibilists will also be able to recognize that Wanda wasn’t responsible for her murderous choices, even though she couldn’t have done otherwise.

One of the most interesting things about this whole conversation, though, is that it’s actually not clear that the answer to (2) is “No.” While the movie takes pains to signpost the dangerous nature of the Darkhold (most notably by implicating it in the deaths of multiple versions of Stephen Strange), Wanda repeatedly suggests that her (understandable) desire to find her children is fully her own. If this is the case, then the Darkhold’s influence might have provoked her to act in extreme ways (to say the least), but the compatibilist might not be able to draw a sharp line between Wanda’s dispositions and the book’s suggestions.

However, though Wanda fans might balk at the notion that she authentically “broke bad” and is responsible for murdering whole armies of sorcerers and superheroes, this narrative might make Wanda’s decision to destroy both the Darkhold and herself at the film’s end all the more impressive.

It remains to be seen whether Wanda Maximoff’s tenure in the MCU has come to an end (the movie notoriously avoids offering conclusive proof of her death), just as it is unclear how her character might handle questions of guilt and responsibility, should she return. (For what it’s worth, I’m still hoping that the MCU will grant her a happy ending!) One thing, though, is certain: having grossed nearly a billion dollars in its first month, Doctor Strange in the Multiverse of Madness proves that Marvel Studios is all but determined to continue making MCU films — and audiences will absolutely choose to keep watching them.

Why Some Fear “Replacement”

photograph of cracked brick wall with several different colors of peeling paint

On Saturday, May 14th, yet another mass shooting occurred in the United States. Ten people were killed, and three more injured. This was not a random act of violence. The shooter drove about three hours to reach a grocery store in Buffalo, NY rather than a location closer to his home in Conklin, NY. He claims he chose this area because, among his potential target locations, it had the highest percentage of Black residents. Why target a Black neighborhood? The shooter apparently believes white Americans are being “replaced” by other racial and ethnic groups.

The once fringe idea of “replacement” has become mainstream.

This is the conspiracy theory that some group is working to ensure the decline of the white population in the U.S. and Western Europe, in order to “replace” them with people of other races and ethnicities. Originally presented as an anti-Semitic conspiracy, “replacement” has entered into American politics in a different form; some Republican politicians and media pundits claim that Democrats want increased immigration for the purpose of “replacing” white, conservative-leaning voters with those more likely to vote blue.

It is very easy to dismiss the idea of “replacement.” Indeed, much recent reporting immediately labels it racist without much explanation (never mind that the account is factually mistaken). But given the trend of claiming that left-leaning individuals call any idea they do not like “racist,” it’s worth spelling out exactly why fearing “replacement” relies on racist assumptions.

First, it is worth noting that “replacement” for political gain would be a poor plan. Immigrants are not a monolith. For instance, Donald Trump actually gained support among Hispanic voters between 2016 and 2020. In general, the relationship between demography and political outcomes is not so clean-cut. Further, the plan would take a long time to develop – one must be a legal resident for five years before qualifying for citizenship, not including the time it takes to apply for and receive a green card, provided one even qualifies. Of course, this may dovetail with other conspiracies.

Second, there is something antidemocratic about feeling threatened by “replacement.” It is impossible for an electorate to remain static. Between each election, some children reach voting age, some voters die, events happen which change our views and which motivate us to get out the vote or simply stay home. Just as Heraclitus suggested we can never step in the same river twice, we can never have the same election twice. Provided that elections are fair, open, and secure, objecting to a changing electorate because you perceive that your favored political goals will be threatened is to deny the starting premise of democracy – that every citizen’s political preference counts equally.

To fear changing demographics out of concern for the impact on elections is to value your preferred outcomes over the equality of your fellow citizens.

So perhaps some find the idea of “replacement” frightening because they fear its impacts on culture. They might view it as a kind of cultural genocide; the decreasing portion of the white population threatens to destroy white, American culture and replace it with something else.

In 1753, Benjamin Franklin expressed anxieties about German immigration into the colonies. He claimed that, although some Germans were virtuous, the majority of the new immigrants were the “most ignorant or stupid sort of their nation.” He bemoaned that they did not bother to learn English, instead creating German-language newspapers and street signs in both English and German. He feared that, unless German immigration was limited, “they will soon so outnumber us, that … [we will not] be able to preserve our language, and even our Government will become precarious.”

In 2022, Americans eat bratwurst and frankfurters with sauerkraut. We send our children to Kindergarten. The most popular American beers originated from Adolph Coors, Adolphus Busch and Frederick Miller. Franklin’s concerns about German immigration echo those we hear today about immigrants from different places. But Germans did not replace Americans or topple the government.

Instead, these immigrants altered our culture. Like our electorates, our culture is never static. It is constantly changing, in response to global events and in response to new knowledge and traditions that immigrants bring. As our culture changes, who we label as outsiders changes; two hundred years ago, it was non-Anglos and non-Protestants.

If Franklin was wrong to fear German influence on American culture, it’s hard to see any relevant difference that would justify fearing the effects of contemporary immigration.

Some fear “replacement” for a different reason, claiming that changing demographics will result in new majorities exacting revenge. The idea being that, after white citizens become a political minority, the new political majority will engage in retributive measures for past injustices.

This view of the dangers of “replacement” indicates that a majority can use our political institutions in ways that unjustly harm minorities. In fact, it seems to even acknowledge that this has occurred. So, why leave that system intact? The far better response would be to reform or maybe even replace current systems that allow a majority to perpetuate injustices against a minority.

And we now see clearly why fear of “replacement” stems from racism. Being afraid of changing demographics requires denying that all citizens of a nation deserve an equal say in how it is run. It means conceiving of a particular culture as superior to another. And, ultimately, it involves thinking our institutions ought to be designed in ways that allow a majority to commit injustices against a minority. In these ways, the person who fears “replacement” endorses a hierarchical worldview where some people count for more than, are superior to, and deserve power over others. It is only through this lens that a change in racial and ethnic demographics can be worrisome.

But given all this, why would anyone find the idea of “replacement” compelling? Finding an answer to this question is crucial if we are to counteract it. The U.S. is still very segregated, due to the interaction of numerous historical, political, and economic factors at both the local and national levels. I grew up in a suburb of Buffalo called Hamburg. According to 2021 data, the population of Hamburg is 96.1% white, and 2021 census estimates put the population of Conklin at 95.7% white. These figures are remarkable given that the U.S. as a whole is 57.6% white.

To live in a place like Hamburg or Conklin is to live in a white world. You can complete an entire day in town – a trip to the grocery store, a doctor’s appointment and a deposit at the bank – and only encounter white people.

It is no wonder that some feel threatened by the idea of “replacement”; a world where people of color are increasingly visible is not their world. They have little exposure to a world that is not (nearly) entirely white, so the prospect of one triggers the fear of the unknown. This is why “replacement” is frightening – it threatens to “destroy” their world.

So, responding to terrorist acts like those in Buffalo requires a lot more than athletes telling us to choose love or teaching President Biden about “Buffalove.” It requires significant institutional change. To truly eliminate the grip that ideas like “replacement” have on some, we must work to counteract the injustices that leave many of us living in separate worlds. Given the increasing frequency of racially-motivated terrorist acts in the U.S., this task is only becoming more pressing.

Illocutionary Silencing and Southern Baptist Abuse

black and white photograph of child with hands over mouth, eyes, and ears

Content Warning: this story contains discussions of sexual, institutional, and religious abuse.

On May 22nd, external investigators released an extensive report detailing patterns of corruption and abuse from the leadership of the Southern Baptist Convention (SBC), the largest denomination of Protestant Christianity in the United States. According to the report, Southern Baptist leaders spent decades silencing victims of sexual abuse while ignoring and covering up accusations against hundreds of Southern Baptist ministers, many of whom were allowed to continue in their roles as pastors and preachers at churches around the country. In general, the Executive Committee of the SBC prioritized shielding itself and the denomination from legal liability, rather than care for the scores of people abused at the hands of SBC clergy. But, after years of public condemnations of the Committee’s behavior, church representatives overwhelmingly voted last June to investigate the Executive Committee itself.

To anyone who has not been listening to years’ worth of testimony from SBC abuse victims, there is much in the SBC report to shock and appall.

But in this article, I want to consider one important reason why so many (beyond just the members of the SBC Executive Committee) ignored that mountain of testimony, despite prominent awareness campaigns about sexual abuse in religious spaces after the USA gymnastics abuse trial and the #MeToo movement (like #ChurchToo): in short, in addition to the abuse itself, many of the people who chose to come forward and speak about their experiences suffered the additional injustice of what philosophers of language call illocutionary silencing.

In brief, philosophers (in the “speech act theory” tradition) often identify three distinct elements of a given utterance: the literal words spoken (locution), the function of those words as a communicative act (illocution), and the effects that those words have after they are spoken (perlocution). So, to use the cliché example, if I shout “FIRE!” in a crowded theater, we can distinguish between the following components of my speech:

    • Locution: A word referring to the process of (often dangerous) fuel combustion that produces light and heat.
    • Illocution: A warning that the audience of the utterance could be in danger from an uncontrolled fire.
    • Perlocution: People exit the theater to escape the fire.

In general, interpreting a speech act involves understanding each of these distinct parts of an utterance.

But this means that silencing someone — or “preventing a person from speaking” — can happen in three different ways. Silencing someone overtly, perhaps by forcibly covering their mouth or shouting them down so as to fully prevent them from uttering words, is an example of locutionary silencing, given that it fully stops a speaker from voicing words at all. On the other side, perlocutionary silencing happens when someone is allowed to speak, but other factors beyond the speaker’s control conspire to prevent the expected consequences of that speech from occurring: consider, for example, how you can argue in defense of a position without convincing your audience, or how you might invite friends to a party which they do not attend.

Illocutionary silencing, then, lies in between these cases and occurs when a speaker successfully utters words, but those words (because of other factors beyond the speaker’s control) fail to perform the function that the speaker intended: as a common phrase from speech act theory puts it,

illocutionary silencing prevents people from doing things with their words.

Consider a case where a severe storm has damaged local roadways and Susie is trying to warn Calvin about a bridge being closed ahead; even if Susie is unhindered in speaking, if Calvin believes that she isn’t being serious (and interprets her utterance as a joke rather than a warning) then Susie will not have warned Calvin, despite her best attempts to do so.

So, consider the pattern of behavior from the SBC towards the hundreds of people who came forward to report their experiences of assault, grooming, and other forms of abuse: according to the recent investigation, decades of attempted reports were met with “resistance, stonewalling, and even outright hostility” from SBC leadership who, in many cases, chose to slander the victims themselves as “‘opportunistic,’ having a ‘hidden agenda of lawsuits,’ wanting to ‘burn things to the ground,’ and acting as a ‘professional victim.’” Sometimes, the insults towards victims were cast as spiritualized warnings, such as when August Boto (a longtime influential member of the SBC’s legal team) labeled abuse reports as “a satanic scheme to completely distract us from evangelism. It is not the gospel. It is not even a part of the gospel. It is a misdirection play…This is the devil being temporarily successful.” To warp the illocutionary force of an abuse report into a demonic temptation is an unusually offensive form of illocutionary silencing that heaps additional coals onto the heads of people already suffering grave injustices.

And, importantly, this kind of silencing shapes discursive environments beyond just the email inboxes of the SBC Executive Committee: a 2018 report from the Public Religion Research Institute found, for example, that only one group of Americans considered “false accusations made about sexual harassment or assault” to be a bigger social problem than the actual experience of sexual assault itself — White Evangelical Baptists.

In the New Testament, Jesus warns about the dangers of hypocrisy, saying “Nothing is covered up that will not be uncovered and nothing secret that will not become known. Therefore whatever you have said in the dark will be heard in the light, and what you have whispered behind closed doors will be proclaimed from the housetops” (Luke 12:2-3, NRSVUE). It may well be that, finally, the proclamations by and about the victims of and within the Southern Baptist Convention can be silenced no longer.

Can Machines Be Morally Responsible?

photograph of robot in front of chalkboard littered with question marks

As artificial intelligence becomes more advanced, we find ourselves relying more and more on the decision-making of neural nets and other complex AI systems. If the machine can think and decide in ways that cannot be easily traced back to the decision of one or multiple programmers, who do we hold responsible if, for instance, the AI decision-making reflects the biases and prejudices that we have as human beings? What if someone is hurt by the machine’s discrimination?

To answer this question, we need to know what makes someone or something responsible. The machine certainly causes the processing it performs and the decisions it makes, but is the AI system a morally responsible agent?

Could artificial intelligence have the basic abilities required to be an appropriate target of blame?

Some philosophers think that the ability that is core to moral responsibility is control or choice. While sometimes this ability is spelled out in terms of the freedom to do otherwise, let’s set aside questions of whether the AI system is determined or undetermined. There are some AI systems that do seem to be determined by fixed laws of nature, but there are others that use quantum computing and are indeterminate, i.e., they won’t produce the same answers even if given the same inputs under the same conditions. Whether you think that determinism or indeterminism is required for responsibility, there will be at least some AI systems that will fit that requirement. Assume for what follows that the AI system in question is determined or undetermined, according to your philosophical preferences.

Can some AI systems exercise control or engage in decision-making? Even though AI decision-making processes will not, as of this moment, directly mirror the structure of decision-making in human brains, AI systems are still able to take inputs and produce a judgment based on those inputs. Furthermore, some AI decision-making algorithms outcompete human thought on the same problems. It seems that if we were able to get a complex enough artificial intelligence that could make its own determinations that did not reduce to its initial human-made inputs and parameters, we might have a plausible autonomous agent who is exercising control in decision-making.

The other primary capacity that philosophers take to be required for responsibility is the ability to recognize reasons. If someone couldn’t understand what moral principles required or the reasons they expressed, then it would be unfair to hold them responsible. It seems that sophisticated AI can at least assign weights to different reasons and understand the relations between them (including whether certain reasons override others). In addition, AI that are trained on images of a certain medical condition can come to recognize the common features that would identify someone as having that condition. So, AI can come to identify reasons that were not explicitly plugged into them in the first place.

What about the recognition of moral reasons? Shouldn’t AI need to have a gut feeling or emotional reaction to get the right moral answer?

While some philosophers think that moral laws are given by reason alone, others think that feelings like empathy or compassion are necessary to be moral agents. Some worry that without the right affective states, the agent will wind up being a sociopath or psychopath, and these conditions seem to inhibit responsibility. Others think that even psychopaths can be responsible, so long as they can understand moral claims. At the moment, it seems that AI cannot have the same emotional reactions that we do, though there is work to develop AI that can.

Do AI need to be conscious to be responsible? Insofar as we allow that humans can recognize reasons unconsciously and that they can be held responsible for those judgments, it doesn’t seem that consciousness is required for reasons-recognition. For example, I may not have the conscious judgment that a member of a given race is less hard-working, but that implicit bias may still affect my hiring practices. If we think it’s appropriate to hold me responsible for that bias, then it seems that consciousness isn’t required for responsibility. It is a standing question as to whether some AI might develop consciousness, but either way, it seems plausible that an AI system could be responsible at least with regard to the capacity of reasons-recognition. Consciousness may be required for choice on some models, though other philosophers allow that we can be responsible for automatic, unconscious, yet intentional actions.

What seems true is that it is possible that there will at some point be an artificial intelligence that meets all of the criteria for moral responsibility, at least as far as we can practically tell. When that happens, it appears that we should hold the artificial intelligence system morally responsible, so long as there is no good reason to discount responsibility — the mere fact that the putative moral agent was artificial wouldn’t undermine responsibility. Instead, a good reason might look like evidence that the AI can’t actually understand what morality requires it to do, or maybe that the AI can’t make choices in the way that responsibility requires. Of course, we would need to figure out what it looks like to hold an AI system responsible.

Could we punish the AI? Would it understand blame and feel guilt? What about praise or rewards? These are difficult questions that will depend on what capacities the AI has.

Until that point, it’s hard to know who to blame and how much to blame them. What do we do if an AI that doesn’t meet the criteria for responsibility has a pattern of discriminatory decision-making? Return to our initial case. Assume that the AI’s decision-making can’t be reduced to the parameters set by its multiple creators, who themselves appear without fault. Additionally, the humans who have relied on the AI have affirmed the AI’s judgments without recognizing the patterns of discrimination. Because of these AI-assisted decisions, several people have been harmed. Who do we hold responsible?

One option would be to attach a liability fund to the AI, such that in the event of discrimination, those affected can be compensated. There is some question here as to who would pay into the fund, whether that be the creators, the users, or both. Another option would be to place the responsibility on the person relying on the AI to aid in their decision-making. The idea here would be that the buck stops with the human decision-maker, who needs to be aware of possible biases and check them. A final option would be to place the responsibility on the AI creators who, perhaps without fault, created the discriminatory AI but took on the burden of that potential consequence by deciding to enter the AI business in the first place. They might be required to pay a fine or to take measures to retrain the AI so that it avoids such discrimination going forward.

The right answer, for now, is probably some combination of the three that can recognize the shared decision-making happening between multiple agents and machines. Even if AI systems become responsible agents someday, shared responsibility will likely remain.

Kill-Switch Tractors and Techno-Pessimism

photograph of combine harvester in field

On May 1st, CNN reported that Russian troops had stolen around $5 million worth of John Deere tractors and combines from the Russian-occupied Ukrainian city of Melitopol. At nearly $300,000 each, these pieces of farm equipment are extremely expensive, massive, and unbelievably high-tech. This last feature is particularly important for this story, which ended on a seemingly patriotic note:

John Deere had remotely kill-switched the tractors once it became aware of the theft, rendering the machines useless to the Russians.

A remote kill-switch that thwarts invading Russian troops from using stolen Ukrainian goods is easy to read as a feel-good story about the power of creative thinking, and the promising future of new technological inventions. But some are concerned that the background details give us more reason to be fearful than excited. Notably, activist and author Cory Doctorow, whose writing focuses primarily on issues in new and emerging technologies, wants to redirect the attention of the reader to a different aspect of the Russian-tractors story. When John Deere manufactured these particular tractors, they had no idea that they would be sold to Ukraine, and eventually stolen by Russian troops. Why, then, had the company installed a remote kill-switch in the first place?

What follows in the rest of Doctorow’s blog post is an eerie picture. John Deere’s high-tech farm equipment is capable of much more than being shut down from thousands of miles away. Sensors built into the combines and tractors collect streams of data about machine use, soil conditions, weather, and crop growth, among other things, and send this data back to John Deere. Deere then sells this data for a wild profit. Who does Deere sell the data to? According to Doctorow, it is sold, indirectly, back to the farmers themselves (who, until very recently, could not access it for free), bundled into the seed packages they have to purchase from Monsanto. Doctorow goes on:

But selling farmers their own soil telemetry is only the beginning. Deere aggregates all the soil data from all the farms, all around the world, and sells it to private equity firms making bets in the futures market. That’s far more lucrative than the returns from selling farmers to Monsanto. The real money is using farmers’ aggregated data to inform the bets that financiers make against the farmers.

So, while the farmers do benefit from the collection of their data — in the form of improved seed and farm equipment based on this data — they are also exploited, and rendered vulnerable, in the data collection process.

Recent exposés on the (mis)uses of big data paint a picture of this emerging technology as world-changing, and not necessarily in a good way. Doctorow’s work on this case, as well as the plethora of other stories about big data manipulation and privacy invasion, can easily lead one to a position sometimes referred to as “techno-pessimism.” Techno-pessimism is a generally bleak disposition toward technological advancement, one that assumes such advancements will, on the whole, worsen society, culture, and human life. The techno-pessimist is exactly what the name implies: pessimistic about the changes that technological “advancements” will bring.

Opposite the techno-pessimist is the techno-optimist. Nailing down a definition for this seems to be a bit trickier. Doctorow, who (at least once) identified as a techno-optimist himself, defines the term as follows: “Techno-optimism is an ideology that embodies the pessimism and the optimism above: the concern that technology could be used to make the world worse, the hope that it can be steered to make the world better.” Put in these terms, techno-pessimism seems akin to a kind of stodgy traditionalism that valorizes the past for its own sake: the proverbial old man telling the new generation to get off his lawn. Techno-optimism, on the other hand, seems common-sensical: for every bleak Black Mirror-esque story we hear about technology abuse, we know that there are thousands more instances of new technology saving and improving the lives of the global population. Yet tallying up technology’s uses vs. abuses is not sufficient to vindicate the optimist.

What can we say about our overall condition given the trajectory of new and emerging technology? Are we better-off, on the whole? Or worse?

What is undeniable is that we are physically healthier, better-fed, and better protected from disease and viruses. Despite all the unsettling details of the John Deere kill-switch tractors, such machines have grown to enormous sizes because of the unimaginable amount of food that individual farms are able to produce. Because of advances in the technology of farming equipment and plant breeding, farmers are able to produce exponentially more, and to do so more quickly and efficiently. Food can also now be bio-fortified to help get needed nutrients to populations that would otherwise lack them. These benefits are clearly not evenly distributed — many groups of people remain indefensibly underserved. Still, average living standards have risen quite radically.

It is also clear that some of the most horrifying misuses of technology are not unprecedented. While many gasp at the atrocity of videos of violent acts going viral on message boards, the human lust for blood sport is an ancient phenomenon. So, does techno-pessimism have a leg to stand on? Should the drive toward further technological advancement be heeded despite the worrying uses, because the good outweighs the bad?

In his work Human, All Too Human, the 19th-century German philosopher Friedrich Nietzsche penned a scathing review of what he took to be the self-defeating methods by which Enlightenment humanity strove toward “progress”:

Mankind mercilessly employs every individual as material for heating its great machines: but what then is the purpose of the machines if all individuals (that is to say mankind) are of no other use than as material for maintaining them? Machines that are an end in themselves—is that the umana commedia?

While there is no reason to read him here as talking about literal human-devouring machines, one can imagine Nietzsche applying this very critique to the state of 21st-century technological advancement. We gather data crucial for the benefit of humanity by first exploiting individuals out of their personal data, leaving them vulnerable in the hands of those who may (or may not) choose to use this information against them. The mass of data itself overwhelms the human mind — normal rational capacities are often rendered inert when trying to think clearly in the midst of the flood. Algorithms pick through the nearly infinite heap at rates that far exceed the human ability to analyze, but much machine learning is still a black box of unknown mechanisms and outcomes. We are, it seems, hurtling toward a future where certain human capacities are unhelpful: left to be exercised fruitlessly and inefficiently, or else abandoned in favor of higher machine minds. At the very least, one can imagine the techno-pessimist’s position as something nearly Nietzschean: can we build these machines without ourselves becoming their fuel?

Justice and Retributivism in ‘Moon Knight’

photograph of 'Moon Knight' comic cover featuring an illustration of a superhero in a jump pose with a black suit and cape

This article contains spoilers for the Disney+ series Moon Knight.

In Disney’s Moon Knight, two Egyptian Gods advocate for two very different models of justice. Their avatars, of whom the titular character is one, are the humans tasked with doing the Gods’ bidding. Khonshu is the beaked God of vengeance who manipulates his avatars to punish wrongdoers. His form of justice depends on the concept of desert — people should be punished for the choices that they make after, and only after, they have made them. Throughout the series, the main antagonist, Harrow (who was, himself, once Khonshu’s avatar), attempts to release the banished crocodile God Ammit. Ammit has the power to see into the future; she knows the bad actions that people will perform and instructs her avatars to punish these future wrongdoers preemptively, before anyone is harmed by the bad decisions.

As is so often the case with Marvel villains, the mission shared by Harrow and Ammit is complicated.

The struggle between the two Gods is not a battle between good and evil (neither of them fits cleanly into either of those categories). Instead, it is a conflict between competing ideologies. Ammit and Harrow want to bring about a better world. The best possible world, they argue, is a world in which the free will of humans is never allowed to actually culminate in the kinds of actions that cause pain and suffering. If people were prevented from committing murders, starting wars, and perpetrating hate, there would be no victims. The reasoning here is grounded in consequences; the kinds of experiences that people have in their lives are ultimately what matters. If we can minimize the kinds of really bad experiences that are caused by other people, we should.

Nevertheless, viewers are encouraged to think of Khonshu’s vision of justice as superior; Marc and Steven spend six episodes trying to prevent Harrow from reviving Ammit. The virtue of Khonshu’s conception of justice is that it takes the value of the exercise of free will seriously. The concept of reward is inextricably linked to the concept of praise, and the concept of blame is similarly linked to the concept of punishment. People are only deserving of praise and blame when they act freely; free will is a necessary condition for praise or blame to be apt. A person is only praiseworthy for an action if they freely choose to perform it, and the same is true of blame. Ammit’s form of justice doesn’t respect this connection, and the conclusion the viewer is invited to draw is that the God therefore misses something central about what fundamentally justifies punishment.

The suggestion is that retributivism — the view that those who have chosen to do bad things should “get what they deserve” — is the theory of punishment that we should adopt in light of the extent to which it emphasizes the importance of free will.

But it isn’t that simple, in the MCU or in the real world. Later episodes of the series explore the theme of mitigating circumstances, and the viewer is left to wonder: are all circumstances mitigating? In episode 5, Marc and Steven travel to an afterlife and, at the same time, through their own memories. As viewers have likely suspected, Marc has dissociative identity disorder, and Steven is a personality he created to protect him from the abuse that he suffered at the hands of his mother. In childhood, Marc and his little brother Randall went to play in a cave together and rising waters resulted in Randall’s drowning. Marc’s mother never stops blaming him for the death and takes it out on him until the day that she dies. It is clear that Marc has carried a significant sense of guilt along with him all of his life. Steven assures him, “it wasn’t your fault, you were just a child!”

The actions that young Marc took might appear to be chosen freely; he went to the cave with his brother despite the fact that he knew doing so was dangerous. Yet it does seem that Steven is correct to suggest that the inexperience of youth undermines full moral responsibility. The same is true with at least some forms of mental illness. If the trauma of Marc’s past has fractured his psyche, is he really responsible for anything that he does, either as Marc or as Steven?

The kinds of factors that contribute to who a person becomes are largely outside of their control.

No one can choose their genetics, where they are born, who their parents are, the social conditions and norms that govern who it is deemed “acceptable” for them to be, whether they are raised in conditions of economic uncertainty, and so on.

Many factors of who we are end up being largely a matter not of free will but of luck. If this is the case, it is far from clear that, as viewers, we should be cheering for Khonshu’s model of justice to win in the end. Anger and resentment are common sentiments in response to wrongdoing, but retributive attitudes about justice often create barriers to experiencing emotions that are even more important — forgiveness, compassion, and empathy. Existence on this planet is not one giant battle between good and evil; explanations for behavior are considerably messier and more complicated.

Moon Knight’s story has only just begun, and the philosophical themes promise to be rich. With any luck, they’ll motivate us to think more critically about justice in the real world. Even if we could see into the future, there are good arguments against pursuing Ammit’s strategy — it seems unfair to punish someone to prevent them from doing something wrong (and the metaphysics of time are kind of sketchy there, too). Khonshu’s strategy — a heavily retributivist strategy — closely resembles the one we actually follow in the United States; we incarcerate more people than any other country in the world. Our commitment to giving wrongdoers “what they deserve” may stand in the way of more nuanced moral thought.

The Cosmic Horror of HP Lovecraft and Sgr A*

telescopic image of black hole

Astronomers at the Event Horizon Telescope collaboration recently released the first picture of Sagittarius A*, the monstrous black hole residing at our galaxy’s core. Given that the distance between the Earth and this cosmological object is roughly 26,000 light-years (or 152,844,259,702,773,792 miles), the image’s production is a remarkable scientific and technological achievement. While Sgr A*, as it is affectionately known, weighs 4 million times the mass of our sun, it is only 17 times its size. This combination of mass and size results in gravitational forces strong enough to shape the entire Milky Way galaxy and warp existence itself – bending space, altering the flow of time, and preventing even light from escaping.
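For the curious, a rough back-of-the-envelope sketch using the standard Schwarzschild radius formula shows where that figure of 17 comes from (assuming the roughly 4 million solar masses cited above and a solar radius of about 7 × 10⁵ km):

```latex
% Rough sketch: the Schwarzschild radius r_s = 2GM/c^2 works out to
% about 2.95 km per solar mass.
r_s \approx 2.95\,\mathrm{km} \times \frac{M}{M_\odot}
    \approx 2.95\,\mathrm{km} \times (4\times10^{6})
    \approx 1.2\times10^{7}\,\mathrm{km},
\qquad
\frac{r_s}{R_\odot} \approx \frac{1.2\times10^{7}\,\mathrm{km}}{7\times10^{5}\,\mathrm{km}} \approx 17
```

On those numbers, the event horizon is about 17 times the radius of the sun, which is the sense in which Sgr A* is “only 17 times its size.”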

This isn’t the first time the Event Horizon Telescope collaboration has produced such an awesome image.

In 2019, the array captured the very first image of a black hole. This one, named M87*, resides in the center of another galaxy roughly 55 million light-years away (or roughly 323,000,000,000,000,000,000 miles). With a mass 6.5 billion times that of our sun and measuring over 23 billion miles across, M87* dwarfs Sgr A*. For a sense of scale, the distance from the Sun to Neptune, the most distant planet in our solar system, is roughly 3 billion miles. Indeed, M87* is thought to be one of the largest black holes in existence.

These objects’ sheer scale and weight, alongside their distance from us, defy comprehension. And while black holes are undoubtedly unique celestial objects, much of the universe plays out on an equally impressive scale. Planets, galaxies, stars, and quasars, amongst numerous other cosmological objects, exist on spatial and temporal scales far beyond anything on Earth. To ask someone to comprehend a single object vastly larger than our entire solar system is to ask them to conceive of the inconceivable.

We simply haven’t evolved to think about reality in such grand terms. As corporeal beings, limited to a single planet and a single lifetime, we think in human terms; we’re geared to understand the universe on our scale.

But regardless of this predisposition, the universe is beyond vast. The cosmic scale on which existence plays out makes everything that’s occurred here on Earth, from the first hints of life to you reading this sentence, seem infinitesimally small by comparison. The sheer, unassailable scale of existence can fill the heart with dread, for it means that the universe is, by its nature, incomprehensible. That is, for limited, mortal beings like us, the overwhelming majority of existence will forever be home to the unknowable and the unknown, and nothing stokes fear like the unknown.

This fear, and specifically its emergence from the universe’s vast and uncaring nature, was a central theme in the fiction of Howard Phillips Lovecraft, AKA HP Lovecraft. Indeed, many of his most famous stories – The Shadow over Innsmouth, At the Mountains of Madness, and The Colour Out of Space – concern a small group of people wrestling with their insignificance in the face of a vast, uncaring universe. While his fiction features creatures literally beyond comprehension (merely seeing some of them drives characters insane), these beings always function as embodiments of the irrefutable fact that our fleeting lives mean less than nothing on the grand scales of space and time.

As Lovecraft writes in the opening to arguably his most famous work, The Call of Cthulhu:

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

Many philosophies and religions focus on humanity. They ascribe to us some vital place in God’s design, use rationality to construct a coherent worldview, or provide frameworks according to which we can understand right and wrong. But Lovecraft sought to set the scope of his philosophy, known as Cosmicism, beyond our limited mortal perspective. For him, there is simply so much out there in the universe that has nothing to do with us. For humanity to try to come to grips with reality in its entirety is to attempt an impossible task, one that would shatter our minds if even partially achieved. Cosmicism highlights that while things may matter to us on a human scale, this scale is meaningless compared to reality’s vastness. In other words, while things matter to us, ultimately, we don’t matter.

This realization may strike readers, even atheistic ones, as inherently bleak.

While one can grapple with the idea that existence is godless and that meaning is self-created, that is a very different thing from accepting that the universe, by its very constitution, is geared against our continued existence through its utter indifference – much as a boot is geared against the existence of an ant.

But such an upsetting outcome need not be the only takeaway from Cosmicism. One can read Lovecraft’s works, contemplate existence’s vast and uncaring nature, and be even more thankful for the beauty in our lives. If contrasts and comparisons enhance qualities and characteristics – if good is only good when compared to bad, if hot is only hot when compared to cold – then the earthly things that bring meaning to people’s lives might be even more meaningful when we acknowledge how much of a miracle it is that they exist in the first place. When objects like black holes exist that could tear the planet apart in less than a moment, the fact that our meager mortal lives unfold as they do seems all the more miraculous.

The fact that there is a single, small, blue ball floating in the blackness of space and time, where beauty and meaning (even if only by human standards) can be found under existence’s indifferent gaze, should temper our fear of the endless well of the universe… if only a little.

Some University of Chicago Students Prove Lukewarm on Free Speech

photograph of University of Chicago ivy-covered Gothic buildings

The University of Chicago is known as a bastion for, and important intellectual proponent of, free speech and academic freedom. Its “Chicago Statement,” released in 2015 and since adopted by over eighty academic institutions across the country, is considered the gold standard for university free speech policy statements. Yet a recent incident involving its student newspaper, The Chicago Maroon, shows that a university’s commitment to free speech is only as robust as that of its members — including its students.

On January 26, 2022, the University of Chicago chapter of Students for Justice in Palestine (SJP UChicago) posted a call to boycott “Sh*tty Zionist Classes” on its Instagram page. Although the boycott included within its ambit any class at Chicago “on Israel or those taught by Israeli fellows,” it was apparently aimed at three particular classes, whose course descriptions the post reproduced along with critical annotations. “By attending these classes,” the post argued, “you are participating in a propaganda campaign that creates complicity in the continuation of Israel’s occupation of Palestine.”

Almost a month later, the Maroon published an op-ed entitled “We Must Condemn the SJP’s Online Anti-Semitism.” Notably, its subheading inaccurately described SJP UChicago’s boycott as aimed at “Jewish-taught and -related classes.” The op-ed itself argued that based on the lunisolar Hebrew calendar, SJP UChicago had posted its boycott demand on Holocaust Remembrance Day, which the authors claimed “was done to isolate and alienate the Jewish population at UChicago and to interfere with a day of mourning.” It also claimed that “the targeting of classes taught specifically by Israeli fellows is xenophobic” and that because all of the courses singled out in the post were housed within the university’s Center for Jewish Studies, the post “furthers the trope that Jewish courses and professors work to contribute to propaganda for Israel.” Finally, it denounced SJP UChicago’s attempt to persuade students to avoid or drop certain classes as a violation of the university’s discrimination and harassment policies, since Israeli faculty were “directly discriminated against,” Jewish students were “indirectly” discriminated against, and the harassment policy “states that any organization that uses social media . . . in order to interfere with the education of students is harassment [sic].”

The op-ed’s first two arguments are fine as far as they go: substantively they’re thin gruel, but they’re firmly in line with the Chicago Statement’s view that the best antidote to “offensive, unwise, immoral, or wrong-headed” speech is more speech. By contrast, the argument that the SJP UChicago boycott announcement violates the university’s discrimination and harassment policies was a blatant attempt by the authors to pressure the university into sanctioning other students for political speech under the flimsy pretext of “harassment” and “discrimination.” This is clearly contrary to the letter and spirit of the Statement, which states that

it is for the individual members of the University community, not for the University as an institution, to make those judgments [about which ideas are offensive, unwise, immoral, or wrong-headed] themselves, and to act on those judgments not by seeking to suppress speech, but by openly and vigorously contesting the ideas that they oppose.

As a threshold matter, it’s unclear whether student choices concerning what classes to take, or speech directed at influencing such choices, fall within the scope of UChicago’s discrimination policy. Even if they do, SJP UChicago’s boycott demand was clearly based on the ideological content of the courses or the instructors’ institutional affiliations, not their national origins or religion. Assuming arguendo that the boycott announcement constituted or encouraged discrimination based on instructor national origin or faith, it could not constitute harassment unless, in addition to being based on a proscribed factor such as national origin, it unreasonably interfered with a person’s work performance or educational program participation or created an intimidating, hostile, or offensive work environment. Finally, because it plausibly occurred in “an academic context,” to qualify as harassment it also had to be directed at specific persons, constitute abuse, and serve no bona fide academic purpose. Clearly, SJP UChicago’s boycott announcement ticks none of these boxes.

If the op-ed itself didn’t excel in free speech terms, what happened next was no better and suggested that SJP UChicago and some editors at the Maroon would probably benefit from reading the Statement again.

On April 2 the editors of Viewpoints, the Maroon’s opinions page, decided to retract the op-ed, citing “factual inaccuracies.” In a long and rambling explanatory note, the editors said that these inaccuracies “flattened dialogue and perpetuated hate toward [SJP UChicago], Palestine, Palestinian students, and those on campus who support the Palestinian liberation struggle,” and apologized to SJP UChicago and “all others affected by this decision.” However, the editors identified only four inaccuracies: the characterization of the boycott as targeting “Jewish-taught and -related classes,” which did not even appear in the op-ed itself but in its subheading; another description of the boycott as targeting classes “taught by Israeli professors,” rather than Israeli fellows affiliated with the Israel Institute; the claim that the post was deliberately published on Holocaust Remembrance Day; and the claim that SJP UChicago members had approached students on the quad about boycotting the classes. At key points, the editors appeared to rely upon information provided by SJP UChicago, rather than any independent reporting, to correct the op-ed’s claims. Notably, the retraction note included something like a disclaimer from the editor-in-chief and managing editor of the Maroon pointedly stating that “the following apology does not constitute an institutional perspective and represents only the views of the current Viewpoints Head Editors.”

Thus, apparently under pressure from SJP UChicago and its allies, the Viewpoints editors retracted the op-ed under a thin pretext of concern for four factual inaccuracies. One of these inaccuracies was not even the responsibility of the op-ed’s authors, while others were only inaccurate by the lights of SJP UChicago’s own account of events. Moreover, the Viewpoints editors had other, less dramatic options available to them to address what factual inaccuracies existed, such as publishing corrections or inviting a rebuttal from an SJP UChicago member.

Even if the factual inaccuracies were more significant, however, the crucial question the retraction raises is the extent to which a newspaper is responsible for the factual inaccuracies that appear in the opinion pieces it chooses to run.

On its face, it would seem that since the purpose of an opinions page is to provide a forum for community voices rather than news coverage, ensuring the factual accuracy of the former is a lesser priority. It is true that some factual inaccuracies may be so glaring that they either undermine an op-ed’s main claims or arguments or they amount to pernicious disinformation. In these cases, factual inaccuracies may sap an opinion piece of its value in fostering debate and discussion because they render the piece, in some important sense, irrelevant. That does not seem to be the case here.

In addition, the Viewpoints editors trotted out the specter of the “harm” caused by the op-ed to justify its retraction. The implication, it seems, is that speech must be harmless to be publishable. Some defenders of free speech tend to downplay the harm caused by it, arguing that belief in speech’s harmfulness is based on “cognitive distortions.” However, as I have argued before, the best argument for tolerating offensive, wrong-headed, hateful, or immoral speech is not that it is harmless. For example, the U.S. Supreme Court did not hold that journalists are immune from suit for negligent defamation of public officials because the latter are psychologically maladjusted snowflakes whose reputations are not really harmed by defamatory falsehoods broadcast about them by major news outlets. Instead, its rationale was that the costs of allowing journalists to be sued for negligent defamation — and in particular, the so-called “chilling effects” on politically important speech — substantially outweigh the benefits. By the same token, newspapers like the Maroon should publish potentially harmful speech at least partly because accepting the editorial principle that speech is publishable only if it cannot inflict any degree of harm upon any person at any time would have a devastating effect on a newspaper’s ability to serve as a forum for lively, relevant, and politically engaged debate and discussion.

If, as the original op-ed amply demonstrates, some are already tempted to use institutional discrimination and harassment policies to silence others’ speech, consider what a gift it would be to these censors manqué if everyone accepted that narrow principle of publishable speech.

The University of Chicago has much to be proud of in its record of defending free speech against the rising tide of illiberalism on both the right and left. But as Hannah Arendt reminded us, in every generation, civilization is invaded by barbarians — we call them “children.” Among the most important duties of the university in a liberal society is to inculcate in each new class of undergraduates the disposition to critically evaluate deeply offensive speech without invoking some institutional lever to censor the speaker. Apparently, in this respect even UChicago can stand to do better.

Transparency and Trust in News Media

When I teach critical thinking, I often suggest that students pay a good deal of attention to the news. When news stories develop, what details do journalists choose to focus on? What details are they ignoring? Why choose to focus on certain details and not others? When new details are added or the story is updated, how does this change the narrative? As someone who regularly monitors the news for ethical analysis, I see this phenomenon all the time: a news item gets updated, and suddenly the focus of the piece dramatically changes. This is something that can’t be done in print, but online media can revise and change the narrative of a news story after it is published.

Given the rapidly declining public trust in media, is it time for journalists and news groups to be more transparent and accountable about the narratives they choose to focus on (some may even say create) when they present a news story?

One morning last week I began to read an opinion article, part of a series written for CTV News by former national NDP leader (and prime ministerial candidate) Tom Mulcair. The article is about the ongoing national Conservative leadership race, and mostly focuses on one candidate, Pierre Poilievre, and his attempts to appeal to voters in contrast with some of his rivals. I didn’t finish the article that morning, but when I returned to it later that afternoon, I noticed it had a new title.

What was entitled “Tom Mulcair: The Conservative leadership debates will be crucial” that morning was now titled “Tom Mulcair: The Trump side to Poilievre.” This change was surprising, but if one looks carefully, one will note that the article was “updated” an hour after first being published.

Luckily, I still had the original article in my browser, and I was able to compare the updated version with the original. Does the update contain some new information that would prompt the change in title? No. The two articles are nearly identical, except for a minor typo correction. This means that, with no meaningful difference in content, the article’s title was changed from a relatively neutral one to a far more politically charged one. It is no secret that Donald Trump is not popular in Canada, and so connecting one politician’s rhetoric to Trump’s sends a far different message and tone than “leadership debates will be crucial.” The important question, then, is why this change was made.

Is this a case of a news organization attempting to create and sell a political narrative for political purposes? To be fair, the original article always contained a final section entitled “The Trump Side to Poilievre,” but most of the article doesn’t focus on this topic. The more prominent section in the article focuses on issues of housing affordability, so why wasn’t the article changed to “Tom Mulcair: Conservatives address affordability as a theme?”

Is this a case of merely using clickbait-y headlines in the hopes of driving more attention? The point is that we don’t know, and most people would never even be aware of this change, let alone why it was made.

A recent survey found that 49% of Canadians believe journalists are purposely trying to mislead people by making false or exaggerated claims, 52% believe news organizations are more concerned with supporting an ideology than informing the public, and 52% believe the media is not doing well at being objective and non-partisan. Similar sentiments can be found about American media as well. Amusingly, the very article that reports on this Canadian poll seeks to answer who is to blame. Apparently, it’s the end of the fairness doctrine in the U.S. (something that would have no effect on Canada), the growth of punditry (who gives pundits airtime?), polarization, and Donald Trump. Missing, of course, is the media pointing the blame at themselves: the sloppy collection of facts, the lazy analyses, the narrow focus on sensational topics. Surely the loss of confidence in the media has nothing to do with their own lack of accountability and transparency.

News organizations always present a perspective when they report. We do not care about literally everything that happens, so the choice to cover a story and what parts of the story to cover are always going to be a reflection of values.

This is true in news, just as it is true in science. As philosopher of science Philip Kitcher notes, “The aim of science is not to discover any old truth but to discover significant truths.” Indeed, many philosophers of science argue that the notion of objectivity in science as a case of “value freedom” is nonsense. They argue that science will always be infused with values in some form or another in order to derive what it takes to be significant truths, so the aim should be to make those values as transparent as possible.

Recently, in response to concerns about bias in AI, there have been calls within the field of machine learning to use “datasheets” for datasets that would document the motivation, collection process, and recommended uses of a data set. Again, the aim is not necessarily to eliminate all bias and values, but to be more transparent about them to increase accountability. Should the news media consider something similar? Imagine if CTV communicated not only that there had been an update to its story, but also what was included in that update and why, not unlike Wikipedia’s public edit histories. This would increase the transparency of the media and make them more accountable for how they choose to package and communicate news.
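To make this concrete, here is one hypothetical shape such a disclosure could take: a machine-readable changelog entry for the Mulcair piece discussed above. Apart from the two headlines already quoted, every field name and value here is invented for illustration; nothing like this is actually published by CTV or any other outlet.

```python
# A hypothetical "story changelog" entry, loosely modeled on the
# datasheets-for-datasets idea. Apart from the two headlines quoted
# in this article, every field and value is invented for illustration.
story_update = {
    "story_id": "example-mulcair-column",  # invented identifier
    "updated": "roughly one hour after publication",  # per the article
    "changes": [
        {
            "field": "headline",
            "before": "Tom Mulcair: The Conservative leadership debates will be crucial",
            "after": "Tom Mulcair: The Trump side to Poilievre",
        },
        {"field": "body", "note": "minor typo corrected"},
    ],
    "reason": "",       # the information readers currently never get
    "approved_by": "",  # likewise
}
```

The empty fields are the point: they represent precisely the information that, at present, no reader can recover.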

A 2019 report by the Knight Foundation found that transparency is a key factor in trust in media. It notes that transparency should include not only things like notifications of conflicts of interest, but also “additional reporting material made available to readers,” which could take the form of an editorial disclosure, or a story-behind-the-story, explaining why an editor thought a story was newsworthy. Organizational scholars Andrew Schnackenberg and Edward Tomlinson suggest that greater transparency can help with public trust in news by improving perceptions of competence, integrity, and benevolence.

This also suggests why the news media’s attempts to improve their image have had limited success. Much of the debate about news media, particularly when framed by the news media themselves, focuses on the obligation to “fact check.” The CBC, for example, brags that its efforts to “rebuild trust in journalism” have focused on confirming the authenticity of videos against deep fakes, maintaining a corrections and clarifications page (which contains very vague accounts of such corrections), and fighting disinformation. It says that pundits may opine on the news, but its reporters may not.

But what they conveniently leave out is that the degradation in trust in news is not just about getting the facts right, it’s about how facts are being organized, packaged, and delivered.

Why include these pundits? Why cover this story? Why cover it in this way? If the media truly want to improve public trust, they will need to begin honestly taking responsibility for their own failure to be transparent about editorial decisions, to take steps to be held accountable, and to focus on how they can be more transparent in their coverage.

Constitutional Interpretation in the Roe Reversal

photograph of Authority of Law statue facing out from Supreme Court building

On May 2, Politico published a leaked draft opinion of the Supreme Court of the United States in the case Dobbs v. Jackson Women’s Health Organization. The case concerns the constitutionality of Mississippi’s Gestational Age Act, which would prohibit abortions in the state after fifteen weeks. The appearance in the press of a leaked draft opinion of the Court is a highly unusual event unto itself, the exact circumstances of which are not yet known by the public but are currently the subject of investigation and speculation. The draft opinion, authored by Justice Samuel Alito, would not merely uphold Mississippi’s restrictive abortion law. It would overturn Roe v. Wade and Planned Parenthood v. Casey, and thereby rescind the constitutional protection for the right to privacy with respect to abortion that has been in place for nearly half a century.

Much of the public discussion about legal challenges to the right to privacy with respect to abortion in the press and in the confirmation hearings of Supreme Court nominees has, rightly or wrongly, focused on the doctrine of stare decisis. From this perspective, since the Court had already recognized and reaffirmed the right to privacy with respect to abortion, the key question was whether the Court would abandon that precedent and under what conditions the Court had a legitimate basis to do so. These issues also came up in oral argument in Dobbs. In electing to overturn precedent, the leaked draft opinion provides the following rationale: Roe and Casey were “egregiously wrong” decisions that “must be overruled” because the recognition of the constitutional protection of the right to privacy with respect to abortion was an “abuse of judicial authority” wherein “the Court usurped the power to address a question of profound moral and social importance that the Constitution unequivocally leaves for the people.” Alito concludes that “the authority to regulate abortion must be returned to the people and their elected representatives.”

It is first worth noting what the draft opinion does not say. It does not address the issue of whether, as a matter of basic justice or as a matter of political legitimacy, the right to privacy with respect to abortion requires constitutional protection.

This is because, notwithstanding the abstract moral provisions of the constitution, the theory of constitutional interpretation espoused in the draft opinion presupposes that these are mostly irrelevant considerations with respect to determining whether an unenumerated right is a candidate for constitutional protection. While it is presumably the case that Alito thinks abortion is some kind of grievous moral wrong, the draft opinion does nothing to support that conclusion other than to indicate that some people hold that opinion. Its primary aim is to demonstrate that the right to privacy with respect to abortion does not satisfy two key criteria it claims are necessary for an unenumerated right to require constitutional protection: that the right is “deeply rooted in [our] history and tradition” and compatible with a scheme of “ordered liberty.” According to Alito, the right to privacy with respect to abortion does not satisfy these criteria, and therefore the authority to regulate abortion must be left to the states.

It is worth contemplating just what the supposed restoration of the authority of the people to regulate abortion would constitute. This would grant states, in principle, broad police powers with respect to abortion. The people of the states could, of course, limit these powers by entrenching statutory or constitutional rights against their exercise, but they could also reserve such powers to the legislature. Some of these powers are the obvious ones that the opponents of safe and legal abortion desire: the authority to severely restrict or outright ban abortion within a state, including the authority to impose criminal penalties on women and their physicians if they are so inclined.

But it would also entail, as the late legal philosopher Ronald Dworkin pointed out, the authority to compel abortion so long as doing so promotes a legitimate state interest. This point was reiterated in Casey, which notes that but for the right protected by Roe, “the State might as readily restrict a woman’s right to choose to carry a pregnancy to term as to terminate it, to further asserted state interests in population control, or eugenics, for example.” A draft opinion that, if it became the decision of the Court, would authorize state policies requiring compulsory abortion, or would permit the institution of a scheme of licensure for the privilege of bearing children (including the imposition of fines or penalties for failure to make use of abortion services in the absence of such a license), is of great concern.

I mention this not because I think this is a likely prospect — I take no position on that question — but because it suggests that the draft opinion is prima facie defective.

And while jurists are generally less willing than philosophers to contemplate what they presume to be unlikely or fanciful consequences, or “hypotheticals,” it does not require any imagination to realize that such policies are not unheard of. Such policies were effectively part of China’s one-child policy, for instance. Once this dimension of the right to privacy with respect to abortion is acknowledged, it becomes clear that if the Court, in overturning Roe and Casey, primarily looks to a litany of 19th-century statutes restricting or prohibiting abortion as a basis for such a determination, it has not taken its analysis of “history and tradition” very seriously.

I have postulated that the same constitutional right to privacy that protects a woman’s right to choose whether to have an abortion also protects a woman’s right to not be compelled to have an abortion. It might be claimed that this point is irrelevant because it is possible to have one without the other: it is possible to jettison the right to choose and retain the right not to be compelled. It is certainly possible to conceive of a legal regime that is barred from compelling a woman to have an abortion without that woman having an individual right against such compulsion: for instance, because the state restricts itself from exercising that prerogative, or because such compulsion would violate the rights of someone else, e.g., if an embryo or fetus is considered to be a rights-bearing person, or if a woman’s body is considered the property of another person, and so on.

However, I would suggest that if a woman has an individual right not to be compelled to have an abortion, or, in other words, if such an invasion of her body by the state is an injury to her, as it plainly is, then, ex hypothesi, her right against such compulsion, whether described in terms of liberty, autonomy, privacy, or bodily integrity, also entails that she has the right to choose to have an abortion.

If this is the case, it follows that if the right to not be compelled to have an abortion meets the criteria for constitutional protection, then the Court is making a grave error in rescinding the right to privacy with respect to abortion.

The draft opinion is also concerning due to the precedent it sets for privacy rights in general. In a recent essay, the constitutional scholar Akhil Amar attempts to assuage these concerns. He aims to defend Alito’s claim that “[n]othing in this opinion should be understood to cast doubt on precedents that do not concern abortion.” According to Amar, overturning Roe and Casey would not imperil other privacy rights because, first, the public statements of sitting Justices indicate that they are not inclined to rescind other privacy rights (e.g., the right to privacy with respect to contraception and the right to privacy with respect to interracial marriage), and, second, because the recent legislative agendas of the states suggest that there is little to no public support for doing so.

The basic idea is that, unlike other privacy rights, the right to privacy with respect to abortion remains controversial, as evidenced by the persistence of legal challenges by various states. Therefore, other rights are unlikely targets for rescindment.

But this point is cold comfort for those who take the right to privacy with respect to abortion to have the same foundation as the other privacy rights. Perhaps the current composition of the Court can make peace with the apparent interpretive inconsistency of recognizing some privacy rights and not others, of declaring some privacy rights fundamental rights and treating the recognition of others as tantamount to judicial usurpation. But that does not prevent a future Court from using the reasoning in this draft opinion, if it does become the decision of the Court, as precedent for such judicial misadventure. (Of course, no precedent can prevent a majority of the Court that is willing to dispense with precedent altogether from imposing its interpretation of the Constitution on the nation.)

Presumably the reason Amar does not find the draft opinion to be concerning is because he does not see any such inconsistency. He agrees with Alito’s assessment that “abortion is fundamentally different” from other privacy rights, a point on which he is cited as an authority in the draft opinion. One reason, put forth by Alito and Amar, for the supposed distinction between the right to privacy with respect to abortion and the other privacy rights is the presence of an interest in protecting “potential life.”

The implication is that the right to privacy with respect to abortion entails unique conflicts that other privacy rights do not. But this is not plausible.

First, it is necessary to be clear about the nature of the conflict. The legitimate state interest in protecting potential life, acknowledged in Roe and Casey, presents a conflict between individual liberty and public policy. When this is recognized, there is plainly no relevant difference between the right to privacy with respect to abortion and other privacy rights. All of these may be in conflict with various kinds of social policy, for instance, in regulating the “morals” of a community, as anti-miscegenation laws certainly purported to do.

The other reason, adduced by Alito and mentioned by Amar, states that the right to privacy with respect to abortion is distinct because abortion “destroys an ‘unborn human being.’” But the Court has not dared to claim, even in this draft opinion, as it could not do without venturing into a constitutional quagmire, that an unborn human being is a constitutionally rights-bearing person. So it is not clear what the point of this claim is supposed to be or how it factors into constitutional interpretation.

It remains to be seen whether the official Dobbs decision will differ in any significant way from the draft opinion. What is clear is that the Court is on the verge of rescinding the right to privacy with respect to abortion.

The Roe Leak: Of Trust and Promises

photograph of manila envelope with "Top Secret" stamped on it

There is plenty to be said about the leak that brought us the news that the Supreme Court was considering overturning Roe v. Wade, the case that legalized abortion throughout America. The most important issue is that, if this draft becomes law, many people will be forced either to give birth when they do not want to (and giving birth in America is dangerous compared to other wealthy countries, especially for women of color) or to seek an illegal abortion. Not to mention that banning abortion does not decrease the number of abortions; it just makes them more dangerous (because they are illicit and less well-regulated).

My focus here is not on that issue, but on the comparatively unimportant question of whether whoever leaked the draft should have done so – though I won’t settle on an answer, I will explore what sorts of factors might help decide this. (Matt Pearce in the LA Times does an excellent job of explaining the various competing factors; there is no way that I could cover everything in this short article, and I will inevitably omit important ones.)

The leak itself has caused an outcry. SCOTUS Blog described the leak as “the gravest, most unforgivable sin.” (This might be a bit strong, considering the Supreme Court has previously ruled that slaves had no rights and Japanese-Americans could be interned in concentration camps.) The leak has also been described as an “actual insurrection” (seemingly by somebody who does not know what words mean) and as an obvious attempt to “intimidate.”

Others have offered more measured, reasonable criticism. John Roberts, the Chief Justice, said that this leak was a “betrayal of the confidences of the Court [that] was intended to undermine the integrity of our operations.” He also noted that there was a “tradition” of “respecting the confidentiality” of such drafts, calling the leak a “breach of trust” that was an “affront” to the court. (It’s worth pointing out that leaking court opinions is not illegal – no law forbids leaking itself.) I want to suggest that even if everything Roberts has said is true, the leaker still might have been right to leak the draft.

Here is one starting point to get to Roberts’s position. Clerks apparently promise the court confidentiality, and to break a promise is itself wrong. After all, this is a reasonable promise to expect clerks to make (and the following consideration applies to judges, too): being able to deliberate frankly, communicating trustfully with your colleagues, in theory helps to ensure open, fruitful conversation. (If a justice leaked the draft, they might not have made a promise, but the reasons to ensure open discussions apply to them.)

How exactly promises work is a topic of debate amongst philosophers, but one illuminating approach, offered by the recently deceased Joseph Raz, draws on the notion of “exclusionary reasons.”

As Raz sees it, what we should do is determined by what reasons we have. Ordinary (first-order) reasons help us decide what is best: if eating the cake will give me the nutrition I require, and I want to eat it, then I should eat it, provided no reason exists against eating it. Now, if there is a reason not to eat it – for instance, I have already had one portion and I don’t want to offend my hosts – then perhaps I shouldn’t eat it. Whether I should eat it depends on how these reasons weigh up: is it more important that I get the necessary nutrition and do what I want, or that I avoid any risk of offending my hosts? Promises are not like that: if I promised my wife I would have only one slice of cake, then the facts that I want another slice and that it supplies nutrition do not count. The promise excludes the countervailing considerations.

So, if there was a promise not to leak, then even if there are reasons to leak, perhaps one should not.

Yet even if the leak would breach a promise and constitute a betrayal, it might still be the right thing to do. If a friend tells you that they are cheating on their partner, you might betray your friend’s trust by informing that partner – and trust amongst friends is important – but telling that partner might still be the right thing to do: your friend’s partner does not deserve to be treated like this, and that might outweigh the fact that you promised your friend you wouldn’t tell.

Here are two explanations for why this might be okay. If your friend had said “I have a secret, promise me you won’t tell anybody?” you might think they are, say, planning a surprise party for a friend or thinking about a career change. You might reasonably think your promise has a certain scope, restricted to trivial things. If your friend had confessed to being a notorious murderer, you wouldn’t reasonably be expected to keep that promise, nor need you keep the promise when they tell you they have cheated on their partner. Likewise, in the case of the Supreme Court leak, we have to judge whether the promise to keep things confidential extends this far: does it cover overturning a ruling that has been settled for five decades, that will affect millions, and that many of the Supreme Court justices (even recently) suggested they would not overturn?

Or, perhaps sometimes it would be wrong to leak (because you promised not to) yet the best thing to do all-round is to leak it. This is a bit like the ethical problem of dirty hands: where, to ensure the best result, somebody had to do something wrong. It might be that torture is wrong, yet finding out where the bomb is hidden is so important that somebody should do the awful thing and torture the suspect (this example is simplistic: torture is very ineffective). Likewise, perhaps leaking is wrong and damages the court, yet letting Roe v. Wade be overturned is too dangerous, and somebody should get their hands dirty, do the wrong thing, and leak the draft for the greater good. This would be, in a way, deeply admirable.

The topic is complex; my point here is just that the fact that leaking is wrong, or that it betrays an institution, is not enough to get us to the conclusion that it shouldn’t be done. Sometimes – as tough as it may be, as much as it may damage one’s own moral standing or future career – people should betray others.

Bloodstained Men and Circumcision Protest

photograph of Bloodstained Men protestor

Images of men dressed in pure white with a vibrant mark of blood around their crotch have littered front pages in recent weeks. The Bloodstained Men are protesting the practice of male circumcision – removal of the foreskin from the penis. This surgical practice, although less common in many European countries, is widely accepted and largely performed for social, aesthetic, or religious reasons. The World Health Organization has estimated that somewhere between 76% and 92% of people with penises in the United States are circumcised.

While the practice of circumcision has a long history and has been endorsed by many Western doctors, does this make it ethical?

The Bloodstained Men, and other anti-circumcision activists, would argue that it does not: circumcision is a violation of genital autonomy, a purely aesthetic surgery that serves only to diminish sexual pleasure, and a procedure performed without the consent of the child. Others, meanwhile, support circumcision, citing its possible medical benefits and its ability to increase social, romantic, and sexual acceptance. How can we reconcile these two conflicting views?

Consulting our ethical convictions regarding female genital mutilation (FGM) may bring some clarity to this issue. The practice of altering the female genitalia – whether by removing the clitoris or parts of the labia, or by closing the vagina – has long been considered a morally impermissible intervention in Western society, and on valid grounds. Still, it must be determined whether our condemnation of FGM should inform a similar objection to male circumcision.

Most significantly, many cite FGM as problematic in its attempt to limit sexual autonomy, maintain ideals of purity, and uphold societal expectations around sex and femininity. The intent behind the procedure, then, may be the key to our acceptance of circumcision. Circumcision has long been a religious custom in the Muslim and Jewish faiths, but gained popularity in the United States for different reasons. Most integral to its growth in practice was a belief that circumcision could cure physical and mental health issues, provide an indication of wealth and social status, and prevent masturbation. Although these reasons may have led to its popularity, they have long been proven incorrect, and now the intent behind circumcision is typically associated with ideas of cleanliness, health, or social acceptance (with a focus on genital uniformity with one’s father or peers).

Are these justifications more morally permissible than those for FGM? As with FGM, there is a historic desire to suppress sexual autonomy paired with a current desire to gain social acceptance, and in both cultures the procedure is viewed as an accepted social custom done to benefit the child in some way. It is possible, then, that an evaluation of impact, rather than intent, will prove more useful for our discussion.

FGM is denounced for its lack of medical benefits, and more broadly for its medical risks, with severe forms causing difficulties in childbirth, infections, and psychological trauma. Does the moral difference, then, lie in the benefits of circumcision? Possible benefits include a decreased risk of HIV or urinary tract infections, easier hygiene, and social acceptance, given the belief that uncircumcised persons will face social persecution, bullying, or romantic/sexual ostracization. Do these reasons warrant genital surgery?

Research has found that these benefits are much more modest than once believed, especially when considering policy within the United States, where HIV rates are quite low and may be better addressed with proper access to condoms, the drug PrEP, or comprehensive sex education. In addition, circumcision, like FGM, reduces sexual pleasure; the foreskin, much like the clitoris, houses a majority of the nerve endings in the penis, so its removal reduces sensation. It is now widely known that circumcision is not a medical necessity, yet the practice remains a social custom. Social reasons for circumcision may be convincing, but they are also similar to those that inform FGM.

Is social normativity enough to warrant the removal or change to a perfectly healthy organ, especially if it reduces pleasure? Even if there are some medical benefits, is this a decision that should be made for a child?

This discussion really comes down to a conversation about informed consent. For surgeries on patients under the age of 18, parents are given the authority to provide consent on their children’s behalf; this sacrifice of rights is necessary to serve the medical interests of the child. In the case of circumcision, though, there is absolutely no medical necessity; it is a surgery that involves the removal of a natural part of a healthy organ, an organ that increases pleasure later in life. Should parents be able to consent to surgeries that are not medically necessary?

The value we place on bodily autonomy suggests that this is not a decision that should be made by parents, especially as it is often motivated by a desire to “fit in.” Personal autonomy and the right to control one’s own body, especially such an intimate organ, should supersede social and cultural norms. If we do decide that respecting cultural customs and desires for social acceptance are more important than our ethical understanding that people should have the right to control their bodies, why do we denounce FGM?

When evaluating the two procedures, it seems as though circumcision shares many of the qualities that make FGM unethical, so shouldn’t we deem circumcision unethical as well? If we decide to continue the practice of circumcision, where must we fall on the issue of FGM? In order to come to a conclusion about circumcision, we must reckon with our moral attitudes towards FGM and determine whether our values of consent and pleasure are more important than our need to conform to social and cultural customs.

Cryonics: The Trap Objection

photograph of hand pressed on thawing glass

Cryonics is the technique of preserving the bodies (or brains) of recently deceased people with the hope that future scientific advances will enable these people to be revived and live on. The technology to revive cryons (i.e., cryonically preserved people) doesn’t exist, and there’s no guarantee that it will ever be developed. Nevertheless, there’s a chance that it will be. This chance motivates people to spend money to undergo cryonic preservation.

The basic argument for cryonics is that it might not work, but what do you have to lose? As my colleague Richard Gibson has noted, we can think of the cryonics choice as a wager.

If you choose not to be preserved, then you certainly won’t enjoy any more life after death (I’m assuming there’s no spiritual afterlife). But if you choose to be preserved, then although there’s a chance you won’t be revived, there’s also a chance that you will be revived, enabling you to enjoy more life after you die.

Therefore, choosing preservation is a better bet, assuming the costs aren’t too high. By analogy, if you have to choose between placing a bet that has no chance of winning, and placing a bet that has some unspecified but non-zero chance of winning, the latter is definitely the better bet (ignoring the costs of placing the bets).

I want to explore an objection to this argument. Call it the Trap Objection. The Trap Objection questions the presupposition that revival would be a good outcome. Basically, the Trap Objection points out that while revival might be a good outcome for a cryon, it’s also possible for a cryon to be revived into a situation that is both undesirable and inescapable. Thus, the wager is less straightforward than it appears.

To appreciate the Trap Objection, first note that life is not always worth living. Life is filled with lots of bad things, such as pain, grief, and disappointment, to which we would not be exposed if we were not alive.

Most of us believe that most of the time the good in our lives outweighs the bad, and thus life is on balance worth living despite the drawbacks. Such assessments are probably usually correct (although some question this). It sometimes happens, though, that the bad things in life outweigh the good.

For example, the life of someone with an agonizing incurable illness may contain lots of pain and virtually no compensatory goods. For this person, life is no longer better than nothing at all.

Second, note that sometimes suicide is on balance good and consequently justified when life is no longer worth living. For example, the incurably ill person may reasonably view suicide as preferable to living on, since living on will bring him more bad than good, whereas death will permanently close the account, so to speak. And because suicide is sometimes justified and preferable to living on, it is sometimes a great misfortune when someone loses the capacity to choose death. If the incurably ill person were unable to choose to escape the agony of his life, this would likely be a great misfortune for him.

Let a Trap Situation be any situation wherein (i) a person’s life has permanently ceased to be worth living yet (ii) the person has lost the capacity to choose to end their life. For example, individuals with late-stage Alzheimer’s disease are often in Trap Situations, unable to enjoy life but also unable to end it. Trap Situations are very bad, and people have very good reason to want to avoid them.

Now we are in a position to formulate the Trap Objection. The Trap Objection is that there is a chance that choosing cryonic preservation will lead to a Trap Situation, and until we have some understanding of how high this chance is and how bad the most likely Trap Situations would be, we are not in a position to determine whether cryonic preservation is a good or bad bet. But a death without cryonic preservation will certainly not lead to a Trap Situation. Thus, choosing against preservation is arguably the safer and better option.

By analogy, if you have to choose between placing a bet that has no chance of winning or losing any money, and placing a bet that has some unspecified chance of winning you some unspecified amount of money and some unspecified chance of losing you some unspecified amount of money, the former is arguably the safer and better bet (ignoring the costs of placing the bets).
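To make the structure of the two bets concrete, here is a minimal expected-value sketch in Python. Every probability and payoff is an invented placeholder – the essay’s whole point is that we do not know these numbers – so the sketch illustrates the shape of the comparison, not any actual estimate:

```python
# A toy expected-value comparison of the cryonics wager. All numbers
# are invented placeholders; the essay's point is that we can't know them.

def preservation_ev(p_good_revival: float, p_trap: float,
                    value_good_life: float, value_trap: float) -> float:
    """Expected value of choosing preservation, relative to a baseline
    of 0 for declining it (a certain outcome: nothing gained or lost)."""
    return p_good_revival * value_good_life + p_trap * value_trap

# If a Trap Situation is as likely as a good revival but far worse than
# a good revived life is good (think of Mind Upload), the bet is bad:
print(preservation_ev(0.01, 0.01, 100.0, -50_000.0))  # -499.0

# Remove the trap possibility, and any chance of revival makes it good:
print(preservation_ev(0.01, 0.0, 100.0, 0.0))         # 1.0
```

On these made-up numbers, the wager flips from good to bad purely on the trap terms, which is exactly the Trap Objection’s claim: without some estimate of those terms, the bet cannot be called safe.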

Cryonics could conceivably produce many types of Trap Situations. Here are some examples.

Brain Damage: The cryonics process irreversibly damages a cryon’s brain. The cryon is revived and kept alive by advanced technology for centuries. But the cryon’s brain damage causes her to suffer from irreversible severe dementia, rendering the cryon unable to enjoy her life and also unable to end it.

Environmental Mismatch: A cryon is revived into a radically unfamiliar social, political, and technological environment. The cryon is unable to adjust to this new environment and reasonably wants to end her life. The cryon is unable to end her life, however, because suicide is culturally and legally prohibited, and the means exist to enforce this prohibition.

Valuable Specimen: The technology to revive cryons is developed in the distant future. Future humans are interested in learning about 21st century humans, but only a few have been successfully preserved. A cryon from the 21st century is revived and studied. The study techniques are barbaric and make the cryon miserable to such an extent that the cryon reasonably wants to kill herself. But because the cryon is a valuable specimen this is not permitted.

Mind Upload: A cryon’s brain is scanned, and the cryon’s consciousness is uploaded to a virtual world that is owned and operated by a technology company. The cryon finds life in the virtual world to be unbearably depressing and wants to opt out, but because the activities of the virtual world’s digital inhabitants generate economic value for the technology company, inhabitants are not permitted to terminate themselves. Mental processes in the virtual world are simulated at 1,000 times their normal speed, such that one day in the real world feels like one thousand days to the digital inhabitants. The virtual world is maintained for 50 real-world years, which the cryon experiences as 50,000 years of unbearable depression.

This sampling is meant to illustrate that revival needn’t be a good thing and might actually be a very bad thing – even an astronomically bad thing, as in Mind Upload – for a cryon. It does not represent an exhaustive mapping of the relevant possibility space.

I don’t know how likely it is, either in absolute or relative terms, that a cryon will be revived into a Trap Situation, although the likelihood is definitely non-zero. Moreover, it’s unclear how to go about determining this likelihood from our current perspective. Contemporary cryonic practitioners will claim that they would never revive a cryon into a Trap Situation. But it is very unlikely that the technology to revive cryons will be developed within the (natural) lifespan of any living cryonic practitioners. Moreover, the world could change a lot by the time the technology is developed. So, the significance of these claims is dubious.

It seems that even if we ignore pre-preservation costs, choosing cryonic preservation is not clearly a safe or good option.

If you are so terrified of nonexistence that you would prefer the chance at any sort of future life to certain annihilation, then cryonic preservation does seem reasonable. But this preference seems unreasonable. In some situations, the certainty of death should be preferred to the uncertainty of life.

On the Morality of Allowing Euthanasia for Those with Mental Illness: Part 2

photograph of empty hospital bed with curtains closed

In a previous post on Canada’s decision to allow those with a mental illness to seek medical aid in dying, I discussed some of the factors that need to be considered when evaluating the moral permissibility of euthanasia. These considerations, however, are generally raised in response to cases of intolerable and incurable physical suffering. Things become a lot more complicated when this suffering is instead mental.

Why might this be the case? One of the most common arguments in favor of the moral permissibility of euthanasia is based on the idea of autonomy. This concept holds that we should get the final say on decisions that affect the course of our lives. And this includes choices about how and when we die. This is why we might see a case of suicide as tragic or regrettable, but are usually reluctant to say that someone who takes their own life does something morally wrong. But what happens when the process used to make such choices becomes unreliable?

One way of understanding autonomy is through the satisfaction of desires. We all have many desires: a desire to see the climate crisis resolved, a desire to study orbital mechanics in college, or a desire to eat an entire cheese pizza for dinner. The extent to which we have autonomy over these things is determined by our ability to satisfy these desires. So, while I can do something to reduce my carbon footprint, the complete resolution of the climate crisis is entirely out of my control. This, then, is something over which I do not have autonomy. When it comes to what I eat for dinner or what I study at college, however, I have far more autonomy. To say that I should have autonomy over the time and manner of my death, then, is to say that I should be able to satisfy whatever desire I have regarding my death. If that desire is to end my life prematurely, then I should be allowed to do so. And if for some reason I need assistance in ending my own life, then there can be nothing wrong with another person providing this.

The problem with desire-based theories like this is that there are many cases in which we don’t desire what’s good for us. This can happen in one of two ways. Firstly, we can desire things that are bad for us. That cheese pizza might be delicious – and give me thirty solid minutes of bliss – but the long-term effects will be bad for me. I’ll gain weight, raise my cholesterol, and suffer through an entire evening of gastric distress. Secondly, we can fail to desire things that are good for us. While I might thoroughly enjoy studying orbital mechanics, it may very well have been the case that a degree in ornithology would have been far more enjoyable and rewarding.

These concerns are compounded in cases of mental illness, as sufferers may be more prone to form desires that are bad for them. But to discount all of the desires of the mentally ill is to show enormous disrespect for their dignity as persons. So how can we discern the good desires from the bad?

One solution might be to distinguish between “first-order” and “second-order” desires. First-order desires are precisely the kind of desires we’ve been considering so far – desires about what to eat, what to study, and when to die. Second-order desires, on the other hand, are desires about desires. To illustrate the difference between these two, consider the case of Mary. Mary is a smoker. Every morning, she wakes up with a powerful desire for a cigarette. A desire that she promptly satisfies. Then, throughout the day, she desires many more cigarettes – a full pack’s worth in fact. Mary, however, deeply regrets being a smoker. She hates the harmful effects it has on her health and her wallet. She wishes that she didn’t desire cigarettes. So, while Mary’s first-order desire is to smoke cigarettes, her second-order desire is precisely the opposite.

How does this help us? Well, we might argue that when considering how best to respect a person’s autonomy, we should focus purely on an individual’s second-order desires. This, then, would permit us to do something like forcibly prevent Mary from smoking (say, by confiscating her cigarettes and preventing her from buying more). Similar reasoning can be applied to the many cases where someone’s desires have been corrupted by addiction, deception, or general human flaws like laziness and procrastination.

In the case of mental illness, then, we now have a tool that allows us to look past someone’s immediate desires, and instead ask whether an individual desires to have such desires. If we can show that someone’s desire for death has come about as a result of their mental illness (and not, say, by a reliable process of informed reasoning), we could argue that – since the individual does not desire that desire – helping them end their life would not be respectful of their autonomy. If, however, their second-order desire is in favor of the desire to die, respect for autonomy will once again lean in favor of us helping them to end their own life.

All of this is to say that allowing euthanasia in cases of severe and incurable mental illness is enormously complicated. Not only does it involve all of the usual considerations that are relevant to euthanasia, it also contains an additional set of concerns around whether helping a patient end their own life will truly see us acting in a way that respects their autonomy. In order to ensure such respect, we should focus not just on what an individual desires, but on their attitudes towards those desires.

On the Morality of Allowing Euthanasia for Those with Mental Illness: Part 1

photograph of empty hospital beds and drawn curtains

Starting in March 2023, Canada will allow those with a mental illness to seek medical aid in dying. Canada first legalized euthanasia in June 2016, but – like most jurisdictions that have passed such laws – restricted access only to those with a terminal illness. This law was expanded in 2021 to also include patients with a “grievous and irremediable medical condition” (that is, those who are going through incurable suffering, but who are not dying). Canada’s most recent amendment will now see them join the Netherlands as one of the first countries in the world to make euthanasia available for those with a severe and incurable mental illness.

What should our ethical position be on this law? Fortunately for us, euthanasia is an incredibly fertile area in moral philosophy. Generally, philosophers separate cases of euthanasia into one of two kinds. The first of these describes scenarios in which a doctor withdraws lifesaving or life-sustaining treatment from a patient. Suppose, for example, that a terminal cancer patient has only months to live, but is currently undergoing an aggressive course of chemotherapy to try to extend her life by several weeks. This treatment takes an enormous physical and mental toll, however. Given this, she elects to cease treatment in order to maximize her quality of life in what little time she has left. This is what is referred to as “passive” euthanasia. Passive euthanasia can be contrasted with cases in which a doctor intentionally intervenes in order to end a patient’s life. Suppose, for example, that the patient above wanted to avoid the unnecessary pain and suffering that her illness will bring in the months preceding her inevitable death. Given this, she asks her doctor to administer a morphine overdose to end her life quickly and painlessly. This is what is referred to as “active” euthanasia.

This distinction is important, as many jurisdictions that criminalize active euthanasia have no such prohibitions against passive euthanasia. And this sentiment is often echoed by doctors. The American Medical Association (AMA), for example, has previously stated that active euthanasia is “contrary to that for which the medical profession stands and is contrary to the policy of the AMA.” Despite this, they hold that passive euthanasia should be “the decision of the patient and/or his immediate family.”

This distinction is an interesting one – especially since, in many cases, passive euthanasia can bring about far more suffering. Consider a terminal patient whose life is being prolonged using a respirator. Suppose, then, that this patient – knowing that her final weeks will be filled with unbearable suffering – elects to end her life prematurely. There are two ways in which this could be done: The first option would be to remove her respirator. The second option would instead involve administering a morphine overdose, quickly and painlessly ending her life. According to the approach outlined above, the former option (a case of passive euthanasia) would be acceptable, even though it sentences the patient to the harrowing experience of slowly suffocating to death. The latter option (a case of active euthanasia) would instead be seen as morally impermissible, despite the fact that it involves far less suffering for the patient.

Given the fact that passive euthanasia can often be far worse than active euthanasia, it might be tempting to think that there’s no important moral difference between the two. But some philosophers still believe there is. Philippa Foot, for example, argues that it all comes down to whether or not we are the agent of harm. Foot emphasizes that any harm that befalls a person (including death) comes as the result of a sequence of events – sort of like a long line of dominoes. The morally important question then becomes: who started this sequence of events? If it’s us – that is, if we are the one who tipped the first domino – then we are, according to Foot, the “agent of harm,” and therefore find ourselves morally responsible for the harm in question. If, however, the sequence of events is already in motion, then we’re off the hook. To use Foot’s own examples, this is the kind of reasoning that would allow us to drive past a dying person on the side of the road in order to save five other people, but would not allow us to drive directly over another individual for the same reason.

Returning to the discussion of euthanasia, Foot’s approach allows us to make a moral distinction between active and passive euthanasia on the basis of who sets the harmful sequence of events in motion. When we remove the respirator from a patient, the sequence of events that leads to her death comes about as the result of her disease. When we administer a morphine overdose, however, we initiate a new sequence of events – a sequence that we are responsible for. We become the agent of harm.

So, does all of this mean that providing active euthanasia (including in cases where a patient is suffering from a severe and incurable mental illness) is morally wrong? Not necessarily. While we’ve shown that there is an important moral distinction to be made between cases of passive and active euthanasia, we have not yet shown that this distinction is significant enough to make one kind (passive) always morally right, and the other kind (active) always morally wrong.

And there’s something even stranger going on here. Consider a case where someone suffering from a mental illness takes her own life. While we might see this as tragic or regrettable, few – if any – of us would say that the individual did something morally wrong. And this is interesting. Because if we insist on holding that active euthanasia is morally wrong, we are essentially saying that it’s morally impermissible for us to help an individual do something that is otherwise morally permissible. This is a strange conclusion indeed. In fact, we’d be hard-pressed to think of many other cases where it’s wrong for us to help someone do something that’s right. This strange anomaly just goes to show how complex this discussion is, and how much more there is to be said on the matter.

The Morality of Forgiving Student Debt

photograph of graduates at commencement

In March 2020, as the pandemic began, the federal government temporarily suspended student-loan payments and the charging of interest on student debt. Two years later, the suspension continues. There are now growing calls for student debt to be canceled entirely.

Forgiving student loans is a deeply controversial topic, as a few of our own writers have discussed. The policy raises difficult economic questions (would forgiving student loans beneficially stimulate the economy, or simply contribute to the already-high inflation?), political questions (would this be a political “winner” for the Democrats going into the midterms?), and also essentially moral questions.

Do the borrowers deserve forgiveness? Would forgiving existing loans be fair to those who have already paid off theirs? Would a government bailout of student loan borrowers be just when they tend to earn more than most taxpayers?

Both sides of the student loan forgiveness debate use the language of morality and justice to defend their views. On the anti-forgiveness side, it is common to hear expressions to the effect of “I paid mine. You pay yours.” How is it fair to those who worked hard, lived frugally, and repaid their loans that their lazier or less financially responsible counterparts get their loans bailed out by the government? It seems morally wrong to reward failure when it is the result of personal irresponsibility. Those who took out loans did so freely. Perhaps they ought to deal with the consequences themselves, rather than have those consequences shifted onto the taxpayer’s back.

Whether this is a convincing argument depends largely on whether you think those taking out student loans are fully informed before making their decisions, and whether you think they are being financially exploited by the universities they attend. If borrowers were exploited, then it seems just to forgive their debts.

First, some background. Student loan debt has grown rapidly over the past two decades, almost fourfold from $480 billion in 2006 to $1.73 trillion in 2021. Approximately 45 million Americans have student debt, averaging $39,351 each.

The U.S. Department of Education claims that 10 years is the ideal length of time to pay off a student loan. But, in reality, these loans take an average of 21 years to pay off. If you graduate at 22, you can expect to be paying off your student loan into your mid-40s. And some student loans are far worse than that. The average professional degree at a for-profit college takes a shocking 46 years to pay off — longer than most Americans are in the workforce. Even worse, some borrowers are unable to repay their debts at all. The default rate on student loans owed to for-profit colleges is 52% overall, and 66% for African Americans.
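
To see what these repayment horizons mean in dollars, here is a minimal back-of-the-envelope sketch of the standard amortization arithmetic applied to the average balance above. The 6% annual interest rate is an illustrative assumption of mine, not a figure from this article; actual federal rates vary by loan type and year.

```python
# Sketch: standard amortized monthly payment on the average balance
# cited above ($39,351), over 10 years vs. 21 years.
# The 6% annual rate is an illustrative assumption.

def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula: P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

balance = 39_351  # average student debt cited above

for years in (10, 21):
    payment = monthly_payment(balance, 0.06, years)
    total = payment * years * 12
    print(f"{years} years: ${payment:,.0f}/month, ${total:,.0f} repaid in total")

# 10 years: ~$437/month, ~$52,000 repaid in total
# 21 years: ~$275/month, ~$69,000 repaid in total
```

On these assumptions, stretching repayment from 10 to 21 years lowers the monthly bill but adds roughly $17,000 in interest.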

The personal impact of crushing student loan payments can be severe and endure for decades. Given these possible long-term negative effects, perhaps the federal government shouldn’t be giving these student loans out in the first place.

The brain takes an average of 25 years to fully mature, but the life-changing decision to take out a student loan is made by those as young as 18 years old. If these loans should never have been given, then forgiving them would be rectifying past exploitation.

Debt is also not solely the moral responsibility of the borrower; the provider bears some moral responsibility too. Yet federal student loans are available to almost all students, with no eligibility checks beyond enrollment in a qualifying program. The government spends neither time nor effort assessing whether the prospective student will be capable of repaying the loan, nor whether the degree is likely to prove a sound investment. Both eligibility and interest rates are the same for the top-earning degrees (e.g., Petroleum Engineering, Operations Research & Industrial Engineering), and the lowest (e.g., Medical Assisting, Mental Health, Early Childhood Education), despite their vastly different risks of default. Is it really fair to saddle a future low-paid medical assistant with a student loan on the same terms as a future petroleum engineer? If not, perhaps the federal government has failed to act responsibly in giving these loans in the first place, suggesting forgiveness is the moral choice.

But why should the government opt for forgiveness?

If you get into debt you cannot repay, our society has a system for escape: bankruptcy. It is a painful solution, but an essential one used by 1.5 million Americans each year. Isn’t this the solution to the student debt crisis? The problem is that this basic financial right is tightly restricted in the case of federal student debt. While some advocate changing bankruptcy law to include student debt, until those changes are enacted we are seemingly left with only one solution for those with non-repayable student loans: forgiveness.

Despite these considerations, there is also a strong case against student debt forgiveness. Student loans are not always exploitative. Used well, they can provide access to higher education to millions of Americans who could otherwise not afford it. In a world without student loans, we would expect fewer students from poor families to go to university. Most college students take student loans, and most are able to repay. The access to higher education that these loans can provide is often immensely valuable, both economically and personally.

Of course, an education is worth far more than its financial benefits, but even if we focus narrowly on the economic benefits of university education, those with a bachelor’s degree earn an average of $2.8 million over their careers, compared to $1.6 million for those with a high school diploma. In fact, every extra level of education is correlated with another boost to lifetime earnings. So, while some student loans are lifelong financial burdens, others act as financial life-rafts, leading borrowers to better lives in the broadest sense. Student loans can be irresponsible, exploitative and morally wrong, but they can also be transformative.

If student loans are neither inherently exploitative nor inherently beneficial, how can we assess blanket policies such as forgiveness? One way is to examine the effect of the policy through the lens of distributive justice — the question of what allocation of society’s wealth and resources would be equal, fitting, or otherwise just.

Congresswoman Ayanna Pressley appealed to the value of distributive justice in support of student loan forgiveness, calling it “a racial justice issue,” “a gender justice issue,” and “an economic justice issue,” and tweeting that “Black women are … the most burdened by student debt.” The implication is that Black women are unjustly disproportionately burdened by student debt, in part due to the existing racial wealth gap, and that forgiving this debt would make the country more just. Similarly, Senator Elizabeth Warren and Senate Majority Leader Chuck Schumer wrote that “Canceling student debt is one of the most powerful ways to address racial and economic equity issues. The student loan system mirrors many of the inequalities that plague American society and widens the racial wealth gap.”

Historically, Joe Biden has been fairly skeptical of such claims. In 2021, he told The New York Times, “The idea that you go to [the University of Pennsylvania] and you’re paying a total of 70,000 bucks a year and the public should pay for that? I don’t agree.” Despite the fact that Biden disagrees with Pressley, Warren, and Schumer, he too views the issue through the lens of distributive justice. But Biden believes distributive justice would not be served by a blanket policy of forgiveness. This explains the most recent proposals floated by members of the Biden administration, which consider much more limited and targeted debt forgiveness, aimed at those below a certain income threshold.

Biden has a point. Those who go to university earn, on average, significantly more than their counterparts holding only a high school diploma. They are also much less likely to be unemployed; college graduates’ unemployment rate is now just 2%.

So how could it really help promote equality and distributive justice to bail out the debts of the high-earning university-educated elite?

Pushing this point further, the recent calls for student debt forgiveness are seen by some as a disproportionately wealthy, powerful, and influential segment of society seeking to massively financially benefit themselves at the taxpayer’s expense. Is it right to force blue-collar taxpayers to bail out Harvard graduates? Megyn Kelly recently put it like this: “These people are going to be… elite graduates… Why should I be paying for their education? I don’t want to!”

Congresswoman Alexandria Ocasio-Cortez has pushed back against these skeptical characterizations and defended the distributive-justice credentials of student loan forgiveness, writing that the school someone went to “is not really shorthand for the income of the family that they come from.” Martina Orlandi gives a similar argument here. The trouble with this argument is that when we talk about adults being wealthy, we aren’t generally talking about their parents’ wealth but their own. The first person in a family to have wealth is still wealthy, and we don’t think they should be taxed less because their parents were poor. Likewise, it is unclear why college graduates should have their debts forgiven because their parents were poorer than them.

At a recent town hall, Ocasio-Cortez provided a much stronger argument in defense of debt forgiveness as a vehicle for distributive justice, pointing out that most students from high-income families never take student loans: “if you are very wealthy, if you are a multimillionaire’s child, if you are Bill Gates’ kid, if you’re Jeff Bezos’s kid—Jeff Bezos isn’t taking out a student loan to send his kids to college.” If rich kids don’t take student loans and poor kids do, then it is clear that forgiving these loans should promote greater wealth equality.

To get a better grasp on these various conflicting claims about what distributive justice demands in relation to student loans, we need to look more closely at the statistics.

Black college students are indeed the demographic of students most likely to use federal student loans. However, Black Americans have significantly lower rates of college enrollment than White Americans: 29% of Black Americans aged 25 to 29 have undergraduate degrees, while 45% of White Americans do. Therefore, forgiving federal student debt would probably help narrow the racial wealth gap between college graduates, but it would most likely widen the racial wealth gap between Americans overall. Likewise, college students from the wealthiest families tend to take out fewer loans, while those from the poorest take out more. Forgiving student loan debt would, therefore, likely decrease wealth inequality between college graduates. But, in terms of income, the top 40% of households owe 60% of outstanding education debt and make 75% of the payments. The bottom 40% of households have only 19% of outstanding education debt, and make only 10% of the payments. So forgiving student loans would likely increase wealth inequality between Americans overall, even as it lowers wealth inequality between college graduates.
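
To make that last point concrete, here is a stylized sketch of where the dollars of a blanket forgiveness would flow, using the debt shares just cited and the $1.73 trillion total mentioned earlier. The 128 million household count, and the simplifying assumption that forgiveness flows in proportion to debt held, are mine.

```python
# Stylized illustration of where blanket forgiveness flows, using the
# income-group debt shares cited above. The household count and the
# proportional-forgiveness assumption are simplifications of mine.

total_forgiven = 1.73e12  # outstanding student debt cited earlier
households = 128e6        # rough U.S. household count (assumption)

# (debt share, population share); the middle 20% debt share is implied
# by the two figures cited in the text (1 - 0.60 - 0.19 = 0.21).
groups = {
    "top 40% by income":    (0.60, 0.40),
    "middle 20%":           (0.21, 0.20),
    "bottom 40% by income": (0.19, 0.40),
}

for name, (debt_share, pop_share) in groups.items():
    per_household = total_forgiven * debt_share / (households * pop_share)
    print(f"{name}: ~${per_household:,.0f} forgiven per household on average")

# top 40% by income:    ~$20,273 per household
# middle 20%:           ~$14,191 per household
# bottom 40% by income: ~$6,420 per household
```

On these stylized numbers, the average household in the top 40% receives roughly three times as much as the average household in the bottom 40%.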

Intergenerational justice may provide a more convincing lens from which to defend student loan forgiveness. In 1970, the average in-state tuition for a public university was $394. In 2020 it was 25.8 times higher, at $10,560. Meanwhile, the federal minimum wage has risen by just 3.5 times. Where 5 hours of minimum-wage work per week once paid for a year of tuition at an in-state university, it now takes 28 hours per week. The days of paying for college with a part-time job are over. At the same time, employers now demand higher levels of education from their employees, putting this generation under immense pressure to take on educational debt to access the same jobs their parents worked with less education. In this context, student debt forgiveness can be seen as a way of mitigating the inequality between the generations — a way of transferring the nation’s wealth to younger Americans who have lacked the financial opportunities their parents had. Whether this is convincing or not likely depends on your view of government debt. Forgiving student debt would, effectively, nationalize it — adding it to the total U.S. federal debt. But fiscal conservatives argue this would simply add to the burden of future taxpayers (i.e., young people and their children). If they are right, then student loan forgiveness could simply perpetuate generational injustice, rather than mitigate it.
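
The hours-of-work comparison is easy to verify. The tuition figures are from the paragraph above; the federal minimum wage rates ($1.60 in 1970, $7.25 in 2020) are the historical values, supplied here for the calculation.

```python
# Verifying the hours-per-week figures above. Tuition numbers are from
# the text; the minimum wages ($1.60 in 1970, $7.25 in 2020) are the
# historical federal rates, supplied for the calculation.

def hours_per_week(tuition, wage, weeks=52):
    """Weekly hours of minimum-wage work needed to cover a year's tuition."""
    return tuition / wage / weeks

print(f"1970: {hours_per_week(394, 1.60):.1f} hours/week")     # ~4.7
print(f"2020: {hours_per_week(10_560, 7.25):.1f} hours/week")  # ~28.0
```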

Student loan forgiveness is a controversial topic for good reason. Student loans can be irresponsibly given and exploitative, but they can also be extremely beneficial. Forgiving them could reduce certain unjust inequalities in American society, but it could increase others. But this much is clear: the issue is not just political. It is also a debate about morality and about justice.

Virtual Work and the Ethics of Outsourcing

photograph of Freshii storefront

Like a lot of people over the past two years, I’ve been conducting most of my work virtually. Interactions with colleagues, researchers, and other people I’ve talked to have taken place almost exclusively via Zoom, and I even have some colleagues I’ve yet to meet in person. There are pros and cons to the arrangement, and much has been written about how to make the most out of virtual working.

A recent event involving Canadian outlets of the restaurant chain Freshii, however, has raised some ethical questions about a certain kind of virtual working arrangement, namely the use of virtual cashiers called “Percy.” Here’s how it works: instead of an in-the-flesh cashier to help you with your purchase, a screen will show you a person working remotely, ostensibly adding a personal touch to what might otherwise feel like an impersonal dining experience. The company that created Percy explains their business model as follows:

Unlike a kiosk or a pre-ordering app, which removes human jobs entirely, Percy allows for the face-to-face customer experience, that restaurant owners and operators want to provide their guests, by mobilizing a global and eager workforce.

It is exactly this “global and eager workforce” that has landed Freshii in hot water: it has recently been reported that Freshii is using workers who are living in Nicaragua and are paid a mere $3.75 an hour. In Canada, several ministers and labor critics have harshly criticized the practice, with some calling for new legislation to prevent other companies from doing the same thing.

Of course, outsourcing is nothing new: for years, companies have hired overseas contractors to do work that can be done remotely, and at a fraction of the cost of domestic workers. At least in Canada, companies are not obligated to pay outsourced employees a wage that meets the minimum standards of Canadian wage laws; indeed, the company that produces Percy has maintained that they are not doing anything illegal.

There are many worries one could have with the practice of outsourcing in general, chief among them: that it takes away job opportunities from domestic employees, and that it treats foreign employees unfairly by paying them below minimum wage (at least by the standards of the country where the business is located).

There are also some arguments in favor of the practice: in an op-ed written in response to the controversy, the argument is made that while $3.75 is very little to those living in Canada and the U.S., it is more significant for many people living in Nicaragua. What’s more, with automation risking many jobs regardless, wouldn’t it be better to at least pay someone for this work, as opposed to just giving it to a robot? Of course, this argument risks presenting a false dichotomy – one could, after all, choose to pay workers in Nicaragua a fair wage by Canadian or U.S. standards. But the point is still that such jobs provide income for people who need it.

If arguments about outsourcing are old news, then why all the new outrage? There does seem to be something particularly odd about the virtual cashier. Is it simply that we don’t want to be faced with a controversial issue that we know exists, but would rather ignore, or is there something more going on?

I think discomfort is definitely part of the problem – it is easier to ignore potentially problematic business practices when we are not staring them in the virtual face. But there is perhaps an additional part of the explanation, one that raises metaphysical questions about the nature of virtual work: when you work virtually, where are you?

There is a sense in which the answer to this question is obvious: you are wherever your physical body is. If I’m working remotely and on a Zoom call, the place I am is Toronto (seeing as that’s where I live), while my colleagues are in whatever province or country they happen to be physically present in at the time.

When we are all occupying the same Zoom call, however, we are also in another sense in the same space. Consider the following. In this time of transition between COVID and (hopefully) post-COVID times, many in-person events have become hybrid affairs: some people will attend in-person, and some people will appear virtually on a screen. For instance, many conferences are being held in hybrid formats, as are government hearings, trials, etc.

Let’s say that I give a presentation at such a conference, that I’m one of these virtual attendees, and that I participate while sitting in the comfort of my own apartment. I am physically located in one place, but also attending the conference: I might not be able to be there in person, but there’s a sense in which I am still there, if only virtually.

It’s this virtual there-ness that I think makes a case like Percy feel more troubling. Although a Canadian cashier who worked at Freshii would occupy the physical space of a Freshii restaurant in Canada, a virtual cashier would do much of the same work, interact with the same customers, and see and hear most of the same things. In some sense, they are occupying the same space: the only relevant thing that differentiates them from their local counterpart is that they are not occupying it physically.

What virtual work has taught us, though, is that one’s physical presence really isn’t an important factor in a lot of jobs (excluding jobs that require physical labor, in-person contact, and work that is location-specific, of course). If the work of a Freshii cashier does not require physical presence, then it hardly seems fair that one be compensated at a much lower rate than one’s colleagues for simply not being there. After all, if two employees were physically in the same space, working the same job, we would think they should be compensated the same. Why, then, should it matter if one is there physically, and the other virtually?

Again, this kind of unfairness is present in many different kinds of outsourced work, and whether physical distance has ever been a justification for different rates of pay is up for debate. But with physical presence feeling less and less necessary for so many jobs, new working possibilities call into question the ethics of past practices.

Unions and Worker Agency

photograph of workers standing together, arms crossed

The past few years have seen a resurgence of organized labor in the United States, with especially intense activity in just the past few months. This includes high-profile union drives at Starbucks, Amazon, the media conglomerate Condé Nast, and even MIT.

Parallel to this resurgence is the so-called “Great Resignation.” As the frenetic early days of the pandemic receded into the distance, workers began quitting at elevated rates. According to the Pew Research Center, the three main reasons for quitting were low pay, a lack of opportunity for advancement, and feeling disrespected. Former U.S. Secretary of Labor Robert Reich even analogized it to a general strike, in which workers across multiple industries stop work simultaneously.

Undoubtedly, the core cause of both the Great Resignation and the growth of organized labor is the same – dissatisfaction with working conditions – but the two are also importantly different. The aim of quitting is to leave the workplace; the aim of unions and strikes is to change it. They do this by trying to shift the balance of power in the workplace and give more voice and agency to workers.

Workplaces are often highly hierarchical, with orders and direction coming down from the top, controlling everything from mouse clicks to uniforms. This has even led some, like the noted political philosopher Elizabeth Anderson, to refer to workplaces as dictatorships. She contends that the workplace is a blind spot in the American love for democracy, with the American public confusing free markets with free workers, despite the often autocratic nature of the workplace. Managers may hold almost all the power in the workplace, even in cases where the actual working conditions themselves are good.

Advocates of greater workplace democracy emphasize “non-domination” – the idea that, at the very least, workers should be free from arbitrary exercises of managerial power in the workplace. While legal workplace regulations provide some checks on managerial power, the fact remains that not everything can or should be governmentally regulated. Here, worker organizations like unions can step in. This is especially important in cases where, for whatever reasons, workers cannot easily quit.

Conversations about unionization generally focus on wages and benefits. Unions themselves refer to the advantage of unionization as the “union difference,” and emphasize the increases in pay, healthcare, sick leave, and other benefits compared to non-unionized workplaces. But what causes this difference? By allowing workers to bargain a contract with management, unions give workers a seat in what is typically a management-side discussion about workplace priorities. Employer representatives and union representatives must sit at the same table and come to some kind of agreement about wages, benefits, and working conditions. That is, for good or for ill, unions at least partially democratize the workplace – although this is far from full workplace democracy, in which workers would democratically exercise managerial control.

Few would hold that, all things being equal, workers should not have more agency in the workplace. More likely, skeptics worry either that worker collectives like unions come at the cost of broader economic interests, or that unions specifically do not secure worker agency but in fact saddle workers with even more restrictions.

The overall economic effect of unions is contentious, but there is little evidence that they hobble otherwise productive industries. A 2019 survey of hundreds of studies on unionization found that while unionization did lower company profits, it did not negatively impact company productivity, and it decreased overall societal inequality.

More generally, two assumptions must be avoided. The first is that the interests of the workers are necessarily separate from the interests of the company. No doubt company interests do sometimes diverge from union interests, but at a minimum unionized workers still need the company to stay in business. This argument does not apply to public sector unions (government workers), but even there, unions can arguably lead to more invested workers and stronger recruitment.

The second assumption to avoid is that management interests are necessarily company interests. Just as workers may sometimes pursue their personal interests over broader company interests, so too can management. This concern is especially acute when investment groups, like hedge funds, buy a company. Their incentive is to turn a profit on their investment, whether that is best achieved by the long-term health of the company or by selling it for parts. Stock options were historically proposed as a strategy to tie the personal compensation of management to the broader performance of a company. This strategy is limited, however, as what it does more precisely is tie management compensation to the value of stock, which can be manipulated in various ways, such as stock buybacks.

Beyond these economic considerations, a worker may also question whether their individual agency in the workplace is best represented by a union. Every organization is going to bring some strictures with it, and this can include union regulations and red tape. The core argument on behalf of unions as a tool for workplace agency is that due to asymmetries of power in the workplace, the best way for workers to have agency is collective agency. This is especially effective for goals that are shared widely among workers, such as better pay. Hypothetically, something like a fully democratic workplace (or having each individual worker well positioned to be part of company decision making) would be better for worker agency than unions. The question of whether these alternatives would work is more practical than ethical.

There can be other tensions between individual and collective agency. In America specifically, unions have been viewed as highly optional. The most potent union relationship is a “closed shop,” in which a union and company agree to only hire union workers. Slightly less restrictive is a “union shop,” under which all new workers must join the union. Both are illegal in the United States under the 1947 Taft-Hartley Act, which restricted the power of unions in several ways. State-level “right to work” laws go even further, forbidding unions from negotiating contracts that automatically deduct union representation fees from employees. The argument is one of personal freedom – that if someone is not in the union they should not have to pay for it. The challenge is that the union still has to represent this individual, who benefits from the union they are not paying for. This invites broader questions about the value of individual freedoms, and how they must be calibrated with respect to the collective good.

 

The author is a member of Indiana Graduate Workers Coalition – United Electrical Workers, which is currently involved in a labor dispute at Indiana University Bloomington.

“Severance,” Identity and Work

split image of woman worrying

The following piece discusses the series Severance. I avoid specific plot details. But if you want to go into the show blind, stop reading now.

Severance follows a group of employees at Lumon Industries, a biotech company of unspecified purpose. The main characters have all received a surgery before starting this job. Referred to as the “severance” procedure, this surgery causes a split in the patient’s personality. After surgery, patients awaken to find that while they have factual memories, they have no autobiographical memories – one character cannot remember her name or the color of her mother’s eyes but remembers that Delaware is a state.

However, the severance procedure does not cause irreversible amnesia. Rather, it creates two distinct aspects of one’s personality. One, called the outie, is the individual who was hired by Lumon and agreed to the procedure. When she goes to work, however, the outie loses consciousness and another aspect, the innie, awakens. The innie has no shared memories with the outie. She comes to awareness at the start of each shift; the last thing she remembers is walking to the exit the previous day. Her life is an uninterrupted sequence of days at the office and nothing else.

Before analyzing the severance procedure more closely, let us take a few moments to consider some trends about work. As of 2017, 2.6 million people in the U.S. worked on-call, stopping and starting at a moment’s notice. Our smartphones leave us constantly vulnerable to emails or phone calls that pull us out of our personal lives. The pandemic and the corresponding need for remote, at-home work only accelerated the blurring of lines between our personal lives and spaces, and our work lives. For instance, as workplaces have gone digital, people have begun creating “Zoom corners.” Although seemingly innocuous, practices like these involve ceding control of some of our personal space to be more appealing to our employers and co-workers.

Concerns like these lead Elizabeth Anderson to argue in Private Government that workplaces have become governments. Corporate policies control our behavior when on the clock, and our personal activities, which can be easily tracked online, may be subject to the scrutiny of our employers. Unlike with public, democratic institutions, where we can shape policy by voting, the vast majority of workers have no say in how their workplace is run. Hence this control is totalitarian. Further, “low skilled” and low-wage workers – because they are deemed more replaceable – are even more subject to their employer’s whims. This increased vulnerability to corporate governance carries with it many negative consequences, on top of those already associated with low income.

Some consequences may be due to a phenomenon Karl Marx called alienation. When working you give yourself up to others. You are told what to produce and how to produce it. You hand control of yourself over to someone or something else. Further, what you do while on the clock significantly affects what you want to do for leisure; even if you loved gardening, surely you would do something else to relax if your job was landscaping. When our work increasingly bleeds into our personal lives, our lives cease to be our own.

So, we can see why the severance procedure would have appeal. It promises more than just balance between work and life: it makes it impossible for work to interfere with your personal life. Your boss cannot email you with questions about your work on the weekend, and you cannot be asked to take a project home if you literally have no recollection of your time in the office. To ensure that you will always leave your work at the door may sound like a dream to many.

Further, one might argue that the severance procedure is just an exercise of autonomy. The person agreeing to work at Lumon agrees to get the procedure done and we should not interfere with this choice. At best, it’s like wearing a uniform or following a code of conduct; it’s just a condition of employment which one can reject by quitting. At worst, it’s comparable to our reactions to “elective disability”; we see someone choosing a medical procedure that makes us uncomfortable, but our discomfort does not imply someone should not have the choice. We must not interfere with people’s ability to make choices that only affect themselves, and the severance procedure is such a choice.

Yet the show itself presents the severance procedure as morally dubious. Background TV programs show talking heads debating it, activists known as the “Whole Mind Collective” are campaigning to outlaw severance, and when others learn that the main character, Mark, is severed, they are visibly uncomfortable and uncertain what to say. So, what is the argument against it?

To explain what is objectionable about the severance procedure, we need to consider what makes us who we are. This is an issue referred to in philosophy as “personal identity.” In some sense, the innie and the outie are two parts of the same whole. No new person is born because of the surgery and the two exist within the same human organism; they share the same body and the same brain.

However, it is not immediately obvious that people are simply organisms. A common view is that a significant portion, if not all, of our identity deals with psychological factors like our memories. To demonstrate this, consider a case that Derek Parfit presented in Reasons and Persons. He refers to this case as the Psychological Spectrum. It goes roughly as follows:

Imagine that a nefarious surgeon installed a microchip in my brain. This microchip is connected to several buttons. As the surgeon presses each button, a portion of my memories changes to Napoleon Bonaparte’s memories. When the surgeon pushes the last button, I would have all of, and only, Napoleon’s memories.

What can we say about this case? It seems that, after the surgeon presses the last button, Nick no longer exists. It’s unclear when I stopped existing – after a few buttons, there seems to be a kind of weird Nick-Napoleon hybrid, who gradually goes full Napoleon. Nonetheless, even though Nick the organism survives, Nick the person does not.

And this allows us to see the full scope of the objection to the severance procedure. The choice is not just self-regarding. When one gets severed, they are arguably creating a new person. A person whose life is spent utterly alienated. The innie spends her days performing the tasks demanded of her by management. Her entire life is her work. And what’s more troubling is that this is the only way she can exist – any attempts to leave will merely result in the outie taking over, having no idea what happened at work.

This reveals the true horror of what Severance presents to us. The protagonists have an escape from increasing corporate intrusion into their personal lives. But this release comes at a price. They must wholly sacrifice a third of their lives. For eight hours a day, they no longer exist. And in that time, a different person lives a life under the thumb of a totalitarian government she has no bargaining power against.

The world of Severance is one without a good move for the worker. She is personally subject to private government which threatens to consume her whole life, or she severs her work and personal selves. Either way, her employer wins.

Do Grades Make Our Lives Worse?

photograph of old-fashioned elementary report card

It’s nearing the end of the semester, and many students will be waiting on the edge of their seats to receive their final grades. For those who seek higher education, their GPA will matter for their applications to med school, law school, and other graduate schools. This numerical representation of a student’s academic achievement allows institutions, like universities and medical schools, to have some objective measure by which to discriminate between applicants. And perceptive students can figure out ways to maximize their GPA.

A numerical representation of academic performance is a good thing, right? It is both legible and achievable. However, if we look at contemporary philosopher C. Thi Nguyen’s work on value capture, the answer might not be so clear. According to Nguyen, “value capture occurs when: 1. Our values are, at first, rich and subtle. 2. We encounter simplified (often quantified) versions of those values. 3. Those simplified versions take the place of our richer values in our reasoning and motivation. 4. Our lives get worse.”

To see how this process works, take Nguyen’s example of the Fitbit. Say that I’m trying to start off the year healthier and increase my exercise. My thoughtful mother buys me a Fitbit so that I can track my steps and try to meet a goal of 10,000 steps a day. After a while, I find myself motivated to get 10,000 steps in a day, but that motivation has now replaced my earlier motivation to be healthier and get more exercise. I may be walking more, but I might be neglecting other forms of movement and a more holistic practice of promoting health in order to hit the clear and concrete goal of my step count. Depending on how obsessed I am with the 10,000 steps number, my life has probably gotten worse. This is the process of value capture.

Are grades subject to value capture? Let’s start with the first step. What are the prior values we are trying to measure with grades? At the broadest level, it seems that grades are meant to capture how well a student is performing given the standards of the class, which are in turn determined by the standards of the discipline. Given the complexity of any given subject and the many ways that subject could be broken down into a class, it’s very difficult to give a clear and easy explanation of what any given grade is trying to capture. And the same grade could mean different things — two students could be performing equally well in a class but each have different strengths. The values that grading tries to capture are evidently rich and subtle. Step 1 is complete.

Do grades represent simpler (and sometimes quantified) versions of these rich values? Yes. Grades compress student performance into a number that can be bureaucratically sorted through at an institutional level. This has certain benefits — a law school can quickly do an initial sift through applicants to ensure that they have a sufficiently high GPA and LSAT score. But it also has its drawbacks. It doesn’t capture, for instance, that a slightly lower grade in a very hard class can represent better student performance than a higher grade in an easier class. Given the standardized format of grades, a student’s scores may also do a poor job of representing personal growth and achievement that may vary based on the social and educational starting points of different students. Steps 1 and 2 are complete.

What about step 3? Do grades take the place of our richer values in our reasoning and motivation? It seems that often they do. This is in part because of external motivations, such as the importance of grades for employment or getting into a certain program. But it is also in part because of the ways in which we tend to start valuing the grade for its own sake. Think about, for instance, the parent who wants their child to succeed. Instead of focusing on the actual progress their child is making given the challenges their classes present, that parent can easily be seduced by the clarity and seeming objectivity of their child’s grades. The goalposts can quickly shift from “being a good student” to “making good grades.”

This shift can happen for students as well. Grades are often the most tangible feedback they get from their instructors, even though they may sometimes receive qualitative assessments. Grades may feel like a more real and concrete measure of academic performance, especially because they are the record that remains after the course. Students who start off valuing education may easily get sucked into primarily working to maximize their grades rather than to maximize their learning. It is worth noting that Nguyen himself thinks that this motivational shift happens with grades, noting that “students go to school for the sake of gaining knowledge, and come out focused on maximizing their GPA.” Steps 1, 2, and 3 are all complete.

What about step 4? Do grades make our lives worse? This is a hard question to answer, as it’s an empirical question that depends on a myriad of different personal experiences. In my own experience, focusing on getting a higher grade has often interfered with my ability to learn in a course. Instead of diving into the material itself, I often got stuck at the level of trying to figure out how to make sure that I got that A. In harder courses, this would make me very stressed as I worked exceptionally hard to meet the requirements. In easier courses, this would mean that I often slacked off and did not perform as well as I could have, since it was an easy A. And, as much as I tried to shake the motivational pull of grades, it was always there. Grades made my educational experience worse.

What should we do with this problem? Given the potential for value capture, grades are a powerful tool, and teachers should be careful to create an assessment structure that more closely incentivizes an engagement with the rich, pluralistic values that students should come to appreciate. This is a difficult task, as often those values cannot be easily translated into a grading system that is legible to the institution (and to other people across institutions). Because grades provide an easy way to communicate information, it’s unlikely that getting rid of them would make things better, at least in the short-term.

One solution might be to retain the current numerical/letter grade assignments but to add a short paragraph qualitatively assessing the student’s performance throughout the course. This could be fraught for a number of reasons (including implicit bias, the bureaucratic logistics of tracking such information, and the additional work for teachers), but that extra information would help to contextualize the numbers on the page and provide a richer understanding of a student’s performance, both for that student and for those assessing the student as an applicant. This solution is far from perfect, but it might provide one step towards recapturing our motivation to track the rich values we started with.