
Educating Professionals


Universities around the country have, in the last century, shifted their focus from a traditional liberal arts curriculum to an increasingly “practical” or vocational form of education. There is a popular conception that the purpose of higher education is some form of job-training. A cursory Google search will produce a number of articles asking whether college is a “sound investment,” or whether college graduates make more money than their peers who elect to forego college for work. Virtually every one of these articles defines the worth of a college degree in purely economic terms. There is little room to deny that, in our modern liberal democracy, making money is a practical necessity. Yet, I think there is something deeply confused about the attempt to reduce the value of education generally — and higher education specifically — to the economic gains that come from education. I have argued elsewhere that conflating the so-called “practicality” of education with the “vocationality” of education is a conceptual mistake, so I will not rehearse those arguments here.

Instead, I intend to discuss a related problem present in the ways we conceive of the nature, purpose, and value of higher education. Following the 2008 recession, there was a marked shift in students’ and educators’ priorities toward STEM (science, technology, engineering, and mathematics) fields. People seem to see STEM fields as a means to a professional end — scientists, engineers, and folks in tech tend to make money, and that’s something people in a precarious economic environment want. We can see the need for economic stability reflected in every aspect of the university, including many college and university mission and vision statements.

It is not difficult to see the ways in which gaining technical proficiency in biology or engineering, for example, will prepare students for a career. However, what some students and educators fail to recognize is that even areas within sciences that most directly correlate to in-demand jobs need the humanities. In preparing a guest lecture on engineering ethics, I looked into the nature of professional ethics generally. This led me to think about the nature of a profession and why it is important that certain professions have ethical guidelines by which practitioners must abide. The word “profession” is derived from the late Latin professus, which roughly means “to profess one’s vows.” One might wonder what a profession of one’s vows has to do with a “profession” as we consider it today. The answer is surprisingly straightforward — in the monastic tradition, monks were asked to make a public declaration of their commitment to living a more just, ethical life in light of their training. Accordingly, they would profess their commitment to living according to this higher standard. Such dedications bled over into highly skilled and highly specialized trades — as jobs require increasingly specific training, it becomes increasingly important that the people who take on these skilled positions profess to self-govern according to higher standards, if only because the number of people who have the knowledge to provide a check on them has become vanishingly small. There can be little doubt that technicians at every level need to behave ethically, but with a larger peer group, there are more individuals, and more opportunities to recognize and correct potential abuse. As William F. May powerfully states, “if knowledge is power, then ignorance is powerlessness. Although it is possible to devise structures that limit the opportunities for abuse of specialized knowledge, ultimately one needs to cultivate virtue in those who wield that relatively inaccessible power.”

It is not difficult to see how we can take this idea of professionalism as tied with virtue and apply it to higher education today. Let’s take the example of our engineering students. Within the field of engineering, there are different fields of sub-specialization, the result of which is a (relatively) small number of professional peers — those with the specialized knowledge to recognize and correct potential problems before they become catastrophic. The fact that students in a senior-level engineering class already have narrowly defined expertise that differs from peers in the same class highlights the need for a curriculum that instills ethics early on.

This problem becomes more acute as students graduate and enter the profession. As the number of engineers who have the specific knowledge necessary to evaluate the choices made by any given engineer is so small, we must rely on the engineers themselves to abide by a higher standard — especially in light of the public-facing nature of the work engineers undertake. Engineering is a profession, and as such we need engineers who profess to, and actually do, live and work according to a higher standard. Such a profession requires more than mere compliance with a code of conduct. As Michael Pritchard notes, “professional codes of ethics are not static documents. But even if a code is relatively unchanging, it is not simply an algorithm for decision making. It must be applied – which calls for independent judgment on the part of the professional.” In light of this understanding of the nature and demands of professionalism, I propose that universities insist upon an increased emphasis on humanities — those fields whose value is less directly connected to vocational outcomes and more easily connected to the development of character, person, and civic responsibility. Humanistic fields are just as valuable as more vocationally-directed fields, even to those vocationally-directed fields themselves.

According to a recent report from the Bureau of Labor Statistics, many institutions were ill-prepared to handle the influx of people looking for STEM degrees following the 2008 recession. The BLS additionally cautions that the pandemic is likely to cause another STEM surge, offering us another opportunity to shape industries and mold the next wave of future professionals. In considering how to do this, and how to do it well, it should be clear from what I’ve said that we need to emphasize the connections between the humanities and STEM fields. While we often like to think of science as purely descriptive and divorced from considerations of value (moral, aesthetic, or otherwise), that is simply not an accurate, or at any rate complete, picture. The ultimate aims of science are, I suggest, intrinsically value-laden. I don’t have room here to defend this claim, but for a careful discussion, see Heather Douglas’ Science, Policy, and the Value-Free Ideal (especially chapters 4, 5, and 8). For now, let’s return to our example of engineering students. In my discussions with students, many report that they went into engineering with high-minded goals about improving the quality of life for those around them. They see the end for the sake of which they pursue STEM not as mere financial stability but as the betterment of human lives; yet most report that they have had little or no formal education in ethics or value theory. The narrow scope of their education illustrates that colleges and universities are not doing enough to truly prepare students for the non-technical aspects of their chosen profession. The solution, I propose, is to return to a more well-rounded form of education: one that emphasizes humanities and integrates humanistic education with STEM fields.

We do not need technically proficient but ethically unimaginative or inflexible workers to fill the needs of our consumer economy; rather, we need professionals understood in the broad sense I’ve described. We need to cultivate and encourage our students to commit to living according to the highest standards of moral virtue. As Rena Beatrice Goldstein argues,

“Virtue enables a person to navigate challenging human encounters in many spheres, and virtue curricula can help students learn to navigate well by practicing virtue in different environments. It takes time to develop virtues like open-mindedness. Indeed, being open-minded with strangers in the civic domain may require different motivations than being open-minded with one’s peers, family, or friends. Practicing virtues in a variety of domains can help students develop the right motivations, which may be different in different domains.”

I propose that we see the next STEM push as an opportunity to re-emphasize our commitment to all of the core values of higher education: personal growth, civic responsibility, and professional excellence. When we consider “professional excellence,” we must build into that concept a healthy understanding of, and respect for, the stable virtues cultivated through sustained humanistic study.

Revenge Porn, Public Interest, and Free Speech


After resigning from office in 2019, former California Representative Katie Hill has been dragged back into the news. Hill’s lawsuits against the Daily Mail and RedState for publishing nonconsensual pornography were recently dismissed, and, to add insult to injury, she was ordered to pay more than $200,000 in the defendants’ legal fees. Not only was Hill denied the possibility of being made whole, but she was also forced to pay for the “privilege of them publishing nude photos … obtained from an abuser.”

How did we get here? Why is Katie Hill continuing to foot the bill for being made the target of “revenge porn”? How could the distribution of others’ private sexual images without their consent ever be protected by the First Amendment? Shouldn’t respect for individual privacy trump others’ interest in publicizing such intimate information?

In 2019, news broke that Hill and her then-husband had been involved in an improper relationship with a junior employee. Texts surfaced suggesting that one of Hill’s campaign workers was not only engaged in a sexual relationship but might also have been abused. It was further alleged that Hill was romantically involved with a congressional aide. RedState and the Daily Mail added fuel to the fire by publishing a number of suggestive and salacious photos. With rumors of a media trove containing hundreds of additional pictures and texts and a congressional probe just beginning, Hill resigned.

These revelations were no small thing; they suggested more than a mere lapse in judgment. The asymmetric power relation between the would-be congresswoman and a subordinate demonstrated a failure of responsibility and constituted an abuse of power. These charges must be taken seriously.

But the question of whether Hill’s relationship with a campaign staffer was improper (it was) should be kept separate from the question of whether the Daily Mail and RedState were acting in the public interest by choosing to publish nonconsensual pornography.

Judge Yolanda Orozco, who dismissed Hill’s case, claimed that the media outlets’ circulation of those nude photographs served a compelling public interest in questioning Hill’s “character, judgment and qualifications for her congressional position.” The electorate, she argued, deserved to bear witness to their representative’s apparent vices, and these news organizations were merely serving a democratic function in satisfying that need. The photos “allegedly depicted [Hill] with a campaign staffer whom she was alleged to have had a sexual affair with and appeared to show [Hill] using a then-illegal drug and displaying a tattoo that was controversial because it resembled a white supremacy symbol.” These, Orozco insists, are important details that the public has a right to know, or, at least, that news organizations have an overriding interest in disseminating.

This reasoning, however, appears to provide an incredibly broad reading of what qualifies as a “matter of public concern.” Indeed, it seems to offer a blank check to any gossip or tabloid journalism that people might be able to put to good (political) use. (Consider, for example, the recently leaked video of New York City Council candidate Zack Weiner.) This justification does more than simply make the relaying of private information an important social good. Instead, Orozco’s position suggests that it is people’s feelings about the information, not the legal relevance of the information itself, which should determine the permissibility of sharing revenge porn. Whatever distaste or revulsion an image might provoke is enough to warrant overturning an individual’s right to legal protection against this kind of invasion of privacy, harassment, and sexual violence.

Further, according to Judge Orozco’s ruling, supplying a written description of the photos’ content instead of posting the photos themselves would fail to adequately capture or sufficiently communicate the level of depravity exhibited by the actual image. Instead, “the public should be permitted to determine the importance or relevance … for itself.” Again, this suggests that the ambiguity attending any likeness (inevitably lacking context) is just as important as (if not more important than) the cold, hard facts. A picture is worth a thousand words, and its meaning should lie with the beholder.

Note, however, that the photos’ publication is defended on the grounds that they provide evidence of Hill lacking moral fiber, not proof of the allegations leveled against her. The photograph which includes a staffer depicts a previous relationship that occurred during the campaign — a consensual relationship that was surely unethical, but not illegal, and to which Hill admitted — while the public value of the other photos centers on Hill holding a bong and sporting a questionable tattoo.

The publication of these photos, then, does little else but invite the viewer to pass judgment according to arbitrary standards of decency and decorum, amounting to nothing more than moralizing and slut-shaming. (Consider, for example, commentators’ judgments that the photos are revealing, “bespeaking a kind of hedonism that at least some voters may view as a character defect.”)

But our recognition of this unique kind of harm is precisely why revenge porn laws exist in the first place. Their purpose is to restrict the freedom of bad actors in recording and distributing salacious materials against their victims’ will. The publicizing of nonconsensual pornography weaponizes information by bringing public opinion down on the head of victims and branding them with an indelible social stigma. Given the limitations to genuine redress, the law must have sufficient teeth to act as a deterrent capable of discouraging other would-be attackers. The law operates with the understanding that there is no putting the genie back in the bottle once a leak occurs.

The language of “revenge porn” is often criticized for focusing our attention on the perpetrator’s mindset as opposed to the damage done to victims. Surely, we shouldn’t concentrate our efforts on divining whether spite was the overwhelming motivation behind a perpetrator choosing to distribute illicit images. What matters is the unique kind of sexual violence that is being threatened. But, in this particular case, the context might prove important. First, the photos were taken without Hill’s knowledge (or consent) and distributed by a jilted and “abusive” ex-husband in the midst of a messy divorce. But the leak’s publication also looks to be the result of a concerted effort by Hill’s political opponents. As Quinta Jurecic of Lawfare notes, this may be the first known instance where a “politically aligned publication has published an explicit photo of an opposition politician for apparent political gain.” In fact, Politico reported that the person responsible for posting the photos used the same platform to publicly advocate for the Republican running for Hill’s vacated seat.

Despite these troubling circumstances, commentators have suggested that Hill simply “needs to take the L and move on.” Anyone who chooses to thrust herself into the public eye sacrifices the right to keep any detail of her private life hidden from view. Everything she does becomes a public concern. Ultimately, ours is a society that prioritizes the community’s right to know over individuals’ right not to disclose, privileges one’s freedom to do rather than others’ right to impede, and chafes much more at government overreach than at the paparazzi’s prying eyes. Still, it seems backwards to conclude that those occupying the spotlight are less in need of this protection rather than more.

Autonomy, Euthanasia, and Non-Terminal Patients


In March of this year, changes to the law regarding assisted suicide and euthanasia in Canada loosened some of the restrictions on who is eligible to apply. Assisted suicide and voluntary active euthanasia have been legal in Canada since 2016; however, the law governing such procedures was recently challenged for being too restrictive and not in keeping with the original ruling of the Supreme Court. Initially, to be eligible for assisted suicide or euthanasia, a patient’s death must have been considered “reasonably foreseeable,” but that criterion has now been dropped, so that even people suffering from grievous and irremediable mental illnesses will be eligible. But this newfound freedom only raises more questions about the concept of consent, especially in those cases where patients with impaired mental faculties express a desire to end their life.

There are relatively few nations in the world which permit either physician-assisted suicide or physician-assisted euthanasia, and even fewer that permit them for people who are not terminally ill. In the United States, for example, passive euthanasia is legal, and a handful of states permit physician-assisted dying, but only in situations where there is a terminal illness. Canada, on the other hand, will now join nations like Belgium and the Netherlands in permitting voluntary active euthanasia and assisted suicide for patients who are not terminally ill. In Belgium, patients with psychiatric conditions, even children, can request euthanasia. But in non-terminal situations where a patient may request that their life end because of a psychiatric condition like depression or dementia, the issue is complicated. In all of the above nations, the notion of informed consent is paramount, but it isn’t always easy to determine whether a patient is competent enough to provide consent.

As Scott Kim, a psychiatrist and philosopher, notes,

“It is not easy to distinguish between a patient who is suicidal and a patient who qualifies for psychiatric euthanasia, because they share many key traits…one does not need to be a psychiatrist to appreciate how psychiatric disorders, especially when severe enough to lead to euthanasia requests, could interfere with a patient’s ability to make ‘voluntary and well considered’ decisions.”

Unlike a terminal illness, which may be confirmed by a chemical or imaging test, a psychiatric illness offers no definitive marker; it can be difficult to say for sure whether such a patient has “unbearable suffering without the prospect of improvement.” For example, a Dutch woman elected to be euthanized 12 months after her husband’s death for “prolonged grief disorder,” despite being otherwise physically healthy. To make this determination even more complicated, just last year the Dutch Supreme Court expanded the law to extend this choice to those suffering from dementia as well.

Those who defend the idea of psychiatric-assisted dying in non-terminal patients argue that the suffering caused by mental illness justifies physician-assisted dying, but only if the patient is able to request such a procedure autonomously and rationally. However, some philosophers and ethicists take issue with this criterion. In a paper on the subject, Jukka Varelius points out that, “Given that the distress a psychiatric patient undergoes can be very severe and that there may not always be adequate means of alleviating it (short of ending the patient’s life), the idea that psychiatric-assisted dying could sometimes be morally acceptable does merit attention.” Indeed, many argue that excluding the mentally ill from enjoying this freedom on the basis of disability is objectionably discriminatory.

For a decision to be autonomous, it is commonly thought that it must be voluntary, intentional, and based on a sufficient understanding of its nature and consequences. But certain mental illnesses undermine a patient’s ability to understand the world. A similar problem occurs in cases of dementia. As noted in a paper on the subject from the Journal of Neurology, “those suffering from dementia suffer from a disease that itself infiltrates the very center of autonomy and voluntariness.” But Varelius makes the case that even if their conception of reality is distorted, non-autonomous psychiatric patients can also suffer unbearably if they are unable to express a reasoned decision to end their life. It is already common practice, for example, to engage in non-voluntary euthanasia by withdrawing life support from non-autonomous patients if it is deemed to be in the patient’s best interests, such as those who are in an irreversible comatose state or those with severe brain damage. It is, however, difficult to argue that we have any certainty regarding the patient’s personal preferences. Because of this, our standards involving autonomous choice may be less important than we often make them, and it would be cruel to claim that the suffering we force people to endure due to our skepticism of their “true” interests is not morally significant.

On the other hand, many may argue that there is a significant difference between active and passive euthanasia, or even deny that passive euthanasia should be endorsed at all. Also, when it comes to conditions like dementia and mental illness, it won’t always be clear whether suffering can be abated. Longitudinal studies show that patients with chronic psychiatric disorders sometimes get better or worse for reasons beyond the control of healthcare providers. So, it might never be clear whether there are other reasonable alternatives to euthanasia. And, without the ability to predict the future or have a meaningful conversation with a patient, there is no more reason to think that a person would want to be euthanized than to think that they wouldn’t.

There is also strong rejection of euthanasia from psychiatrists, stemming from the nature of the profession. To many, a fundamental core of psychiatry is to prevent suicide and to address hopelessness, helplessness, the desire to die, and the inability to see a future. The shift in policy toward psychiatric euthanasia is considered a fundamental change to the practice of psychiatry. Many worry about the slippery-slope precedent set by the ever-expanding criteria for euthanasia, which are beginning to include those who feel “like they have a completed life” and are tired of living. And some studies of the system in the Netherlands reveal that the no-reasonable-alternative criterion is not always met.

For these reasons, it is difficult to assess whether a decision is autonomous or how important that determination is in many of these cases. We need to explore the various frameworks in place to determine appropriate eligibility criteria and approval processes. Finding the right balance (if there even is such a thing) may be something that can only be learned the hard way.

Creation, Destruction, and the Ethics of “Murderabilia”


On March 30th, 1981, 25-year-old John Hinckley Jr. attempted to assassinate President Ronald Reagan. He was convinced that doing so would earn him the affection of the young woman with whom he was obsessed — the 19-year-old actress Jodie Foster, who had starred in the film Taxi Driver. Hinckley succeeded in shooting the president in the chest, and in the process he also shot Press Secretary James Brady in the head, Secret Service Agent Timothy McCarthy in the side, and District of Columbia police officer Thomas Delahanty in the neck. Hinckley was found not guilty by reason of insanity and was committed to St. Elizabeth’s Hospital, a psychiatric institution.

Hinckley remained at St. Elizabeth’s for 34 years before he was conditionally released into the care of his mother in 2016. He now owns a shop that sells books and antiques. While institutionalized, Hinckley developed a fondness for expressing himself through art; he paints and also writes and plays music. For many years, if he wanted to display his work, he was required to do so anonymously because of statutes that prohibit convicted criminals from benefiting from the notoriety that they gained from committing their crimes. In October 2020, he was granted permission by a District Court judge to sign, claim credit for, and sell his work.

Hinckley’s visual art consists mostly of landscapes. Other notorious criminals create art that is more disturbing and intimately connected to the crimes they have committed. Serial killer John Wayne Gacy, who was convicted of killing 33 young men and boys and burying them in the crawlspace under his family home, is infamous for creating art depicting himself as a killer clown. Before he was convicted, Gacy would often dress up as a clown to entertain patients at the local children’s hospital. His art is inseparable from who he was as a person; it is a manifestation of the murderous persona that he developed over the course of many tragic years. Arthur Shawcross, murderer of 14 people in the 1980s, also created art related to his crimes, some of it surprisingly similar to the art created by Gacy. Other killer artists include Charles Manson, Richard Ramirez, Anthony Sowell, and Elmer Wayne Henley.

These artifacts produced by incarcerated criminals are trade items in the small but thriving business known as the murderabilia market. Those who deal in this kind of thing obtain letters, artwork, recorded music, and even the hair and skin of notorious criminals. Some true crime aficionados will pay a significant price for these items.

Over the years, many pieces of legislation have been passed in an attempt to ward off the possibility that the commission of a crime, in many cases a serious and violent one, will make various forms of artistic expression more profitable for the person or people who committed it. In the immediate aftermath of the Son of Sam murders that plagued New York City in the summer of 1977, New York passed the Son of Sam Law prohibiting criminals from profiting from expression that is lucrative because of the notoriety that they have gained from their crimes. Many other states quickly followed suit. The law was passed because people believed that David Berkowitz, the person found legally responsible for the killings, was planning to sell exclusive story rights related to his life and crimes to the publishing company Simon & Schuster. They reasoned that if any money is made from the sale of such items, that money should go to the families of the victims. The company, which was attempting to publish the memoirs of Henry Hill, the FBI informant whose life story would later be dramatized in the movie Goodfellas, sued in 1989. They argued that preventing convicted individuals from expressing themselves in this way violated their First Amendment rights. They won the case in the Supreme Court, which held that the laws were constructed in a way that was too broad to achieve the state’s compelling interest.

Hinckley differs from the other figures listed above. He suffers from mental health issues, and for that reason the court decided that he could not be held legally responsible for the crimes that he committed. The attitudes that we have toward criminals frequently turn on the extent to which we believe that they acted freely. If coercive factors were in play such that Hinckley had no real control over whether he shot Reagan, we shouldn’t hold him fully responsible for doing so. Certain mental health problems are coercive in this way.

Freedom of expression is important for many reasons, and some of them are therapeutic. Art therapy is used in a variety of settings to deal with trauma and mental health problems. The therapeutic aspects of creating art don’t happen only at the individual level; sharing, interpreting, and discussing art is a social experience. Limiting this form of expression might cut us off from our nature as creative and social beings who need these outlets in some form or other to survive and to be psychologically healthy.

That said, there are great discrepancies from one state to the next and from one jury to the next when it comes to how seriously people are willing to take insanity defenses. The legal definition of insanity has nothing to do with whether a person has a diagnosed or diagnosable mental illness and has everything to do with whether the person in question knew the difference between right and wrong at the time that the crime was committed. Some people view mental health problems as coercive factors and others do not. Some of the other convicted criminals who make art could have also been deemed legally insane, had they had a different lawyer, a different jury, a different judge, or been tried in a different state. A similar problem arises when it comes to death penalty sentences — such judgments are often arbitrary and have more to do with where a defendant committed a crime than with the mental state of the accused at the time that it was committed.

Even if there are good reasons to allow the social experience of artistic expression to take place, there may also be legitimate medical and social reasons to prevent it. Hinckley was diagnosed with narcissistic personality disorder, and early on doctors feared that social recognition of his art and music might get in the way of his work on that condition. Human beings are both creative and destructive forces. Not all forms of creation are positive, but some of them are so positive that they are life-affirming. Does cutting off all pathways to creation disrespect the dignity of human beings?

There are other cases in which creation is really a form of destruction. Sometimes, the creation of art is a display of hate and violence — not all art is rainbows and rose petals, or even landscapes. What’s more, we often don’t want it to be; it can be important that art is cathartic. That said, sometimes an artist’s intent is to pass along their rage. The artist may want to revictimize the people that they’ve hurt, or to lash out against people they believe have hurt them. Indeed, some families feel crushed when art created by someone who killed their family member is sold. They feel that the perpetrator has not only robbed them of their loved one but is also taking that crime all the way to the bank.

There are many artists who haven’t been convicted of crimes who have nevertheless done awful things and are terrible human beings. For example, the 20th-century artist and architect Eric Gill is now known to have sexually abused his two oldest daughters, yet his art is still featured in exhibitions and is a fixture of private collections. Some of this work even features his nude daughters as subjects. The 16th-century artist Caravaggio was violent and was sentenced to death for murder before he fled to Naples, yet his work is displayed at the Louvre in Paris, the Uffizi in Florence, and the National Gallery in London. All these years removed, much of his art is likely to be more interesting than off-putting to patrons as a result of the artist’s profile. We don’t need to go centuries into the past to witness this phenomenon. The decisions of some contemporary politicians may have caused many unnecessary deaths, yet people are inclined to find the art those politicians produce charming, some decades removed from the fact.

A critical component in the moral assessment of this issue has to do with the people who obtain, sell, and purchase this art. Why is it that a person might want a painting composed by a serial killer to feature prominently in their collection? Perhaps it is a desire to own something noteworthy and unusual that no one else has. It may be an instinct to be, somehow, close to infamy. The fact that these objects are available for sale gives people the opportunity to glorify the wrong people, to look to the worst possible individuals as role models. There is nothing “cool” about John Wayne Gacy. If this art were hanging in a gallery and the artist were unknown, no one would purchase it. It is because the art was created by a serial killer, and not in spite of that fact, that the work is purchased at all. Should we approach all works of art with an empathetic eye, or is there some art to which it is important that we do not relate? Does some art weaken our moral character rather than build on our capacity to view the world in new and diverse ways?

Art is a meaningful part of the human experience. The conditions under which it is made are rich and varied. Our aesthetic experiences and judgments are linked, in many cases, to other kinds of value judgments, which makes these questions very difficult to resolve.

Should Speech Have Consequences?

image of speech bubbles surrounding iphone outline

Particularly in left-leaning circles, it has become fashionable to say that those who are targeted for various kinds of sanctions for their objectionable speech — unfriending, blocking, doxing, university investigations, terminations, threats of bodily harm or death, and so forth — are merely suffering the justifiable consequences of speaking in ways that harm or offend others. This was the line taken by many commentators concerning the recent controversy at the Journal of the American Medical Association (JAMA). There, an editor of the journal said in a podcast that “many people like myself are offended by the implication that we are somehow racist.” The outcry that followed led to the resignation of the journal’s editor-in-chief. Speech, indeed, has consequences.

We might put the principle as follows: “Freedom of speech does not mean freedom from the consequences.” While this principle seems sound, in this column I want to explore some of its implications. It turns out, I think, that we have many reasons not to sanction others for their speech, however offensive or harmful it may be.

The first question we should ask about social sanctions against speech is whether we ought to think of them as forward- or backward-looking. In other words: are they justified because they are deserved by the speaker (backward-looking) or are they justified because of the beneficial effects of sanctioning (forward-looking)? Here are some familiar reasons why we ought to think of them as forward-looking.

First and foremost, any system of sanctions requires a principle of proportionality: a principle that tells us which punishment “fits” a given crime. Does a racist slur deserve a cold shoulder or a death threat? Beyond simply invoking our moral intuitions, it seems that reason has little to say about what a particular transgression deserves. By contrast, forward-looking considerations give us some rational metric by which to judge the severity of the punishment based at least in part on the nature of the “crime” and the nature of the “criminal.” We don’t punish shoplifters with death, for example, because this would give them the perverse incentive to do worse things than shoplifting, and because shoplifters are more likely to be reformed by relatively light punishments than by onerous ones.

Second, there are well-known puzzles about whether people are morally responsible in the desert-implying sense. We can bypass all of these problems by justifying sanctions not on the basis of desert, but on the basis of the effects of the sanctions.

Why, then, sanction speech? Most obviously, sanctioning speech is a form of deterrence: say this, and you will suffer bad consequences. In this way, the amount of bad speech is reduced. Relatedly, sanctioning is a way of encouraging or promoting the adoption of certain views. If the assertion that p is sanctioned, this will encourage the adoption of the belief that not p. In addition, sanctioning speech has a signaling function: it communicates that certain kinds of speech are not to be tolerated, and it tells those who are offended or harmed by the speech that their suffering matters. Finally, it may have a reforming effect: the sanctioned person might, by suffering consequences for his speech, come to understand why that speech is not to be tolerated.

These are the benefits of sanctioning speech. What are the drawbacks? Sanctioning speech undoubtedly has a chilling effect. After all, we listed its chilling effect as one of its benefits! If we could all agree on a narrow category of speech that is sanction-worthy, perhaps this effect would be entirely beneficial. But it turns out, I think, that when society adopts the norm that allows sanctioning any offensive or harmful speech, this empowers people to sanction every kind of speech they don’t like. And in a pluralistic society, there is no agreement about what kind of speech is acceptable. This inevitably leads to instances of benign speech, like one’s expression of political preferences, being sanctioned. This might be why, according to a recent poll, 62% of Americans say they are afraid to express some political beliefs. It is noteworthy that this feeling crosses party lines. But democracies require speech in order to function properly; democratic deliberation is possible only when people are able not only to have opinions, but to voice them to their fellow citizens. The cost of allowing the widespread sanctioning of speech, then, is weaker democratic deliberation.

Nor does sanctioning eliminate the views it aims at effacing from public discussion. Rather, the effect of sanctioning, particularly if harsh, is often to cause those who hold the views to double down on them and to look for ways to have those views affirmed by others. Far from deterring these views, then, burdensome sanctioning may in many cases encourage their secret proliferation.

Sanctioning is also not a very effective tool for educating others or getting them to adopt certain views. In the JAMA case, it seems doubtful that anyone who does not already believe in the existence of systemic racism in medicine will adopt that belief simply because someone has been sanctioned for denying it. Sanctions are not arguments; they are in fact the opposite.

In addition to the costs of sanctioning speech, we ought to consider the benefits of tolerating speech. Expression is itself a good for the speaker, insofar as it is the exercise of their autonomy. So, tolerating speech contributes to the well-being of speakers. And as J.S. Mill pointed out, in most matters we are in a state of at least partial ignorance, so tolerating the free play of ideas can help us get closer to the truth. Furthermore, even in those areas where we are not ignorant, the free play of ideas can get us closer to knowledge of the truth by sharpening our reasons for holding our beliefs.

It is often said that toleration of offensive and harmful speech protects the powerful. Like all generally applicable principles, this is true: the universal prohibition on murder protects Elon Musk. But like that prohibition, the toleration of free speech can also protect the weak. The peaceful protests against police violence that were the hallmark of 2020 were possible only in a country where a content neutral principle of free speech is respected not only by government but by the vast majority of citizens, even those vehemently opposed to the aims of Black Lives Matter. Conversely, speech codes and other restrictions have often been used to oppress minority groups. A speech code at the University of Michigan that was struck down by a federal court in 1989 was used to punish one student for stating that Jewish people use the Holocaust to justify Israel’s policy towards the Palestinians. Another speech code complaint was lodged against a student who said that “he had heard that minorities had a difficult time in the course and that he had heard they were not treated fairly.”

One of the strengths of consequentialism is that it teaches us that everything in life is a trade-off or a balancing act among competing values. Sometimes, surely, there will be strong reasons to sanction a particular speech-act or -acts. For example, speech that is sufficiently frequent and malicious can create a hostile environment. In many cases, however, people are currently being strongly sanctioned for stray remarks or for offensive speech from a long time ago. As in the JAMA case, they are also being sanctioned for departing from the political orthodoxy of their community by, for example, questioning the existence of systemic racism. In these sorts of cases, the benefits of sanctioning are slight, and the drawbacks great. Sanctioning will create an environment in which people feel scrutinized for every indiscretion, and as a result, they will self-censor. Self-censorship is an intrinsic harm, and is also detrimental to the search for truth and the communication of political views, both essential in a functioning democracy. Finally, sanctioning will alienate the sanctioned; far from educating them or getting them and others to change their views, it will cause a defensive reaction that leaves their objectionable views intact, and perhaps more popular due to the perception that they are being suppressed.

Toleration of offensive or harmful speech comes with costs. So does sanctioning such speech. The question is whether, on the whole, the benefits of sanctioning outweigh the costs. In many contemporary cases, I would argue that the answer is no.

Cancel Culture and the Possibility of Nuance

image of multicolored speech bubbles

In June of 2021, Nigerian novelist Chimamanda Ngozi Adichie posted a short essay titled “It is Obscene: An Essay in Three Parts” on her website. Adichie, author of award-winning books like Americanah and We Should All Be Feminists, was famously featured in the Beyoncé song “Flawless,” reciting a basic outline of feminist thought between the pop singer’s verses. Adichie’s essay explores her relationship with two former students, whom she frames as poisoned by online cancel culture. She laments that her students (and many young people like them) possess

“an ease with dishonesty and pretension and selfishness that is couched in the language of self-care . . . language that is slick and sleek but with little emotional intelligence; an astonishing level of self-absorption; an unrealistic expectation of puritanism from others . . . a passionate performance of virtue that is well executed in the public space of Twitter but not in the intimate space of friendship.”

Though the two students are unnamed, the second student was quickly identified as writer Akwaeke Emezi, a young novelist who accused Adichie of transphobia on Twitter. Emezi, a non-binary writer whose debut novel Freshwater was critically acclaimed, has written extensively on black trans identity through fiction and memoir alike.

It’s a bit reductive to label this a “feud,” though many news sources (like NPR, to name one) have succumbed to the temptation. There is clearly animosity between the two parties, but “feud” implies something entirely personal, even petty. While their personal history does come up in Adichie’s essay, it’s more accurate to say that Emezi and Adichie embody diametrically opposed moral stances toward cancel culture, an opposition partly rooted in a generational divide.

While many have applauded Adichie’s essay, accusations of transphobia have taken over the conversation about her piece. Some wonder if her essay is a smokescreen, an attempt to deflect attention from Emezi’s original condemnation of Adichie’s brand of feminism. Is it an example of how “condemning cancel culture has become a reliable way to obscure transphobia,” as writer Aja Romano suggests in their article on Adichie for Vox?

Adichie summarized her controversial stance on trans women in a 2017 interview. In response to a question about trans identity, she said,

“When people talk about, ‘Are trans women women?’ my feeling is trans women are trans women. But I think if you’ve lived in the world as a man, with the privileges the world accords a man, and then sort of change—switch gender, it’s difficult for me to accept that then we can equate your experience with the experience of a woman who has lived from the beginning in the world as a woman and who has not been accorded those privileges that men are. I don’t think it’s a good thing to conflate everything into one. I don’t think it’s a good thing to talk about women’s issues being exactly the same as the issues of trans women. What I’m saying is that gender is not biology. Gender is sociology.”

While some of these points are generally accepted (trans women indeed have different experiences from cis women), her response has been described as alarmingly close to TERF ideology. Not all trans women possessed much in the way of privilege before their transition, especially if they are people of color. Adichie has also faced criticism for an article she wrote defending J.K. Rowling, who peddles blatantly transphobic rhetoric in the guise of promoting gender equality. At the same time, many critics of Adichie have ascribed bad-faith motivations to her actions where there seem to be none. For example, Adichie has come under fire for releasing her essay during Pride Month, even though Pride celebrations are still largely non-existent in Nigeria, and it’s unlikely that timing was a factor here. It’s also worth noting that TERFs tend to be more prominent in first-world countries, where the “feminism” part of the acronym is more palatable. In Nigeria, even garden-variety feminism is considered suspect, let alone radical feminism. “Feminist” is less a neutral descriptor than an insult in most parts of the country, as explained in an article for The New York Times, and many women still struggle to access their most fundamental rights. While she claims in the essay that she actively supports trans rights, the issue may seem alien or extraneous to Adichie. Her cultural background hardly excuses transphobia, but it’s important to consider that not everyone will be fluent in the occasionally dense and ever-changing vocabulary of trans issues in online spaces.

Adichie writes,

“[Emezi] knows me enough to know that I fully support the rights of trans people and all marginalized people. That I have always been fiercely supportive of difference, in general . . . Of course she could very well have had concerns with the interview. That is fair enough. But I had a personal relationship with her. She could have emailed or called or texted me. Instead she went on social media to put on a public performance.”

Claiming to support trans people and actually doing so are two different things, and Adichie repeatedly misgenders Emezi, who has identified as non-binary for years. But there is still value in her larger point: what purpose did Emezi’s tweet serve? If the goal is to start a productive dialogue with someone and hopefully influence their views, is calling them out on Twitter the most effective way to go about it? It’s unreasonable to ask trans people to educate every single transphobe they encounter, but in this case, the two had a pre-existing relationship, and as Adichie points out, Emezi could have used that as an opening.

At one point in the essay, she describes Emezi’s tweet as “a public insult,” which succinctly gets at the problem with public shaming. We interpret such accusations as an attack, an insult; we experience a sense of powerlessness, especially if we aren’t media savvy. This may corner the accused into doubling down on their problematic views, shutting down a conversation before it can even begin. The performative brand of online wokeness Adichie dislikes requires a certain kind of knowledge, a list of phrases to be trotted out without any meaningful discussion of what those phrases mean. While most of this is well-intentioned, it can create echo chambers and ideological rigidity. Twitter, which is generally very American-centric, relies on a knowledge of this vocabulary that often excludes well-meaning older people, ESL folk, and those who aren’t from the West. At its worst, it encourages a culture of hostility to questions asked in good faith.

Adichie notes,

“There are many social-media-savvy people who are choking on sanctimony and lacking in compassion, who can fluidly pontificate on Twitter about kindness but are unable to actually show kindness . . . People for whom friendship, and its expectations of loyalty and compassion and support, no longer matter. People who claim to love literature – the messy stories of our humanity – but are also monomaniacally obsessed with whatever is the prevailing ideological orthodoxy. People who demand that you denounce your friends for flimsy reasons in order to remain a member of the chosen puritan class.”

Some of this may seem extreme, and it’s worth critiquing the conflict with Emezi at the root of this essay, but we should answer her call for nuance, and grapple with both the good and the bad in her piece. Deplatforming her, as Emezi has called for, only aggravates an already massive generational divide and saps humanity from online spaces.

Re-Examining Scared Straight Programs

photograph of teen boy confined behind chain link fence

In the United States, summer camps for kids across the nation make up a $4 billion industry. However, not all summer camps are designed to be fun. “Scared Straight” programs aim to deter at-risk youth from future crime by immersing them in an exaggeratedly violent and threatening prison setting for one to three days. The purpose of these programs is to encourage children to change behaviors that could eventually lead to adult conviction. Parents voluntarily pay for their children to be sent to these programs, a decision which some child psychologists have labeled “child abuse.”

Are prison camps for kids ethical? Is it right to use fear to elicit a change in behavior? Are there certain disciplinary tactics which should be off-limits to parents?

Public awareness of these camps first began with the documentary “Scared Straight!”, which aired in 1978 and won an Emmy and an Academy Award for its depiction of children sent to prison camps. Awareness increased due to the A&E TV series “Beyond Scared Straight,” which documented the experiences of children in prison camp programs for nine seasons. Most recently, Vice ran a mini-documentary feature in 2018, which contained highly disturbing footage. One of the most documented “scared straight” prison camps for kids in the U.S. is “Project STORM,” a camp based in North Carolina, whose existence is predicated on the belief that “punishment and fear (i.e. getting tough on crime), is one approach to reducing juvenile crime.” Participation in the program costs between $75 and $100 and lasts 12 hours, including an overnight stay.

While many would agree there is some utility in allowing parents agency over personal parenting decisions, where should the line between children’s rights and parental discipline lie? One place to turn for answers might be the Convention on the Rights of the Child, an international agreement with 140 signatory countries, including the U.S., which has signed but not ratified it. The Convention recognizes 40 rights to which children are entitled, including protection from abuse and violence as well as the prioritization of their best interests by parents and governments. Though the Convention does not define abuse and violence, child abuse is defined by the U.S. Department of Health and Human Services as “any recent act…on the part of a parent or caretaker which results in death, serious physical or emotional harm, sexual abuse, or exploitation.” Studies have shown that certain forms of punishment, such as corporal punishment, negatively impact the emotional and physical well-being of children. With this in mind, it is clear that any program which uses physical and emotional violence, or the threat of violence, against children would be widely recognized as immoral for violating the rights of children. It is also hard to justify “scared straight” programs even from a consequentialist perspective. The American Psychological Association maintains that physical discipline is ineffective as well as harmful. In fact, Scared Straight programs have been found to actually increase the likelihood of criminal conviction for children who partake in them.

However, even if such camps appear to violate the rights of children, do the rights of parents to choose their own disciplinary methods present an irresolvable tension? Legally, parents have the right to exercise “reasonable force or reasonable punishments on their child to control, train, and educate.” This right is often referred to as the “parental discipline privilege.” While many states recognize this right as a defense during allegations of child abuse, the legality of an action does not necessarily make it moral. However, there is some utility in generally allowing parental autonomy in raising children. After all, parents often know their children better than anyone else, and therefore are the most qualified to make disciplinary decisions which will be simultaneously effective and appropriate to the child’s temperament. Additionally, giving parents more autonomy may lead to better outcomes for children, as some advocates claim. Parents who feel criticized or judged may also be more likely to react harshly to their child’s misbehavior due to embarrassment.

It’s also worth examining whether the moral culpability of parents rides partially on their motivation. The decision to send one’s child to a Scared Straight program might be the best decision a parent believes they can make, or they might see it as a last resort to teach their children to stay out of trouble. Parents who turn to these programs may feel they do not have the time, money, or energy to invest in helping their children make better decisions, so they rely on fear as a remedy. While these programs are generally ineffective and arguably abusive, parents who believe their promises may ultimately be making a decision which they believe is positive for the child. For example, a single parent trying to make ends meet, whose child has been expelled, might believe that the program is the most manageable option for helping their child choose a different path. Parents who send their children to such programs for minor problems, or who have historically abused their child, might be less excusable in their decision. Whether or not one believes that even well-intentioned parents should know better is likely the determining factor in whether or not one believes parents are fully culpable for the immoral and inhumane treatment of children in these programs.

Turning to Scared Straight programs themselves, is using fear as a method to elicit “positive” behavioral changes okay? If the programs were indeed highly effective, one might argue that the cost of such deterrence is worth the benefits. Incarceration has severely negative impacts on an individual, and some might believe that one traumatic weekend at a Scared Straight program is well worth it if it prevents incarceration later. On the other hand, perhaps it is never okay to use such methods, as they are intrinsically wrong. The use of fear to elicit or deter certain behaviors is highly debated within the psychological and sociological fields. “Fear appeal,” as it is often referred to, is “a persuasive message that attempts to arouse fear in order to divert behavior through the threat of impending danger or harm.” Though the concept of fear appeal is commonly used in public health and marketing settings, this strategy is also clearly present in Scared Straight programs, which use extreme, and arguably exaggerated, depictions of prison to encourage kids to do everything they can to avoid criminal prosecution. This overload of fear might be the reason for the programs’ ineffectiveness, as studies have found that extreme fear can effectively shut down the subject and may even lead to risk denial among those especially susceptible to the threat. If one believes that effective uses of fear which lead to positive outcomes are justified, then the moral problem with Scared Straight programs is their ineffectiveness. On the other hand, if one believes that, no matter the outcomes, the violent and threatening methods employed by these programs are wrong in themselves, the most ethical answer is to simply abolish the programs altogether.

Parents are not only the main legal guardians of their children, but also the protectors of their rights. Sometimes, parents make decisions which they believe will benefit the child, but which lead to more harm than good. It is clear that Scared Straight programs, which mentally and physically torment children, have severely negative impacts on children and are highly ineffective. Until they are fundamentally reformed or ultimately abolished, they will likely continue to motivate shocking documentaries and critical exposés.

What It Means to Be a Hero

photograph of mural of DC superheroes

This is an article about oral sex, gender roles, and fictional characters who like to dress up in dark leather and hurt each other (specifically, DC’s Batman and Catwoman).

According to a recent interview with the executive producers of Harley Quinn, an R-rated DC-owned television show streaming on HBO Max, the corporate owners of the Dark Knight vetoed the showrunners’ intentions to include a sex scene between Batman and Catwoman where the Caped Crusader would have performed cunnilingus on Selina Kyle. Explaining their decision, DC told the producers that “…we sell consumer toys for heroes. It’s hard to sell a toy if Batman is also going down on someone” because “Heroes don’t do that.”

For many reasons, it’s understandable if you’re confused right now.

Why are comic book characters (ostensibly created as children’s stories) involved in sexual content? Why is Batman (a “good” character) having sex with Catwoman (a “bad” character)? And why don’t heroes “do that”?

The first two questions are answered fairly easily: since his introduction in the pages of Detective Comics back in 1939, Batman has developed into one of the most popular, recognizable (and, therefore, lucrative) characters in American culture. With dozens of live action and animated movies and television shows, video games, graphic novels, and more, it is safe to say that, in 2021, Batman is not just for kids — HBO’s Harley Quinn is on the list of properties, like the Oscar-winning 2019 film Joker and the Arkham games from Rocksteady, that are marketed more directly to older fans. (To be clear, this is hardly a new phenomenon: Grant Morrison, Alan Moore, Frank Miller, and plenty of other authors have been writing “adult” Batman stories for decades.)

Similarly, Catwoman has developed since her debut in 1940. While Selina Kyle was originally a simple jewel thief and burglar (and was famously portrayed as a straightforward villain by award-winners like Julie Newmar and Eartha Kitt), recent decades have seen the character grow into more of an anti-hero who often trades flirtatious banter with Batman. From the latex-clad Michelle Pfeiffer dating Michael Keaton’s Bruce Wayne in 1992’s Batman Returns to the most recent pages of Tom King’s take on the characters (which saw Kyle and Wayne in an overt romance), the sexual tension between the Cat and the Bat is a well-established element of their relationship.

So, what about the sex?

Although the quote doesn’t give us much to go on, it seems like there are at least two ways to interpret the studio executive’s warning to the Harley Quinn showrunners. “Heroes don’t do that” might mean:

1. “Heroes don’t have sex.”

2. “Heroes don’t give oral sex.”

For several reasons, option (1) seems unlikely: not only is sexual virility a common feature of the “masculine hero” trope in American cinema (think of everyone from James Bond to Captain Kirk to Indiana Jones), but the full quote suggests specifically that “Batman going down on someone” would hurt toy sales.

Again, there is more than one way to understand what “Heroes don’t give oral sex” might mean in this context:

3. “Heroes can’t be depicted performing sex acts.”

4. “Heroes don’t perform that specific sex act.”

And, again, option (3) seems unlikely: not only are sexual innuendos and double entendres commonplace on the silver screen — including even in animated DC superhero shows intended more overtly for children — but Batman himself has already been featured in sex scenes. Even if we rule out straightforwardly pornographic content, there is still plenty of evidence that heroes have sex of one kind or another on screen (or just off its edge, at the very least).

So, that leaves us with (4). In context, it seems like particular emphasis is on the term ‘heroes’ — other characters might “do that,” but heroes don’t. Why might someone think this?

Here’s where a little philosophy can be helpful. According to the French theorist Luce Irigaray, “Female sexuality has always been conceptualized on the basis of masculine parameters” (from This Sex Which Is Not One, published in 1985) — as many feminists have pointed out, the historical over-emphasis of men’s perspectives has traditionally led to the silencing of women’s perspectives. When it comes to sexuality and the experience of sex, Irigaray argues that oppressive cultural habits have turned the public understanding of sexual pleasure into something that properly “belongs” to men: “Woman, in this sexual imaginary, is only a more or less obliging prop for the enactment of man’s fantasies.” So-called “good” women (in Kate Manne’s analysis of the term) will play their part within this misogynistic system, thereby allowing the patriarchal structure (that benefits men) to be upheld. Against this, Irigaray calls for a “rediscovering” of women’s pleasure (and, by extension, women’s perspectives and power): “in order for woman to reach the place where she takes pleasure as woman, a long detour by way of the analysis of the various systems of oppression brought to bear upon her is assuredly necessary.”

Ironically, the socially-constructed nature of various gender roles, although stereotypically beneficial for men in many ways, also serves to define expectations and norms for them that, when breached, can bring shame and ridicule down onto the offending man’s head. This is just one more disturbing element of so-called “toxic masculinity” that, in short, is much like Manne’s point about how misogyny can benefit “good” women: patriarchy can hurt “bad” men (or “men who are bad at being men”). Not only can this observation help to explain, for example, homophobic reactions to gay men (but not gay women), but, as philosopher Robin Dembroff argues, “Patriarchy, it turns out, doesn’t put men on top; it elevates men who are most mirrored within manhood — an ideal that was shaped, all along, to reflect that group of men. Or, to put it simply, patriarchy puts real men on top.”

It is not hard to see, then, why a corporate exec concerned with merchandise sales might worry about Batman giving Catwoman oral sex: in such a scene, the woman — and the woman alone — would (presumably) be experiencing sexual pleasure in precisely the way that the patriarchal system cannot compute. Were the characters’ positions reversed, and Catwoman were giving Batman oral sex, then consumers and toy-purchasers would likely interpret that as just one more risqué sign of the hero’s strength and power — in short, of his manliness. For Batman to “go down” on Catwoman might suggest instead that he is submissively giving up his masculinity — and, by extension, his right to be a hero.

By definition, heroes don’t do that.

Time for Social Conscription?

photograph of Uncle Sam "We Want You!" poster

The metaphor of war has been widely employed during the pandemic. Donald Trump characterized the virus as the “invisible enemy,” Boris Johnson declared himself the head of a wartime government, Narendra Modi envisioned the ‘Tika Utsav’ (a vaccination festival) as the start of the second Indian war on COVID-19, and even the WHO recommended the suspension of vaccine patents to secure the world’s “war footing.” Using the language of conflict to convey the pandemic’s threat and rally our march toward its defeat has been a regular feature of discourse worldwide.

The deployment of this kind of combat analogy isn't novel. Similar language was used by Brazilian officials during the 2018 Zika outbreak, by Nixon in the 1970s 'war on drugs,' and by contemporary conservative commentators to decry the 'war on Christmas.' And, while its appropriateness is sometimes questionable, the power such a framing possesses is often considerable.

Conceptualizing challenges as battles elicits an 'us v. them' mentality. In the COVID-19 context, it provides a common enemy (the virus), a retaliatory strategy (flattening the curve), the soldiers (clinical staff), the home-front (people isolating), and the saboteurs (those breaking lockdown rules). The idea of uniting to fight an adversary capable of causing such catastrophic harm was used to justify the rapid closing of schools, bars, and sporting events; the restrictions on domestic and international travel; and the general reduction in the freedoms many enjoyed pre-pandemic. Echoing WWII campaigns like 'Dig for Victory,' we're repeatedly told that we must do our part to prevent disaster, preserve vital services and resources, and save lives.

The demands on state resources worldwide have reinforced a picture not dissimilar from that of a mass global conflict. Hospitals overrun with patients, a lack of available qualified staff, panic-hoarding and rationing of goods, as well as fear of potentially compromised foreign individuals, all elicit similar feelings of dread in the face of an uncertain future. Indeed, living under the shadow of war and the shadow of a pandemic has some palpable thematic similarities.

Yet, one aspect of the wartime era has been notably absent in the discussion about how governments might rise to meet the significant demands placed on public services during a pandemic: conscription; that is, people’s mandatory enlistment into national service.

This practice is typically reserved for compelling people into military service because war is broadly seen as the only event that could justify such a coercive social program. However, there are plenty who believe obligatory military service represents an important public good even in peacetime. In the U.K., for example, the prospect of military conscription's reinstatement has been floated by politicians, commentators, and royals. Additionally, the U.K.'s public appears to possess an appetite for its restoration, with 48% of respondents to a 2018 YouGov poll and 43%-47% of respondents to a similar 2016 poll favoring compulsory military service for young people.

But, if it is appropriate to think of the pandemic as a war, shouldn’t a conscription program for non-military, social roles — such as the U.K.’s National Health Service or Italy’s Servizio Sanitario Nazionale — be considered?

Non-military conscription isn’t new. Germany, Austria, Finland, and Switzerland, amongst others, have used conscription to ensure citizens contribute to social services. However, this form of national service is generally framed as the secondary option, employed as an alternative to military conscription in countries where military service is the default. But a program of social conscription need not be a mere backup for those who don’t wish to be involved in the armed forces. Rather, there is a compelling argument that the former avoids several of the latter’s more troubling ethical implications.

Chief amongst these criticisms are those concerning the methods employed by the armed forces to achieve their goals. The military secures its objectives via the threat of violence. Coercing people via social and judicial means to serve in an industry that invariably leads to the demise of others is hard to justify. Imposing an obligation on citizens to participate in activities that might conflict with their deeply held personal, moral, or religious convictions — like actively supporting the military-industrial complex or contributing to the taking of others’ lives — will strike many as going beyond what any legitimate government can demand of its people. This is why many jurisdictions with military conscription provide an opt-out pathway where individuals can pursue alternative avenues to serve the public good and do their bit.

Social conscription, on the other hand, does not raise the same problem. Through a national service program that supports well-being-enhancing institutions — such as social care or health services — individuals can assist their communities and countries in a manner comparable to military conscription without being (in)directly involved with the killing of another human being. In fact, they’d be instrumental in the saving of lives.

Social conscription doesn’t simply avoid some of the ethical quandaries associated with military conscription. It also possesses benefits that make it not merely preferable relative to military conscription but inherently preferable, especially during a pandemic. With healthcare services struggling to meet demand due to a lack of available staff, an enormous task force needed to administer vaccination initiatives, and social care industries besieged by the increased need for their services, a consistent and reliable source of labor in the form of social conscripts may be the most effective way to secure the best possible outcome in the war against COVID-19. And, if increasing well-being, health, and positive results are ethically desirable outcomes, then social conscription may be morally required.

Whether such a program would be practical is something that needs investigation. However, given the appetite for conscription alongside the evident need to bolster those institutions that increase well-being and protect people’s health globally — a need highlighted by the pandemic — social conscription may not be military conscription’s inferior alternative. Rather, it may be its indispensable and desirable superior.

Intersectionality and the Problem of the MCU’s Ancient One

photograph of Doctor Strange comic book cover

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of our discussion questions, check out the Educational Resources page.


In 2016, Marvel Studios’ Doctor Strange introduced Benedict Cumberbatch’s eponymous hero to the Marvel Cinematic Universe. The film sees Strange learn how to harness magical energy as a sorcerer after journeying to the mystical city of Kamar-Taj and meeting its leader, the Ancient One (played by Tilda Swinton). This casting was controversial: in the comic books on which the movie is based, the Ancient One is an Asian man; Swinton is neither.

Swinton’s Ancient One is an example of what is sometimes called the “whitewashing” problem in Hollywood (where white actors are cast in non-white roles). Although Swinton’s portrayal of the character does not attempt to appeal to stereotypes about Asian people (and is explicitly described in the film as being of Celtic ancestry) — thereby setting it apart from straightforwardly racist performances like Mickey Rooney’s Japanese character in Breakfast at Tiffany’s — it nevertheless rewrites the backstory of a character long-depicted as Asian to instead substitute a white actor in the role. Much like how “whitewashing” a building involves covering it with white paint, the MCU’s portrayal of the Ancient One covers the character’s non-European background by giving them a Scottish face.

Granted, Swinton is a talented actor, but there is certainly no shortage of talented people available to act in the MCU; as Rob Chan, president of the Media Action Network for Asian Americans, pointed out, “Given the dearth of Asian roles, there was no reason a monk in Nepal could not be Asian.” (Notably, Doctor Strange is far from the only recent movie criticized for whitewashing: Wikipedia has a surprisingly long list of references available.) In May 2021, Marvel Studios President (and MCU mastermind) Kevin Feige officially acknowledged that casting Swinton as the Ancient One was a mistake.

(A quick note to anyone about to ask something like “Will Kevin Feige also apologize for Nick Fury, Heimdall, or Johnny Storm in the recent non-MCU adaptation of the Fantastic Four?” The answer is pretty clearly “No.” While it’s true that, like Swinton’s Ancient One, those are characters portrayed by actors (Samuel L. Jackson, Idris Elba, and Michael B. Jordan, respectively) of a different race than (at least some of) the source material, the problem of “whitewashing” is not simply a matter of casting a member of the “wrong” race to play a role. Instead, the issue is rooted in the lack of Hollywood roles — especially leading roles — for non-white actors in general. When a talented white actor is cast in a part that could easily (and historically has been) filled by a talented non-white actor, this only serves to further reduce the opportunities for non-white actors. (As Chan also pointed out, “Tilda Swinton can afford to turn down roles.”))

But perhaps the most unusual thing about Swinton’s casting was actually the attention that Doctor Strange director and co-writer Scott Derrickson thought he was paying in portraying the character as the film does; as he explained in a 2016 interview, “The first decision that I made was to make [the Ancient One] a woman, before we ever went to draft, before we ever had a script…There was a desire for diversity in making that decision.” After this choice, Derrickson was worried that casting an Asian woman in the role would actually end up perpetuating long-standing Asian stereotypes:

“I know the history of cinema and the portrayal of the Dragon Lady in Anna May Wong films, and the continued stereotype throughout film history and even more in television. I just didn’t feel like there was any way to get around that because the Dragon Lady, by definition, is a domineering, powerful, secretive, mysterious, Asian woman of age with duplicitous motives—and I just described [the MCU’s Ancient One]. I really felt like I was going to be contributing to a bad stereotype.”

Reflecting on this in 2021, Feige pointed out, “We thought we were being so smart, and so cutting-edge…But it was a wake-up call to say, ‘Well, wait a minute, is there any other way to figure it out? Is there any other way to both not fall into the cliché and cast an Asian actor?’ And the answer to that, of course, is yes.”

I think this is where a little philosophy can be helpful to understand what’s going on. In 1989, Kimberlé Crenshaw, a legal scholar at the UCLA School of Law, argued that discrimination against Black women in Western culture is a particularly complex kind of injustice. While it might be tempting to think about racism against Black men and racism against Black women as essentially similar, this kind of oversimplification ignores the sexism that Black women also encounter (making their experience different than that of their male counterparts). Instead of analyzing the treatment of Black women along the single axis of “race,” Crenshaw argued that an intersectional analysis (that pays special attention to the multidimensional nature of a Black woman’s social identity) is necessary to fully capture the experience of people suffering from multiple kinds of oppression. As Crenshaw explains in the opening pages of the article that coined the term intersectionality, “Because the intersectional experience is greater than the sum of racism and sexism, any analysis that does not take intersectionality into account cannot sufficiently address the particular manner in which Black women are subordinated.”

Since 1989, intersectional analyses have proliferated to explain many different kinds of overlapping social identities and the complex ways that people navigate the social world. Although the term (and Crenshaw’s name) has become strangely misrepresented as an ominous threat by some politically-(or financially)-minded agents, the basic idea of intersectionality is relatively uncontroversial: people are complicated, and treating any one person as just one kind of thing will inevitably cause you to misunderstand (and potentially mistreat) them.

So, by simply thinking about “diversity” as a matter of casting a woman in a role traditionally played by a man, the creative team behind Doctor Strange was oversimplifying the complex nature of the Ancient One’s (and, for that matter, Tilda Swinton’s) social identity. Intersectionality (and critical theories in general) does not hold that race or sex or gender or anything else about a person is central or primary or more important than anything else about them; such theories instead call attention to the complicated ways that diverse people’s different backgrounds and histories can interact to create unique experiences. Recasting the Ancient One by focusing only on the character’s sex ignored plenty of other relevant facts about him/her.

One final note: this is not a call to harangue Scott Derrickson, pillory Kevin Feige, or “cancel” Tilda Swinton — this is an attempt to understand how the makers of Doctor Strange might have made the decision that they now have openly (and repeatedly) called a “mistake.” And it’s a mistake that Marvel might have actually learned something from: not only has the recently-completed Falcon and the Winter Soldier miniseries on Disney+ explored racial tensions long-bubbling in the world of the Avengers, but the upcoming Shang-Chi and the Legend of the Ten Rings, a feature film slated for release in September, will introduce the MCU’s first Asian superhero. And while this film is not without a casting controversy of its own, many are hoping that its Chinese-Canadian star, Simu Liu, and its all-Asian cast will help the Marvel Cinematic Universe to move forward.

The Ethics of Self-Citation

image of man in top hat on pedestal with "EGO" sash

In early 2021, the Swiss Academies of Arts and Sciences (SAAS) published an updated set of standards for academic inquiry; among other things, this new “Code of Conduct for Scientific Integrity” aims to encourage high expectations for academic excellence and to “help build a robust culture of scientific integrity that will stand the test of time.” Notably, whereas the Code’s previous version (published in 2008) treated “academic misconduct” simply as a practice based on spreading deceptive misinformation (either intentionally or due to negligence), the new document expands that definition to include a variety of bad habits in academia.

In addition to falsifying or misrepresenting one’s data — including various forms of plagiarism (one of the most familiar academic sins) — the following is a partial list of practices the SAAS will now also consider “academic misconduct”:

  • Failing to adequately consider the expert opinions and theories that make up the current body of knowledge and making incorrect or disparaging statements about divergent opinions and theories;
  • Establishing or supporting journals or platforms lacking proper quality standards;
  • Unjustified and/or selective citation or self-citation;
  • Failing to consider and accept possible harm and risks in connection with research work; and
  • Enabling funders and sponsors to influence the independence of the research methodology or the reporting of research findings.

Going forward, if Swiss academics perform or publish research failing to uphold these standards, they might well find themselves sanctioned or otherwise punished.

To some, these guidelines might seem odd: why, for example, would a researcher attempting to write an academic article not “adequately consider the expert opinions and theories that make up the current body of knowledge” on the relevant topic? Put differently: why would someone seek to contribute to “the current body of knowledge” without knowing that body’s shape?

As Katerina Guba, the director of the Center for Institutional Analysis of Science and Education at the European University at St. Petersburg, explains, “Today, scholars have to publish much more than they did to get an academic position. Intense competition leads to cutting ethical corners apart from the three ‘cardinal sins’ of research conduct — falsification, fabrication and plagiarism.” Given the painful state of the academic job market, researchers can easily find incentives to pad their CVs and puff up their resumes in an attempt to save time and make themselves look better than their peers vying for interviews.

So, let’s talk about self-citation.

In general, self-citation is simply the practice of an academic who cites their own work in later publications they produce. Clearly, this is not necessarily ethically problematic: indeed, in many cases, it might well be required for a researcher to cite themselves in order to be clear about the source of their data, the grounding of their argument, the development of the relevant dialectical exchange, or many other potential reasons — and the SAAS recognizes this. Notice that the new Code warns against “unjustified and/or selective citation or self-citation” — so, when is self-citation unjustified and/or unethical?

Suppose that Moe is applying for a job and lists a series of impressive-sounding awards on his resume; when the hiring manager double-checks Moe’s references, she confirms that Moe did indeed receive the awards of which he boasts. But the manager also learns that one of Moe’s responsibilities at his previous job was selecting the winners of the awards in question — that is to say, Moe gave the awards to himself.

The hiring manager might be suspicious of at least two possibilities regarding Moe’s awards:

  1. It might be the case that Moe didn’t actually deserve the awards and abused his position as “award-giver” to personally profit, or
  2. It might be the case that Moe could have deserved the awards, but ignored other deserving (potentially more-deserving) candidates for the awards that he gave to himself.

Because citation metrics of publications are now a prized commodity among academics, self-citation practices can raise precisely the same worries. Consider the h-index: a score for a researcher’s publication record, defined as the largest number h such that the researcher has h publications that have each been cited at least h times. In short, the h-index claims to offer a handily quantified measurement of how “influential” someone has been on their academic field.
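Since the h-index is a simple, well-defined function, the mechanics of gaming it are easy to see in code. Below is a minimal sketch (the citation counts are invented purely for illustration) showing how a handful of strategic self-citations can bump a researcher’s score:

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that at least
    h papers have been cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# Hypothetical researcher with five papers:
honest = [10, 8, 5, 4, 3]   # citations from others only
padded = [12, 10, 7, 6, 5]  # same papers after adding a few self-citations

print(h_index(honest))  # 4
print(h_index(padded))  # 5
```

A couple of self-citations per paper is enough to cross the threshold here, which is precisely why metrics built on raw citation counts invite the kind of manipulation described above.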

But, as C. Thi Nguyen has pointed out, these sorts of quantifications not only reduce complicated social phenomena (like “influence”) to thinned-out oversimplifications, but they can be gamified or otherwise manipulated by clever agents who know how to play the game in just the right way. Herein lies one of the problems of self-citations: an unscrupulous academic can distort their own h-index scores (and other such metrics) to make them look artificially larger (and more impressive) by intentionally “awarding themselves” with citations just like Moe granted himself awards in Situation #1.

But, perhaps even more problematic than this, self-citations limit the scope of a researcher’s attention when they are purporting to contribute to the wider academic conversation. Suppose that I’m writing an article about some topic and, rather than review the latest literature on the subject, I instead just cite my own articles from several years (or several decades) ago: depending on the topic, it could easily be the case that I am missing important arguments, observations, or data that have been made in the interim period. Just like Moe in Situation #2, I would have ignored other worthy candidates for citation to instead give the attention to myself — and, in this case, the quality of my new article would suffer as a result.

For example, consider a forthcoming article in the Monash Bioethics Review titled “Can ‘Eugenics’ Be Defended?” Co-written by a panel of six authors, many of whom are well-known in their various fields, the 8-page article’s reference list includes a total of 34 citations — 14 of these references (41%) were authored by one or more of the article’s six contributors (and 5 of them are from the lead author, making him the most-cited researcher on the reference list). While the argument of this particular publication is indeed controversial, my present concern is restricted to the article’s form, rather than its contentious content: the exhibited preference to self-cite seems to have led the authors to ignore almost any bioethicists or philosophers of disability who disagree with their (again, extremely controversial) thesis (save for one reference to an interlocutor of this new publication and one citation of a magazine article). While this new piece repeatedly cites questions that Peter Singer (one of the six co-authors) asked in the early 2000s, it fails to cite any philosophers who have spent several decades providing answers to those very questions, thereby reducing the possible value of its purported contributions to the academic discourse. Indeed, self-citation is not the only dysgenic element of this particular publication, but it is one trait that attentive authors should wish to cull from the herd of academic bad habits.

Overall, recent years have seen increased interest among academics in the sociological features of their disciplinary metrics, with several studies and reports being issued about the nature and practice of self-citation (notably, male academics — or at least those without “short, disrupted, or diverse careers” — seem to be far more likely to self-cite, as are those under pressure to meet certain quantified productivity expectations). In response, some have proposed additional metrics to specifically track self-citations, alternate metrics intended to be more balanced, and upending the culture of “curated scorekeeping” altogether. The SAAS’s move to specifically highlight self-citation’s potential as professional malpractice is another attempt to limit self-serving habits that can threaten the credibility of academic claims to knowledge writ large.

Ultimately, much like the increased notice that “p-hacking” has recently received in wider popular culture — and indeed, the similar story we can tell about at least some elements of “fake news” development online — it might be time to have a similarly widespread conversation about how people should and should not use citations.

A Pause on Rights: Canada’s Constitutional Clause

photograph of interior of Canada's House of Commons

Imagine a world where overturning Citizens United could be done with a simple act of Congress. According to polls, 88% of Americans hailing from both sides of the aisle would back a constitutional amendment to overturn the decision. But why does it take a constitutional amendment? Because the U.S. Supreme Court ruled that political donations constitute free speech, and because corporations have a constitutionally protected right to free speech, including unlimited donations. Since changing the U.S. Constitution is exceedingly difficult, this is not likely. But what if Congress could simply override the Supreme Court and say that there are limits to such forms of free speech? Canada has such a mechanism in its constitution, and it is starting to raise serious ethical concerns.

All rights in Canada enshrined in the Charter of Rights and Freedoms are already subject to “reasonable limits,” an ambiguous phrase whose meaning must often be determined by courts. However, under section 33 of the Charter, the federal Parliament of Canada or a provincial legislature can declare that a piece of legislation will operate notwithstanding the Charter. So, if a court rules that a law violates the Charter, a legislature with a simple majority can say that the law will remain in force for up to five years before the legislature must revisit the question and potentially renew the exception. In essence, it can put individual rights on pause for half a decade at a time. The clause doesn’t apply to democratic rights or mobility rights, but legal rights concerning detention, trials, punishment, and fundamental freedoms like freedom of association, peaceful assembly, and free speech are all fair game.

The clause has been called “uniquely Canadian,” and would no doubt be controversial in the United States. It could be used to limit political donations or enact stronger gun control legislation, but it would also be easy to limit protest or engage in practices that might be considered cruel and unusual. Created as part of a constitutional bargain to appease the provinces, the clause has almost never been used and has always been controversial. The Charter is highly prized by Canadians, and so the idea of overriding it is typically politically perilous. Nevertheless, in the last three years there have been about as many threats to use it as there were during the first eighteen years of its existence or the eighteen years after that.

The latest controversy involves Premier Doug Ford of Ontario who has threatened to invoke the clause in response to the courts striking down election finance legislation that the Ford government had passed, limiting third-party advertising for a period of one full year from an election (prior to this, the law restricted spending six months before the election). Typically, these third parties are supported by public sector unions which tend not to support Ontario Conservatives, so the move to use the clause in this case, described as using a “sledgehammer on a gnat,” appears even more controversial as it seems to politically benefit Ford. (Ford had previously threatened to use the clause in 2018 after his government unilaterally changed election laws in Toronto during an election.)

In other provinces since 2018, the threat of invoking or actually invoking the clause has been a response to issues relating to public funding for Catholic schools, legislation requiring vaccination, the use of religious symbols in the civil service, and, recently, protecting the French language in Quebec at the expense of minorities in the province. This increasing willingness to use the clause, and use it more frivolously to pre-empt a court decision before it is even made, is a cause for concern. As Justin Trudeau’s former secretary puts it, “what’s at stake here is whether the ultimate arbiter of your Charter of Rights and Freedoms is the Supreme Court of Canada or your provincial premier.” But “don’t democratic societies put minority rights into inviolate foundational documents precisely because they’re politically tempting to violate?”

Indeed, Canada has recently seen exactly what can happen when the rights of minorities are ignored for the sake of a political majority, and there are plenty of other historical examples in Canada where rights have been violated. There are, then, obvious ethical concerns about the clause. So why keep it? Originally, it was thought that the clause would only be used in non-controversial circumstances because it would be so unpopular to use it otherwise. However, for those who support the clause, there are two important factors to consider. The first is that the clause is considered to be an important check on judicial review.

While the Canadian Supreme Court does not have the same partisan tinge that its U.S. counterpart has, in the Canadian legal framework, Parliament is supposed to be supreme, meaning that what it says goes rather than the rulings of unelected judges. Some who support the clause argue that it is ultimately one of the reasons the court is less partisan; judges will be less activist and partisan if they know they can be so easily overridden. Indeed, with such a clause in the U.S., there might be less concern about when certain justices retire. Another reason why the clause is supported goes back to part of the reason it was created: to allow provinces to opt out for the sake of balancing collective rights.

Quebec has used the clause the most, typically defending its usage by claiming it is needed to protect the French language. Long concerned about declining usage of French and eventually becoming assimilated into English Canada, Quebec’s defenders seek to protect collective French community rights over individual rights to secure what Quebecers consider to be a distinct society in North America. Thus, they claim that the clause serves a vital moral good.

On the other hand, critics might charge that this simply amounts to securing the rights of the majority over the minority by bypassing individual rights. Indeed, imagine any state simply choosing to ignore Roe v. Wade simply because it would be a popular move to do so. To that extent, the clause has raised new issues of moral concern as provinces now seem more willing to use it, even for things other than “non-controversial issues.” The concern now is that the more it is used, the more it will be overused for the sake of convenience and political gain rather than as a last resort.

The moral issue for Canadians reconsidering the clause after almost forty years of existence is: How should collective and individual rights be balanced relative to each other? And how might these calculations change when a government threatens to use it? Experts believe that a move like Doug Ford’s will be unpopular because it carries a lot of political baggage. On the other hand, Canadians are famously apathetic about politics and rarely turf one-term governments. It remains to be seen whether Canadians will be keen to defend the Charter from clause users come election time. I’d be skeptical that Ford’s use of the clause becomes a major election issue a year from now. But the moral danger is that a constitutional tool capable of doing something so potentially harmful slowly shifts from a taboo to a norm fueled by populism. The moral task for the public is to re-evaluate how comfortable we are with this and under what conditions we consider the clause’s use acceptable.

The Ethics of Animal Dis-Enhancement

photograph of chickens packed into pens at poultry farm

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of our discussion questions, check out the Educational Resources page.


Human beings have long treated animals not as sentient beings, but as objects or products to be used and consumed. We do this in spite of the fact that animals demonstrate every sign of having mental lives. We have the same reasons to believe that animals have mental lives that we do to believe that other human beings have mental lives; the best evidence we have is behavior. Humans report having affection for animals. Nevertheless, we intensively farm them for food and use them for medical experiments, activities which are quite painful and lead to suffering, permanent disability, and/or death. Engaging in these activities requires compartmentalization and moral disengagement.

The tension arises because humans want to use animals for all of the purposes that they typically use them for, but they also don’t want the animals to feel pain or to suffer if it can be prevented. Under ordinary conditions, when faced with this choice, humans use animals rather than exhibit care for their suffering. Recently, some scientists and philosophers have suggested an alternative solution: genetically engineering farm and research animals to experience little to no suffering. Recent research on pain suggests that it is registered in the brain in two places. The first is the primary somatosensory cortex, which establishes the nature of the pain (burning, throbbing, etc.). The second involves the affective dimension, which happens in the anterior cingulate cortex. This area controls not the pain itself, but how much the sentient creature minds the pain. Either area could be genetically engineered to reduce the discomfort experienced by the animal.

Advocates of this approach care about animal pain and suffering; if they didn’t, they would remain satisfied with the status quo. They are advocating what philosophers frequently refer to as an approach in line with non-ideal theory. The distinction between ideal and non-ideal theory is a meta-ethical distinction: a distinction between different conceptions of what ethics is and how we ought to approach it. Ideal theorists argue that ethics should be concerned with identifying the correct moral theory and then directing behavior so that it conforms to that standard. The non-ideal theorist, on the other hand, acknowledges that everyone would follow the ideal theory in a perfect world, but emphasizes that we don’t live in such a world. As such, our ethical theorizing should situate itself in the world that we actually inhabit, with all of its flaws and imperfections — a world where we perhaps can’t or shouldn’t expect everyone’s compliance or agreement on all things at all times.

When it comes to dis-enhancement, these non-ideal theorists often acknowledge that it would be best if we simply stopped exploiting animals and using them as objects for human purposes. They also recognize, however, that animal advocates have been shouting their messages from the rooftops for decades, even centuries in some parts of the world. To the extent that these messages are being heard, they are also largely being dismissed. If we are going to continue to use animals for food and research, at least we could do so in a way that minimizes pain. This may not realize what true justice demands, but it may represent incremental change toward that ideal state of affairs.

Opponents of dis-enhancement make several different kinds of arguments. First, they argue that dis-enhancement leaves animals vulnerable. The ability to experience and to care about pain is an evolutionary mechanism that helps creatures to avoid danger. If there is no longer any fear of pain because dis-enhanced animals do not feel it, then animals could die from otherwise avoidable risk. In response to these claims, the non-ideal theorist might argue that the unfortunate truth is that these animals aren’t going to be venturing out into the wide world in which they might make bad decisions. Their fate is certain — they are destined to live lives during which they are imprisoned and used and then discarded. If this is the case, why not do what we can to make their existences less unpleasant?

Opponents argue further that our willingness to do this to non-human animals highlights the extent of our speciesism — our tendency to direct our moral concern only to members of certain species on the basis of species membership alone. Imagine that a scientist wanted to create a group of people to enslave and abuse. The scientist doesn’t want to cause the resultant humans any pain, so he creates them without the ability to experience it. It is reasonable to suppose that many people would object to this experiment. Would their objections be justified? How is this different from creating a horde of robotic slaves? If we react negatively to this thought experiment, but not to dis-enhancing animals, what could explain our reaction other than speciesism?

Margaret Atwood’s novel Oryx and Crake takes place in a futuristic universe that is increasingly bioengineered. At one point, Crake, who is working in the “Neo-agricultural Department” at a research university, takes the main character, Jimmy, to observe a new method of food production. They are growing parts of chickens — only the breasts — for food. Crake says,

“You get chicken breasts in two weeks – that’s a three-week improvement on the most efficient low-light, high-density chicken farming operation so far devised. And the animal-welfare freaks won’t be able to say a word, because this thing feels no pain.”

Crake’s laboratory is experimenting with animal parts, but, at least at this point in the story, they are not creating sentient beings. They are just chicken breasts: they have no lived experience of the world, no preferences, and no dignity to contend with. (Consider in vitro meat, which allows scientists to create meat and organs to eat or to test on without changing the genetic structure of future sentient animals.)

When it comes to farm and research animals who have been “dis-enhanced,” we are still dealing with sentient creatures that have experiences of their world. They may lack the ability to feel or to care about feeling their own pain, but they still have a set of dispositions to behave in certain ways and have the ability to develop preferences. This makes them different from robots or disembodied chicken breasts. They are aware of their own experiences. Some opponents argue that respect for the lived experiences of sentient creatures demands that researchers refrain from playing Frankenstein with their bodies in ways that have serious consequences.

In response, advocates of dis-enhancement might appeal again to the non-ideal nature of the theory under which they are operating. They might agree that it would be wonderful if everyone respected the dignity of sentient creatures. Unfortunately, that isn’t going to happen. Given this, dis-enhancement might be our best chance at reducing or eliminating the massive amounts of suffering that these processes entail.

Another objection to dis-enhancement comes from the perspective of environmental virtue ethics. Are we oriented virtuously toward the natural world and the living creatures on it if we respond to the crises that we face with dis-enhancement? Consider the following parallel case. One way of responding to climate change is to engage in geoengineering. One form that this can take is changing the chemical constitution of our atmosphere in such a way as to roll back or lessen the effects of global warming. Opponents of geoengineering point out that when a child messes up their room, the right thing to do is get them to clean it and teach them how to keep it clean rather than searching for ways to mess it up ever further without consequences. By analogy, we should limit our greenhouse gas emissions and try to clean up the mess we’ve made rather than pursuing geoengineering strategies that threaten to produce ever more mess.

Critics of dis-enhancement argue that we should adopt the same standard of responsibility when it comes to cruelty to animals. Instead of finding ways to engage in cruel behavior without causing pain, we should simply stop engaging in cruel behavior. Treating animals in the way that we do is an exhibition of vicious character, even if the treatment has little effect on the animals because they have been dis-enhanced and don’t feel pain. The philosopher Immanuel Kant’s observations may be useful here. He says that we “must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men. We can judge the heart of a man by his treatment of animals.” If we behave callously toward dis-enhanced sentient animals because they don’t experience pain, the consequence may be that we become increasingly callous and cruel toward the beings that do.

The question of dis-enhancement is ultimately a question of how we should view the relationship between human beings and the natural world. Should it be a conqueror’s relationship to the conquered? Are we so depraved as a species that it would be naïve for us ever to expect broad scale changes? Or is there hope that we can someday view ourselves as empathetic fellow participants in biotic communities?

The Knowledge Norms of Emotions

simple single-line drawing of person curled up in bed

This post begins with a sad backstory. A little while back my wife and I had a miscarriage of our first child. There was a lot that was terrible in the experience, but in this post I want to address a certain oddity that I noticed about grief.

Due to a range of bureaucratic complications, it took about a week from when we first suspected a miscarriage to when we had final confirmation. During that week, our confidence that we had miscarried grew, but throughout the period it remained a likelihood rather than a certainty.

What surprised me, during that week, was that the uncertainty made it difficult to grieve. Even when I was pretty sure we had lost the child, it felt ‘out of whack’ to grieve the loss, since there was a chance the child was still alive. It was a terrible week, and I was extremely sad, but it felt out of joint to grieve for the child while recognizing the chance that all might be well. There was no obstacle to feeling anxious, nothing amiss about feeling trepidation, but outright grief felt strange. And it continued to feel strange until we received confirmation of the loss.

This, eventually, got me wondering: is grief characterized by a knowledge norm? In philosophy, a knowledge norm is a normative rule which says that knowledge of something is required for an action or mental state to be appropriate. For example, there seems to be a knowledge norm on assertion: you should only tell someone something if you know that thing is true. This explains, for instance, why if I say “it will rain tomorrow” it is appropriate for you to ask “how do you know?” Or why saying “I don’t know” is an appropriate response if someone asks you a question. (For a thorough defense of a knowledge norm of assertion see Timothy Williamson’s “Knowing and Asserting.”)

Many philosophers also argue that there is a knowledge norm of belief: you should only believe X if you know X is true. Thus, Williamson argues in his book Knowledge and Its Limits:

“Knowledge sets the standard of appropriateness for belief. That does not imply that all cases of knowing are paradigmatic cases of believing, for one might know p while in a sense treating p as if one did not know p—that is, while treating p in ways untypical of those in which subjects treat what they know. Nevertheless, as a crude generalization, the further one is from knowing p, the less appropriate it is to believe p. Knowing is in that sense the best kind of believing. Mere believing is a kind of botched knowing. In short, belief aims at knowledge (not just truth).”

There also seems to be a knowledge norm of certain actions. For instance, it seems like you should only punish someone if you know they are guilty, and only chastise someone if you know they did wrong. Some philosophers have gone even further and suggested that there is a general knowledge norm on all action: you should only treat X as a reason for action if you know X to be true.

My own experience with grief seems to suggest that there might also be a knowledge norm on various emotions; but as far as I know that topic has not yet been seriously investigated by philosophers.

My experience of the miscarriage suggested a knowledge norm on grief because it felt wrong to grieve our child’s death as long as I recognized that the child might still be alive. This parallels the fact that I couldn’t know the child had died as long as I recognized that possibility. In some sense, what is characteristic of knowledge is the elimination of all relevant alternatives. As long as those alternatives remained, we did not know, nor did it feel quite right to grieve.

Here is another reason for thinking that grief is characterized by a knowledge norm: it is hard to fit probabilities with the emotion of grief. It would be weird to think that as I grow more certain, my grief grows proportionally. I do not grieve a small amount at a 5% chance that my spouse has died, nor would my grief double as my confidence grows to 10%. I grieve less for less bad things, not for lower probabilities of equally bad things. But it would be equally weird to think that there is some probabilistic threshold at which grief suddenly becomes appropriate. It is not as though when I go from 94% confident my child died to 96% confident my child died that suddenly grief goes from inappropriate to appropriate.

But if grief neither scales with probability, nor requires a certain probabilistic threshold, then it seems like grief is responsive to a standard other than probabilistic credence, and the natural alternative is that it is responsive to knowledge.

Other emotions also seem to be knowledge-normed in this way. It is hard to feel grateful merely because you think it is likely that someone brought you a present. Normally gratitude is a response to the knowledge that someone did something for you. Jonathan Adler makes a point along these lines about resentment: “Mild resentment is never resentment caused by what one judges to be a serious offense directed toward oneself tempered by one’s degree of uncertainty in that judgment.”

Now, some other emotions at first blush seem different. I can be worried about something without knowing that thing will occur. Similarly, I can be hopeful of something without knowledge it will occur. Yet, even here, it seems that there might be some knowledge norm at play. For instance, it seems weird to be worried about or hope for something you know is impossible. Thus, it might be that you must know that something is possible before you can worry about it or hope for it.

If this is right, does it suggest a general pattern? I think it does. Emotions have appropriateness conditions. Resentment is an appropriate response to being wronged. Gratitude is an appropriate response to being given a gift. Hope is an appropriate response to the possibility of certain goods, as worry is an appropriate response to the possibility of certain bads. In each of these cases, what is required to rightly feel the emotion is knowledge.

That, then, is why grieving felt strange: my grief was not yet appropriate, since I lacked knowledge of the tragedy to which grief is a response.

The Double-Edged Sword of “Free Speech”

photograph of mic on graduation stage before empty chairs

On June 2nd, The Christian Post reported the story of Savannah Lefler, a high school valedictorian in Michigan whose “Christianized” honors night speech was facing censorship by school officials; after a legal nonprofit urged administrators to reconsider, Lefler was informed that she will be allowed to deliver her prepared remarks as desired. According to First Liberty, a law firm dedicated “to defending religious liberty for all Americans” that wrote in Lefler’s defense, “Too often, we have seen well-meaning school officials thinking they are complying with the Establishment Clause mistakenly go too far and censor the private speech of students, violating students’ rights under the Free Speech Clause.” (First Liberty has also defended other high school graduates from Pennsylvania and Michigan in similar cases.)

One day later, on June 3rd, The Christian Post reported the story of Paxton Smith, a high school valedictorian in Texas who delivered a graduation speech criticizing the so-called “heartbeat bill” recently signed by Governor Greg Abbott; rather than delivering the pre-approved remarks she had written and submitted, Smith spoke for roughly three minutes against Senate Bill 8 that, among other restrictions, bans abortions performed after six weeks of fetal development. According to Smith in her surprise, unapproved commentary, “I cannot give up this platform to promote complacency and peace, when there is a war on my body and a war on my rights.”

Rather than discuss the details of religious freedom or the debate about legal abortion in this article, I’m interested in thinking about what happened on June 4th — or, more accurately, what didn’t happen. As I comb through the recent archives of several large-scale news media organizations, many of them are only reporting about one of these two high school graduates with controversial speeches.

On one hand, as of this writing, sites like Fox News and Christianity Daily are promoting Lefler’s story, including excerpts from First Liberty’s letter to the school; in one of several articles it ran on the subject, Fox News also includes multiple excerpts of the religious language from Lefler’s draft and ends with additional comments from First Liberty on Lefler’s constitutional right to free speech. (In a similar — though inverted — fashion, NewsMax and the National Review have published pieces criticizing Smith while remaining silent about Lefler.)

On the other hand, sites like CNN and The New York Times are promoting Smith’s story, including by linking to the viral video of her graduation speech; the headline for the June 4th article from CBS News reads “Dallas high school valedictorian scraps speech, makes impassioned plea for abortion rights” and ends with a statement from Smith’s school district that reads, “The content of each student speaker’s message is the private, voluntary expression of the individual student and does not reflect the endorsement, sponsorship, position or expression of the District or its employees.” (As far as I could tell, no large-scale news outlets who have remained silent about Smith have also published opinion pieces critical of Lefler.)

It might well be true that no single news source could hope to comprehensively report on every newsworthy event, so it might be unfair to imply that Fox News or CNN is doing something wrong by only reporting on one of these stories. It might also be true that one (or both) of these stories is not actually “newsworthy” (in a broadly agreeable sense) — certainly there are differences between what Lefler and Smith did, and we might not want to oversimplify those distinctions for the sake of an easy comparison.

But it seems like defenders of “free speech” (as a blunt principle) are required to support both Lefler and Smith for exercising their right to express their private beliefs in a public forum.

John Stuart Mill is perhaps one of the most famous defenders of an inherent value in free speech; in his 1859 essay On Liberty, Mill argues that opinions are a kind of public good and the absolutely free exchange of differing opinions is the best way to promote ideal outcomes for the epistemic community. No one can hope to know all that there is to know on the complex topics relevant to social life, so we must rely on each other to raise alternate perspectives for our consideration; as he says, “Complete liberty of contradicting and disproving our opinion, is the very condition which justifies us in assuming its truth for purposes of action; and on no other terms can a being with human faculties have any rational assurance of being right.” In short, because he also thought that false opinions would naturally give way to true ones, the more opinions we have on the table, the more likely Mill thought it would be that we would discover the truth.

Furthermore, because he found value in the process of inquiry itself, Mill saw absolutely free speech as an opportunity for the development of individual virtue; as he explains:

“No one can be a great thinker who does not recognise, that as a thinker it is his first duty to follow his intellect to whatever conclusions it may lead. Truth gains more even by the errors of one who, with due study and preparation, thinks for himself, than by the true opinions of those who only hold them because they do not suffer themselves to think.”

Akin to common refrains today about “doing your own research” and “thinking for yourself,” Mill believed that free speech was a necessary precondition for an optimal environment to promote intellectual activity (and, by extension, epistemic virtue).

So, suppose that Moe has a social media account and shares on it a story from Fox News about Savannah Lefler’s speech nearly being censored; if Moe believes that this is simply a matter of Lefler’s freedoms being threatened, and he also believes that justice was indeed upheld in Lefler’s case, then he should (upon pain of inconsistency) also be proud of Paxton Smith exercising those same freedoms in her speech. Similarly, if Calvin believes that Smith was doing something honorable by speaking freely (despite going off-script), then he might also be required to view Lefler in a similar light.

If, however, Moe or Calvin only feel like one of these two high school valedictorians was actually doing something praiseworthy, then it must be for reasons other than the value of free speech. This is, of course, as unproblematic as it is likely (indeed, Moe might well approve of public religious speech or not approve of outspoken abortion defenses while Calvin believes the opposite on both counts). In a society where people enjoy the freedom to hold and express such different opinions, neither Moe nor Calvin is clearly doing anything inappropriate by disagreeing on these matters.

But it does seem inappropriate (or, at the very least, confusing) to wave the concept of “free speech” around as a defense of ideas that others might criticize. As Mill stresses, Lefler and Smith being free to express their ideas does not automatically make those ideas correct — indeed, their free expression (on Mill’s view) is one of the best ways to recognize which one (if either) is actually false.

For clarity’s sake, it would be better for Moe, Calvin, and everyone to just say what they truly support and what positions they believe to be correct, rather than hiding behind the double-edged sword of “free speech.”

Pride Parades and Respectability Politics

photograph of rainbow flag with silhouette figures crowded beneath it

Every June, the LGBT community celebrates inclusiveness, the right to self-expression, and the radical politics of queer love. Every June, the LGBT community also engages in the same circular discourse about what pride parades should look like, and what subcultures within the community deserve the recognition and visibility afforded by a parade. The most controversial aspect of pride for many (both inside and outside the community) is the prevalence of kink paraphernalia, “kink” meaning any expression of fetishistic sexuality. This usually takes the form of men in revealing full-leather BDSM outfits, those bogeymen of conservative pearl-clutchers. Almost every year, members of the LGBT community debate whether or not kink should be allowed at pride, though for some reason, the debate has flared up this year with unusual intensity. Perhaps we can blame a viral tweet from a leftist YouTuber decrying kink and BDSM at pride, or perhaps the absence of pride parades last year has made queer activists re-examine the existential foundations of the event.

In a recent op-ed for The Independent, gay journalist Skylar Baker-Jordan lays out the most prevalent arguments against the inclusion of kink and BDSM at pride, citing consent as a major issue:

“As Pride is held in a public space and is a public event, it should be open to the public while also following the standards of public decency. Overtly sexualized displays . . . breech [sic] the boundaries of good taste and decency even as Pride stretches what is and is not acceptable. It alienates members of our community who are modest, who have ethical or philosophical objections (as many feminists do), who have children, or who simply do not want to participate in your sex life as unwilling voyeurs. BDSM and kink displays deter many of us from attending . . . Pride should be for everyone in the LGBT community.”

The problem, as he sees it, is accessibility. Muslims who identify as LGBT, for example, might not feel fully comfortable at an event where kink is out in the open, and a community that prides itself on inclusiveness ought to take that discomfort into account. Baker-Jordan is also right that there is a long anti-kink feminist tradition. Audre Lorde, Judith Butler, and Alice Walker (to name a few) have all written against BDSM, and their arguments were compiled in the controversial 1982 anthology Against Sadomasochism. However, Baker-Jordan doesn’t seem interested in probing the ethical or philosophical motivations of those who wear leather to pride events; only the anti-kink objectors are allowed any intellectual depth or moral sense. He falls back on extremely subjective terms like “good taste and decency,” which have long been utilized by conservative critics of the LGBT community to reinforce hegemonic systems. As Alex Abad-Santos explains for Vox,

“Queer history is often about resistance to norms and embracing radical existence, so engaging in respectability politics — the idea that marginalized groups need to behave or act in a certain way to validate the compassion shown toward them — flies in the face of those goals.”

Baker-Jordan further argues that the “struggle for lesbian, gay, and bisexual equality was always about gaining parity with straight people and straight couples, of having our relationships recognized as equally valid and legitimate. It has never been about our sex lives.” Many parts of this assertion are fundamentally untrue. For one thing, kink has been a visible component of pride since the 1960s, and the woman who spearheaded the first pride parade in early-1970s New York City was herself affiliated with kink. Furthermore, Baker-Jordan’s claim that sex has nothing to do with LGBT rights is a bit like arguing that the struggles of the civil rights movement were about social and legal equality, but had nothing to do with the color of Black people’s skin. The very thing that sets the community apart from mainstream society is elided, which only serves to homogenize and de-radicalize a subversive group.

Baker-Jordan goes on to say that members of the LGBT community are an “identifiable class of individuals discriminated against in law and culture based on shared characteristics: their sexual orientation and gender expression.” Kink and BDSM, he argues, are preferences rather than orientations, and leather is an aesthetic that (while important to many members of the LGBT movement) is not inherently queer in itself. But BDSM is an incredibly psychologically and historically complicated niche of human sexuality, and boiling it down to a preference or fashion trend feels reductive. Kink is a form of sexual expression that has long been viewed as deviant, and for that reason is very important to many members of the LGBT community. Queer anthropologist Janie Lawson explains in an interview with Vice that the

“BDSM or kinky communities recentre sex around pleasure, not reproduction. It’s no coincidence that the leather scene is so closely associated with radical, transgressive queerness – the gay leather aesthetic emerges post-World War II in America and it’s been part of queer culture ever since. That kinky, leather aesthetic has been part of queer politics and queer protest since the 1960s.”

Many are troubled by pieces like Baker-Jordan’s, which reframe the important concepts of consent and accessibility (both of which are crucial to feminist, disabled, and queer thought) in service of socially conservative ends. This year, New York City decided to ban police officers from marching at the annual pride parade, and many pride organizers are having conversations about “rainbow capitalism,” or the commodification of LGBT issues by large corporations looking to make a buck. Clearly there are still questions about what pride should be and who it should be for, but excluding a large and historically important portion of the community from the event in the name of “respectability” ultimately serves no one.

Ethical Considerations in the Lab-Leak Theory

3D image of Covid-19 virus cells

President Biden announced recently that he would be launching an investigation into the origin of the coronavirus. While the standard narrative over much of the course of the pandemic has been that it was initially transmitted to humans via contact with animals in Wuhan, China – thought by many to be bats, although there have also been theories that pangolins could have been involved – a second possibility has also been entertained, namely that the virus originated in a virology lab. Indeed, this was one of the favorite theories of Donald Trump, who, on several occasions, simply stated that the virus originated in a lab, although he failed to provide any evidence for his assertions. The so-called “lab-leak” theory soon took on the status of a conspiracy theory: it was explicitly rejected by numerous scientists, and its association with Trump and other members of the alt-right greatly undermined whatever credibility the theory may have had within the scientific community. With Trump out of office, however, questions about the plausibility of the theory have resurfaced, and there has been enough pressure for Biden to open the investigation.

Should Biden have opened his investigation into the lab-leak theory? While it might seem like a question that can be answered by considering the science – i.e., by looking at whether there is good evidence for the theory, whether expert scientific opinion considers it a plausible hypothesis, etc. – there are other ethical factors that we should consider, as well.

Here’s one sense in which it seems that such an investigation is worthwhile: it is always worthwhile to try to learn the truth. Now, there are a lot of truths that we might think really don’t add that much value to our lives – I can spend a lot of time counting the number of blades of grass on my lawn, for example, and at the end of a very long day will possess a shiny new true belief, but hardly anyone would think that I had spent my time wisely. The COVID-19 pandemic, however, is of substantial importance, and so learning about where it came from may seem like an investigation that is worth pursuing for its own sake.

At the same time, there are also potential practical benefits to learning the truth of the matter about the origin of COVID-19. The pandemic has raised many questions about how we should react to the next one, and what we can do to prevent it. Making sure that we have the correct theory of the origin of the virus would then no doubt be useful when thinking about responses to future outbreaks. So here are two points in favor of conducting the investigation: we can learn the truth of something important, and we might be able to become better prepared for similar events in the future.

However, there are also some potential drawbacks. Specifically, there have been concerns that, especially during the previous administration, the impetus for discussing the lab-leak theory was not an attempt to make sure that one’s science was correct, but to find a scapegoat. The theory comes in two different forms. According to one version, the virus was intentionally released from the lab, for whatever reason. If this were to be the case, then there would be a definitive place to direct one’s blame. This version of the theory, however, falls predominantly within the realm of conspiracy theory. The other, more popular version states that while the virus originated in a lab, its transmission into the surrounding population was an accident. Even if this is the case, though, it would seem to represent an act of negligence, and thus the lab, the scientists, and the government would be blameworthy for it.

One of the early criticisms of Trump’s endorsement of the lab-leak theory was that given that it was driven by the search for someone to blame instead of a theory that was best supported by evidence, he was fanning the flames of anti-Asian racism. Indeed, as Trump insisted on the truth of the theory without evidence and consistently referred to the coronavirus as the “China virus,” incidents of anti-Asian racism increased over the course of the pandemic in the U.S.

Here, then, is a concern with Biden’s investigation: opening an official investigation into the lab-leak theory gives legitimacy to a view that has been considered by many to be little more than a conspiracy theory, which may again result in an increase in incidents of anti-Asian racism. Given the potential ethically problematic results of the inquiry, we can then ask: is it worth it?

What is perhaps encouraging is that Biden’s investigation seems to be motivated more by dissent within parts of the scientific community than by the political search for a scapegoat. We might still be concerned, however, that people will not be good at distinguishing between the versions of the theory under consideration. As noted above, there are two versions of the lab-leak theory, one more distinctly conspiratorial than the other. By giving credence to the view that the virus accidentally leaked from the lab, however, the investigation may be taken as lending credence to the intentional-release version as well.

This is not to say that the investigation is a bad idea. Instead, it should remind us that inquiry is never conducted in a vacuum, and that which questions are worth investigating may depend not solely on the evidence, but on the ethical consequences of doing so.

“Fake News” Is Not Dangerously Overblown

image of glitched "FAKE NEWS" title accompanied by bits of computer code

In a recent article here at The Prindle Post, Jimmy Alfonso Licon argues that the hype surrounding the problem of “fake news” might be less serious than people often suggest. By pointing to several recent studies, Licon highlights that concerns about social standing actually prevent a surprisingly large percentage of people from sharing fake news stories on social media; as he says, “people have strong incentives to avoid sharing fake news when their reputations are at stake.” Instead, it looks like many folks who share fake news do so because of pre-existing partisan biases (not necessarily because of their gullibility about or ignorance of the facts). If this is true, then calls to regulate speech online (or elsewhere) in an attempt to mitigate the spread of fake news might end up doing more harm than good (insofar as they unduly censor otherwise free speech).

To be clear: despite the “clickbaity” title of this present article, my goal here is not to argue with Licon’s main point; the empirical evidence is indeed consistently suggesting that fake news spreads online not simply because individual users are always fooled into believing a fake story’s content, but rather because the fake story:

On some level, this is frustratingly difficult to test: given the prevalence of expressive responding and other artifacts that can contaminate survey data, it is unclear how to interpret an affirmation of, say, the (demonstrably false) “immense crowd size” at Donald Trump’s presidential inauguration — does the subject genuinely believe that the pictures show a massive crowd or are they simply reporting this to the researcher as an expression of partisan allegiance? Moreover, a non-trivial amount of fake news (and, for that matter, real news) is spread by users who only read a story’s headline without clicking through to read the story itself. All of this, combined with additional concerns about the propagandistic politicization of the term ‘fake news,’ as when politicians invoke the concept to avoid responding to negative accusations against them, has led some researchers to argue that the “sloppy, arbitrary” nature of the term’s definition renders it effectively useless for careful analyses.

However, whereas Licon is concerned about potentially unwarranted threats to free speech online, I am concerned about what the reality of “fake news” tells us about the nature of online speech as a whole.

Suppose that we are having lunch and, during the natural flow of our conversation, I tell you a story about how my cat drank out of my coffee cup this morning; although I could communicate the details to you in various ways (depending on my story-telling ability), one upshot of this speech act would be to assert the following proposition:

1. My cat drank my coffee.

To assert something is to (as explained by Sanford Goldberg) “state, report, contend, or claim that such-and-such is the case. It is the act through which we tell others things, by which we inform an audience of this-or-that, or in which we vouch for something.” Were you to later learn that my cat did not drink my coffee, that I didn’t have any coffee to drink this morning, or that I don’t live with a cat, you would be well within your rights to think that something has gone wrong with my speech (most basically: I lied to you by asserting something that I knew to be false).

The kinds of conventions that govern our speech are sometimes described by philosophers of language as “norms” or “rules,” with a notable example being the knowledge norm of assertion. When I assert Proposition #1 (“My cat drank my coffee”), you can rightfully think that I’m representing myself as knowing the content of (1) — and since I can only know (as opposed to merely believe) something that is true, I furthermore am representing (1) as true when I assert it. This, then, is one of the problems with telling a lie: I’m violating how language is supposed to work when I tell you something false; I’m breaking the rules governing how assertion functions.

Now to add a wrinkle: what if, after hearing my story about my cat and coffee, you go and repeat the story to someone else? Assuming that you don’t pretend like the story happened to you personally, but you instead explain how (1) describes your friend (me) and you’re simply relaying the story as you heard it, then what you’re asserting might be something like:

2. My friend’s cat drank his coffee.

If this other person you’re speaking to later learns that I was lying about (1), that means that you’re wrong about (2), but it doesn’t clearly mean that you’re lying about (2) — you thought you knew that (2) was true (because you foolishly trusted me and my story-telling skills). Whereas I violated one or more norms of assertion by lying to you about (1), it’s not clear that you’ve violated those norms by asserting (2).

It’s also not clear how any of these norms might function when it comes to social media interaction and other online forms of communication.

Suppose that instead of speaking (1) in a conversation, I write about it in a tweet. And suppose that instead of asserting (2) to someone else, you simply retweet my initial post. While at first glance it might seem right to say that the basic norms of assertion still apply as before here, we’ve already seen (with those bullet points in the second paragraph of this article) that fake news spreads precisely because internet users seemingly aren’t as constrained in their digital speech acts. Maybe you retweet my story because you find it amusing (but don’t think it’s true) or because you believe that cat-related stories should be promoted online — we could imagine all sorts of possible reasons why you might retransmit the (false) information of (1) without believing that it’s true.

Some might point out that offline communication can often manifest some of these non-epistemic elements of communication, but C. Thi Nguyen points out how the mechanics of social media intentionally encourage this kind of behavior. Insofar as a platform like Twitter gamifies our communication by rewarding users with attention and acclaim (via tools such as “likes” and “follower counts”), it promotes information spreading online for many reasons beyond the basic knowledge norm of assertion. Similarly, Lucy McDonald argues that this gamification model (although good for maintaining a website’s user base) demonstrably harms the quality of the information shared throughout that platform; when people care more about attracting “likes” than communicating truth, digital speech can become severely epistemically problematic.

Now, add the concerns mentioned above (and by Licon) about fake news and it might be easy to see how those kinds of stories (and all of their partisan enticements) are particularly well-suited to spread through social media platforms (designed as they are to promote engagement, regardless of accuracy).

So, while Licon is right to be concerned about the potential over-policing of online speech by governments or corporations interested in shutting down fake news, it’s also the case that conversational norms (for both online and offline speech) are important features of how we communicate — the trick will be to find a way to manifest them consistently and to encourage others to do the same. (One promising element of a remedy — that does not approximate censorship — involves platforms like Twitter explicitly reminding or asking people to read articles before they share them; a growing body of evidence suggests that these kinds of “nudges” can help promote more epistemically desirable online norms of discourse in line with those well-developed in offline contexts.)

Ultimately, then, “fake news” seems less like a rarely shared digital phenomenon and more like a curiously noticeable indicator of a more wide-ranging issue for communication in the 21st century. Rather than being “dangerously overblown,” the problem of fake news is a proverbial canary in the coal mine for the epistemic ambiguities of online speech acts.

Criticism of Israel, Media Focus, and Bias

photograph of Jerusalem through barbed wire fence

As in the past, in the wake of this most recent bout of violence in Israel an argument has raged in the media over the proper bounds of criticism of the state of Israel — when, in other words, does criticism of Israel merge with Jew hatred? No sane person denies that criticism of Israeli policies can, at least under some circumstances, be free of prejudice against Jews. Nevertheless, some defenders of Israel claim that what they call the disproportionate attention paid to the Israel-Palestine conflict by Western media and other critics of Israel is itself evidence of Jew hatred. It is this claim that I will evaluate in this column.

I will grant the premise of those who make this claim — to wit, that the Israel-Palestine conflict does receive more critical attention than other, worse conflicts around the world. This is not to underplay the moral enormity of Israel’s treatment of Palestinians; my claim is simply that there are even more egregious violations of human rights in other parts of the world that receive relatively less attention from certain quarters. In order not to beg any questions — “disproportionate” has a negative connotation — I will put the question to be answered as follows: “Is the relatively greater attention paid to the Israel-Palestine conflict over other, worse conflicts itself evidence of Jew hatred?”

There are a number of players who might be said to pay more attention to the Israel-Palestine conflict than other, worse conflicts, but I will consider three: the Palestinians themselves, Arab observers, and the Western media. First, let’s consider the Palestinians. Suppose that Smith is a shop-owner whose store is periodically raided by a Jewish shoplifter. As yet unable to catch the shoplifter, Smith is consumed by hatred for him. Meanwhile, in another town, a Gentile serial killer rapes and kills women. On the rare occasions when the thought of the killer comes to Smith’s mind, Smith feels some degree of indignation and pity. Still, he does not hate and resent the serial killer as he does the Jewish shoplifter. It seems to me that Smith’s attitudes are not evidence of Jew hatred. It is natural — not to say morally good, just natural or commonplace — to focus more attention on the moral wrongs perpetrated against oneself than on the moral wrongs perpetrated against distant others. To do so does not necessarily reveal prejudice against the ethnicity of the wrongdoer. By the same token, Palestinians are naturally more focused on the wrongs perpetrated against them by Israel than on other, perhaps worse conflicts.

Now consider Arab observers. Suppose that Adam is Smith’s cousin, and Adam is almost equally consumed by hatred for the Jewish shoplifter as Smith. Again, I do not think this relatively greater focus on the Jewish shoplifter is evidence of Jew hatred. It is natural — again, not morally good, just commonplace — for those who feel a kinship towards victims of particular wrongdoing to focus more on that wrongdoing, even if there is worse wrongdoing somewhere else in the world. Arab critics of Israel tend to feel a bond of ethnic kinship with the Palestinians, and so will naturally focus more attention on the wrongs done to them than to others. This may offend against some conception of moral equality according to which we ought to dole out our attention to wrongs precisely in proportion to their egregiousness, with no special attention paid to wrongs that are “closer” to us in any sense. My point is merely that even if this form of neutrality is morally required, those who offend against it do not necessarily reveal prejudice in doing so.

It might be objected that some Arab observers have a history of openly anti-Semitic statements. Suppose Adam had such a history. Given this, would Adam’s focus on the Jewish shoplifter smack of Jew hatred? Surprisingly, the answer is no. This is a subtle point, so I want to be clear. With a past history of anti-Semitic statements, we have good evidence that Adam is an anti-Semite. However, Adam’s focus on the Jewish shoplifter does not provide additional evidence, over and above Adam’s past statements, that Adam is an anti-Semite. Similarly, an Arab who criticizes Israel and has a history of anti-Semitic statements is not more likely to be an anti-Semite than an Arab who has a history of anti-Semitic statements but never criticizes Israel.

Now suppose that both the Jewish shoplifter and the Gentile serial killer are caught. It turns out that the Jewish shoplifter is a local “golden boy” who attends high school on a city scholarship, volunteers at soup kitchens, and plays varsity basketball. Many of his relatives hold important positions in government and the media, and many prominent members of the community rally around him, raising money for his legal defense. The serial killer, by contrast, lives on the margins of society, grew up in an abusive household, and had innumerable encounters with law enforcement prior to his most recent crimes. He’s represented by a public defender. Predictably, the media, including local gadflies on the opinion pages of the local newspaper, focus a lot of their attention and ire on the golden boy. Once again, that they do this may be in some ways regrettable, as they ignore the serial killer’s victims in the process. There may be a moral argument for apportioning their attention and criticism differently. On the other hand, there is at least a partial justification (and not just a “man bites dog” explanation) for the focus in the fact that the golden boy receives a city scholarship. The point is that the greater focus on the golden boy is not necessarily evidence of Jew hatred. Similarly, Israel not only holds itself out as an upstanding member of the international community, but it also receives significant material support from the United States. These facts can explain why the Western media focuses attention and criticism on the Israel-Palestine conflict to a relatively greater degree than other, worse conflicts.

My conclusion, then, is that there are reasons that explain why both participants and observers pay more attention to the Israel-Palestine conflict than other conflicts — reasons that have nothing to do with Jew hatred. This is, of course, not to say that such attention is never due to anti-Semitism. Rather, my conclusion is that the fact that, say, a member of the Western media focuses more attention on the Israel-Palestine conflict than other conflicts is not itself evidence — does not make it likelier — that this person is an anti-Semite.

Is Fake News Dangerously Overblown?

photograph of smartphone displaying 'Fake News' story

“Censorship laws are blunt instruments, not sharp scalpels. Once enacted, they are easily misapplied to merely unpopular or only marginally dangerous speech.”

—Alan Dershowitz, Finding, Framing, and Hanging Jefferson: A Lost Letter, a Remarkable Discovery, and Freedom of Speech in an Age of Terrorism

Fake news, false or misleading information presented as though it’s true, has been blamed for distorting national politics in the United States and undercutting the faith that citizens place in elites and institutions — so much so that Google has recently stepped in to provide a tool to help users avoid being hoodwinked. It looks plausible, at first glance, that fake news is a widespread problem; if people can be fooled into thinking misleading or false information is genuine news, their attitudes and beliefs about politics and policy can be influenced for the worse. In a functioning democracy, we need citizens, and especially voters, to be well-informed — we cannot have that if fake news is commonplace.

A recent study found political polarization — left, right, or center — to be the primary psychological motivation behind people sharing fake news. It seems we aren’t driven by ignorance, but by vitriol for our political opponents. It isn’t a matter of folks being fooled by political fictions because they lack knowledge of the salient subject matter, say, but rather that people are most inclined to share fake news when it targets political adversaries whom they hate. And this aligns with what we already know about the increasing polarization in American politics: that it’s becoming increasingly difficult for people in different political parties, notably Republicans and Democrats, to agree on issues that used to be a matter of bipartisan consensus (e.g., a progressive tax structure).

In the face of the (alleged) increasing threat from fake news, some have argued we need stronger intervention on the part of tech companies that is just shy of censorship — that is, fake news is parasitic on free speech, and can perhaps only be controlled by a concerted legal effort, along with help from big technology companies like Facebook and Google.

But perhaps the claim that fake news is widespread is dangerously overblown. How? The sharing of fake news is less common than we are often led to believe. A study from last year found that

“[although] fake news can be made to be cognitively appealing, and congruent with anyone’s political stance, it is only shared by a small minority of social media users, and by specialized media outlets. We suggest that so few sources share fake news because sharing fake news hurts one’s reputation … and that it does so in a way that cannot be easily mended by sharing real news: not only did trust in sources that had provided one fake news story against a background of real news dropped, but this drop was larger than the increase in trust yielded by sharing one real news story against a background of fake news stories.”

There are strong reputation incentives against sharing fake news — people don’t want to look bad to others. (Of course, the researchers also acknowledge the same incentives don’t apply to anonymous individuals who share fake news.) Humans are a cooperative species that rely on help from others for survival — and so it matters how others view us. People wouldn’t want to cooperate with someone with a bad reputation, thus most people will track how they are seen by others. We want to know those we cooperate with have a good reputation; we want them to be sufficiently trustworthy and reliable since we rely on each other for basic goods. As other researchers explain,

“[Humans] depend for their survival and welfare on frequent and varied cooperation with others. In the short run, it would often be advantageous to cheat, that is, to take the benefits of cooperation without paying the costs. Cheating however may seriously compromise one’s reputation and one’s chances of being able to benefit from future cooperation. In the long run, cooperators who can be relied upon to act in a mutually beneficial manner are likely to do better.”

Of course, people sometimes do things which aren’t in their best interests — taking a hit to one’s reputation is no different. The point though is that people have strong incentives to avoid sharing fake news when their reputations are at stake. So we have at least some evidence that fake news is overblown; people aren’t as likely to share fake news, for reputational reasons, as it may appear given the amount of attention the phenomenon of fake news has garnered in the public square. This doesn’t mean, of course, that there isn’t a lot of fake news in circulation on places like, say, social media — there could be substantial fake news shared, but only by a few actors. Moreover, the term ‘fake news’ is often used in a sloppy, arbitrary way — not everything called ‘fake news’ is fake news. (Former President Trump, for example, would often call a story ‘fake news’ if it made him look bad, even if the story was accurate.)

Overstating the problem fake news represents is also troubling as it encourages people to police others’ speech in problematic ways. Actively discouraging people from sharing ‘fake news’ (or worse, silencing them) can be a dangerous road to traverse. The worry is that just as former President Trump did to journalists and critics, folks will weaponize the label ‘fake news’ and use it against their political enemies. While targeting those who supposedly share fake news may prevent misinformation, often it will be used to suppress folks who have unorthodox or unpopular views. As the journalist Chris Hedges observed,

“In late April and early May the World Socialist Web Site, which identifies itself as a Trotskyite group that focuses on the crimes of capitalism, the plight of the working class and imperialism, began to see a steep decline in readership. The decline persisted into June. Search traffic to the World Socialist Web Site has been reduced by 75 percent overall. And the site is not alone. … The reductions coincided with the introduction of algorithms imposed by Google to fight ‘fake news.’ Google said the algorithms are designed to elevate ‘more authoritative content’ and marginalize ‘blatantly misleading, low quality, offensive or downright false information.’ It soon became apparent, however, that in the name of combating ‘fake news,’ Google, Facebook, YouTube and Twitter are censoring left-wing, progressive and anti-war sites.”

Perhaps the phenomenon of fake news really is as bad as some people say — though the evidence suggests that isn’t the case. In any event, we shouldn’t conclude from this that fake news isn’t a problem at all; we may need some form of policing that, while respecting freedom of expression, can empower voters and citizens with tools to allow them to avoid, or at least identify, fake news. But we can acknowledge both the need for fake news oversight and the need to significantly curtail that power.

The Ethics of a Global Corporate Tax

photograph of unequal columns of stacked coins

The Biden administration has recently proposed a global minimum corporate tax, but what is at stake in such a policy? When debating public financial matters, it can be easy to get so focused on economics and politics that basic ethical considerations fade into the background. David Scheffer, for example, notes that when it comes to corporate tax avoidance “much of the ensuing debate has centered on how to tax corporate profits fairly and more efficiently…but there has been little effort to associate tax avoidance schemes with corporate abdication of responsibility for advancing critical societal goals.”

Scheffer was writing in 2013, when Starbucks had paid only £8.6 million in British taxes over a 14-year period, and paid no UK corporate taxes in 2011 despite over $400 million in sales. U.S. corporations held $1.7 trillion in overseas accounts to avoid taxes. Apple, for example, held about $100 billion in tax haven accounts to avoid taxation in the U.S. In 2020, despite record-breaking profits, Amazon paid an effective tax rate of only 9.4% rather than the statutory 21% rate, avoiding over $2 billion in taxes. (Prior to that, Amazon had avoided paying taxes altogether for several years.) As a result of these trends, Scheffer points out that the percentage of tax revenue collected from wage-earners and consumers has increased dramatically, while the percentage of corporate taxation has dropped precipitously.

Unfortunately, figuring out what to do about the situation is no small task. While a nation can try to close loopholes and raise taxes, a corporation can simply move its corporate headquarters to a different nation with a lower corporate tax rate. These tax havens allow companies to minimize their tax liabilities through profit-shifting; companies register their headquarters in an alternative jurisdiction rather than the country where their sales took place.

To crack down on corporate tax avoidance, the Biden administration is now calling for a global minimum corporate tax rate of at least 15%. As Treasury Secretary Janet Yellen recently stated, a global minimum would “stop what’s been essentially a race to the bottom, so that it’s competitive attractions of different countries that influence location decisions, not tax competition.” The idea is that a country could require a corporation to pay the difference between its minimum tax rate and the rate it pays on earnings in foreign countries.
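The “pay the difference” mechanism Yellen describes amounts to a simple piece of arithmetic, which can be sketched as follows (a simplified illustration only — the hypothetical `top_up_tax` function below ignores the many carve-outs and accounting details of the actual proposal):

```python
def top_up_tax(foreign_profit, foreign_rate, minimum_rate=0.15):
    """Top-up tax a home country could collect on profit booked abroad,
    under the simplified 'pay the difference' rule: nothing if the
    foreign rate already meets the minimum, otherwise the shortfall."""
    if foreign_rate >= minimum_rate:
        return 0.0
    return foreign_profit * (minimum_rate - foreign_rate)

# A company books $100M of profit in a jurisdiction taxing it at 9%;
# the home country collects roughly the 6-point difference (about $6M).
print(top_up_tax(100_000_000, 0.09))
```

On this scheme, shifting profits to a low-tax jurisdiction no longer reduces a company’s total bill below the minimum, which is what is meant by ending the “race to the bottom.”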

So far, several nations have signaled their agreement with the proposal. Canada, Germany, France, and many others have indicated their interest, while nations like Ireland and Hungary have registered vocal opposition. (Ireland, whose corporate tax rate is only 12.5%, has for years encouraged numerous businesses to create subsidiaries there to take advantage of it.) Many developing nations have also expressed misgivings about the proposal due to fears a crackdown will discourage foreign investment.

While a global minimum rate may be important for issues of trade and economic development, the issue of tax competition has received comparatively little attention when it comes to issues of ethics and justice. But Peter Dietsch and Thomas Rixen have argued that tax competition undermines the de facto sovereignty of states. Without the ability to effectively set the size of the state budget and the extent of redistribution, states have no fiscal self-determination.

Likewise, Scheffer argues that taxes are a moral issue because the future of human rights depends on a state which is capable of protecting and securing them (and has the funds to do so). Further, while Milton Friedman and others have argued that corporations are responsible primarily to their shareholders, Scheffer notes that given climate change, rising income disparity, and backsliding toward authoritarianism, there is no such neat division between capitalist pursuits and societal imperatives. He argues:

“The fact that major multinational corporations are paying such comparatively miserly taxes in their home or operating jurisdictions, and doing so legally, means they are minimizing their contributions to social priorities in education, infrastructure, public health care, law enforcement, and even the military defense of countries that provide them with the security and stability that allows them to earn their profit. Societies where these government services are properly financed stand a much better chance of protecting the human rights of the populace.”

Overall, tax avoidance by corporations contributes to the overall decline of government services, which “degrades the operating environment and the very markets within which corporations seek to thrive.” These considerations suggest important moral issues at stake in addressing corporate tax avoidance.

On the other hand, critics of the global minimum corporate rate argue that the move is unfair. While the move would equalize tax rates across the globe, it would also benefit richer nations at the expense of smaller and developing economies who would no longer be able to set lower, more competitive rates to attract foreign investment. Foreign investment represents an integral part of the development plans for lower-income countries, and so the move threatens to reduce the overall welfare of lower-income countries. Even Ireland has managed to dramatically increase living standards after once having one of the worst living standards in Europe, largely thanks to foreign investment. Nations like Mauritius, Paraguay, Uzbekistan, and Kosovo would likely suffer from a decline in tax revenue as well, while a global standard would help nations like the U.S. and France.

But of course, that doesn’t mean that steps couldn’t be taken to mitigate some of these concerns, such as the direct redistribution of financial means into the education and public infrastructure of developing nations. Besides, perhaps taxes should be applied where economic activity and value creation occur rather than where corporate headquarters are located. But beyond these practical considerations, Scheffer argues that “the higher ethical perspective” demands that corporations look past minimal standards of compliance and embrace a stronger sense of corporate social responsibility. In order to address the larger problem of which tax competition is merely symptomatic, it’s important to stress the ethical role that corporations have to play in advancing our shared societal goals.

Justice Breyer and Strategic Retirement

photograph of a contemplative Justice Breyer at a speaking engagement

Though President Biden’s election win is still fresh, many political strategists and advocates are already thinking about the 2024 election. Amid the various areas of concern, the increasing conservatism of the Supreme Court has become a central focus for Democrats. Serious consideration of expanding the Supreme Court, however, looks increasingly unlikely. In its place has come a wave of calls for Justice Breyer’s resignation. Calls for Supreme Court justices to retire are not new, but the fervor of these renewed calls is considerably more intense. These voices raise significant ethical issues surrounding the Supreme Court and the political duties of its justices.

Are calls for Justice Breyer to retire ethical? Is age a sufficient reason to call for one to step down? Should political concerns influence the actions of Supreme Court justices?

Calls for Justice Breyer’s retirement are often framed as appeals to Breyer’s ethos. His refusal to retire has been labeled as out of touch, supremely consequential, and essential to safeguarding his legacy. Many of these appeals have the appearance of being ageist, motivated by prejudice about his advanced age. Justice Breyer is the oldest sitting member of the Court. At 82 years old, Breyer has passed the average life expectancy of an adult male in the United States. Older people already face significant discrimination in society, and the force of that stigma should be considered in calling for Breyer’s resignation. Ageism is especially prevalent in the workforce. Calling for Breyer to retire due to his age arguably perpetuates the notion that older people have little left to contribute.

It is likely, however, that the calls to retire are also born out of the idea that Justice Breyer has served his time. In fact, he has served for nearly a decade more than the average tenure of a Supreme Court justice. Supporters of his retirement might also point out that the majority of appointed Supreme Court justices in history have opted to retire. Though appointments are technically for a lifetime, 57 out of the 108 Justices left office voluntarily. With these facts in mind, calls for Breyer’s retirement arguably reflect Supreme Court norms.

Defenders of Justice Breyer, however, point to the inconsistency of his critics. It’s hard to ignore the blatant difference in treatment which Breyer is receiving compared to his predecessors or even current fellow justices. Justice Ginsburg passed away in the last few months of the Trump administration, leading to the appointment of Justice Coney Barrett, who was highly unpopular with Democrats and especially pro-choice advocates. Justice Ginsburg had a history of health complications, yet was applauded for continuing to serve on the bench. In comparison, Justice Breyer has far fewer physical health complications, but has been subject to a plethora of scrutiny for continuing to serve. Perhaps this clear difference in treatment is a result of Ginsburg’s celebrity-like popularity in American culture, or even her perceived status as a liberal icon. It is also possible that the increased scrutiny directed at Breyer is a direct result of Ginsburg’s death and its aftermath. Democrats are fully aware of Republicans’ ideal Supreme Court candidates; Trump’s appointments shifted the balance of the Court decisively. Perhaps, then, calls for Breyer to retire are a direct response to this political reality, making for an unfair comparison between the treatment of Justice Breyer and Justice Ginsburg.

This suggests that many of the calls for Breyer to retire are motivated not by discrimination regarding his ability to do his job adequately, but rather out of concern that his seat might need to be filled at a politically inconvenient time. If not for fear of a potential future Republican-appointed justice, there likely would not be any scrutiny aimed at Justice Breyer. As such, calls for Justice Breyer’s retirement are premised on the assumption that the Supreme Court is a political institution (for discussion, see Tucker Sechrest’s “Politicians in Robes”).

The politicization of the Supreme Court has been a topic in legal scholarship for decades. Critics of this phenomenon argue that using the Court as a political scoresheet undermines the public’s faith in the legal institution of the country to be fair and even-handed. One might also argue that the very structure of our government, which prioritizes the balance of powers, indicates that the Supreme Court was meant to be an institution far removed from politicking. Indeed, the Court has historically handed down politically disfavored decisions which had enormous effects on American society, for better or worse.

Despite the fervent calls for his retirement, Justice Breyer has made it clear that he will not step down. Unlike his critics, Breyer believes that political compromise is still possible and should be pursued by the very Democrats who criticize him. Whether this is virtue or folly, his critics would do well to consider what ultimately motivates their impatience and where it might lead.