
Treating Principles as Mere Means

photograph of US Capitol Building with mirror image reflected in lake

With the Republican about-face concerning Supreme Court Senate votes, hypocrisy is once again back in the headlines. Many accusations of hypocrisy have been directed at Senator Lindsey Graham, whose support for a Senate vote for President Trump’s Supreme Court nominee so clearly clashes with earlier statements — he said in 2018 that “if an opening comes in the last year of President Trump’s term and the primary process has started, we’ll wait till the next election” — that his behavior seems like the Platonic form of a certain kind of hypocrisy. Graham has responded with a hypocrisy accusation of his own, writing to Democrats on the judiciary panel that “if the shoe were on the other foot, you would do the same.” Amidst this controversy, it’s worth taking a step back to ask what force the accusation of hypocrisy is supposed to have.

In earlier columns, I have explored some suggestions for why hypocrisy is morally objectionable and rejected them. In this column I want to consider a theory first articulated by the philosopher Eva Feder Kittay. This account says that hypocrisy is morally objectionable because it involves treating important religious, political, or moral principles as mere means.

Immanuel Kant famously inveighed against treating persons as mere means, that is, using them as mere instruments for the satisfaction of our own desires. What’s wrong with this is that it involves a kind of category error — it treats persons, beings with the capacity to rationally order their lives, as if they were things.

Clearly, however, this can’t be exactly what Kittay means when she talks about hypocrites treating principles as mere means: principles are not persons. Yet there is a link here. The kinds of principles Kittay is concerned with — moral and religious principles — are supposed to be adhered to because they are right, and not because they are useful to the adherent. Kant expressed this point with his distinction between categorical and hypothetical imperatives. A categorical imperative is one that is binding on you regardless of what you happen to desire. You can’t claim that some moral principle — “don’t kill innocents,” say — is not binding on you because you happen to want to kill innocents. That principle provides a reason for you not to kill innocents regardless of what you happen to want. By contrast, a hypothetical imperative — for example, “go to the store” — is only binding if you have some desire that will be promoted by acting according to the imperative. If there were nothing you wanted that you could get by going to the store, that imperative would not be binding on you.

So, when Kittay says that hypocrites treat principles as mere means, she means that they treat categorical imperatives as if they were merely hypothetical. The hypocrite will adopt and discard moral principles as it suits them. Sometimes that adoption will be merely rhetorical — some hypocrites are entirely conscious that their pretense of principle is a charade. But other hypocrites will sincerely adopt moral principles, only to discard them whenever holding to them becomes inexpedient. In the case of Senate Republicans, their hypocrisy lies in their adoption of the principle of not confirming Supreme Court justices during an election year when it was convenient for them to do so, followed by their abandonment of this principle when it was convenient to do that. In doing this, they treated what seemed to be a categorical imperative — one that was binding on them even if they didn’t want to adhere to it — as if it were hypothetical.

What’s wrong with treating principles as mere means? For Kittay, the problem has to do with trust. According to her, we trust that when people claim to hold to certain categorical principles, they hold to them as categorical. We rely on this belief in our dealings with them, assuming, for example, that they will hold to those principles even if it is inconvenient for them to do so. Moreover, their assurances of commitment are all we have to go on; we can’t look into their souls to see what their true attitude toward their principles is. Hypocrisy reveals that there can be a deep divide between what people say they are committed to and what they are actually committed to. Thus, hypocrisy shows us that the part of our lives structured by principles is actually quite fragile, depending as it does on our trust in what people say. We therefore have strong incentives to expose and condemn hypocrisy. As Graham’s Democratic challenger for his Senate seat recently tweeted, “Senator Graham, you have proven that your word is worthless.”

There is, I think, another point to be made about how hypocrisy undermines categorical principles. What hypocrisy reveals is that for at least certain people, categorical principles are a mere mask for the unvarnished pursuit of power, wealth, and self-aggrandizement. The trouble is that compared to such people, those who voluntarily restrain themselves in accordance with categorical principles are at a distinct disadvantage. This puts pressure on everyone to abandon their principles. Thus, hypocrisy tends to erode everyone’s commitment to categorical principles as such. And if we think that categorical principles are good on the whole — that they help solve certain coordination problems, for example — then this is a bad thing for everyone.

So, what Senate Republicans have revealed with their latest hypocrisy is that for them, politics is a game of power untempered by principles. But when Republicans throw their principles overboard when it is convenient for them to do so, this increases the incentives for everyone else to do the same. And that, I will wager, is worse for everyone in the long run.

The Ambiguous Perspective of HBO’s ‘Succession’

photograph of cast of Succession after Golden Globes

Succession, one of the most popular recent additions to HBO’s stable of prestige dramas, dominated the drama category at the 2020 Emmys. But despite critical acclaim, the show inspired complicated and even unpleasant emotions in viewers. Equal parts pleasure and disgust contribute to Succession’s allure, and if articles like “How embarrassed should you be about your ‘Succession’ crush?” are any indication, guilt is the price fans often pay for their investment.

The Roys are a treacherous and amoral clan of one-percenters dominated by aging patriarch Logan Roy, a media mogul who made his fortune disseminating right-wing propaganda through a FOX-esque news network. The central conflict of the show, as its title suggests, is who will inherit his sprawling media empire. The main contenders are Logan’s three children, recovering drug-addict Kendall, cunning political analyst Shiv, and wisecracking playboy Roman. Other possibilities include various cronies and extended family members, like Greg, an unpolished (and impoverished) Roy cousin who stumbles into the family’s orbit in search of a job.

The closer we get to the family, the more our discomfort grows. We’re drawn in by Kendall’s perpetual sadness and vulnerability, Roman’s dark sense of humor, and Shiv’s resentment at being passed over in favor of her brothers. We can’t help but identify with and even pity them, but our identification is constantly challenged by the wickedness of the Roy family. In the show’s first episode, Roman invites the young son of a staff member to participate in the family’s baseball game. When the boy seems reluctant, Roman writes out a check for one million dollars, offering it as a prize if the kid can hit a home run. Of course, he gets tagged out just inches away from home plate. Roman rips up the check with a flourish and offers the boy a fragment, or a “quarter of a million dollars,” as he puts it. In his review of the show, writer Jorge Cotte asks, “As viewers, do we separate our ethical concerns from the conniving and calloused amorality of the Roys’ business machinations? This is related to another question: is there something suspect in feeling for these fictional power brokers who are so similar to those causing actual harm and systemic violence in the world?” In other words, how can we identify with the child and the spoiled billionaire taunting him at the same time?

The show’s engagement with wealth and privilege offers no clear moral perch for the viewers to situate themselves upon. The show seems to set up bumbling and well-intentioned Greg as an alternative to the Roys, yet he is purposefully difficult to identify with. His scenes, though invariably funny, are excruciatingly awkward. He can never read a room, and always seems to take up too much space. But over the course of the series, he proves to be as mercenary and self-serving as his cousins, illustrating the impossibility of achieving affluence without dirtying one’s hands. In Succession, we are never allowed to rest too comfortably in one place. The audience is situated everywhere at once, ricocheted from viewpoint to viewpoint.

This discomfort is built into the very fabric of the show. The camera is usually handheld, and its gaze feels shaky and restless. When characters move from one location to another, we often see them from a distance, as if through the perspective of the paparazzi. In this way, Succession borrows much from Veep, another show filmed in a mockumentary style without in-fiction justification. In Veep, the handheld camera is used for comedic purposes. It allows for quick reaction shots and zooms, which provide extra flair to jokes. But in Succession, the effect is disorienting, even nauseating. While the mockumentary style usually suggests verisimilitude, here it suggests voyeurism and instability. There is a fundamental clash between how the Roys see themselves and how they are perceived by the world, or on another level, a clash between how they perceive themselves and how the audience perceives them. We learn that as children, Kendall frequently locked Roman in a dog cage and made him eat kibbles. Roman insists that this was sadistic torture, but Kendall insists that Roman enjoyed it too. Storytelling is central to this family, which made its fortune spinning yarns, but even the Roys can’t agree on their own narrative.

Critic Rachel Syme points out that “While Succession does not glorify wealth, it also makes no apologies for it. The Roys are not like you and me. They have SoHo lofts and trust funds and cashmere everything, and they own theme parks and movie studios and shady cruise lines . . . They have everything anyone could want, but they are all empty and lonesome, neglectful and neglected.” Syme describes the ambiguity at the heart of the show, an ambiguity that is mirrored in audience reactions. While we may cheer them on, we derive equal pleasure from watching them fail. As a character from an equally rich but far more old-money family tells Kendall in season two, “Watching you people melt down is the most deeply satisfying activity on the planet.” The amoral world of Succession allows for both disgust and identification, which is perhaps a more honest way of depicting the rich and famous than complete disavowal or complete worship.

Cardi B, Ben Shapiro, and the Pop Culture vs. High Culture Debate

black-and-white photograph of two white men sharing opera glasses in a theater box

Recently, there has been a clash of rival philosophies in the public sphere. Popular rapper Cardi B not too long ago dropped a controversial single. Her song, titled “WAP,” is incredibly raunchy, and its impropriety prompted the conservative commentator Ben Shapiro to denounce it as demeaning to women. Nonetheless, as we will see, Shapiro’s problem with this song goes far beyond its explicit lyrics. In fact, Shapiro’s criticism of “WAP” fits into a long history of members of dominant groups criticizing and dismissing instances of “low” culture in favor of “high” culture.

But before that, let’s treat the man charitably and evaluate his critique of the song. In a tongue-in-cheek tweet, he wrote that “it’s misogynistic to question whether graphic descriptions of ‘wet-ass p****’ is [sic] empowering for women.” It is interesting that he focuses on the “graphic” nature of the song. Of course, sex has been a part of pop music since the beginning. And it has always been controversial. The Beatles sang “Why Don’t We Do It In the Road?” and had their song “A Day in the Life” censored by the BBC. Ostensibly, this was for a drug reference. However, the song only contained a reference to cigarettes. John Lennon said he thought the phrase “I want to turn you on” had gotten them censored. But of course, these songs could not today be called “explicit” or “graphic.” So then, maybe Shapiro has a point. Maybe these songs are acceptable, but such graphic songs as “WAP” are not.

Alas, it is not so easy. Shapiro places rappers and The Beatles on the same level regarding “suckage.” How is this position consistent? Well, it seems Shapiro equally dismisses all pop music. We can see this from his tweets comparing rap negatively to Mozart and explicitly stating that he does not consider rap to be music at all. Combining this claim with his earlier one about The Beatles, we can conclude Shapiro doesn’t consider The Beatles to have produced music either. This is odd given that Rolling Stone has consistently ranked The Beatles at the top of its list of the “100 Greatest Artists.” To understand Shapiro, to see why he despises “WAP” so much, we must now consider how one could come to the conclusion that pop music isn’t music.

Much to Shapiro’s chagrin, I’m sure, Mozart is beloved but not as much as he used to be. The rise of popular or “pop” music, starting in the 50s with Elvis Presley and solidifying with The Beatles in the 60s, meant that the classical works of Mozart and Beethoven were no longer the standard of music. And these pop artists couldn’t have gotten a foothold without technological advancements like radio democratizing access to music. Before, you either had to go to a concert hall or play the music yourself. And you could really only do the former if you were well-to-do.

In fact, for a very long time there has existed a class distinction when it comes to music, and indeed to all of art. There is “high” art and there is “low” or “pop” art. These compose high and low culture respectively. People will argue a lot about what counts as “high” art, but we can come to a decent understanding through some uncontroversial examples: the epics of Homer, the poems of Catullus, the sculptures of Michelangelo, the plays of Shakespeare, and the symphonies of Beethoven. “Low” art (until recently) would not include any writing since only the upper class could write. Additionally, few commoners would be able to afford the marble, paints, and even just paper and ink that are requisite for much of high art. And who could write a symphony without ever having seen more than a few instruments in one place at a time? Again, until recently, all but the upper class had to spend a great deal of time laboring.

Which is better, high culture or low? And is the difference really so substantial as Shapiro makes it out to be? Aesthetics is the study of beauty. When talking about whether one piece or set of art is better than another, we are usually judging them by the standard of beauty. So what makes something beautiful? One camp says that beauty is completely subjective. An old Latin aphorism expresses this: “de gustibus non est disputandum” (“there cannot be arguments about taste”). If this were true, the distinction between high and low culture would be pointless. But of course people do argue about taste. Who has not gotten into an argument about whether this or that song, this or that movie, is superior to another?

One way of settling these arguments is by appealing to authority. Let the movie critics at Rotten Tomatoes decide whether the movie was truly good. But those who study literature, sculpture, music, and art will usually judge the classics of high culture as superior to those of pop culture. Movie critics, as everyone knows, love art films more than summer blockbusters. The tastes of critics and the tastes of the public don’t always match. How do we justify ourselves in these cases? And how do the experts themselves decide?

There may be some ways to define beauty or “goodness” more clearly, if not completely rigorously. Good pieces of art are usually complex. They are often difficult to make. They frequently express a message. And of course, most subjectively, good art gives us pleasure, or at least an emotional reaction of some sort. Of course, all of these rules, except possibly the last one, have exceptions. John Cage’s composition 4′33″, which is just silence, isn’t complex. Maurizio Cattelan’s artwork “Comedian” isn’t difficult to make: it’s a banana duct-taped to a wall. Alas, it’s not as easy to ascend Plato’s ladder as we had hoped.

The main argument in favor of popular culture and art is that it’s far more pleasurable for more people. Most of us remember the classics of high culture as the books/plays/art/songs we were assigned in boring classes in high school. The argument is easy to make: If Mozart is so good, why don’t more people choose to listen to him?

Now is a good time to consider the other thing that makes art “high” rather than “low.” High art isn’t just good. And not all good art is high art. High art is partially defined by its exclusiveness. How few artistic works of women or people of color are counted as high culture? How many works not produced in Europe? “Rap music isn’t music” was not an uncommon position twenty years ago, and even though rap continues to grow in popularity, a rap album hasn’t won the Grammy for Album of the Year in over 15 years. Until rap, via white male rapper Eminem, got popular with white men instead of Black people, it was simply not accepted. In the same way, until The Beatles got popular with white, adult men instead of just teenage girls, pop music too was considered not to be a legitimate art form.

Cardi B, from the perspective of a high culture aesthete and according to the prejudices of our society, represents the lowest of the low. She is a woman and a person of color. She’s bisexual and a former sex worker. Regardless of whether her music is good or not by any measure we’ve discussed, it would never be counted as high culture and so is dismissed by some as worthless.

Obviously there is a great deal of value in the art which composes high culture. No one would seriously argue that the “Ode to Joy” hurts their ears or that Shakespeare was a hack. And really, opinions like Shapiro’s, which dismiss popular music and art as worthless, are vestigial; few hold them, and those who do are old. Nonetheless, it is common for our biases regarding the origins of art to sway what would otherwise be legitimate discussions about beauty. Black teens making graffiti are a menace. But when Banksy does it, it’s okay and even counted as high culture. “WAP” may be a terrible work of art. That’s debatable. But the suggestion that it, or any other instance of popular art, isn’t art at all is not debatable. Any such suggestion is an attempt at exclusion, an attempt to prop up the slowly dying concept of high culture.

The Ethics of Escapism (Pt. 3): Searching for the Personal when Everything Is Political

image of LeBron James with "More than an Athlete" slogan

It is beyond understatement to say that anyone could be feeling overwhelmed right now. For over four months, there have been daily protests against the brazenly public murder of Black people by police officers. If the police violence and terrorizing weren’t enough, the often hateful and willfully ignorant responses to calls for change to the system are emotionally draining, constant throughout the year, and can come from surprising places within anyone’s personal circle.

We can add to this ongoing discord the divisive attitudes concerning the pandemic and a rapidly approaching national election — fewer than 40 days away! — where the stakes are presented as the highest in history. The sheer amount of noteworthy news flooding in every day makes it difficult to balance the need to act and stay informed against restorative personal commitments that are needed to reproduce this labor on a daily basis.

In recent days and weeks, there have been loud calls for a sharper dividing line between what can or should be “political” and what shouldn’t be. There are loud public complaints that the arenas that once appeared to allow for this sort of personal restoration now dredge up the very issues that many seek to escape.

At the NFL season opener, fans in Missouri booed the Kansas City Chiefs and Houston Texans when they stood arm-in-arm on the field in support of racial unity. Angry responses to demonstrations calling attention to the unjust treatment of Black people in the US and in sports have a long history, and the response to the unity demonstration is reminiscent of the hateful response to Colin Kaepernick’s protest during the national anthem four years ago.

Also this year, Naomi Osaka won the US Open’s singles tournament while wearing masks with the names of Breonna Taylor, Elijah McClain, Ahmaud Arbery, Trayvon Martin, George Floyd, Philando Castile and Tamir Rice, all victims of racial injustice. British Formula One driver Lewis Hamilton wore a shirt calling for the arrest of Breonna Taylor’s murderers.

The world of sports is not the only arena where politics is “encroaching.” Media such as movies and TV are including more and more topical content and references to current issues. For instance, Marvel series have made more and more direct references to the realm of government and the dynamics played out in our administration. Zac Stanton at Politico claims that escapist content is impossible at this point and recommends “leaning in.” In 2018 at the New Republic, Jeet Heer suggested that avoiding the content of the news would require the drastic measure of avoiding media completely.

The desire to cling to apolitical media and discourse is nothing new, and complaints about political encroachment similarly did not begin with the exhaustion of 2020. But it is difficult to develop an understanding of “political” that can cleanly divide entertainment that has a political message from that which doesn’t.

One sense of the “political” is the realm in which you can disagree with friends but remain friends; perhaps in this sense the political denotes the particulars of policies and government, while the fundamentals of friendship can be preserved. For instance, in this sense, the “political” could address specifics of how the economy should work: should we have a progressive or flat tax? What should the brackets be? How should our tax dollars be allocated to different programs? Many can imagine disagreeing with friends over these questions, especially given how much we can understand where the values of our friends originate and which issues resonate deeply and differently with each of our loved ones.

Another sense of the political is more robust. “Political” can delineate the role that power has in our society, and the importance of relationships of power being put to use appropriately. The government wields a huge amount of power in a variety of ways: legislating support for citizens in less fortunate circumstances, articulating parameters of punishment for its citizens, and setting the standards for individuals to dwell within its boundaries and become citizens. The government further protects and ensures that people are respected and treated with the dignity that morality demands in domains that the public deems its jurisdiction.

When “political” denotes the power dynamics and moral reality of persons in our society, we could understand this domain as one regarding justice. On this understanding, the relevant topics would go beyond the policies that are invoked in the more minimalist sense above – where the political represents the sphere in which two people could agree on “the fundamentals” of morality, yet disagree about politics. Here, the topics and issues of politics or justice include how people are rightfully treated, how government plays a role in how we should relate to each other in a society, who should be granted rights, how punishments should be meted out, etc.

There is media and entertainment whose content explicitly addresses the issues that are uncontroversially “political.” News is the clearest example, but films and TV shows that include political figures, revolutions, and topics related to our current situations of racist violence, corrupt leaders, and widespread illness also might qualify as “political” media at this time.

However, it is not just the explicit content of the entertainment that we consume that qualifies as political. Our interactions with one another every day reflect a particular power dynamic and moral reality. When the media we consume encourages the dehumanization of marginalized groups of people in our society, it buttresses our current power structures and propagates the unjust relationships in our society, where not everyone is equal and not everyone gets to be respected, safe, and viewed as worthy of the same rights.

The jokes in our movies and TV series premised on the idea that fat people are lazy, gorge themselves on food, or are punishments to be endured in romantic pursuits at bars; the representation of trans people as individuals “dressing up” as a different gender, as tricking people into dating them, or as living completely outside of mainstream society; the overrepresentation of straight white people as the default, with non-white and queer people cast as struggling, victims, or uneducated: all of these are matters of politics and justice, and their continued use helps to prop up an unjust society. Media is saturated with depictions of the moral relationship between the various members of our society, and this composite picture creates and reflects the reality we see.

When people boo at a show of solidarity and inclusion at a sporting event, they are mistakenly categorizing “sports” as entertainment free from the political domain. Even setting aside the billions in subsidies the NFL receives from taxpayers, the national anthem sung at every game and the fact that politicians gain political points by throwing first pitches and are seen as more “American” for luxuriating in their overpriced viewing seats suggest that sports are political. Players stood for racial unity, faced booing from the fans, and then performed for an unsupportive and aggressive audience in a sport that is notorious for putting its players’ health and safety at risk, which is a matter of justice.

This group of professionals faces a history of racism in their organization, works for team owners who are almost exclusively white (and many of whom have deep ties to Trump), and lacks support from their privileged white teammates during fights for racial justice. Further, 70% of NFL fans are white, and the racist attacks on shows of support for racial equality, as well as the brazen displays of disrespect for athletes, reveal deeper issues in the fandom of the NFL.

On neither understanding of “political” can sports fans genuinely claim political encroachment.

On the minimalist understanding of “political,” where we are restricted from considering questions of human rights and respect, the recognition of racial violence and bigotry falls outside the scope, and there are no grounds for complaint on the basis of political encroachment. Anything that could be considered “political” on this account has always been there; friends simply don’t talk about it.

If, however, we have the more robust understanding of “political,” then all media and entertainment are matters of justice and a question of our obligations regarding the rights and dignity of ourselves and others. The oppression of and violence toward members of our society is relevant to justice and politics, but it is not confined to particular arenas, or to content explicitly about the news. Politics in this sense is a part of all domains of life, and friendships, entertainment, etc. are part of politics. Relationships with those close to us would not be very healthy or successful if they included deep disagreements over who was worthy of rights and dignity. Entertainment is politically laden and potentially unjust when it exploits the labor of marginalized groups, as many sports do. Similarly, when pernicious stereotypes saturate visual media and reinforce dehumanization and bigotry, this is a political issue when politics is understood as justice.

Whether we understand politics in the minimal sense or as the domain of justice, there is no clear boundary cordoning it off from the various aspects of our lives. Entertainment, as tied up in our experience of the human condition, has always been (and will always be) “political.”

 

Part I – “The Ethics of Escapism” by Marko Mavrovich

Part II – “Two Kinds of Escape” by A.G. Holdier

The Ethics of Escapism (Pt. 2): Two Kinds of Escape

photograph of business man with his head buried in the sand

Shortly before Labor Day this year, polling data of the American workforce indicated that a majority (58%) of employees are experiencing some form of burnout. Not only was this an increase from the early days of the pandemic (when the number was around 45%), but over a third of respondents directly referenced COVID-19 as a cause of their increased stress. Reports on everyone from “essential” workers, to parents, to healthcare professionals and more indicate that the effects of the coronavirus are not merely limited to physical symptoms. Ironically, while the steps taken to limit COVID-19’s physical reach have been largely effective (when properly practiced), those same steps (in particular, self-imposed isolation) may be simultaneously contributing to a burgeoning mental health crisis, particularly in light of additional social pressures like widespread financial ruin, state-sanctioned racial injustices, and a vitriolic election season.

Indeed, 2020 has not been an easy year.

Nearly a century ago, J.R.R. Tolkien — creator of Gandalf, Bilbo Baggins, and the whole of Middle-Earth — explained how fantasy stories like The Lord of the Rings not only offer an “outrageous” form of “Escape” from the difficulties people encounter in their lives, but that this Escape can be “as a rule very practical, and may even be heroic.” In his essay On Fairy Stories, Tolkien asks, “Why should a man be scorned if, finding himself in prison, he tries to get out and go home? Or if, when he cannot do so, he thinks and talks about other topics than jailers and prison-walls?” It is true that Escape from reality can sometimes be irresponsible and even immoral (for more on this, see Marko Mavrovic’s recent article), but Tolkien reminds us to avoid confusing “the Escape of the Prisoner with the Flight of the Deserter” — the problems of the latter need not apply to the former.

There are at least two ways we can distinguish between Tolkien’s two kinds of Escape: epistemically (rooted in what someone seeks to escape) and morally (concerning one’s motivations for escaping anything at all). Consider how a person might respond to the NFL’s decision to highlight a message of social justice during its games this season: if they are displeased with such displays because, as Salena Zito explains, they “are tired of politics infecting everything they do” and “just want to enjoy a game without being lectured,” then we might describe their escape as a matter of escaping from information, perspectives, and conversations that others take to be salient. Depending on how commonly someone engages in such a practice, this could encourage the crystallization of their own biases into an “epistemic bubble” where they end up never (or only quite rarely) hearing from someone who doesn’t share their opinions. Not only can this prevent people from learning about the world, but the “excessive self-confidence” that epistemic bubbles engender can lead to a prideful ignorance about reality that threatens a person’s epistemic standing on all sorts of issues.

If, however, someone instead wants to avoid “being lectured at” while watching a football game because they wish to escape from the moral imperatives embedded within the critiques of the lecture (or, more accurately, the slogan, symbol, chant, or the like), then this is not simply an epistemic escape from information, but an escape from moral inquiry and confrontation. Failing to care about a potential moral wrong (and seeking to avoid thinking about it) is, in itself, an additional moral wrong (just imagine your response to someone ignoring their neighbor trapped in a house fire because they “just wanted to watch football”). In its worst forms, this is an escape from the responsibility of caring for the experiences, needs, and rights of others, regardless of how inconvenient it might be to care about such things (in the middle of a football game or elsewhere). Nic Bommarito has argued that being a virtuous person simply is a matter of caring about moral goods in a manner that manifests such caring by instantiating it in particular ways; much like the priest and the Levite who passed by the injured traveler in the parable of the Good Samaritan, escaping from reminders that we should care about others cannot be morally justified simply by selfish desires for entertainment.

Both of these are examples of Tolkien’s Flight of the Deserter: someone who has a responsibility to learn about, participate in, and defend the members of their society is choosing to escape — both epistemically and morally — from reminders of the duties incumbent upon their roles as social agents. But this is different from the Escape of the Prisoner who simply desires a temporary reprieve to unwind after a stressful day. In the absence of immediately pressing issues (like, say, your neighbor trapped in a house fire), it seems perfectly acceptable to take some time to relax, de-stress, and recharge your emotional reserves. Indeed, this seems like the essence of “self-care.”

For example, the early weeks of the first anti-pandemic lockdowns happened to coincide with the release of Animal Crossing: New Horizons, a Nintendo game where players calmly build and tend a small island filled with cartoon animals. For a variety of reasons, quarantined players latched on to the peaceful video game, finding in it a cathartic opportunity to simply relax and relieve the stress mounting from the outside world; months later, the popularity (and profitability) of Animal Crossing has yet to wane. You can imagine the surprise, then, when this gamified Escape of the Prisoner was invaded by Joe Biden’s presidential campaign, who elected to offer virtual signs to players wanting to adorn their island in support of the Democratic candidate for president. Although it would seem an exaggeration to call this a “lecture,” insofar as someone complains about “just wanting to play a game” without being confronted with political ads, there seems to be nothing morally wrong with criticizing (or electing to avoid) Biden’s campaign tactic — probably because there is no inherent obligation to care about a politician’s attempt to get elected (in the same way that there is a duty to care for fellow creatures in need).

So, when thinking about the ethics of escape, it is important to distinguish what kind of escape we mean. Attempts to escape from our proper moral obligations (a Flight of the Deserter) will often amount to ignorant or shameful abdications of our moral responsibilities to care for each other. On the other hand, attempts to (temporarily) escape from the often-difficult burdens we bear, both by doing our duties in public society and simply by quarantining ourselves at home, will amount to taking care of the needs of our own finitude — Tolkien’s Escape of the Prisoner.

In short, just as we should care about others, we should also care for ourselves.

 

Part III – “Searching for the Personal when Everything Is Political” by Meredith McFadden

Part I – “The Ethics of Escapism” by Marko Mavrovich

The Ethics of Escapism (Pt. 1)

photograph of Green Bay Packers stadium lightly populated before game

The NFL will imprint “End Racism” and “It takes all of us” in the end zones of stadiums in lieu of team logos. NFL Commissioner Roger Goodell stated that “the NFL stands with the Black community, players, clubs and fans confronting systemic racism,” a commendable sentiment. The NFL will also allow coaches and officials to wear patches embroidered with “social justice phrases or names of victims of systemic racism.” Many coaches have signalled their support for the league’s latest policies and its general shift towards allowing sociopolitical issues into the game. Adam Gase and the entire coaching staff of the New York Jets will wear “Black Lives Matter” throughout the 2020 season. Mike Tomlin, head coach of the Pittsburgh Steelers, used this summer’s protests to speak with his team about social justice while also calling out the league’s lack of diversity. Even Bill Belichick, who is famously averse to “off-field distractions,” has welcomed these social justice conversations into the locker room. The NFL is but one example of the invasion of sociopolitical movements into entertainment.

The politically blank pages of one’s life are disappearing. No longer can one turn on late-night television to escape politics. No longer can one watch post-game interviews without being reminded of the social discord. No longer are Academy Award acceptance speeches devoid of political advocacy. These spaces are politicized. While late-night talk show hosts, athletes, and actors have undoubtedly revealed their politics and championed social values in the past, such revelations were notable because they were atypical. Now, they are the norm. As comedy writer Blayr Austin notes, “There’s never a moment reprieve from the chaos of news.”

Perhaps this transformation of those previously unscathed spaces of entertainment ought to be celebrated. There are at least three reasons to think so. Firstly, the entertainers — athletes, musicians, actors, and business personalities — are members of society, too. It may be unfair to expect them to silo their personal political preferences from their public work so that some of their fans can enjoy an escape from the so-called “chaos of the news.” But while entertainers are not entirely removed from society’s ills, they enjoy a comfortable arm’s-length distance from them and from the everyday reality of average civilians. Secondly, entertainers may be necessary catalysts for the desired change. But of course, the flipside is also true: entertainers may be necessary catalysts for undesired change (see my piece, “Novak Djokovic and the Expectations for Celebrity Morality”). Thirdly, some may argue that their silence will only serve to perpetuate the social ills debated today — systemic racism and racial injustice come to mind. But using this argument as a reason to compel entertainers to be political would obligate entertainers to “speak out” and raise awareness on a whole host of ills without any sense of how the ills should be prioritized, while also assuming that the classification of some social developments as “ills” is always beyond doubt. Ending racism and sexism are goals we might all agree on, but are inclusion riders equally unobjectionable? This third reason is also problematic because it conflates permissive conditions with obligatory participation. While silence on a social ill may permit the ill to continue unabated by criticism, silence is not an instrument of enacting that ill. In other words, silence is not the act that propagates the ill — even if it permits the ill to occur — which is an important distinction to understand when discerning proper responses to social developments.

But what about the fans who may find this blending of social justice and politics with entertainment suffocating (even if they are sympathetic to the cause)? Is it wrong for them to feel that way? Is it wrong for a fan to seek a 3-hour reprieve from the omnipresent sociopolitical tumult by watching football? There are at least three reasons to think so.

Firstly, the ability to escape from the omnipresent sociopolitical tumult is not a luxury that every individual enjoys, especially if that tumult intimately affects an individual; therefore, the escapist fan should not want such an escape or, at least, the vehicle of entertainment (e.g. the NFL) should not address his or her escapist desire (e.g. not be reminded of the unjustified police killings and destructive demonstrations while watching the game). Yet the principle that underlies this reason would imply behavioral and attitudinal changes that others would find ridiculous. I should not complain to my landlord about the malfunctioning A/C in my apartment because, after all, some people suffer in far hotter climates without it. I should not want to go to the gym to clear my head because some people do not have access to gyms or other means of clearing their head. These examples are absurd, but so, too, is the argument.

Secondly, the fan should not be critical of the blending of social justice and politics with entertainment because he or she is not obligated to watch. Social media plays host to some version of the following exchange: “I don’t like how political football/the Tonight Show/the Oscars has gotten!” which is invariably followed by the retort: “Well, you don’t have to watch!” But of course, such a statement does not resolve the conflict. Suppose the conflicted spectator only watches professional football and finds pleasure in the 21 weekends of games from September to February. To tell him or her “You don’t have to watch!” is akin to saying “You don’t have to participate in your favorite hobby/interest/game/mode of necessary relaxation!”

Lastly, the fan should be held to the same standard as the entertainer: to be silent — to escape — only serves to perpetuate the existing ills that have fomented the never-ending barrage of news and the social fractures; therefore, it is wrong to seek the reprieve. Do not sports, late-night talk shows, and award shows celebrating cinematic achievement pale in comparison to the problems yet to be resolved in our community? Maybe so. But does this mean that one is wrong to seek a political-free zone of entertainment? If the answer is yes, then it seems we must always be on watch, always be advocating, always be consuming the news, always be active in resolving all of society’s ills, and always be denying ourselves an escape — however short, however trivial — from our contentious, divisive, and oft-disappointing reality.

 

Part II – “Two Kinds of Escape” by A.G. Holdier

Part III – “Searching for the Personal when Everything Is Political” by Meredith McFadden

The Continued Saga of Education During COVID-19

photograph of empty elementary school classroom filled with books and bags

In early August, Davis County School District, just north of Salt Lake City, Utah, announced its intention to open K-12 schools face-to-face. All of the students who did not opt for an online alternative would be present. There would be no mandatory social distancing because the schools simply aren’t large enough to allow for it. Masks would be encouraged but not required. There was significant pushback to this decision. Shortly thereafter the district announced a new hybrid model. On this model, students are divided into two groups. Each group attends school two days a week on alternating days. Fridays are reserved for virtual education for everyone so that the school can be cleaned deeply. In response to spiking cases, Governor Herbert also issued a mask mandate for all government buildings, including schools. Parents and students were told that the decision would remain in place until the end of the calendar year.

On Tuesday, September 15th, the school board held a meeting that many of the parents in the district did not know was taking place. At this meeting, in response to the demands of a group of parents insisting upon returning to a four or even five-day school week for all students, the board unanimously voted to change direction mid-stream and switch to a four-day-a-week, all-students-present model. Many of these same parents were also arguing in favor of lifting the mask mandate in the schools, but the school board has no power to make that change.

Those advocating for a return to full-time, in-person school are not all making the same arguments. Some people are single parents trying to balance work and educating their children. In other households more than one adult might be present, but they might all need to be employed in order to pay the bills. In still other families, education is not very highly valued. There are abusive and neglectful homes where parents simply aren’t willing to put in the work to make sure that their children are keeping up in school. Finally, for some students, in-person school is just more effective; some students learn better in face-to-face environments.

These aren’t the only positions that people on this side of the debate have expressed. For political, social, and cultural reasons, many people haven’t taken the virus seriously from the very beginning. These people claim that COVID-19 is a hoax or a conspiracy, that the risks of the virus have been exaggerated, and that the lives of the people who might die as a result of contracting it don’t matter much because they are either old or have pre-existing conditions and, as a result, they “would have died soon anyway.”

Still others are sick of being around their children all day and are ready to get some time to themselves back. They want the district’s teachers to provide childcare and they believe they are entitled to it because they pay property taxes. They want things to go back to normal and they think if we behave as if the virus doesn’t exist, everything will be fine and eventually it will just disappear. Most people probably won’t get it anyway or, if they do, they probably won’t have serious symptoms.

Parents and community members in favor of continuing the hybrid model fought back. First and foremost, they argued that the hybrid model makes the most sense for public health. The day after the school board voted to return to full-time in-person learning, the case numbers in Utah spiked dramatically. Utah saw its first two days of numbers exceeding 1,000 new cases. It is clear that spread is happening at the schools. Sports are being cancelled, and students are contracting the virus, spreading the virus, and being asked to quarantine because they have been exposed to the virus at a significant number of schools in the district.

Those in favor of the hybrid model argue that it is a safe alternative that provides a social life and educational resources to all students. On this model, all students have days when they get to see their friends and get to work with their teachers. If the switch to a four-day-a-week schedule without social distancing measures in place happens, the only students who will have access to friends and teachers in person are those from families who aren’t taking the virus seriously and aren’t concerned about the risks of spreading it to teachers, staff, and the community at large. It presents particular hardship for at-risk students who might have to choose the online option not only for moral reasons, but also so they don’t risk putting their own lives in jeopardy. Those making these arguments emphasize that the face-to-face model simply isn’t fair.

Advocates of this side of the debate also point out that we know that this virus is affecting people of color at a more significant rate, and the evidence is not yet in on why this is the case. The children who are dying of COVID-19 are disproportionately Black and Hispanic. The face-to-face option has the potential to disproportionately impact students of color. If they attend school, they are both more likely than their white classmates to get sick and more likely to die. Many of these students live in multi-generational homes. Even if the students don’t suffer severe symptoms, opening up the schools beyond the restrictions put in place by the hybrid model exposes minority populations to a greater degree of risk.

Slightly less pressing, but still very important, considerations on this side of the debate have to do with changing directions so abruptly in the middle of the term. The school board points out that students that don’t want to take the risk of attending school four days a week can always just take part in the online option, Davis Connect. There are a number of problems with this. First, Davis Connect isn’t simply an extension of the school that any given child attends; it is an independent program. This means that if students and their families don’t think it is safe to return to a face-to-face schedule, they lose all their teachers and all of the progress that they have made in the initial weeks of the semester. Further, the online option offers mostly core classes. High school students who chose the online option would have to abandon their electives — classes that in many cases they have come to enjoy in the initial weeks of the semester. Some students are taking advanced placement or dual-enrollment courses that count for college credit. These students would be forced to give up that credit if they choose the online option. The result is a situation in which families may feel strongly coerced to allow their children to attend school in what they take to be unsafe conditions and in a way that is not consistent with their moral values as responsible members of the community.

Those on this side of the argument also point out that community discussions about “re-opening the schools” tend to paint all students with the same brush. The evidence does not support doing so. There is much that we still don’t know about transmission and spread among young children. We do know that risk increases with age, and that children and young adults ages 15-24 constitute a demographic that is increasingly contracting and spreading the virus. What’s more, students at this age are often willful and defiant. With strict social distancing measures in place and fewer students at the school, it is more difficult for the immature decision-making skills of teenagers to cause serious public health problems. It is also important to take into account the mental health of teenagers. Those on the other side of the debate claim that the mental health of children this age should point us in the direction of holding school every day. In response, supporters of the hybrid model argue that there is no reason to think that a teenager’s mental health depends on being in school four days rather than two. Surely two days are better than none.

Everyone involved in the discussion has heard the argument that the numbers in Davis County aren’t as bad as they are elsewhere in the state. In some places in the area, schools have shut down. In a different district not far away, Charri Jenson, a teacher at Corner Canyon High, is in the ICU as a result of spread at her school. The fact that Davis County numbers are, for now, lower than the rates at those schools is used to justify lifting restrictions. There are several responses to this argument. First, it fails to take into consideration the causal role that the precautions are playing in the lower number of cases. It may well be true that numbers in Davis County are lower (but not, all things considered, low) because of the precautions the district is currently taking. Other schools that encountered significant problems switched to the hybrid model, which provides evidence of its perceived efficacy. Second, the virus doesn’t know about county boundaries, and sadly, people in the state are moving about and socializing as if there is no pandemic. The virus moves, and the expectation that it will move to Davis County to a greater degree is reasonable. You don’t respond to a killer outside the house by saying “He hasn’t made his way inside yet, time to unlock the door!”

To be sure, some schools have opened up completely and have seen few to no cases. This is a matter of both practical and moral luck. It is a matter of practical luck that no one has fallen seriously ill and that no one from those schools has had to experience the anguish of a loved one dying alone. It is a matter of moral luck because those school districts, in full possession of knowledge of the dangers, charged forward anyway. They aren’t any less culpable for deaths and health problems — they made the same decisions that school districts that caused deaths made.

A final lesson from this whole debate is that school boards have much more power than we may be ordinarily inclined to think. There are seven people on this school board and they have the power to change things dramatically for an entire community of people and for communities that might be affected by the actions of Davis County residents. This is true of all school boards. This recognition should cause us to be diligent as voters. We should vote in even the smallest local elections. It matters.

Waiting for a Coronavirus Vaccine? Watch Out for Self-Deception

photograph of happy smiley face on yellow sticky note surrounded by sad unhappy blue faces

In the last few months, as it has become clear that the coronavirus won’t be disappearing anytime soon, there has been a lot of talk about vaccines. The U.S. has already started several trials, and both Canada and Europe have followed suit. The lack of a vaccine has made even more evident how challenging it is to coexist with the current pandemic. Aside from the more extreme consequences that involve hospitalizations, families and couples have been separated for dramatic amounts of time, and some visas have been halted. Unemployment rates have hit record numbers, with what is predicted to be a slow recovery. Restaurants, for example, have recently reopened, yet it is unclear what their future will be once the patio season comes to an end. With this in mind, many (myself included) are hoping that a vaccine will come, the sooner the better.

But strong interest in a vaccine raises the worry of how this hope influences what we believe and, in particular, how we examine evidence that doesn’t fit our hopes. The worry is that one might indulge in self-deception. What do I mean by this? Let me give you an example that clarifies what I have in mind.

Last week, I was talking to my best friend, who is a doctor and, as such, typically defers to experts. When my partner and I told my friend of our intention to get married, she reacted enthusiastically. Unfortunately, the happiness of the moment was interrupted by the realization that, due to the current coronavirus pandemic, the wedding would need to take place after the distribution of a vaccine. Since then, my friend has repeatedly assured me that there will be a vaccine as early as October on the grounds that Donald Trump has guaranteed it will be the case. When I relayed to her information coming from Dr. Anthony Fauci, who instead believes the vaccine will be available only in 2021, my friend embarked on effortful mental gymnastics to justify (or better: rationalize) why Trump was actually right.

There is an expression commonly used in Italian called “mirror climbing.” Climbing a mirror is an obviously effortful activity, and it is also bound to fail because the mirror’s slippery surface makes it easy to fall from. Italians use the expression metaphorically to denote the struggle of someone attempting to justify a proposition that by their own lights is not justifiable. My friend was certainly guilty of some mirror climbing, and she is a clear example of someone who, driven by the strong desire to see her best friend getting married, deceives herself into believing that the vaccine will be available in October. This is in fact how self-deception works. People don’t simply believe what they want, for that is psychologically impossible. You couldn’t possibly make yourself believe that the moon was made of cheese, even if you wanted to. Beliefs are just not directly controllable like actions. Rather, it is our wishes, desires, and interests that influence the way we come to believe what we want by shaping how we gather and interpret evidence. We might, for example, give more importance to reading news that aligns with our hopes and scroll past news titles that question what we would like to be true. We might give weight to a teaspoon of evidence coming from a source we wouldn’t normally trust, or lend credibility to evidence that we know is not relevant.

You might ask, though, how is my friend's behavior different from that of someone who is simply wrong rather than self-deceived? Holding a belief that turns out to be false usually happens by mistake, and as a result, when people correct us, we don't have problems revising that belief. Self-deception, instead, doesn't happen out of mere error; it is driven by a precise motivation — desires, hopes, fears, worries, and so on — which biases the way we collect and interpret evidence in favor of that belief. Consider my friend again. She is a doctor, and as such she always trusts experts. Now, regardless of political views, Trump, unlike Dr. Fauci, is not an expert in medicine. Normally, my friend knows better than to trust someone who is not an expert, yet the one instance where she doesn't is one where a real interest is at stake. This isn't a coincidence; the belief that there will be a vaccine in October is fueled by a precise hope. This is a problem because our beliefs should be guided by evidence, not wishes. Beliefs, so to speak, are not designed to make us feel better (unlike desires, for example). They are supposed to match reality, and as such to be a tool that we use to navigate our environment. Deceiving ourselves that something is the case when it's not inevitably leads to disappointment, because reality has a way of intruding on our hopes and catching up with us.

Given this, what can we do to avoid falling into the grip of self-deception? Be vigilant. We are often aware of our wishes and hopes (just as you are probably aware now that you're hoping a vaccine will be released soon). Once we are aware of our motivational states, we should slow down our thinking and be extra careful when considering evidence that favors what we hope is true. This is the first step in protecting ourselves from self-deception.

Anti-Maskers and the Dangers of Collective Endorsement

photograph of group of hands raised

Tensions surrounding the coronavirus pandemic continue to run high, especially in parts of America in which discussions over measures to control the spread of the virus have become something of a political issue. Recently, some of these tensions erupted in the form of protests by "anti-maskers": in Florida, for example, a group of such individuals marched through a Target, telling people to take off their masks and playing the song "We're Not Gonna Take It." Presumably the "it" that they were no longer interested in taking was what they perceived to be a violation of personal liberties, as they felt they were being forced to wear a mask against their will. While evidence regarding the effectiveness of masks at keeping oneself and others safe continues to grow, there nevertheless remains a vocal minority that believes otherwise.

A lot of thought has been put into the problem of why people continually ignore good scientific evidence, especially when the consequences of doing so are potentially dire. There is almost certainly no single, easy answer to the problem. However, there is one potential reason that I think is worth focusing on, namely that anti-maskers, like many others who reject the best available scientific evidence on a number of issues, tend to trust sources that they find on social media rather than more reputable outlets. For instance, one investigation of why anti-maskers hold their beliefs pointed to the effects of Facebook groups in which such beliefs are discussed and shared. Indeed, despite the platform's efforts to contain the spread of such misinformation, anti-masker Facebook groups remain easy to find.

However, the question remains: why would anyone believe a group of random Facebook users over scientific experts? The answer to this is no doubt multifaceted as well. But one reason may come down to a matter of trust, and the fact that the way we determine who is trustworthy works differently online than it does in other contexts.

As frequent internet users will already know, it can often be difficult to identify trustworthy sources of information online. One reason is that the internet offers varying degrees of anonymity: the consequence is that one will potentially not have much information about the person one is talking with, especially given the possibility that people can fabricate aspects of their identities in online environments. Furthermore, interacting with others through text boxes on a computer screen is a very different kind of interaction than one that occurs face-to-face. For instance, researchers have shown that there are different "communication cues" that we pick up on when interacting with each other, including verbal cues like tone of voice, volume of speech, and the rate at which one speaks, and visual cues like facial expressions and body language. These kinds of cues are important when we make judgments about whether we should believe what the other person is saying, and they are largely absent in a lot of online communication.

With less information about each other to go on when interacting online, we tend to look to other sources of information when determining whom to trust. One thing internet users tend to appeal to is endorsement. For instance, when reading things on social media or message board sites, we tend to put more trust in posts that have the most hearts, or likes, or upvotes, etc. This is perhaps most apparent when you're trying to decide what product to buy: we tend to gravitate toward those with not only the highest ratings but also the most ratings (something with one 5-star review doesn't mean much, but a product with hundreds of high reviews means a lot more). The same can be the case when it comes to determining which information to believe: if your post has thousands of endorsements then I'm probably going to at least give it a look, whereas if it has very few, I'll probably pass it by.

There is good reason to trust information that is highly endorsed. As noted above, it can be hard to determine whom to trust online because it's not clear whether someone really is who they say they are. It's easy for me to join a Facebook group and tell everyone that I'm an epidemiologist, for example, and without access to any more information about me you've got little other than my word to go on. Something that's much harder to fake, though, is a whole bunch of likes, or hearts, or upvotes. So the thought is that if enough other people endorse something, that's good reason to trust it. Here, then, is one reason why people getting their information from social media might trust that information more than what comes from the experts: it is highly endorsed by many other members of their group.

At the same time, people might be more willing to believe those with whom they interact online in virtue of the fact that they are interacting with them. For instance, when a scientific body like the CDC tells you that you should be wearing a mask, information is traveling in only one direction. When participating in groups online, though, it can be much easier to trust those with whom you are interacting, rather than merely deferring to them. Again, this is one of the problems raised by online communication: while there is lots of good information available, it can be easier to trust those with whom one can engage, as opposed to those from whom one simply takes orders.

Again, because the problem is complex and multifaceted, there will not be a one-size-fits-all solution. That said, it is worthwhile to think about how those with the good information might establish relationships of trust with those who need it, given the unique qualities of online environments.

Is the U.S. Becoming Less Democratic?

photograph of worn USA flag on pole with clouds behind

What does it mean to be a democracy, and is the United States becoming less democratic? With November rapidly approaching, the election has been marred by accusations of voter suppression, worries about Russian interference, claims that the entire election is rigged, and concern that this will be the most litigious election ever. Given this state of affairs, it seems like the democratic process is being undermined. However, the process of voting and democracy are not the same thing; the former is an instrument for enabling the latter. Does the problem go beyond one election?

American philosopher John Dewey understood democracy as a much broader phenomenon. While elections and the machinery of democracy matter, and while the vote of a majority is important, it is more important to consider how the will of a majority is formed or how the public can manifest the desires and preferences that matter to it. As he notes in Democracy and Education, “A democracy is more than a form of government; it is primarily a mode of associated living, of conjoint communicated experience” that when fully realized affects all modes of human association. In The Public and Its Problems, he explains, “From the standpoint of the individual, it consists in having a responsible share according to capacity in forming and directing the activities of the groups to which one belongs…From the standpoint of the groups, it demands liberation of the potentialities of members of a group in harmony with the interest and goods which are common.”

Essentially, democracy allows individuals to provide input on the direction of the group while the group ensures that each individual within it can realize their potential in keeping with common interests. It is a method for ensuring that conflicts within a society can be resolved in ways that promote growth and development: "it is the idea of community life itself." Since these kinds of social interactions go beyond the scope of government, it stands to reason that democracy itself has a larger scope than how a government is selected.

For Dewey, in order for a political democracy to function properly, the interest of the public must be the supreme guide of government activity, enabling the public to achieve its goals. To do this, however, a public must be able to identify itself and its aims. But the public is prevented from doing this for reasons that are as relevant today as they were for Dewey (probably more so). Rapid technological and social development means that we are able to affect distant locations, yet often lack a clear sense of the distant consequences of our actions. Lack of public awareness of these consequences means that we must rely on expert administrators.

But in the age of fake news, COVID conspiracies, and the rise of QAnon, there is disagreement over basic facts. How can a democratic public perceive indirect consequences when it can't agree on what is happening? One might expect the public to perceive a threat like COVID and assert what it wants, but without a common understanding, the government response has been confused, and significant segments of the public have demonstrated through protest and gathering that they simply aren't concerned about the indirect consequences they may cause.

COVID-19 has been a global threat; it has caused nearly 200,000 deaths and created an economic crisis, yet many are unwilling to tolerate limited sacrifices such as wearing a mask and social distancing. Given that this has been the response to COVID, how will the public respond to the issue of climate change when the effects become more apparent? How will segments of the public respond when asked to make more significant sacrifices for a problem they may not believe is real?

It is also increasingly evident that tribalism is affecting the machinery of democracy. Partisanship has become an end in itself as a significant number of voters seem to believe that a platform does not matter, political norms (such as over Supreme Court nominations) do not matter, and the traditional stances taken by political parties do not really matter. This may lead to a situation where the Supreme Court, whose legitimacy has already been questioned, seems even less legitimate, just before a very litigious election.

Dewey believes that it is important to distinguish the machinery of democracy (elections, Congress, the Supreme Court) from democracy as a way of life. The form this machinery takes should respond to the needs of the public of the day and should be open to experimental revision. One might be tempted to believe that so long as this machinery can be maintained and revised where necessary there is no threat to democracy. However, Dewey suggests that since the machinery of democracy is merely an instrument for achieving what a democratic public wants, short of a unified public, it is futile to consider what machinery is appropriate. In other words, any potential reforms regarding mail-in voting, the Supreme Court, the Electoral College, and so on will not address the underlying issue without first addressing the fractured democratic public. If the public remains unable to find itself, the government will be less and less able to represent it and that makes the nation less democratic in the long run.

Causality and the Coronavirus

image of map of US displayed as multi-colored bar graph

“Causality” is a difficult concept, yet beliefs about causes are often consequential. A troubling illustration of this is the claim, which is being widely shared on social media, that the coronavirus is not particularly lethal, as only 6% of the 190,000+ deaths attributed to the virus are “caused” by the disease.

We tend to think of causes in overly simplistic terms

Of all of the biases and limitations of human reasoning, our tendency to simplify causes is arguably one of the most fundamental. Consider the hypothetical case of a plane crash in Somalia in 2018. We might accept as plausible causes things such as the pilot’s lack of experience (say it was her first solo flight), the (old) age of the plane, the (stormy) weather, and/or Somalia’s then-status as a failed state, with poor infrastructure and, perhaps, an inadequate air traffic control system.

For most, if not all, phenomena that unfold at a human scale, a multiplicity of “causes” can be identified. This includes, for example, social stories of love and friendship and political events such as wars and contested elections.1

Causation in medicine

Causal explanations in medicine are similarly complex. Indeed, the CDC explicitly notes that causes of death are medical opinions. These opinions are likely to include not only an immediate cause (“final disease or condition resulting in death”), but also an underlying cause (“disease or injury that initiated the events resulting in death”), as well as other significant conditions which are or are not judged to contribute to the underlying cause of death.

In any given case, the opinions expressed on the death certificate might be called into question. Even though these opinions are typically based on years of clinical experience and medical study, they are limited by medical uncertainty and, like all human judgments, human fallibility.

When should COVID count as a cause?

Although the validity of any individual diagnosis might be called into question, aggregate trends are less equivocal. Consider this graph from the CDC, which identifies the number of actual deaths not attributed to COVID-19 (green), additional deaths which have been attributed to COVID-19 (blue), and the upper bound of the expected number of deaths based on historical data (orange trend line). Above the blue lines there are pluses indicating weeks in which the total number of deaths (including COVID) exceeds the expected number by a statistically significant margin. This has been true for every week since March 28. In addition, there are pluses above the green lines indicating weeks in which the number of deaths excluding COVID was significantly greater than expected. This is true for each of the last eight weeks (ignoring correlated error, we would expect such a finding fewer than one in a million times by chance). This indicates that the number of deaths due to COVID in America has been underreported, not overreported.
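To get a rough sense of how unlikely this pattern is, here is a back-of-the-envelope calculation (my own simplification, assuming that each week is flagged at a conventional 5% significance level and that weeks are independent, neither of which the CDC chart states explicitly):

$$P(\text{eight consecutive weeks flagged by chance}) \leq (0.05)^8 \approx 3.9 \times 10^{-11}$$

That is far below one in a million; even five such weeks would already clear that bar, since $(0.05)^5 \approx 3.1 \times 10^{-7}$.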

Among the likely causes for these 'non-COVID' excess deaths, we can point, particularly early in the pandemic, to a lack of familiarity with, and testing for, the virus among medical professionals. As the pandemic unfolded, it is likely that additional deaths can be attributed, in part, to indirect causal relationships, such as people delaying needed visits to doctors and hospitals out of fear, and to the social, psychological, and economic consequences that have accompanied COVID in America. Regardless, the bottom line is clear: without COVID-19, over two hundred thousand other Americans would still be alive today. The pandemic has illuminated, tragically, our interconnectedness and with it our responsibilities to each other. One part of this responsibility is to deprive the virus of the opportunity to spread by wearing masks and socially distancing. But this is not enough: we need to stop the spread of misinformation as well.

 

1 Some argue that we can think of individual putative causes as “individually unnecessary” but as “jointly sufficient.” In the 2000 US Presidential Election, for example, consider the presence of Ralph Nader on the ballot, delays in counting the vote in some jurisdictions, the Monica Lewinsky scandal, and other phenomena such as the “butterfly ballot” in Palm Beach County, Florida. Each of these might have been unnecessary to lead the election to be called for G.W. Bush, but they were jointly sufficient to do so.

Is Prenatal Sex Discernment Unethical?

On Saturday, September 5, a gender-reveal party gone wrong set fire to a California forest, burning thousands of acres over the following week. This is not the first time a gender-reveal party has led to a major wildfire, nor is it the first time one has been responsible for threatening human life. Gender-reveal parties are largely a product of 20th-century advances in prenatal care. The El Dorado fire has renewed debates around gender-reveal parties and the ethical questions that surround them.

Does prenatal sex discernment do more harm than good? Should gender-reveal parties be banned? And what value is there, if any, in determining sex before birth?

While there is evidence that humans have attempted to predict the sex of an unborn fetus for thousands of years, the integration of ultrasound technology into prenatal care in the 1960s radically improved the accuracy of these predictions. Typically, sex is determined using obstetric ultrasonography, which can be 98-100% accurate.

The practice of determining a child's sex before birth is relatively uncontroversial in the United States, but it has been banned in parts of the world where this information has been used to initiate abortion. Because of women's economic marginalization and lack of socioeconomic mobility, in some places girls are considered an economic burden compared to boys. The preference for boy babies has led to sex-selective abortion and an imbalance in the sex ratio in countries such as India and China. Studies have found that an imbalance in the sex ratio favoring males is correlated with many other social problems, such as human trafficking and an increase in violence against women. In order to combat these rising sex ratios, both India and China have previously banned, or severely limited, the practice of prenatal sex discernment. Despite these attempts to discourage sex-selective abortions, many remain concerned that regulations have not gone far enough.

Prenatal discernment in the United States has not led to sex-selective abortion in the way it has elsewhere in the world, but it has become a cornerstone of the pregnancy process. In a 2001 study of expectant parents, more than half of both men and women expressed a desire to know the sex of the fetus. Interestingly, researchers also found sharp differences in the desire to know the sex of a fetus across ethnicity, age, race, and marital status, indicating that at least some of our desire to know the sex of our child comes from cultural or social influences.

While knowing the sex of a fetus does not mean a parent will necessarily have a gender-reveal party, gender-reveal parties certainly necessitate prenatal discernment. In the 2010s, gender-reveal parties in the United States became strikingly common. Pregnant women and their partners perform some type of ceremony in which gendered objects or colors are revealed to indicate whether the child will be male or female. This practice might seem strange to many, considering that the process of determining the sex of the child is medical and, in many cases, very private.

But the point of gender-reveal parties is not simply to find out the gender of a future child, but in many cases, as Lindsey King-Miller of Vox describes, "to make a spectacle…like all kinds of social media challenges, gender reveals are made to be recorded." By their very nature, these spectacles often involve pyrotechnics, complicated machinery, and other forms of entertainment more commonly found at an amusement park than in one's backyard. Perhaps this is why gender-reveal parties have led to so much destruction, with critics such as Arwa Mahdawi arguing that "gender reveal parties are a form of domestic terrorism."

The practice of gender-reveal parties has clearly led to many negative and unethical consequences. However, this is not the only reason that many find them to be morally abhorrent. Critics argue that at their core, gender-reveal parties perpetuate sexism and transphobia, exclude intersex people, and contribute to our relentless obsession with defining people within a gender binary. These parties are often rife with gender stereotypes, with themes like "Touchdowns or Tutus." Gender-reveals also fail to acknowledge the crucial distinction between gender and sex. As psychologist Daniel L. Carson explains, "Gender is the social, behavioral, and psychological characteristics that we use to distinguish the sexes…By definition, parents have no idea what the gender of their child will be since they have yet to interact with the child." The distinction between gender and sex has been recognized by Western sociologists, medical professionals, and psychologists since at least 1987, with the establishment of "Gender and Society" and the publication of the groundbreaking article "Doing Gender." Today, the World Health Organization defines gender as "characteristics of women and men that are largely socially created," while sex, on the other hand, "encompasses [differences] that are biologically determined." This difference is important for understanding the ways in which our experience of the world is shaped both by our biology and by the social stereotypes associated with our gender. It is also crucial to recognize this difference in order to acknowledge that not all who are biologically male or female identify with the "corresponding," or cis, gender. Recognizing and honoring this difference is imperative for ensuring the rights of transgender, genderqueer, and non-binary people. Choosing to undergo prenatal sex discernment or host a gender-reveal party does not necessarily mean one does not understand or support the difference between sex and gender. However, it could be indicative of one's overall attitudes toward those different from oneself, and toward stereotypes associated with sex and gender in general. A 2014 study, for example, found that women who chose not to undergo prenatal discernment tended to be "open to new experiences, and combine egalitarian views about the roles of men and women in society with conscientiousness."

Gender-reveal parties are not the only form of American ritual that has been enabled by prenatal discernment. Companies, such as the Gender Reveal Game, have built an entire profit scheme around providing a platform for parents-to-be to encourage their loved ones to place bets on the sex of their child. Baby showers, a common custom in which friends and family "shower" expectant parents and unborn children with gifts before birth, arguably center on goods like clothing and toys that are heavily marketed and designed to be appropriate for a baby depending on their sex. Anyone who has attended a baby shower can attest that it is much more challenging to find gender-neutral toys and clothes for expectant parents. In fact, experts have reported that children's toys are more divided by gender now than they were 50 years ago. While some progress is being made on the front of gender-neutral children's clothing, industry experts affirm that the vast majority remains gendered, beginning in infancy.

But is wanting to know the gender of an unborn child necessarily immoral? Some might argue it is not. As mentioned earlier, there were sharp differences in parents' desire to know the sex of their child based on ethnicity, race, age, and marital status. For some, knowing the gender of one's child before birth might have religious or traditional significance. Knowing a child's gender might also help parents decide which name to give the child, depending on their cultural or religious background. Additionally, knowing the gender of a child might be a way to ease anxiety during pregnancy. It is especially important to note that in the 2001 study mentioned above, the two groups with the highest desire to know the sex of their unborn child were pregnant women below the age of 22 (98%) and single mothers (90%). Being pregnant at a young age, or without a partner to help raise the child, undoubtedly creates a lot of uncertainty. Knowing the sex of the child might be one way for these expectant mothers to ease anxiety during pregnancy.

In an article in Today's Parent, father-to-be Dave Coodin explains his decision to partake in prenatal discernment. He explains that, prior to knowing the sex of his child, he and his partner referred to the baby as "it," which felt rather dehumanizing. He also explained that by knowing the sex, he was able to conceptualize a part of his baby's identity in a manner that allowed him to "construct fantasies that satisfy us in the present, no matter how crazy and deluded." Pregnancy is certainly a long and difficult process, and some might sympathize with Dave's desire to know at least one potential aspect of his future child's identity. In a 2015 research paper, Florence Pasche Guignard argued that gender-reveal parties have filled a role "where neither medical nor religious institutions offer ritual options deemed appropriate enough for celebrating joyfully and emotionally during pregnancy." While there doesn't seem to be anything inherently wrong with celebrating during a pregnancy, critics might still push back that it isn't the celebratory or ritualistic aspect of prenatal discernment and gender-reveals that is the problem, but rather the desire to define a human being, and a baby, by its sex.

Regardless of what one believes about gender-reveal parties, the tide is certainly turning on emphasizing gender in children in general, with about 1 in 5 American parents supporting gender-neutral clothing. In fact, even the woman credited with starting the gender-reveal party trend back in 2008 has become a vocal critic of the phenomenon. In a viral Facebook post from 2019, Jenna Karvunidis asserted “Assigning focus on gender at birth leaves out so much of their potential and talents that have nothing to do with what’s between their legs.” In a rather ironic quip, she concluded by revealing, “PLOT TWIST, the world’s first gender-reveal party baby is a girl who wears suits!”

Debate over the Anti-Racist Reading List

blurred photograph of bookshelf with bright, colorful books

The Black Lives Matter movement has generated important conversations in multiple arenas of American life, including one surprising conversation currently taking shape in the literary sphere. Intellectuals and book-lovers alike are reconsidering the value of the anti-racist reading list. These lists, which rapidly gained popularity in the wake of George Floyd's death, offer a selection of primarily non-fiction books that deal either directly or indirectly with racism, for those hoping to educate themselves about structural inequality. There is no one official list; the most popular and visible are those assembled by national newspapers, public libraries, and universities, but websites and blogs with less cultural pedigree are putting together their own lists of recommended reading. Many Black writers and intellectuals, like Ibram X. Kendi, have wholeheartedly embraced the anti-racist reading list, while others have expressed doubt over the purpose and effectiveness of the project. Are reading lists a good foundation for anti-racism, or are they another dead-end outlet for white allyship?

Ever since Harriet Beecher Stowe's novel Uncle Tom's Cabin captivated America in 1852, white Americans have turned to literature, especially fiction, to navigate the emotional and political labyrinth of racism. But as Melissa Phruksachart points out, "unlike previous moments in which fiction supposedly becomes the portal to empathy for the Other, the contemporary literature of white liberalism eschews the novel and coheres around the genres of nonfiction, autobiography, and self-help." This is one of the most prevalent critiques of the anti-racist reading list: that such lists rely heavily on the exploitation of Black pain rather than celebrating Black writers' creative potential and achievements. Kaitlyn Liu articulates this point when she says, "Although anti-racist reading lists are published with the best intentions, they have become part of a broader system which generalizes and compartmentalizes Black authorship into perpetual voices of trauma and pain. Rarely do the books listed support the overlooked stories of Black joy, love or success without mandating a hardship among it." As Lauren Michele Jackson, one of the earliest critics of anti-racist reading lists, notes, such lists often defeat the very purpose of literature. Someone who reads Morrison as a field guide to understanding racism, she says, will not fully appreciate the work as a novel, as an artistic achievement by a talented Black writer. Jackson fears that reading lists can encourage white people to approach Black literature "zoologically," engaging with the art at arm's length. The books that most commonly appear on anti-racist reading lists (Ibram X. Kendi's recent nonfiction, Robin J. DiAngelo's White Fragility, biographies of revolutionaries like Malcolm X and Martin Luther King Jr.) can feed into this trend, despite the value of each individual work.

At face value, it seems that Americans have turned away from the emotional insight and sensitivity to language offered by the novel in favor of the cold, hard facts of nonfiction, but Phruksachart argues that that isn't the case. She notes that contemporary trends in nonfiction don't teach white readers racial literacy so much as "emotional literacy. In doing so, they attempt to help colormute readers see, hear, think, and respond to the concepts of race and racism without triggering the sympathetic nervous system—without launching into fight-or-flight mode, which too often materializes as denial, anger, silence, or white women's tears." Understanding why we respond emotionally and physiologically to our own prejudice does seem like a valuable first step toward addressing racism, but Phruksachart poses a vital question: "is white supremacy really a problem of knowledge?" Does knowing the history of racism, or knowing our individual place in it, spur white allies to change the material conditions of non-white Americans?

Phruksachart would argue that it doesn’t. In perhaps the most penetrating critique of the anti-racist reading list, she states that

“The literature of white liberalism is obviously not a decolonial abolitionist literature. It succeeds by allowing the reading class to think about antiracism untethered from anti-imperialism and anti-capitalism. That is not to say that it has nothing to offer, nor that the authors are pro-capitalist shills. While all of these books offer sharp analyses of the way capitalism destroys Black and minoritized lives, they mention, but don’t center, the powerful critiques of capitalism issued by Black and minoritized traditions.”

In her view, contemporary anti-racist literature is palliative for white liberal readers, and while these works may contain glimmers of insight, they cannot truly help their audience unpack the causes of structural inequality.

This is an extremely insightful point, but it is perhaps unrealistic to expect a nebulous list of books to unravel global capitalism. As Jackson writes, "it is unfair to beg other literature and other authors, many of them dead, to do this sort of work for someone. If you want to read a novel, read a damn novel, like it's a novel." We certainly shouldn't discourage anyone from reading and learning about race, but we should remember that reading isn't meant to be a substitute for praxis, but a supplement. It's a way of indicating to ourselves and others that we are engaging with a certain issue, that we are willing to think deeply about it and learn from others. As Harvard professor Khalil Muhammad puts it, "People use reading as a way to understand what they're doing, why they're doing it and why the work is critically important. There's a fundamental requirement of organizing around shared knowledge, usually coming from shared text, to build collective engagement around what histories are relevant to explain the matter. That's been true, certainly, for the entire history of Black freedom struggles." Knowledge might not be the single key to unraveling white supremacy, but it is the basis for critical engagement with the world, and it's certainly a good place for white readers to start.

When Are Leaders Culpable?

photograph of pyramid of wooden cubes indicating people on yellow background

When are leaders, especially politicians, morally culpable for the deaths their decisions and actions cause? This is a hard question, of course, because culpability comes in degrees. For example, Sally is culpable for murder if she knowingly kills someone without moral reason (e.g., self-defense); however, Sam is less culpable than Sally if he knowingly sells someone a defective automotive part which results in a fatal car accident. By the same token, the culpability of leadership comes in degrees too. This issue was made especially salient recently when Kristen Urquiza, at the Democratic National Convention, shared how she lost her father to coronavirus complications, arguing her father likely wouldn't have died had he ignored President Trump's downplaying of the threat. This isn't an isolated problem. President Trump misled Americans about the impact of the pandemic, with disastrous results, in an attempt to revive his reelection prospects. We may wonder, then, about the blame leaders deserve for the deaths they cause.

There is an obvious way leaders, and politicians in particular, are directly culpable for the deaths of their citizens: starting an unjust conflict, like a war, without accurately assessing the long-run consequences. Leaders look blameworthy here because of the incentive structure at play: soldiers on a battlefield often face perverse incentives, like the prospect of prison, if they don’t carry out an order. This of course isn’t to deny that soldiers share some blame for following orders they know are wrong. However, leaders share in this responsibility given the position of power they hold, especially if they order something they know is unjust.

For example, we should be reluctant to accept that a proposed war is legitimate, given the historical record: throughout history, and especially recently, wars have often been justified with moral language. Perhaps a group living in the targeted nation or region is claimed to have wronged us somehow; perhaps our invasion would help set things right; perhaps we would be justified in using force to get back what was wrongly taken from us. If these kinds of justifications for war sound familiar, it is because they are. It is too easy to use flimsy moral appeals to justify things we would otherwise think morally wrong. We are susceptible to this sort of thing as individuals, so it wouldn't be surprising if politicians and governments routinely abused the trust placed in them, leveraging baseless moral justifications to convince their citizens and constituents that a proposed war would be morally permissible.

Things are less clear when morally weighing an order from a leader or politician that is not intended to cause harm but has foreseeable negative consequences. Some ethicists appeal here to what is known as the doctrine of double effect: an order or action is morally acceptable, even if it has bad and foreseen consequences, provided those consequences are the by-product of a morally good, intended action. For the sake of argument: even if abortion is morally bad, on this doctrine a doctor may still abort a fetus if the intention is to save the pregnant mother's life, since the intended, morally good outcome (saving the mother's life) can't occur without the bad, unintended outcome (aborting the fetus). Whether the doctrine of double effect exonerates leaders and politicians for ordering a war, even a just war, with very bad foreseen consequences is controversial.

What about the indirect culpability of leaders and politicians? Things are dicier here. However, we can still call to mind cases that may help us think through indirect culpability. An obvious and recent case is that of managing the coronavirus in the United States: the current United States President, Donald Trump, downplayed the threat of the coronavirus and gave poor advice to U.S. citizens. This is not, of course, to say that the current U.S. president intended for people to die of coronavirus; but it does illustrate that he could well have indirectly contributed to citizens' deaths by downplaying the virus and playing up 'cures' that ultimately failed.

We should pause here to reflect on why the current U.S. President — or any leader similarly situated — looks indirectly culpable for such deaths, even if he isn't nearly as culpable as he would be for, say, starting an unjust war. There is an obvious source of indirect culpability here: abusing the trust placed in him by his followers and constituents. If Harry knows his constituents trust him (whether this is poor judgment on their part or not), he bears indirect culpability for what happens to them if he knowingly gives them bad advice and they act on it, especially if they wouldn't have acted that way had they not trusted him. This would be wrong, just as it would be wrong for a physician to knowingly give dangerous medical advice to her patients, especially knowing they only took her advice because they trusted her good intentions and competence.

This is because, broadly speaking, when there is trust, there is vulnerability. When I trust that someone is competent and has my best interests at heart, I place myself in a vulnerable position that can be exploited by those with bad intent. The point generalizes to the ethics of leadership: a leader may be in a position to exploit their followers because of the trust placed in them, even though such trust is only placed in them on the condition that the leader has their followers' best interests at heart. And if the leader uses that trust to knowingly put their followers in harm's way for their own ends, they bear some responsibility for the bad outcome, even if it was unintended.

Moral Luck and the Judgment of Officials

photograph of empty tennis court and judge's chair

Novak Djokovic was defaulted from the US Open last week for violating the Abuse of Balls rule. During the first set of his fourth-round match with Pablo Carreño Busta, he struck a ball to the back of the court without looking. This resulted in the ball hitting a line judge. The referee, Soeren Friemel, after consulting with other officials, made a ruling to bypass the Point Penalty Schedule and issue an immediate default. In other words, Djokovic lost the match, left the tournament, forfeited all of his winnings in the tournament, and is subject to further fines. In the aftermath of this incident, many of the TV commentators discussed the severity of the injury to the judge, the correctness of the ruling, and Djokovic's bad luck. The bad luck was in reference to the fact that just as Djokovic was striking the ball, the line judge straightened up from her bent-over position, which put her head in the direct path of the ball.

As I watched the events unfold, and before the ruling was made, I immediately began to think that the referee's judgment was going to hinge on the problem of moral luck. This problem was initially discussed by Bernard Williams and Thomas Nagel in a two-part article in 1976. Dana Nelkin describes the problem as one that "occurs when an agent can be correctly treated as an object of moral judgment despite the fact that a significant aspect of what she is assessed for depends on factors beyond her control." In other words, judgments of moral approval or disapproval, including the imposition of sanctions, can depend upon accidents or choices by third parties. The problem can be exemplified by considering two teenagers drag racing. Both of them are using poor judgment as well as speeding. The car on the right is clearly pulling ahead of the car on the left (due, let's say, to crummy spark plugs in the left car) when an animal darts out into the street from the left. Neither teen attempts to avoid hitting the animal because neither sees it. As luck would have it, even though the animal darts into the road from the left, the car on the left misses the animal but the car on the right strikes it. Is it really the case that the driver on the left is morally innocent compared to the driver on the right? Had it not been for the crummy spark plugs, the driver on the left would have struck the animal; had it not been for the presence of the animal, the accident would not have occurred at all.

What seems to be at issue here, Nelkin explains, is the acceptability of two ideas, one called the Control Principle and the other a corollary of that principle.

Control Principle (CP): We are morally assessable only to the extent that what we are assessed for depends on factors under our control.

CP-Corollary: Two people ought not to be morally assessed differently if the only other differences between them are due to factors beyond their control.

At first, these ideas seem to be intuitively acceptable. To accept them means that luck should play no role in moral assessment. But notice that they imply that, in our stipulated example of drag racing, the driver on the left is just as culpable as the driver on the right for hitting the animal — either both are culpable or neither is. After all, the only differences between the two drivers are due to factors beyond either driver's control, and both were in control of the decision to drag race. So, what is to be questioned? Should the judgment that the two drivers have different levels of culpability be jettisoned, or should CP and its corollary be abandoned?

This hypothetical case is analogous to the situation with Djokovic. A few points before the offending event, Djokovic, much more angrily and with much more force, slammed a ball into a side wall of the court. No one was injured. He was not warned, given a point penalty, or given a game penalty. But, given the rule, the earlier event was just as much a violation of the rule as the later one. It is worth seeing the rule in its entirety:

ARTICLE III: PLAYER ON-SITE OFFENSES

  1. ABUSE OF BALLS Players shall not violently, dangerously or with anger hit, kick or throw a tennis ball within the precincts of the tournament site except in the reasonable pursuit of a point during a match (including warm-up). Violation of this Section shall subject a player to fine up to $20,000 for each violation. In addition, if such violation occurs during a match (including the warmup) the player shall be penalised in accordance with the Point Penalty Schedule hereinafter set forth. For the purposes of this Rule, abuse of balls is defined as intentionally hitting a ball out of the enclosure of the court, hitting a ball dangerously or recklessly within the court or hitting a ball with negligent disregard of the consequences.

What should be noticed is that the mere act of hitting a ball "violently, dangerously or with anger," regardless of whether anyone is injured, is sufficient to violate the rule. So, the earlier act by Djokovic was sufficient for Friemel to issue a warning in accordance with the Point Penalty Schedule. Nowhere does the code specify that Friemel may skip directly to default based on the poor luck of the ball hitting and injuring someone, though, as with all officials in sports, part of his job is to use judgment to make decisions. But it seems as if the decision not to issue a warning for the earlier outburst and to default Djokovic for the later outburst included a rejection of the control principle and its corollary. Otherwise, it seems as if the only difference between the two events was the placement of the line judge and the fact that, just as Djokovic hit the ball, she stood up in a way that placed her head in the direct path of the ball. Both of these elements were beyond Djokovic's control. So, if CP is operative, then Djokovic seems to be equally culpable and equally deserving of default for the earlier outburst as for the one that resulted in the injury to the line judge. By abandoning CP, the officials could hold that, while Djokovic clearly violated the rule earlier, he did not need to be sanctioned because, luckily, the outcome was different.

But now comes the twist. It looks like other officials at the match bear some responsibility for the line judge’s injury.

What do we say about Friemel's non-application of the rule earlier in the match? Furthermore, what do we say about the officials at the Western & Southern Open just a few days before, who did not default Aljaz Bedene for hitting a camera operator in a similar situation? Here we have an almost identical set of facts, but the injury sustained by the camera operator did not require immediate medical attention, unlike that sustained by the line judge Djokovic struck. The rules do not make an explicit allowance for the severity of the injury to factor into the judgment of the officials, but in these three cases, the severity of the injury was considered. The different decisions make sense if we abandon the control principle, because those different outcomes, which were due in part to factors beyond the control of the players, seem to allow for different judgments.

Now, all we have to do is accept that luck plays a role in moral judgments. This implies that you can be morally culpable for things beyond your control. Friemel and the other tennis officials seem to be committed to this idea. But once we grant that consequences matter, it appears that Friemel and the other officials should also be culpable for the injury to the US Open line judge. After all, if we let consequences matter, then we have to confront the suggestion that acts of omission resulting in bad outcomes are open to moral censure. By not giving Bedene a harsher penalty a few days before, and by not even issuing a warning a few minutes earlier in the Djokovic–Carreño Busta match, the officials performed acts of omission. These acts of omission appear to support the claim that Djokovic could vent his frustration in violation of the Abuse of Balls rule without fear of serious sanction. The officials are thus, oddly, morally implicated in Djokovic's transgression. They seem to be responsible for creating a situation in which Djokovic could behave this way. The resulting injury involved actions beyond their control (the line judge standing up and Djokovic hitting the ball). But by abandoning the CP and its corollary, they nevertheless appear to share in the responsibility for the injury.

These observations — to accept or reject the CP as well as the implications of doing so — apply beyond sports. In any social arena, officials who are entrusted with making judgments may have more responsibility for the outcomes of their silence than they want to recognize.

Under Discussion: Dog Whistles, Implicatures, and “Law and Order”

image of someone whispering in an ear

This piece completes our Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Law and Order.

For the last several days, The Prindle Post has explored the concept of “law and order” from multiple philosophical and historical angles; I now want to think about the phrase itself — that is, I want to think about what is meant when the words ‘law and order’ appear in a speech or conversation.

On its face, 'law and order' is a term that simply denotes whether or not a particular set of laws is, in general, being obeyed. In this way, politicians or police officers who reference 'law and order' are simply trying to talk about a relatively calm public state of affairs in which the official operating procedures of society are functioning smoothly. Of course, this doesn't necessarily mean that 'law and order' is always a good thing: by definition, acts of civil disobedience against unjust laws violate 'law and order,' but such acts can be morally justified nonetheless (for more, see Rachel Robison-Greene's recent discussion here of "substantive" justice). However, on the whole, it can be easy to think that public appeals to 'law and order' are simply invoking a desirable state of peace.

But the funny thing about our terminology is how often we say one thing, but mean something else.

Consider the previous sentence: I said the word ‘funny,’ but do I mean that our terminology is designed to provoke laughter (or is humorous in other ways)? Certainly not! In this case, I’m speaking ironically to sarcastically imply not only that our linguistic situation is more complicated than simple appearances, but that the complexity of language is actually no secret.

The says/means distinction is, more or less, the difference between semantics (what is said by a speaker) and pragmatics (what that speaker actually means). Often, straightforward speech acts mean precisely what a speaker says: if I ask you where to find my keys and you say "your keys are on the table," what you have said and what you mean are roughly the same thing (namely, that my keys are on the table). However, if you instead say "your keys are right where you left them," you are responding with information about my keys (such as that they are on the table), but you also probably mean to communicate something additional like "…and you should already know where they are, dummy!"

When a speaker uses language to implicitly mean something that they don’t explicitly say, this is what the philosopher H.P. Grice called an implicature. Sarcasm and ironic statements are a few paradigmatic examples, but many other kinds of figures of speech (such as hyperbole, understatement, metaphor, and more) function along the same lines. But, regardless, all implicatures function by communicating what they actually mean in a way that requires (at least a little) more analysis than simply reading how they appear on their face.

In recent years, law professors like Ian Haney López and philosophers like Jennifer Saul have identified another kind of implicature that explicitly says something innocuous, but that implicitly means something different to a subset of the general audience. Called “dog whistles” (after the high-pitched tools that can’t be heard by the human ear), these linguistic artifacts operate almost like code words that are heard by everyone, but are only fully understood by people who understand the code. I say “almost” like code words because one important thing about a dog whistle is that, on its face, its meaning is perfectly plain in a way that doesn’t arouse suspicion of anything tricky happening; that is, everyone — whether or not they actually know the “code” — believes that they fully understand what the speaker means. However, to the speaker’s intended clique, the dog whistle also communicates a secondary message surreptitiously, smuggling an implicated meaning underneath the sentence’s basic semantics. This also means that dog whistles are frustratingly difficult to counter: if one speaker uses a dog whistle that communicates something sneaky and another speaker draws attention to the implicated meaning, the first speaker can easily deny the implicature by simply referencing the explicit content of the original utterance as what they really meant.

Use of dog whistles to implicitly communicate racist motivations in government policy (without explicitly uttering any slurs) was, infamously, a political tactic deployed as a part of the Republican “Southern strategy” in the late 20th century (for more on this, see Evan Butts’ recent article). As Republican strategist (and member of the Reagan administration) Lee Atwater explained in a 1981 interview:

“You start out in 1954 by saying, ‘[n-word], [n-word], [n-word].’ By 1968 you can’t say ‘[n-word]’—that hurts you, backfires. So you say stuff like, uh, forced busing, states’ rights, and all that stuff, and you’re getting so abstract. Now, you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites.…”

Of course, terms like 'forced busing' and 'states' rights' are, on their faces, concepts that are not necessarily associated with race, but because they refer to things that just so happen, in reality, to have clearly racist byproducts or outcomes — and because Atwater's intended audience (Republican voters) knew this to be so — the terms are dog whistles for the same kind of racism indicated by the n-word. When a politician rails against 'forced busing' or when a Confederate apologist references 'states' rights,' they might be saying something about education policy or the Civil War, but they mean to communicate something much more nefarious.

Exactly what a dog whistle secretly communicates is still up for debate. In many cases, it seems like dog whistles are used to indicate a speaker’s allegiance to (or at least familiarity with) a particular social group (as when politicians signal to prospective voters and interest groups). But other dog whistles seem to signal a speaker’s commitment (either politically or sincerely) to an ideology or worldview and thereby frame a speaker’s comments as a whole from within the perspective of that ideology. Also, ideological dog whistles can trigger emotional and other affective responses in an audience who shares that ideology: this seems to be the motivation, for example, of Atwater’s racist dog whistles (as well as more contemporary examples like ‘welfare,’ ‘inner city,’ ‘suburban housewife,’ and ‘cosmopolitan elites’). Perhaps most surprisingly, ideological dog whistles might even work to communicate or trigger ideological responses without the audience (and, more controversially, perhaps even without the speaker) being conscious of their operation: a racist might dog whistle to other racists without any of them explicitly noticing that their racist ideology is being communicated.

This is all to say that the phrase ‘law and order’ seems to qualify as a dog whistle for racist ideology. While, on its face, the semantic meaning of ‘law and order’ is fairly straightforward, the phrase also has a demonstrable track record of association with racist policies and byproducts, from stop-and-frisk to the Wars on Drugs and Crime to resistance against the Civil Rights Movement and more. Particularly in a year marked by massive demonstrations of civil disobedience against racist police brutality, politicians invoking ‘law and order’ will inevitably trigger audience responses relative to their opinions about things like the Black Lives Matter protests and other recent examples of civil unrest (particularly when, as Meredith McFadden explains, the phrase is directly used to criticize the protests themselves). And, crucially, all of this can happen unconsciously in a conversation (via what Saul has called “covert unintentional dog whistles”) given the role of our ideological perspectives in shaping how we understand and discuss the world.

So, in short, the ways we do things with words are not only interesting and complex, but can work to maintain demonstrably unethical perspectives in both others and ourselves. Not only should we work to explicitly counteract the implicated claims and perspectives of harmful dog whistles in our public discourse, but we should consider our own words carefully to make sure that we always mean precisely what we think we do.

Under Discussion: Right to Riot?

photograph of broken storefront window

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Law and Order.

The climactic moment of Spike Lee's seminal 1989 film, "Do the Right Thing," comes when its protagonist, Mookie, hurls a trash can through the window of a Bed-Stuy pizza parlor owned and operated by Italian Americans, setting off a frenzy that destroys the restaurant. It's an action that seems inevitable, given the simmering hostility between the parlor's owners and certain members of the majority African-American community that the film documents in brilliant and searing detail. Yet the title of the film tells us that what looks inevitable is actually a moral choice; the question we're left with is whether Mookie did, after all, do the right thing. Is destroying private property a legitimate, or at least excusable, form of political expression?

Joe Biden recently articulated the standard argument for the negative answer to this question, saying that “rioting is not protesting…it’s wrong in every way…it divides instead of unites. [I]t destroys businesses [and] only hurts the working families that serve the community.” An American politician’s hostility to rioting must strike us as at least a little ironic, given that we celebrate political acts of property destruction as part of our national mythos — the Boston Tea Party being perhaps the most prominent example. This example alone also shows that it is a mistake to deny that rioting can be a form of political protest. When people destroy private property in order to express their political views, that is a form of protest. Still, it is a further question whether rioting-as-protest is justified or excused. 

A more sympathetic view of rioting can be found in Martin Luther King Jr.’s now-famous line that “rioting is the voice of the unheard.” There are many ways to interpret King’s remark, but I will try to extract two arguments that might be consistent with it. The first goes like this. To begin with, we cannot reasonably expect any community to live under conditions of oppression without resorting to destructive means as a way of expressing their discontent with the situation. By “oppression” I mean a condition in which agents of government and society commit injustices against a community in a persistent, widespread, and systematic fashion. If we cannot reasonably expect this, then those who do it are blameless, or at least less blameworthy than they would be were they under different conditions. In other words, on this reading King is saying that rioters are excused, or less blameworthy, because of the conditions in which they find themselves. Notice that this argument, if accepted, might have major implications for how the justice system ought to treat rioters. It does not, however, strictly contradict Biden’s claim that rioting is wrong: an excused act is, almost by definition, a wrongful one.

One problem with analogizing rioting to action under duress is that the reason we do not blame people who act under duress is that they are faced with a choice between a wrongful option and an option that isn't reasonable from the point of view of their own well-being — for example, allowing themselves to die or become seriously injured, or allowing someone they love to suffer the same fate. It's not clear that not destroying property is unreasonable in this sense for members of oppressed communities: while some members of the community literally face a choice between death or serious injury at the hands of government agents and violent protest, many do not. Furthermore, unlike in cases of duress, the choice of destructive protest does not ensure that the oppression will cease. For these reasons, it is unlikely that a case for mitigated blameworthiness due to duress can be made out for most protestors who are engaged in the destruction of property.

The second argument is more ambitious in that it purports to show that rioting is morally justified. The legal system that protects private property is a part of the same system that oppresses the community. So, to attack private property is to attack that system. Furthermore, as King claims, to attack the system at this point and with these means is the only option available to people living under conditions of oppression. Every person has a moral right to try to alter the oppressive conditions in which they live by morally legitimate means. Finally, if you have a right to do X, and Y is a necessary means to X, you have a right to do Y — Y is a morally legitimate means. Therefore, members of oppressed groups have a right to riot (in order to attack the system).

The weakest link in this argument, I think, is the claim that rioting is a means to attacking the system that oppresses the community. By this logic, attacking any part of the system is a reasonable means to attacking the parts of the system that do the work of oppression. But there are clearly parts of the system that are very far removed from the parts that, for example, oppress communities of color. Would it make sense to burn down National Park Service buildings as a means of relieving those communities' oppression? It seems doubtful. But then it can be argued that attacking others' private property is like attacking Park Service buildings.

One response to this objection is to claim that attacking private property is a form of political expression aimed at bringing attention to conditions of oppression, rather than a means of directly attacking the system. With that amendment, we enter into the empirical discussion of whether destroying private property is a reasonably adequate means of altering oppressive conditions. Is the kind of consciousness-raising that rioting accomplishes useful, or is it counter-productive? Here is where political scientists may be able to help us, and where philosophers must take a back seat.

We might also question how far the conclusion of the argument gets supporters of rioting. Even if it establishes that members of oppressed groups have a right to riot, having a right to do X does not necessarily make doing X right. What it is for me to have a right to do something is for others to have a duty not to interfere in my doing it, but that does not mean I ought to do it. For example, I may have the moral right to verbally castigate someone who has committed a minor wrong, but it does not necessarily follow that I ought to do that. And here is where Biden's point about the effects of rioting is relevant. Perhaps members of oppressed groups have a right to riot, but the detrimental effects of rioting — the destruction of community businesses and livelihoods — make it something that no one ought to do. At the end of "Do the Right Thing," with the pizza parlor in ruins, one cannot help but feel that the neighborhood has suffered a real loss; the loss of the parlor feels like a tragedy. Perhaps, then, we ought to say that Mookie had the right to do what he did, but that what he did was not right.

Under Discussion: Law and Order as Suppression and Oppression

photograph of police in riot gear in Portland

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Law and Order.

In the last four months, there have been protests every day in support of the Black Lives Matter movement. Despite an estimated 93% of these protests being peaceful, there have been continual calls for “law and order.” Trump tweeted as much and emphasized the need for it during his speech in response to recent protests in Kenosha, and now both he and presidential candidate Joe Biden have campaign ads promoting law and order.

When leaders focus on public safety during nationwide protests, this shifts the attention from the cause, motivation, and aim of the protests. For instance, consider a case in Kenosha, WI. Protests began after police shot Jacob Blake seven times in the back. Blake was an unarmed Black man returning to his family in his car. Seventeen-year-old Kyle Rittenhouse traveled to Kenosha, allegedly to protect local businesses from the protesters, and ended up killing two men.

But the Kenosha Sheriff, who was called on to apologize for a racist rant in 2018, emphasized that the shootings would not have happened if Kenosha's 7pm curfew had been respected. His words shifted attention from the shooter to the policies in place to ensure public safety. This spreads the blame for the murder of the protestors to include the victims as well as Rittenhouse, who had arrived from out of state with an AR-15-style rifle. (The ACLU is calling for the sheriff to be fired.)

When "law and order" is the story, the fact that "law" has never been meted out in any sort of even-handed fashion isn't the story. When there have been months of Black Lives Matter protests, and the response is to call for "law and order," this should give pause. The structure of law is saturated with practices that guarantee that its protections and penalties will not ensure the safety or dignity of Black members of our country. At every stage of its production and execution, this system of "law and order" is something worth working to change.

Representatives making the law in Congress are disproportionately white. (Though the 116th Congress is the most racially and ethnically diverse Congress ever, it is only 22% non-white; 39.9% of people in the US are non-white according to the last census.) Further, voting for the representatives who make the law is easier, and designed to be easier, for white people. (After a 2013 Supreme Court decision struck down a key provision of the Voting Rights Act, over half of the states have added policies that make it more difficult to vote, disproportionately affecting non-white voters.) In the end, the system of law-making is bent towards white interests and white voices. And the justice system reflects this as well, both in the first contact it can make with individuals (the police) and in the disparate consequences of this contact. Over-policing leads to the disproportionate arrests of Black and Latinx people living in the US. The use of forensic evidence that isn't scientifically valid and is biased toward the prosecution, added to the practice of peremptory challenges, stacks the deck against defendants at trial. When previously incarcerated people can't vote, and incarcerated people are disproportionately Black due to these and other systemic problems, there are deep issues with the structure and order of law.

But this summer, what are the protests around the country protesting? Not necessarily these legal institutions directly. Rather, they protest the pattern of violence and brutality aimed at Black men and women by the police, a pattern that has gone unchecked and has only grown more and more blatantly obvious. The policies and practices of police forces have meant that these patterns of violence have continued. Ahmaud Arbery, George Floyd, Breonna Taylor, Elijah McClain, Jacob Blake, Daniel Prude, Mychael Johnson, Tony McDade and Wilbon Woodard are just some of the Black people killed or grievously injured by police and vigilante violence in recent years.

Appealing to "order" in the face of institutionalized oppression and a lack of indication that law or order will address the violence and lack of accountability is disingenuous, negligent, or hateful. The "order" that these leaders call for characterizes the protests as problematically "disorderly" instead of focusing on the causes that led the protests to disrupt that order. When paired with the value of "law," these appeals ignore the failure of the legal system to serve rather than suppress members of our community.

This context might be different if the calls for law and order were redirected at law enforcement. The human rights violations during the protests brought the racism in the United States to the attention of the UN Human Rights Council. Laws protecting the rights of journalists and medics were blatantly ignored, as police targeted them with the same tear gas and rubber bullets they used against other peaceful protesters. Just in the time between May 26th and June 5th, Amnesty International documented 125 examples of police violence against protesters. The organization also found that the protesters' human rights were repeatedly violated and documented acts of excessive force by police and law enforcement.

Ultimately, calls for “law and order” fail to acknowledge the grave injustices that got us here in the first place: the enforcers of law and order acting as tools of racism and violence. The response to the protests only highlights the need for the protests in the first place.

Under Discussion: Law and Order, Human Nature, and Substantive Justice

black-and-white photograph of lady justice

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Law and Order.

For many, the end of this week marks the passage of a six-month period of American history characterized by throbbing dystopian existential dread. The pandemic has been the score to a dark production that, when the spotlight was hot, turned out to be a series of character studies that no one asked for nor was particularly interested in watching. With hundreds of thousands dead and millions more left with lives permanently affected by the virus, the richest among us have become much richer not just during the pandemic, but because of it, and many who were thriving at the start of this year now find themselves evicted from their homes with nowhere to go. What's more, police brutality and systemic injustice have packed our streets with protesters demanding meaningful change. Looting and rioting have occurred, which has motivated the federal government to respond with force not just against people violating the law, but against reporters and peaceful protestors as well. Against this backdrop of chaos, the President of the United States clenches his fist and calls for "law and order."

In Plato’s Republic, Glaucon, one of the characters in the dialogue, provides a justification for the existence of laws that paints a grim picture of human nature. He argues that being unjust is in everyone’s interest, presumably because doing so allows a person to satisfy all of their desires. However, in a world populated by other individuals possessed of strength and skill, no single individual can get away with being unjust all of the time. This is why laws are necessary. Glaucon says, “When men have both done and suffered injustice and have experience of both, not being able to avoid the one and obtain the other, they think that they had better agree among themselves to have neither; hence there arise laws and mutual covenants; and that which is ordained by law is termed by them lawful and just.” If Glaucon is right, we are all, at our core, interested in promoting our self-interest, and we relinquish our ability to do so only so that we won’t be harmed by others attempting to do the same. Without the strict enforcement of the laws, we will inevitably descend into division and outright battle with one another — it’s in our very nature to do so.

If this is the right way of viewing things, then the state is justified in acting forcefully to protect us from ourselves and from each other. The government is the only entity preventing us from tearing one another apart for our own selfish reasons. When people call for law and order, they are calling for governmental intervention against perceived danger at the hands of people who they view as scarcely more civilized than beasts. One important corollary of this kind of view of law and order is that executing the law, whatever that law might be, is just.

There are a number of serious problems with this theory regarding the relationship between law and justice. First, some laws are morally and rationally indefensible. In these cases, the cry for "law and order!" is a cry to violate rights or to bring about a worse rather than a better state of affairs. For example, when enslaved people who escaped from captivity were captured, returned, and punished, demands for "law and order" were technically being satisfied. This example highlights the need for a more substantive account of justice according to which just laws are not just agreements between self-interested persons, but instead are designed to promote some objective good or to prevent some objective harm.

Second, this kind of demand for “law and order” doesn’t do anything to ensure fairness in practice. This is because the entities that people are inclined to describe as “beastly” and “threatening” are determined by prejudices and tribalism. Calls for “law and order” tend to be demands to prevent or punish certain kinds of crimes committed by certain categories of people — usually poor people and members of minority populations. People don’t want to see vagrancy, public intoxication, and petty crimes on their streets, but they don’t make much of a fuss about corporations violating environmental regulations in ways that endanger the health of members of nearby communities and create unsafe living conditions for future generations. People want crimes against property to be punished but aren’t up in arms about the losses people experience due to insider trading and other kinds of white-collar crime. People want populations that they view as “scary” out of their neighborhoods, but they aren’t concerned about whether individuals and institutions doing significantly more harm end up getting away with it. Corporations and men in suits don’t tend to frighten people.

People who demand “law and order” often want proportional retributive justice for the members of the groups that they find threatening. The more power, wealth, and privilege a person has, the less likely they are to be punished severely. For example, consider Felicity Huffman, a rich actress who committed fraud to get her daughter into a good college. She was sentenced to 14 days in prison. For rich people who can afford good representation, the criminal system is a revolving door — they are out before they even have time to process the fact that they were in. Privileged populations almost never face society’s most serious punishments. As Supreme Court Justice Ruth Bader Ginsburg famously said, “People who are well represented at trial do not get the death penalty.” Good representation is expensive.

At the end of the day, if "law and order" is just a social construction that people agree to in order to protect their own interests, then the entities with the most power in society will see to it that the laws end up protecting their interests first and foremost. After all, we don't all actually consent to the laws. Many citizens are politically disenfranchised because of their life circumstances. Representatives rarely end up actually speaking for these people.

The picture of human nature according to which we are each self-interested individuals protecting ourselves from harms caused by other self-interested individuals is psychologically impoverished. We are beings that can and do care about others. We are capable of empathy and altruism. Our criminal justice system could be a real justice system, where that term means something more than shallow retributivism. To protect the well-being and basic dignity of all people, the call should not be for “law and order!”, but for “Justice!”, which is rarely the same thing.

Under Discussion: The Multiple Ironies of “Law and Order”

photograph of a patch of the confederate flag

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Law and Order.

You hear a person running for office described as a “law and order” candidate. What, if anything, have you learned about them and their policies? The answer is either “nothing” or “nothing good.” The only wholesome association with the phrase is the infinitely replicating and endless Law and Order television franchise. Otherwise, this seemingly staid phrase misleads — and that is exactly the intention. As we all are routinely reminded, “law and order” is a deliberate verbal irony. When people don’t heed these reminders, it becomes a tragic irony.

In 1968, two conservative candidates were running for President of the United States: Richard Nixon and George Wallace. Nixon, the Republican nominee, won the election. However, Wallace won the electoral votes of five southern states and garnered 13% of the popular vote. Both candidates ran on explicitly law-and-order platforms and articulated them as such. During the course of that campaign, Nixon was often challenged to distinguish himself from Wallace on the issue of law and order. During a televised interview on Face the Nation, Nixon demonstrated the slipperiness of the term "law and order." He said that each of the three candidates in the 1968 presidential election — Hubert Humphrey, George Wallace, and himself — was in support of law and order. The difference was what they meant by it and how they would achieve it.

Each presidential candidate in 1968 presented a different vision of law and order, during a period of significant unrest. Wallace gave full-throated support to a segregationist and populist message, couched in terms of the rights of states to shape their culture free from heavy-handed federal meddling. Hubert Humphrey was an advocate of civil rights legislation and nuclear disarmament. Though his anti-war credentials were tarnished by his role as Lyndon B. Johnson’s vice-president during the Vietnam conflict, Humphrey’s view of law and order was broadly one of egalitarianism and peace. Nixon’s avowed interpretation of law and order was the rule of law, and freedom from fear. Here, the irony begins.

The details of the strategy by which Nixon and the Republican Party won over voters in the states of the US South are now well-known. The practically named "Southern Strategy" first took the presidential stage with the 1964 campaign of Barry Goldwater. He took a pronounced stance against civil rights legislation that garnered him the few electoral votes he received in his presidential run — all from southern states (and his home state of Arizona). Opposing civil rights legislation, and any other federally mandated policies of integration and egalitarianism, was the core of the strategy. This was not done in an explicitly racist manner, but under the banner of preserving the sovereignty of individual states, as Republican strategist Lee Atwater laid bare in a 1981 interview.

This is deliberate verbal irony: the strict meaning of the words actually uttered differs from the meaning intended by the speaker. Atwater confirms that when Republican candidates for office say "preserve states' rights," what they mean is "preserve the white southern way of life." Nor is this an idiosyncrasy of Atwater. The intellectual basis for the Southern Strategy comes from William F. Buckley's 1957 editorial in the National Review, in which he states that the white community in the South is entitled to ensure that it "prevail[s], politically and culturally, in areas in which it does not predominate numerically." Law and order, but only for white people. Freedom from fear, but only for white people. This is the Southern Strategy.

This direct verbal irony entails more irony at the level of political philosophy and general jurisprudence (i.e., theories of the concept of law). Predominant theories of general jurisprudence, especially among conservatives, see law as being generally applicable: that is, every person is subject to the same laws in the same way. This is the meaning of the phrases “rule of law” and “equal under the law.” However, talk of states’ rights in the context of the Republican Southern Strategy stands for exactly the opposite proposition: the law should apply in one way to white people and a different way to non-white people. The legal legerdemain achieved is profound in its pernicious effect. When the law is articulated in a sufficiently abstract fashion, it will not say that one group will be disparately, negatively affected. Because it doesn’t say it, many people will be convinced that it doesn’t actually affect people differently. This allows people to shift blame onto those whose lives are made more difficult, or ruined, by the law.

Disparate impact, however, has become one of the trademark U.S. Supreme Court tests for unconstitutional practices. The test arose from Griggs v. Duke Power Co., in which Black employees sued their employer over a practice of using IQ tests as a criterion for internal promotion. Previously, the company had directly forbidden Black employees from receiving significant promotions. However, after the passage of the Civil Rights Act of 1964, formally discriminatory policies were unlawful. The Griggs court expanded the ambit of the Civil Rights Act to policies that were substantially discriminatory in their effect, even if they were non-discriminatory in form. This rule was later limited by the Supreme Court in Washington v. Davis, in which the court required proof that substantially discriminatory policies were adopted with the intent to achieve that discriminatory effect.

The Supreme Court, the ultimate authority on U.S. law, holds that laws which have disparate impact are bad law. But disparate impact, as it is defined by the Court, is exactly what the Southern Strategy aimed at. Say one thing, which is superficially acceptable, but mean another thing, which is expressly forbidden. Hence the “law” of the Southern Strategy’s “law and order” is not law at all.

How much of this dynamic any particular law-and-order candidate, much less the people that vote for them, is aware of is an open question. Here, the deliberate verbal irony becomes tragic irony. Anyone who has learned the lessons of history knows what will happen, while those who have not learned do not.

Kyle Rittenhouse and the Legal/Moral Limits of Self-Defense

photograph of protesters carrying automatic rifles

On August 25th, Kyle Rittenhouse carried a firearm into the protests in Kenosha, WI. He killed Joseph Rosenbaum, 36, and Anthony Huber, 26, and seriously injured Gaige Grosskreutz, 26.

Rittenhouse is being charged with one count of first-degree intentional homicide; one count of first-degree reckless homicide; one count of attempted first-degree intentional homicide; and two counts of first-degree reckless endangerment. The Kenosha police chief called the shootings a senseless act of violence on protesters: “We’ve had two people lose their lives senselessly while peacefully protesting,” Chief Miskinis said.

His lawyers, on the other hand, claim that he was "protecting his community," acting in self-defense: "before Rittenhouse fired his gun, he was 'accosted,' 'verbally threatened and taunted' by 'rioters' while he guarded a mechanic's shop alongside a group of armed men." By claiming that Rittenhouse was acting in self-defense, the legal team invokes one of the most intuitive exceptions to the prohibition on inflicting harm on another person. But there are limits, both morally and legally.

Morally speaking, views on the appropriate use of self-defense are more varied than the range permitted by law. This is of necessity – allowing broad ranges of interpretation in matters that involve inflicting harm on one another isn't conducive to a well-functioning legal system. In ethical theories, the question of self-defense raises slightly different issues than it does in the realm of law. Legally, you have some right to defend your person — though the conditions differ by jurisdiction — and this presumption already diverges from one moral position: pacifism. Pacifists defend the position that harming another person is never justified. Some pacifists emphasize that this lack of justification arises because alternatives to harm are ever-present, and this concern does show up in many self-defense statutes. If someone can avoid using force in order to defend themselves, then this can undermine the justification for the use of force (though in WI, there isn't a "duty to retreat" as there is in other states).

Other pacifists emphasize that the same principle that makes it inappropriate for your assailant to harm you also holds in the case of your harming them. And it gets more complicated because most theorists agree that not all cases of harming someone in order to avoid them harming you are justified. There are limits to when defensive force is permissible even for non-pacifists. Self-defense doesn’t always work as a defense, so to speak.

Imagine if I put myself in the position where I needed to defend myself in the first place. In such circumstances, the role of the "attacker" becomes murkier, and the sense in which I need to defend myself becomes harder to explain. This complicates matters for a number of ethicists. In such a case, if some action of mine could de-escalate the situation or prevent the threat to my safety, then I am not justified in using force to defend myself. Underlying these cases is the idea that we can avoid circumstances in which inflicting harm, or at the very least lethal harm, on assailants becomes necessary. If generalizable, this would undermine the force of the self-defense arguments.

For example: Imagine that I am robbing a house with a firearm, and the homeowner pulls a gun on me, shouting "Make another move and I'll shoot!" I believe the homeowner to be a little trigger-happy and fear for my life. I shoot the homeowner out of this fear, and thus in self-defense. Was I acting permissibly in shooting the homeowner? According to many moral theorists, self-defense doesn't clearly apply here because the homeowner was responding to my use of force. The important feature, arguably, is that I could avoid defending myself by ceasing my aggressive, law-breaking conduct that initiated the exchange. When I threatened the homeowner with lethal force, she used appropriate force in response. Morally speaking, if I stand down and cease posing a threat, the homeowner loses her moral justification for threatening harm to me.

Here the law and these moral theories arrive at similar conclusions (with the Castle Doctrine complicating matters), but with important differences. Legally speaking, breaking a law at the time of defending your safety undermines a claim to self-defense, but not entirely. However, it isn’t purely the lawbreaking that changes the morality of the situation for all ethicists. In this idealized scenario, the threat to my life exists because of my threat to the homeowner. If I stop my threat, I do not need to harm anyone in self-defense.

According to Wisconsin's self-defense law, people are permitted to "use force which is intended or likely to cause death or great bodily harm (if they) reasonably believe that such force is necessary to prevent imminent death or great bodily harm to (themselves)." The key here is what the defendant reasonably believes. If the defendant's lawyers can establish that he had a reasonable belief that he needed to use the force he did to prevent imminent death, his self-defense claim may stand. In Wisconsin, there isn't a duty to retreat before using force. As such, a great deal rests on whether the jury judges that Rittenhouse had a reasonable belief that his use of lethal force was necessary to preserve his life. The jury's judgment will depend on a variety of interpretive questions, since, according to witnesses and video, none of the defendant's victims appears to have been directing lethal force at him, and only one was armed at all. But there is often a distance between what is true and what someone reasonably believes is true.

Eric Zorn, news and politics correspondent for the Chicago Tribune, highlights elements of the scenario from both the legal and moral discussion above: “Did the teen willingly put himself in that fraught milieu and illegally, allegedly, risk a horrific escalation of that danger by carrying a gun on the scene? Yes.” Rittenhouse chose to put himself into a potentially lethal situation. In fact, that the situation was dangerous is his reason for being there. For some theorists, this makes a difference in how morally justified he is in using force against his assailants. He could have avoided the risk to his safety and avoided inflicting harm, similar to the armed burglar example.

Zorn also notes: “What about the context, though? The confrontational, high-adrenaline interactions that led up to the tragic deaths. The night air punctuated by gunshots. Danger all around.” From a legal perspective, and also according to some moral theorists, the relevant context is more narrow in scope. It is the setting in which Rittenhouse killed two people and injured another. Did he reasonably feel his life was threatened then? And was lethal force his reasonable route of defense?

Rittenhouse's lawyers say yes: "In fear for his life and concerned the crowd would either continue to shoot at him or even use his own weapon against him," the lawyers' statement says, "Kyle had no choice but to fire multiple rounds towards his immediate attackers."

But there are further moral and legal issues that the Rittenhouse case represents.

Aside from the question of whether there was a reasonable belief in a lethal threat to his life, Rittenhouse faces further legal scrutiny for illegally carrying a firearm. Further, his behavior exists within a culture that praises violent responses to protests of police violence and, in this case, incited violence in response to them.

Rittenhouse allegedly did a lot of illegal things. The 17-year-old reports being motivated by a call to protect people and businesses in Kenosha, and he arrived with a gun at an auto mechanic's shop on August 25th. His lawyers claim that the 17-year-old's "intent was not to incite violence, but simply to deter property damage and use his training to provide first aid to injured community members." The lawyers also report: "Rittenhouse and others stood guard at a mechanic's shop near the car depot, even after the curfew was in effect." Unfortunately, Rittenhouse's chosen method of "deterring property damage" was standing guard with an assault-style rifle he was not legally permitted to possess in Wisconsin or to carry concealed in his home state of Illinois.

Rittenhouse is facing misdemeanor charges for his illegal possession of the assault-style rifle. Meanwhile, the calls for an armed response to the protests in Kenosha have come under scrutiny. Facebook chief executive Mark Zuckerberg said the "Armed Citizens to Protect our Lives and Property" event, hosted by the Kenosha Guard on Tuesday night to encourage armed people to go to Kenosha, violated the platform's policies and should have been removed. The direct calls for armed citizens to go to Kenosha were seen as inciting violence, and thus inappropriate on social media. We see their impact in Rittenhouse's behavior and in the deaths that resulted.

In response to these protests, besides directing violence at the protestors themselves, there has been an outpouring of praise for the people committing the acts of violence. For example, Rep. Thomas Massie (R-Ky.) praised Rittenhouse's "incredible restraint" at not emptying his magazine into the crowd. And though he admitted to not being as aware of the circumstances of the shooting of Jacob Blake as of the case against Rittenhouse, he nevertheless claimed: "As a 17-year-old, he was legally entitled to have that firearm in his possession. This is 100% self-defense." Likewise, DeAnna Lorraine, a Republican congressional candidate, tweeted: "We need more young people like Kyle Rittenhouse and less like Greta Thunberg." And even President Trump praised Rittenhouse in a tweet: "The only way you will stop the violence in the high crime Democrat run cities is through strength!"

While praise and comparisons to heroes might not rise to the level of incitement — they do not directly encourage another person to commit a crime — they are still dangerous. So, on the other side of the incitement that drove Rittenhouse, there is the encouragement and positive reinforcement that leads to think pieces about an oncoming Civil War.

When the praise heaped onto a vigilante who acted in response to incitement comes from so many sources, the positive reinforcement becomes dangerous in itself. It doesn't constitute incitement, but it continues to deepen cultural battle lines, pairing institutional systems that promote violence with individual citizens suppressing the voices protesting those systems. This encouragement, the incitement, and the people who act on it are a unified voice against change and institutional reform.

This praise is not for someone acting in self-defense. It is for acts of aggression against people rising up against violence and murder. The mixed messaging regarding the case of Kyle Rittenhouse may complicate the case for self-defense. Is he a brave patriot, fighting on the side of law, justice, and the American way, or a scared innocent simply trying to protect himself?

Against Abstinence-Based COVID-19 Policies

black-and-white photograph of group of students studying outside

There are at least two things that are true about abstinence from sexual activity:

  1. If one wishes to avoid pregnancy and STD-transmission, abstinence is the most effective choice, and
  2. Abstinence is insufficient as a policy category if policy-makers wish to effectively promote pregnancy-avoidance and to prevent STD-transmission within a population.

I take it that (1) is straightforward: if someone wishes to avoid the risks of an activity (including sex), then abstention from that activity is the best way to do so. By (2), I simply mean that prescribing abstinence from sexual activity (and championing its effectiveness) is often not enough to convince people to actually choose to avoid sex. For example, the data on the relative effectiveness of various sex-education programs is consistent and clear: those programs that prioritize (primarily or exclusively) abstinence-only lessons about sex are regularly the least effective programs for actually reducing teen pregnancies and the like. Instead, pragmatic approaches to sex education that comprehensively discuss abstinence alongside topics like contraceptive-use are demonstrably more effective at limiting many of the most negative potential outcomes of sexual activity. Of course, some might argue in response that, even if they are less effective, abstinence-only programs are nevertheless preferable on moral grounds, given that they emphasize moral decision-making for their students.

It is an open question whether or not policy-makers should try to impose their own moral beliefs onto the people affected by their policies, just as it is debatable whether good policy-making could somehow produce good people, but the importance of policy-making based on evidence is inarguable. And the evidence strongly suggests that abstinence-based sex education does not accomplish the goals typically laid out by sex education programs. Regarding such courses, Laura Lindberg — co-author of a 2017 report in the Journal of Adolescent Health on the impact of "Abstinence-Only-Until-Marriage" (AOUM) sex ed programs in the US — argues that such an approach is "not just unrealistic…[but]…violates medical ethics and harms young people."

In this article, I'm interested less in questions of sex education than I am in questions of responsibility for the outcomes of ineffective public policies. I think it's uncontroversial to say that, in many cases of pregnancy, the people most responsible for creating a pregnancy (that results from sexual activity) are the sexual partners themselves. However, it also seems right to think that authority figures who knowingly enact policies that are highly unlikely to effectively prevent some undesirable outcome carry at least some responsibility for that resulting outcome (if it's true that the outcome would have probably been prevented if the officials had implemented a different policy). I take it that this concern is ultimately what fuels both Lindberg's criticism of AOUM programs and the widespread support for comprehensive sex-education methods.

Consider now the contemporary situation facing colleges and universities in the United States: despite the persistent spread of the coronavirus pandemic over the previous several months, many institutions of higher education have elected to resume face-to-face instruction in at least some capacity this fall. Across the country, university administrators have developed intricate policies to ensure the safety and security of their campus communities that could, in theory, prevent a need to eventually shift entirely to remote instructional methods. From mask mandates to on-campus testing and temperature checks to limited class sizes to hybrid course delivery models and more, colleges have demonstrated no shortage of creativity in crafting policies to preserve some semblance of normalcy this semester.

But these policies are failing — and we should not be surprised that this is so.

After only a week or two of courses resuming, many campuses (and the communities surrounding them) are already seeing spikes of COVID-19 cases and several universities have already been forced to alter their previous operating plans in response. After one week of classes, the University of North Carolina at Chapel Hill abruptly decided to shift to fully-remote instruction for the remainder of the semester, a decision mirrored by Michigan State University, and (at least temporarily, as of this writing) Notre Dame and Temple University. Others like the University of Iowa, the University of South Carolina, and the Ohio State University have simply pushed ahead with their initial plans, regardless of the rise in positive cases, but the long-term feasibility of such an approach looks bleak. Indeed, as the semester continues to progress, it seems clear that many more colleges will be disrupted by a mid-semester shift, regardless of the policies that they had previously developed to prevent one.

This is, of course, unsurprising, given the realities of life on a college campus. Dormitories, dining halls, and Greek life houses are designed to encourage social gatherings and interactions of precisely the sort that coronavirus-prevention recommendations forbid. Furthermore, the expectation of many college students (fueled explicitly by official university marketing techniques) is that such social functions are a key element of the "college experience." (And, of course, this is aggravated all the more by the general fearlessness commonly evidenced by 18-25 year-olds, which provokes them into riskier behavior than other age groups.) Regardless of how many signs are put up in classrooms reminding people to wear masks and no matter the number of patronizing emails sent to chastise students (or local businesses) into "acting responsibly," it is, at best, naive of university administrators to expect their student bodies to suddenly enter a pandemic-preventing mindset (at least at the compliance rates that would be necessary to actually protect the community as a whole).

On the whole, colleges have pursued COVID-19-prevention policies based on the irrational hope that their students would exercise precisely the sort of abstinence that college administrators know better than to expect (and, for years leading up to this spring, actively discouraged). As with abstinence-based sex education, two things are true here also:

  1. If one wishes to avoid spreading the coronavirus, constantly wearing masks, washing hands, and avoiding social gatherings are crucial behavioral choices, and
  2. Recommending (and even requiring upon pain of punishment) the behaviors described in (1) is insufficient as a policy category if university administrators wish to effectively prevent the spread of the coronavirus on their campuses.

We are already seeing the unfortunate truth of (2) grow more salient by the day.

And, as with sex education, on one level we can rightfully blame college students (and their choices to attend parties or to not wear masks) for these outbreaks on college campuses. But the administrators and other officials who insisted on opening those campuses in the first place cannot sensibly avoid responsibility for those choices or their consequences either. Just as with abstinence-only sex education programs, it seems right to hold officials responsible for policies whose successful implementation is wildly unlikely, no matter how effective those fanciful policies might be if people were to just follow the rules.

This seems especially true in this case given the (in one sense) higher stakes of the COVID-19 pandemic. Because the coronavirus is transmitted far more quickly and easily than STDs or pregnancies, it is even more crucial to create prevention strategies that are more likely to be successful; in a related way, it also makes tracking responsibility for the spread of the virus far more complicated. At least with a pregnancy, one can point to the people who chose to have sex as shouldering much of the responsibility for the pregnancy itself; with COVID-19, a particular college student could follow every university policy perfectly and, nevertheless, contract the virus by simply coming into contact with a classmate who has not. In such a case, it seems like the responsible student can rightfully blame both her irresponsible classmate and the institution which created the conditions of her exposure by insisting that their campus open for business while knowingly opting for unrealistic policies.

Put differently: imagine how different sex education might look if you could "catch" a pregnancy literally just by walking too close to other people. In such a world, simply preaching "abstinence!" would be even less defensible than it already is; nevertheless, that approach is not far from the current state of many COVID-19-prevention policies on college campuses. The only thing this kind of rhetoric ultimately protects is the institution's legal liability (and even that is up for debate).

In early July, the University of Southern California announced that it would offer no in-person classes for its fall semester, electing instead for entirely remote course-delivery options. At the time, some responded to this announcement with ridicule, suggesting that it was a costly overreaction. Nevertheless, USC’s choice to ensure that its students, staff, and faculty be protected by barriers of distance has meant not only that its semester has been able to proceed as planned, but that the university has not been linked to the same level of case spikes as other institutions (though, even with such a move, outbreaks are bubbling).

As with so much about the novel coronavirus, it remains to be seen what the full extent of its spread will look like. But one thing is clear already: treating irresponsible college students as scapegoats for poorly-conceived policies that justified the risky move of opening (or mostly-opening) campuses is transparently wrong. It oversimplifies the complicated relationship of policy-makers and constituents, even as it misrepresents the nature of moral responsibility for public action, particularly on the part of those in charge. The adults choosing to attend college parties are indeed to blame for doing so, but those parties wouldn’t be happening at all if other adults had made different choices about what this semester was going to look like in the first place.

In short, if college administrators really expect abstinence to be an effective tool to combat COVID-19, then they should be the ones to use it by canceling events, closing campuses, and wrapping up this semester (and, potentially, the next) online.