
The Ethics of “Let’s Go, Brandon”

photograph of Biden on phone in Oval Office

On Christmas Eve, Joe and Jill Biden were taking holiday phone calls as a part of an annual tradition of the North American Aerospace Defense Command (NORAD) to celebrate Christmas by “tracking” Santa Claus on his trip around the globe; at the end of one conversation, the Bidens were wished “Merry Christmas and ‘Let’s Go, Brandon’” by Jared Schmeck, a father calling with his children from Oregon. In recent months, after a crowd at a NASCAR event chanting “F**k Joe Biden” was described by a reporter as saying “Let’s Go, Brandon” (seemingly referring to the winner of the race), the sanitized slogan has been wholeheartedly adopted by people seeking to (among other things) express dissatisfaction with the current president. Plenty of others have offered explanations about the linguistic mechanics of the coded phrase (for what it’s worth, I think it’s an interesting example of a conventional implicature), but this article seeks to consider a different question: did Schmeck do something unethical when he uttered the phrase directly to Joe Biden?

There are at least two factors here that we need to keep distinct:

  1. Did Schmeck do something unethical by uttering the phrase “Let’s Go, Brandon”?
  2. Did Schmeck do something unethical by uttering “Let’s Go, Brandon” directly to Joe Biden?

The first point is an interesting question for philosophers of so-called “bad” language, often categorized as profanity, obscenity, vulgarity, or other kinds of “swear” words. It’s worth considering why such pejoratives are treated as taboo or offensive in various contexts (but, depending on various factors, not all contexts) and scores of philosophers have weighed in on such debates. But, arguably, even if you think that there is something unethical about using the word ‘f**k,’ the utterance “Let’s Go, Brandon” side-steps many relevant concerns: regardless of what Schmeck meant by his utterance, he technically didn’t say a word that would, for example, get him fined by the FCC. After all, there’s nothing innately offensive about the terms ‘let’s,’ ‘go,’ or ‘Brandon.’ In much the same way that a child who mutters “gosh darn it,” “what the snot,” or “oh my heck” might expect to dodge a punishment for inappropriate speech, saying “Let’s Go, Brandon” might be a tricky way to blatantly communicate something often considered to be offensive (the dreaded “f-word”) while technically abiding by the social prohibition of the term’s utterance.

This move — of replacing some offensive term with a vaguely similar-sounding counterpart — is sometimes referred to as “denaturing” profanity with a euphemism (including even with emoji): for example, the phrases “what the frick?” and “what the f**k?” are not substantively different in meaning, but only the latter will typically be censored. However, over time, this kind of “minced oath” often ends up taking on the conventional meaning of the original, offensive term (in a process that is itself sometimes described as a “euphemism treadmill”): that is to say, at some point, society might well decide to bleep “frick” just as much as its counterpart (although, actually, social trends largely seem to be moving in the opposite direction). Nevertheless, although “Let’s Go, Brandon” is only a few months old, its notoriety might already be enough to suggest that it has taken on some of the same offensive qualities of the phrase it’s meant to call to mind. If you think that there’s something unethical about uttering the phrase “F**k Joe Biden,” then you might also have a reason to think that “Let’s Go, Brandon” is likewise problematic.

Notably, the widespread use of “Let’s Go, Brandon” in many places typically opposed to profanity — such as churches, airplanes, and the floor of the House of Representatives — suggests that people are not treating the phrase as being directly vulgar, despite its clear connection to the generally-offensive ‘f**k.’

Which brings us to the second point: was Schmeck wrong to utter “Let’s Go, Brandon” directly to Biden on Christmas Eve?

Again, it seems like there are at least two factors to consider here: firstly, we might wonder whether or not Schmeck was being (something like) rude to Biden by speaking the anti-Biden slogan in that context. If you think that profanity use is simply offensive and that “Let’s Go, Brandon” is a denatured form of profanity, then you might have a reason to chastise Schmeck (because he almost said a “bad word” in an inappropriate context). If Schmeck had instead directly uttered “Merry Christmas and ‘F**k Joe Biden,’” then we might at least criticize the self-described Christian father (whose small children were with him on the call) as being impolite. But if, as described above, the meaning of “Let’s Go, Brandon” is less important than the technical words appearing in the spoken sentence, then you might think that Schmeck’s actual utterance is more complicated. Initially, Schmeck suggested that he simply intended to make a harmless, spur-of-the-moment joke (a claim made less credible by the fact that he recorded the conversation for his YouTube page, and by his later comments on Steve Bannon’s podcast) — without additional context, interpreting the ethical status of the initial utterance might be difficult.

But, secondly, we would do well to remember that Joe Biden is the President of the United States, and some might suppose that uttering offensive speech (whether overtly or covertly) fails to show the office of the president the respect that it deserves. Conversely, we might deny that the office “deserves” respect simpliciter at all: the fact that Biden is an elected politician, and that the United States boasts a long tradition of openly and freely criticizing our political leaders — including in notable, public displays — arguably absolves Schmeck from ethical criticism in this case. You might still think that it is a silly, disingenuous, or overly-complicated way to make an anti-Biden jab, but these are aesthetic (not ethical) critiques of Schmeck’s utterance.

In a way, Schmeck seems to have evoked something like this last point after he started receiving criticisms for his Christmas Eve call, arguing that he was well within his First Amendment rights to freely speak to Biden as he did. Indeed, this claim (unlike his initial characterization of the comment as having “meant no disrespect”) seems correct — even as it also fails to touch our earlier question of whether or not Schmeck’s actions were still impolite (and therefore subject to social reactions). It is fully possible to think that Schmeck did nothing illegal by spiking the NORAD Santa Tracker with a political pseudo-slur, even while also thinking that he did something that, all things considered, he probably shouldn’t have done (at least not in the way that he did it). It bears repeating: the First Amendment protects one’s ability to say what they generally want to say; it does not prevent potential social backlash from those who disagree (and also enjoy similar free-speech protections).

All things considered, though he’s reportedly considering running for office, Jared Schmeck’s fifteen minutes of fame have likely passed. Still, his Santa-based stunt offers an interesting look at a developing piece of applied philosophy of language: regardless of the ethical questions related to “Let’s Go, Brandon,” the phrase is certainly not going anywhere anytime soon.

Meaning-as-Use and the Punisher’s New Logo

photograph of man in dark parking garage with Punisher jacket

Recently, Marvel Comics announced plans to publish a new limited-series story about Frank “The Punisher” Castle, the infamous anti-villain who regularly guns down “bad guys” in his ultra-violent vigilante war on crime; set to premiere in March 2022, early looks at the comic’s first issue have revealed that the story will see Castle adopt a new logo, trading in the iconic (and controversial) skull that he’s sported since his introduction in the mid-70s. While some Marvel properties (like Spider-Man or the X-Men) could fill a catalog with their periodic redesigns, the Punisher’s look has remained roughly unchanged for almost fifty years.

From a business perspective, rebranding is always a risky move: while savvy designers can capture the benefits of adopting a trendy new symbol or replacing an out-of-date slogan, such opportunities must be balanced against the potential loss of a product’s identifiability in the marketplace. Sometimes this is intentional, as in so-called “rehabilitative” rebrands that seek to wash negative publicity from a company’s image: possible examples might include Facebook’s recent adoption of the name “Meta” and Google’s shift to “Alphabet, Inc.” But consider what happens when a rebrand fails: when The Gap changed its simple blue logo after twenty years, the company faced such a powerful backlash that it ditched the attempted rebrand after just one week; when Tropicana traded pictures of oranges (the fruit) for simple orange patches (of the color) on its orange juice boxes, it saw a 20% drop in sales within just one month. Similar stories abound from a wide variety of industries: British Airways removing the Union flag, Pizza Hut removing the word ‘pizza,’ and Radio Shack removing the word ‘radio’ from their logos were all expensive, failed attempts to re-present these companies to consumers in new ways. (As an intentional contrast, IHOP’s temporary over-the-top rebrand to “the International House of Burgers” was a clever, and effective, marketing gimmick.)

So, why is Marvel changing the Punisher’s iconic skull logo (one of its most well-known character emblems)?

Although it looks like the new series will offer an in-universe explanation for Castle’s rebrand, the wider answer has more to do with how the Punisher’s logo has been adopted in our non-fictional universe. For years, like the character himself, the Punisher’s skull emblem has been sported by numerous groups also associated with the violent use of firearms: most notably, police officers and military servicemembers (Chris Kyle, for example, the Navy SEAL whose biography was adapted into the Oscar-winning film American Sniper, was a Punisher fan who frequently wore the logo). Recent years have seen a variety of alt-right groups deploy variations of the skull symbol in their messaging and iconography, including sometimes in specific opposition to the Black Lives Matter movement, and multiple protests and riots (including the attempted insurrection in Washington D.C. last January) saw participants wearing Frank Castle’s emblem. In short, the simple long-toothed skull has taken on new meaning in the 21st century — a meaning that Marvel Comics might understandably want to separate from their character.

Plenty of philosophers of logic and language have explored the ways in which symbols mean things in different contexts, and the field of semiotics is specifically devoted to exploring the mechanics of signification — a field that can, at times, grow dizzyingly complex as theorists attempt to capture the many different ways that symbols and signs arise in daily life. But the case of the Punisher’s skull shows at least one crucial element of symbolization: the meaning of some sign is inextricably bound up in how that symbol is used. Famously codified by the Austrian philosopher Ludwig Wittgenstein (in §43 of his Philosophical Investigations), the meaning-as-use theory grounds the proper interpretation of a symbol firmly within the “form of life” in which it appears. So, while the skull logo might have initially been intended by its creator to symbolize the fictional vigilante Frank Castle, it now identifies violent militia groups and other real-world political ideologies far more frequently and publicly — its use has changed and so, too, has its meaning. Marvel has attempted to bring legal action against producers of unauthorized merchandise using the skull symbol, and Gerry Conway, the Punisher’s creator, has explicitly attempted to wrest the symbol’s meaning back from control of the alt-right, but the social nature of a symbol’s meaning has all but prevented such attempts at re-re-definition. Consequently, Marvel might have little choice but to give Frank Castle a new logo.

For another example of how symbols change over time, consider the relatively recent shift in meaning for the hand gesture made by touching the pointer finger and thumb of one hand together while stretching out the other three fingers: whereas, for many years, the gesture has been a handy way to signal “okay,” white supremacists have recently been using the same gesture to symbolize the racist idea of “white power.” To the degree that the racist usage has become more common, the meaning of the symbol has become far more ambiguous — leaving many people reluctant to flash the hand gesture, lest they unintentionally communicate some white supremacist idea. The point here is that an individual person’s intentions are not the only thing that matters for understanding a symbol: the cultural context (or, to Wittgenstein, “form of life”) of the sign is at least as, if not more, important.

So, amid calls to stop selling Punisher-related merchandise (and with speculation abounding that the character might be re-introduced to the wildly lucrative Marvel Cinematic Universe), it makes sense that Marvel would want to avoid further political controversy and simply give the Punisher a fresh look. But what a vaguely-Wittgensteinian look at the skull logo suggests is that it’s been years since it was simply “the Punisher’s look” at all.

The Texas Heartbeat Act and Linguistic Clarity

black-and-white photograph of Texas State Capitol Building

On September 1st, S.B. 8, otherwise known as the Texas Heartbeat Act, came into force. This Act bars abortions once fetal cardiac activity is detectable by ultrasound. While the specific point at which this activity can be identified is challenging to pin down, it most often occurs around the six-week mark. Past this point, the Act allows private citizens to sue those who provide abortions or who ‘aid or abet’ a procedure – this includes everyone from abortion providers to taxi drivers taking people to clinics. If the suit is successful, not only can the claimant recover their legal fees, but they also receive $10,000 – all paid by the defendant.

The introduction of this law raises numerous concerns. These include (but are certainly not limited to) whether private citizens should be rewarded for enforcing state law, the fairness of the six-week mark given that most people won’t know they’re pregnant at this point, the lack of an exception for pregnancies resulting from rape or incest, and whether the law is even constitutional. However, in this piece, I want to draw attention to the Act’s language. Specifically, I want to look at two key terms: ‘fetal heartbeat’ and ‘fetus.’

Fetal Heartbeat

At multiple points within the Act, reference is made to the fetal heartbeat requiring detection. This concept is so central to the Act that not only does ‘heartbeat’ feature in its title, but it is also the very first definition provided – “(1) ‘Fetal heartbeat’ means cardiac activity or the steady and repetitive rhythmic contraction of the fetal heart within the gestational sac.” You would think that such terminology is correct and accurate. After all, accuracy is essential for all pieces of legislation, let alone one that has such crucial and intimate ramifications. Indeed, the Act itself indicates that the term is appropriate as, in the Legislative Findings section, it states, “(1) fetal heartbeat has become a key medical predictor that an unborn child will reach live birth.”

However, there is a problem here. For something to have a heartbeat, it must first have the valves whose opening and closing produce the tell-tale ‘thump-thump’; no valves, no heartbeat. While this may seem obvious (indeed, I think it is), it appears to be something the Act’s creators have… overlooked.

At six weeks, the point at which cardiac activity is typically detectable and abortions become prohibited, a fetus doesn’t have these valves. While a rudimentary structure will be present, one that typically develops into a heart, this structure doesn’t create a heartbeat. So, if you put a stethoscope on a pregnant person’s stomach at this point, you wouldn’t hear the beating of a heart. Indeed, when someone goes in for an ultrasound and hears something that sounds like a heartbeat, that sound is generated by the ultrasound machine based upon the cardiac activity it detects. As such, the Heartbeat Act concerns itself with something that is entirely incapable of producing a heartbeat.

For some, this may seem like a merely semantic issue. After all, the Act makes clear what it considers a ‘fetal heartbeat’ when it defines the term as cardiac activity. You may think that I’m being overly picky and that the two amount to roughly the same thing at the end of the day. You might argue that while this activity may not result in the same noise you would hear in a fully developed person, it still indicates a comparable biological function. However, the term ‘heartbeat’ is emotively loaded in a way that ‘cardiac activity’ isn’t, and this loading is essential to the discussion at hand.

For centuries, a heartbeat (alongside breath) was the defining quality that signified life. Thus, someone was dead when their heart irrevocably stopped beating. However, with developments in medical technology, most notably transplantation, this cardiopulmonary definition of death became less tenable. After all, undergoing a heart transplant means that, at some point, you’ll lack a heartbeat. Yet, saying that person is dead would seem counterintuitive, as the procedure aims to, and typically does, save the organ’s recipient. As a result, definitions of death started to focus more on the brain.

By treating cardiac activity as synonymous with a heartbeat, the creators of the Act seek to draw upon this historical idea of the heartbeat as essential for life. By appealing to the emotive idea that a heartbeat is detectable at six weeks, an attempt is made to draw the Act’s ethical legitimacy not from scientific accuracy but from emotional force. Doing so anthropomorphizes something which is not a person. The phrase ‘fetal heartbeat’ seeks to utilize our familiarity with the coupling of personhood and that tell-tale ‘thump-thump.’ But it is important to remember that the entity in question here does not have a heartbeat. Heck, cardiac activity, which is at its core electrical activity, doesn’t even indicate a functional cardiovascular system or a functional heart.

Fetus

So far in this piece, I have used the same terminology as the Act to describe the entity in question, that being the word ‘fetus.’ However, much like the use of ‘fetal heartbeat,’ the Act’s use of this term is inaccurate and smuggles in deceptive emotive rhetoric. Unlike ‘fetal heartbeat,’ however, ‘fetus’ is at least a scientific term.

There are, roughly speaking, three stages of prenatal development: (i) germinal, where the entity is nothing more than a clump of cells (0 – 2 weeks); (ii) embryonic, where the cell clump starts to take on a human form (3 – 8 weeks); and (iii) fetal, where further refinement and development occur (9 weeks – birth).

I’m sure you can already spot the issue here. If cardiac activity typically occurs around the six-week mark, the point at which the Act prohibits abortions, then this boundary falls squarely in the embryonic, not the fetal, stage. Thus, using the term ‘fetus’ throughout the Act is scientifically inaccurate at best, and dangerously misleading at worst. Once again, you might wonder why this matters and think I’m making a bigger deal of this than it needs to be. After all, it’s only a couple of weeks out of step with the scientific consensus. However, as with ‘fetal heartbeat’ (a term that is now doubly inaccurate, as it refers to neither a fetus nor a heartbeat), the term ‘fetus’ comes packaged with emotional baggage.

Describing the developing entity as a fetus evokes images of a human-like being, one that resembles how we are after birth, and makes it easier to ascribe it some degree of comparable moral worth. But this is not the case. An embryo, around the six-week point, may possess some human-like features. However, it is far from visually comparable to a fully formed person, and it is this point that the Act’s language obfuscates. Describing the embryo as a fetus is to try to draw upon the imagery the latter evokes: to make you think of a baby-like being developing in a womb and to push the belief that abortion is a form of murder.

Wrapping it up

It would seem reasonable to claim that accuracy is essential in our philosophical reasoning and our legal proceedings. We want to understand the world as it is and create systems that are best suited for the challenges thrown at them. Key to this is the use of appropriate language. Whether deliberate or not, inaccurate terminology makes it harder to act morally, as inappropriate assumptions often lead to inappropriate results.

The moral status of the embryo and fetus is a topic that has been debated for centuries, and I would not expect it to be unanimously resolved anytime soon. However, using incorrect language as a means of eliciting a response built solely on the passions is undoubtedly not going to help. Laws need to describe the things they are concerned with accurately, and the Texas Heartbeat Act fails in this task.

“Fake News” Is Not Dangerously Overblown

image of glitched "FAKE NEWS" title accompanied by bits of computer code

In a recent article here at The Prindle Post, Jimmy Alfonso Licon argues that the hype surrounding the problem of “fake news” might be less serious than people often suggest. By pointing to several recent studies, Licon highlights that concerns about social standing actually prevent a surprisingly large percentage of people from sharing fake news stories on social media; as he says, “people have strong incentives to avoid sharing fake news when their reputations are at stake.” Instead, it looks like many folks who share fake news do so because of pre-existing partisan biases (not necessarily because of their gullibility about or ignorance of the facts). If this is true, then calls to regulate speech online (or elsewhere) in an attempt to mitigate the spread of fake news might end up doing more harm than good (insofar as they unduly censor otherwise free speech).

To be clear: despite the “clickbaity” title of this present article, my goal here is not to argue with Licon’s main point; the empirical evidence does indeed consistently suggest that fake news spreads online not simply because individual users are fooled into believing a fake story’s content, but rather because the fake story serves other purposes for its sharers (most notably, expressing pre-existing partisan allegiances), regardless of whether they believe it to be true.

On some level, this is frustratingly difficult to test: given the prevalence of expressive responding and other artifacts that can contaminate survey data, it is unclear how to interpret an affirmation of, say, the (demonstrably false) “immense crowd size” at Donald Trump’s presidential inauguration — does the subject genuinely believe that the pictures show a massive crowd or are they simply reporting this to the researcher as an expression of partisan allegiance? Moreover, a non-trivial amount of fake news (and, for that matter, real news) is spread by users who only read a story’s headline without clicking through to read the story itself. All of this, combined with additional concerns about the propagandistic politicization of the term ‘fake news,’ as when politicians invoke the concept to avoid responding to negative accusations against them, has led some researchers to argue that the “sloppy, arbitrary” nature of the term’s definition renders it effectively useless for careful analyses.

However, whereas Licon is concerned about potentially unwarranted threats to free speech online, I am concerned about what the reality of “fake news” tells us about the nature of online speech as a whole.

Suppose that we are having lunch and, during the natural flow of our conversation, I tell you a story about how my cat drank out of my coffee cup this morning; although I could communicate the details to you in various ways (depending on my story-telling ability), one upshot of this speech act would be to assert the following proposition:

1. My cat drank my coffee.

To assert something is to (as explained by Sanford Goldberg) “state, report, contend, or claim that such-and-such is the case. It is the act through which we tell others things, by which we inform an audience of this-or-that, or in which we vouch for something.” Were you to later learn that my cat did not drink my coffee, that I didn’t have any coffee to drink this morning, or that I don’t live with a cat, you would be well within your rights to think that something has gone wrong with my speech (most basically: I lied to you by asserting something that I knew to be false).

The kinds of conventions that govern our speech are sometimes described by philosophers of language as “norms” or “rules,” with a notable example being the knowledge norm of assertion. When I assert Proposition #1 (“My cat drank my coffee”), you can rightfully think that I’m representing myself as knowing the content of (1) — and since I can only know (as opposed to merely believe) something that is true, I furthermore am representing (1) as true when I assert it. This, then, is one of the problems with telling a lie: I’m violating how language is supposed to work when I tell you something false; I’m breaking the rules governing how assertion functions.

Now to add a wrinkle: what if, after hearing my story about my cat and coffee, you go and repeat the story to someone else? Assuming that you don’t pretend like the story happened to you personally, but you instead explain how (1) describes your friend (me) and you’re simply relaying the story as you heard it, then what you’re asserting might be something like:

2. My friend’s cat drank his coffee.

If this other person you’re speaking to later learns that I was lying about (1), that means that you’re wrong about (2), but it doesn’t clearly mean that you’re lying about (2) — you thought you knew that (2) was true (because you foolishly trusted me and my story-telling skills). Whereas I violated one or more norms of assertion by lying to you about (1), it’s not clear that you’ve violated those norms by asserting (2).

It’s also not clear how any of these norms might function when it comes to social media interaction and other online forms of communication.

Suppose that instead of speaking (1) in a conversation, I write about it in a tweet. And suppose that instead of asserting (2) to someone else, you simply retweet my initial post. While at first glance it might seem right to say that the basic norms of assertion still apply as before here, we’ve already seen that fake news spreads precisely because internet users seemingly aren’t as constrained in their digital speech acts. Maybe you retweet my story because you find it amusing (but don’t think it’s true) or because you believe that cat-related stories should be promoted online — we could imagine all sorts of possible reasons why you might retransmit the (false) information of (1) without believing that it’s true.

Some might note that offline communication can often manifest some of these non-epistemic elements, but C. Thi Nguyen points out how the mechanics of social media intentionally encourage this kind of behavior. Insofar as a platform like Twitter gamifies our communication by rewarding users with attention and acclaim (via tools such as “likes” and “follower counts”), it promotes information spreading online for many reasons beyond the basic knowledge norm of assertion. Similarly, Lucy McDonald argues that this gamification model (although good for maintaining a website’s user base) demonstrably harms the quality of the information shared throughout that platform; when people care more about attracting “likes” than communicating truth, digital speech can become severely epistemically problematic.

Now, add the concerns mentioned above (and by Licon) about fake news and it might be easy to see how those kinds of stories (and all of their partisan enticements) are particularly well-suited to spread through social media platforms (designed as they are to promote engagement, regardless of accuracy).

So, while Licon is right to be concerned about the potential over-policing of online speech by governments or corporations interested in shutting down fake news, it’s also the case that conversational norms (for both online and offline speech) are important features of how we communicate — the trick will be to find a way to manifest them consistently and to encourage others to do the same. (One promising element of a remedy — that does not approximate censorship — involves platforms like Twitter explicitly reminding or asking people to read articles before they share them; a growing body of evidence suggests that these kinds of “nudges” can help promote more epistemically desirable online norms of discourse in line with those well-developed in offline contexts.)

Ultimately, then, “fake news” seems like less of a rarely-shared digital phenomenon and more of a curiously noticeable indicator of a more wide-ranging issue for communication in the 21st century. Rather than being “dangerously overblown,” the problem of fake news is a proverbial canary in the coal mine for the epistemic ambiguities of online speech acts.

On the Weaponization of Forgiveness

black and white photograph of praying hands

WARNING: The following article contains discussions of sexual assault and other violent crimes, including the sexual abuse of minors.

On April 23rd, former reality television stars Josh and Anna Duggar posted a gender reveal for their seventh child on Instagram, happily announcing Anna’s pregnancy; six days later, Josh Duggar was arrested and charged with downloading and possessing child pornography. At Duggar’s detention hearing, federal authorities testified that they found hundreds of images of sexually abused children, including toddlers, on one of Duggar’s office computers in a case file described by one agent as being in the “top five of the worst of the worst that I’ve ever had to examine.” Although software was installed on this computer to track Duggar’s activity (and regularly inform his wife of his internet searches), additional software had been installed to circumvent these measures. Josh Duggar pleaded “not guilty” to the charges and has been released on bond to the custody of family friends pending his trial in July.

This is not the first time that Josh Duggar — son to former Arkansas state representative Jim Bob Duggar — has made national headlines. In 2015, In Touch magazine published copies of a 2006 police report indicating that Duggar had repeatedly sexually molested five minors when he was fourteen years old; the ensuing scandal, worsened by the fact that Duggar’s father had leveraged his political capital to protect his son from consequences (despite several of Duggar’s sisters being among his victims), led to Duggar resigning his position as the executive director of the Family Research Council (a Christian lobbying organization). Additionally, in the wake of the controversy, TLC chose to cancel 19 Kids and Counting, the popular reality show portraying the lifestyle of Jim Bob Duggar’s large family. Several months later, hackers exposed user data from AshleyMadison.com, a dating site that markets itself towards “cheating spouses” seeking extramarital affairs; Josh Duggar was one of several celebrities revealed to have paid for multiple accounts with the service.

In his response to these previous scandals, Duggar apologized in 2015 for his “wrongdoing” as a teenager and said that he had “sought forgiveness from those I had wronged and asked Christ to forgive me and come into my life.” Regarding his infidelity, Duggar said he had been “the biggest hypocrite ever” and explained that he had developed a “secret addiction” to pornography that led him to become “unfaithful to [his] wife.” As his confession continues, he says: “I am so ashamed of the double life that I have been living and am grieved for the hurt, pain and disgrace my sin has caused my wife and family, and most of all Jesus and all those who profess faith in Him.” Duggar’s 2015 statement finishes with the following: “I humbly ask for your forgiveness. Please pray for my precious wife Anna and our family during this time.”

At this point, apart from his court plea, Duggar has been silent about his 2021 arrest, but his parents released a short statement asking for prayer and reaffirming their commitment to their family.

Although it might seem like a surprising topic to consider, philosophers have had much to say about the phenomenon of forgiveness that Duggar’s past statements repeatedly invoke. Some have analyzed the emotional elements of forgiveness to, among other things, define the necessary and sufficient conditions for actions that qualify as actually bestowing “forgiveness” on transgressors. (If I say the words “I forgive you” while still harboring resentment, have I truly forgiven you?) Other academics have focused on questions of standing for acts of forgiveness: for example, if Calvin pulls Susie’s hair, it seems like only Susie could rightfully forgive Calvin (should she choose to do so) — no matter how much Rosalyn might insist that she forgives Calvin for pulling Susie’s hair, it seems like Rosalyn lacks the proper standing to forgive the offense. However, this scenario raises another question: what about acts of religious forgiveness, in particular those connected with receiving forgiveness from God? (Could God forgive Calvin on Susie’s behalf? Or has Calvin somehow wronged both Susie and God such that God has standing to forgive Calvin in this case? Or is something else going on here?) And what about obligations to forgive — are there ever duties to do so? Additionally, should forgiveness itself be seen as a virtue?

Indeed, the philosophy of forgiveness can be a rich field to plow.

I think that the Duggar case demonstrates another interesting feature of forgiveness and how it functions as a sociopolitical kind of speech act: namely, one that triggers certain social expectations (and, perhaps, even duties) to view the speaker from a certain valenced perspective (in a manner similar to what J.L. Austin describes as a “behabitive” speech act). When Josh Duggar references his past sins and explains how he has already sought “Christ’s forgiveness,” he is not explicitly obligating people to likewise forgive him for his actions — however, for a certain subset of Duggar’s audience, he is implicitly indicating that they should forgive him on their own. According to Duggar’s religion, Christ’s forgiveness is freely given to all who ask for it: for anyone who might treat Jesus as a moral exemplar (and ask “What would Jesus do?”), Duggar’s invocation of his having already sought divine absolution is an implicit appeal to the Christians hearing his confession that they should do likewise.

In this way, Duggar’s deployment of Christian terminology (like asking Jesus to “come into my life”) functions as what philosopher Jennifer Saul has called a “dogwhistle” because it has multiple layers of meaning, but only certain people in a given audience will be able to fully decode the deeper message. On its face, hearing that someone asked Jesus to “come into their life” might be easily understood as a metaphorical way to recognize Jesus’ influence on the speaker; for Christians — particularly fundamentalist Protestants like Duggar — this phrase carries significant theological meaning with considerable baggage automatically communicated implicitly to anyone who understands the code. And even if audience members don’t calculate the full implicatum of Duggar’s words (“Jesus has forgiven me for X, therefore you should not hold X against me”), they might nevertheless recognize Duggar as a member of their own social group in a manner that often results in the triggering of various in-group biases.

My point is not that Josh Duggar (or anyone else who speaks in a similar fashion) is necessarily trying to manipulate their audience by evoking Christian (or otherwise partisan) terminology; importantly, dog whistles (and other sorts of covert speech acts) can easily be used by speakers without their realizing that they are doing so. Nevertheless, when such words function to effectively manipulate the emotions and perceptions of audience members, we would do well to pay more attention to their operation.

Consider what happened in 2015: various other celebrity Christians, including former Arkansas governor Mike Huckabee, rushed to Duggar’s defense, insisting that, although Duggar’s actions were indeed terrible, his “mistakes” had been addressed and the families involved should be protected from the “blood-thirsty media” looking for a scandal. Pundit Matt Walsh argued that “progressives” were the real hypocrites in this case (because they were allegedly only looking to discredit a prominent Christian family). Whether or not such charges hold water is beside the present point: if Duggar’s statement functioned as I’ve suggested (and indeed led certain members of his audience, like Huckabee and Walsh, to implicitly recognize a duty to support their fellow Christian), then these partisan responses are unsurprising.

In short, I’m suggesting that public statements mentioning God and forgiveness (which have been made by everyone from former President Bill Clinton to Kanye West) can work to identify the speaker as an ally or member of a particular subculture or sect. In much the same way that my saying “Live long and prosper” or “May the Force be with you” entitles my audience to make certain assumptions about my background or social position (insofar as they might think I’m a member of certain sci-fi fandoms), deploying specific language — like Duggar’s “Christianese” discussing his sins — works similarly. When such associations might alter interpretations or feelings about violent or otherwise unjust events, said language should be analyzed more carefully.

To date, with the exception of his lawyers and family members, no one has publicly jumped to Josh Duggar’s defense. However, he has been released from jail to await his July trial in the custody of Lacount Reber, who was described in court as a “close friend” of the Duggars. Mr. Reber is a pastor in northwest Arkansas.

How Should One Call It Like It Is?

photograph of threatening protestor group with gas masks

This week, in response to the Capitol attack, many have urged that we “call the event what it is.” Given what took place in Washington, perhaps the most prominent moral question facing everyone is: how should one describe something? This was the case even before the 6th, when the details of Trump’s call to the Georgia Secretary of State became public and it became known that he wanted to “find” votes. Was that an attempt to intimidate a public official into overturning an election, or merely the innocent effort of a person to rectify a perceived slight? Following the 6th, this type of question gained new importance. How should we describe such an event? Was it an attempted coup? A protest? An insurrection? Domestic terrorism? How do we describe the day? Did the president hold a rally whose heated rhetoric got a crowd out of control, or did he unleash a mob on Congress with the intention of preventing it from following the Constitution? Answering a question like “how should we describe such events?” reveals just how complicated a moral problem language can be.

In his account of inquiry, philosopher John Dewey argued that the nature of any judgment is to be able to link some selected content (a subject) to some selected quality of characterization (a predicate). His central point is that determining how to characterize the predicate and how to characterize the subject is the work of inquiry; neither is simply given to us in advance and our own inquiries require us to appraise how we should characterize a subject and predicate in relationship to each other. Moral inquiry is no different, and thus whether we characterize the people who invaded the Capitol as protestors or insurrectionists depends on what appraisals we make about what information is relevant in the course of moral inquiry. Of course, one of the means that society has at its disposal to do this work is the legal system.

The question about what legally took place is complicated. For example, does the storming of the Capitol constitute domestic terrorism? Despite some, including President-elect Biden, calling the act sedition, in reality many of those who participated may only be legally guilty of trespassing (though there may be a stronger case against particular individuals who could be charged with seditious conspiracy and assaulting police). Even for the president, and the many in Congress who spread lies about the election and stoked the crowd before the riot, it isn’t abundantly clear that they can be held legally responsible. Legally speaking, were the president and his supporters in Congress merely exercising their First Amendment right to free speech, or were they participating in an attempted coup? Again, it is legally complicated, with many precedents setting a high bar to prove such charges in court.

But a legal determination is only one way of evaluating the situation. For example, in addressing whether the attack constitutes domestic terrorism, a recent Vox article points out, “It’s useful to think about terrorism as three different things: a tactic, a legal term, and a political label.” In each case, the application of the term requires paying attention to different contexts and points of interest. Morally speaking, we will each have to determine how we believe the events of this week should be characterized. But, as a moral declaration, how do we make such determinations? Outside of mere political rhetoric, when does it become appropriate to label someone a “fascist”? At what point does a protest become a “coup attempt”? Should we call the people who stormed the Capitol “terrorists,” “insurrectionists,” “protestors,” or, as others have called them, “patriots”? Were Trump and his supporters merely expressing grievances over an election that many of them genuinely believe was fraudulent?

One way of trying to come to a justified determination is to compare the situation to similar examples from the past. Case-based reasoning, or casuistry, may be helpful in such situations because it allows us to compare this case to other cases to discover commonality. But what cases should one choose to compare it with? For example, is what happened on the 6th similar to Napoleon storming the French legislature? Napoleon arranged a special session and used bribery, propaganda, and intimidation to get the legislature to put him in charge and then cleared them out by force when they refused to step aside. Or is this case more similar to the crisis in Bolivia? International scholars have been divided over whether that was a coup or a popular uprising following assertions of a rigged election.

Unfortunately, such reasoning is problematic because it all depends on which elements we choose to emphasize and which similarities and differences we think most relevant. Do we focus on the fact that many of these people were armed? Do we focus on the political rhetoric compared to other coups? Does it matter whether the crowd had a coherent plan? It’s worth pointing out that Republicans and Trump supporters won’t necessarily make the same connections: 68% of Republicans do not believe the storming of the Capitol was a threat to democracy, and 45% of Republicans even approve of it. As YouGov points out, “the partisan difference in support could be down to differing perceptions of the nature of the protests.” Comparing this case to others is thus problematic because cases like this do not come with a label, making it easy to draw comparisons that are politically motivated and logically circular rather than morally justified. As G.E. Moore noted, “casuistry is the goal of ethical investigation. It cannot be safely attempted at the beginning of our studies, but only at the end.”

What alternative is there to comparing cases? One could assert a principle stating necessary and sufficient conditions. For example: if X acts in a way that encourages or causes the government to be unable to fulfill its functions, then X is engaging in a coup. The problem with such principles, just as with casuistry, is the temptation to engage in circular reasoning: one must describe the situation in just such a way for the principle to apply. Perhaps the answer is not to focus on what happened, but on the threat that may still exist, and to adopt an inductive risk strategy. Even if the benefit of historical hindsight may one day lead us to say otherwise, we may be justified in asserting that the attack was an attempted coup because of the extremely high risks of getting it wrong. This requires us to be forward-looking toward future dangers rather than focusing on past cases.

In other words, given the gravity of the possible threat, it may be morally justified to overreact to a belief that may turn out to be false in order to prevent something bad from happening. By the same token, a Trump supporter who believes that the election was rigged (but is ultimately committed to democracy despite their mistaken beliefs) would be in a worse position for underreacting to an attempted coup if they are wrong about the election and about Trump’s intentions. Such judgments require a careful appraisal of the available evidence against the future risks of action or inaction. However, given that the population does not see this situation in the same light, the need for clear reasons, standards, and justifications that can be understood and appreciated by all sides becomes all the more important.

Under Discussion: Dog Whistles, Implicatures, and “Law and Order”

image of someone whispering in an ear

This piece completes our Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Law and Order.

For the last several days, The Prindle Post has explored the concept of “law and order” from multiple philosophical and historical angles; I now want to think about the phrase itself — that is, I want to think about what is meant when the words ‘law and order’ appear in a speech or conversation.

On its face, ‘law and order’ is a term that simply denotes whether or not a particular set of laws are, in general, being obeyed. In this way, politicians or police officers who reference ‘law and order’ are simply trying to talk about a relatively calm public state of affairs where the official operating procedures of society are functioning smoothly. Of course, this doesn’t necessarily mean that ‘law and order’ is always a good thing: by definition, acts of civil disobedience against unjust laws violate ‘law and order,’ but such acts can indeed be morally justified nonetheless (for more, see Rachel Robison-Greene’s recent discussion here of “substantive” justice). However, on the whole, it can be easy to think that public appeals to ‘law and order’ are simply invoking a desirable state of peace.

But the funny thing about our terminology is how often we say one thing, but mean something else.

Consider the previous sentence: I said the word ‘funny,’ but do I mean that our terminology is designed to provoke laughter (or is humorous in other ways)? Certainly not! In this case, I’m speaking ironically to sarcastically imply not only that our linguistic situation is more complicated than simple appearances, but that the complexity of language is actually no secret.

The says/means distinction is, more or less, the difference between semantics (what is said by a speaker) and pragmatics (what that speaker actually means). Often, straightforward speech acts mean precisely what a speaker says: if I ask you where to find my keys and you say “your keys are on the table,” what you have said and what you mean are roughly the same thing (namely, that my keys are on the table). However, if you instead say “your keys are right where you left them,” you are responding with information about my keys (such as that they are on the table), but you also probably mean to communicate something additional like “…and you should already know where they are, dummy!”

When a speaker uses language to implicitly mean something that they don’t explicitly say, this is what the philosopher H.P. Grice called an implicature. Sarcastic and ironic statements are paradigmatic examples, but many other figures of speech (such as hyperbole, understatement, and metaphor) function along the same lines. Regardless, all implicatures communicate what they actually mean in a way that requires (at least a little) more analysis than simply reading what appears on their face.

In recent years, law professors like Ian Haney López and philosophers like Jennifer Saul have identified another kind of implicature that explicitly says something innocuous, but that implicitly means something different to a subset of the general audience. Called “dog whistles” (after the high-pitched whistles that are inaudible to the human ear), these linguistic artifacts operate almost like code words that are heard by everyone, but are only fully understood by people who know the code. I say “almost” like code words because one important thing about a dog whistle is that, on its face, its meaning is perfectly plain in a way that doesn’t arouse suspicion of anything tricky happening; that is, everyone — whether or not they actually know the “code” — believes that they fully understand what the speaker means. However, to the speaker’s intended clique, the dog whistle also communicates a secondary message surreptitiously, smuggling an implicated meaning underneath the sentence’s basic semantics. This also means that dog whistles are frustratingly difficult to counter: if one speaker uses a dog whistle that communicates something sneaky and another speaker draws attention to the implicated meaning, the first speaker can easily deny the implicature by simply pointing to the explicit content of the original utterance as what they really meant.

Use of dog whistles to implicitly communicate racist motivations in government policy (without explicitly uttering any slurs) was, infamously, a political tactic deployed as a part of the Republican “Southern strategy” in the late 20th century (for more on this, see Evan Butts’ recent article). As Republican strategist (and member of the Reagan administration) Lee Atwater explained in a 1981 interview:

“You start out in 1954 by saying, ‘[n-word], [n-word], [n-word].’ By 1968 you can’t say ‘[n-word]’—that hurts you, backfires. So you say stuff like, uh, forced busing, states’ rights, and all that stuff, and you’re getting so abstract. Now, you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites.…”

Of course, terms like ‘forced busing’ and ‘states’ rights’ are, on their faces, concepts that are not necessarily associated with race; but because they refer to things that happen, in reality, to have clearly racist byproducts or outcomes — and because Atwater’s intended audience (Republican voters) knew this to be so — the terms are dog whistles for the same kind of racism indicated by the n-word. When a politician rails against ‘forced busing’ or a Confederate apologist references ‘states’ rights,’ they might be saying something about education policy or the Civil War, but they mean to communicate something much more nefarious.

Exactly what a dog whistle secretly communicates is still up for debate. In many cases, it seems like dog whistles are used to indicate a speaker’s allegiance to (or at least familiarity with) a particular social group (as when politicians signal to prospective voters and interest groups). But other dog whistles seem to signal a speaker’s commitment (either politically or sincerely) to an ideology or worldview and thereby frame a speaker’s comments as a whole from within the perspective of that ideology. Also, ideological dog whistles can trigger emotional and other affective responses in an audience who shares that ideology: this seems to be the motivation, for example, of Atwater’s racist dog whistles (as well as more contemporary examples like ‘welfare,’ ‘inner city,’ ‘suburban housewife,’ and ‘cosmopolitan elites’). Perhaps most surprisingly, ideological dog whistles might even work to communicate or trigger ideological responses without the audience (and, more controversially, perhaps even without the speaker) being conscious of their operation: a racist might dog whistle to other racists without any of them explicitly noticing that their racist ideology is being communicated.

This is all to say that the phrase ‘law and order’ seems to qualify as a dog whistle for racist ideology. While, on its face, the semantic meaning of ‘law and order’ is fairly straightforward, the phrase also has a demonstrable track record of association with racist policies and byproducts, from stop-and-frisk to the Wars on Drugs and Crime to resistance against the Civil Rights Movement and more. Particularly in a year marked by massive demonstrations of civil disobedience against racist police brutality, politicians invoking ‘law and order’ will inevitably trigger audience responses relative to their opinions about things like the Black Lives Matter protests and other recent examples of civil unrest (particularly when, as Meredith McFadden explains, the phrase is directly used to criticize the protests themselves). And, crucially, all of this can happen unconsciously in a conversation (via what Saul has called “covert unintentional dog whistles”) given the role of our ideological perspectives in shaping how we understand and discuss the world.

So, in short, the ways we do things with words are not only interesting and complex, but can work to maintain demonstrably unethical perspectives in both others and ourselves. Not only should we work to explicitly counteract the implicated claims and perspectives of harmful dog whistles in our public discourse, but we should consider our own words carefully to make sure that we always mean precisely what we think we do.

Under Discussion: The Multiple Ironies of “Law and Order”

photograph of a patch of the confederate flag

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Law and Order.

You hear a person running for office described as a “law and order” candidate. What, if anything, have you learned about them and their policies? The answer is either “nothing” or “nothing good.” The only wholesome association with the phrase is the infinitely replicating and endless Law and Order television franchise. Otherwise, this seemingly staid phrase misleads — and that is exactly the intention. As we all are routinely reminded, “law and order” is a deliberate verbal irony. When people don’t heed these reminders, it becomes a tragic irony.

In 1968, two conservative candidates were running for President of the United States: Richard Nixon and George Wallace. Nixon, the Republican nominee, won the election; Wallace nonetheless won the electoral votes of five southern states and garnered 13% of the popular vote. Both candidates ran explicitly on law-and-order platforms and articulated them as such. During the course of that campaign, Nixon was often challenged to distinguish himself from Wallace on the issue of law and order. In a televised interview on Face the Nation, Nixon demonstrated the slipperiness of the term “law and order”: he said that all three candidates in the 1968 presidential election — Hubert Humphrey, George Wallace, and himself — supported law and order; the difference was what they meant by it and how they would achieve it.

Each presidential candidate in 1968 presented a different vision of law and order, during a period of significant unrest. Wallace gave full-throated support to a segregationist and populist message, couched in terms of the rights of states to shape their culture free from heavy-handed federal meddling. Hubert Humphrey was an advocate of civil rights legislation and nuclear disarmament. Though his anti-war credentials were tarnished by his role as Lyndon B. Johnson’s vice-president during the Vietnam conflict, Humphrey’s view of law and order was broadly one of egalitarianism and peace. Nixon’s avowed interpretation of law and order was the rule of law, and freedom from fear. Here, the irony begins.

The details of the strategy by which Nixon and the Republican Party won over voters in the states of the US south are now well-known. The practically named “Southern Strategy” first took the presidential stage with the 1964 campaign of Barry Goldwater, whose pronounced stance against civil rights legislation garnered him the few electoral votes he received in his presidential run — all from southern states (and his home state of Arizona). Opposing civil rights legislation, and any other federally mandated policies of integration and egalitarianism, was the core of the strategy. This was not done in an explicitly racist manner, but under the banner of preserving the sovereignty of individual states, as Republican strategist Lee Atwater laid bare in a 1981 interview.

This is deliberate verbal irony: the strict meaning of the words actually uttered differs from the meaning intended by the speaker. Atwater confirms that when Republican candidates for office say “preserve states’ rights,” what they mean is “preserve the white southern way of life.” Nor is this an idiosyncrasy of Atwater’s. The intellectual basis for the Southern Strategy comes from William F. Buckley‘s 1957 editorial in the National Review, in which he states that the white community in the south is entitled to ensure that it will “prevail, politically and culturally, in areas in which it does not predominate numerically.” Law and order, but only for white people. Freedom from fear, but only for white people. This is the Southern Strategy.

This direct verbal irony entails more irony at the level of political philosophy and general jurisprudence (i.e., theories of the concept of law). Predominant theories of general jurisprudence, especially among conservatives, see law as being generally applicable: that is, every person is subject to the same laws in the same way. This is the meaning of the phrases “rule of law” and “equal under the law.” However, talk of states’ rights in the context of the Republican Southern Strategy stands for exactly the opposite proposition: the law should apply in one way to white people and a different way to non-white people. The legal legerdemain achieved is profound in its pernicious effect. When the law is articulated in a sufficiently abstract fashion, it will not say that one group will be disparately, negatively affected. Because it doesn’t say it, many people will be convinced that it doesn’t actually affect people differently. This allows people to shift blame onto those whose lives are made more difficult, or ruined, by the law.

Disparate impact, however, has become one of the U.S. Supreme Court’s trademark tests for unlawful discrimination. The test arose from Griggs v. Duke Power Co., in which Black employees sued their employer over a practice of using IQ tests as a criterion for internal promotion. Previously, the company had directly forbidden Black employees from receiving significant promotions; after the passage of the Civil Rights Act of 1964, such formally discriminatory policies were illegal. The Griggs court expanded the ambit of the Civil Rights Act to policies that were substantially discriminatory in their effect, even if they were non-discriminatory in form. This rule was later limited by the Supreme Court in Washington v. Davis, in which the court required proof that substantially discriminatory policies were adopted with the intent to achieve that discriminatory effect.

The Supreme Court, the ultimate authority on U.S. law, holds that laws which have disparate impact are bad law. But disparate impact, as it is defined by the Court, is exactly what the Southern Strategy aimed at. Say one thing, which is superficially acceptable, but mean another thing, which is expressly forbidden. Hence the “law” of the Southern Strategy’s “law and order” is not law at all.

How much of this dynamic any particular law-and-order candidate, much less the people who vote for them, is aware of is an open question. Here, the deliberate verbal irony becomes tragic irony: anyone who has learned the lessons of history knows what will happen, while those who have not do not.

Under Discussion: On Cancelling “Cancel Culture”

photograph of a fountain pen and a signature on yellow paper.

This piece completes our Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: The Harper’s Letter.

On the morning of July 7th, Harper’s Magazine published its “Letter on Justice and Open Debate” that portended all manner of dangers to contemporary society if the “stifling atmosphere” it referenced was allowed to continue eclipsing the “free exchange of information and ideas.” With over 150 signatories — many of whom were either popular celebrities, Ivy League academics, or a strange combination of the two — the letter commanded a considerable amount of attention and has since spawned a host of responses, critiques, endorsements, rejections, and parodies — including by multiple writers for The Prindle Post — as well as its own Wikipedia article.

But on the evening of July 7th (mere hours after the letter’s initial online release), two of its signatories issued retractions of their original endorsements: Kerri Greenidge, a historian and director of the American Studies program at Tufts University, and Jennifer Boylan, an English professor at Barnard College, both released short statements on Twitter indicating that they did not, in fact, support the letter’s message. Although her name has been deleted from the list, Greenidge has not commented publicly about her decision to retract her support. However, Boylan offered the following explanation on Twitter: “I did not know who else had signed that letter.  I thought I was endorsing a well meaning, if vague, message against internet shaming. I did know Chomsky, Steinem, and Atwood were in, and I thought, good company. The consequences are mine to bear. I am so sorry.”

Critics quickly attacked these pivots as being either disingenuous or cowardly; for example, journalist (and letter-signer) Malcolm Gladwell quote-tweeted Boylan’s retraction, sarcastically quipping, “I signed the Harpers letter because there were lots of people who also signed the Harpers letter whose views I disagreed with. I thought that was the point of the Harpers letter.” Others, such as Jesse Singal (another letter-signer), took them as further proof of the pervasiveness of the problem the letter purported to highlight in the first place; in a now-deleted response to Boylan’s tweet, Singal said: “Ah yes, here it is — the first official apology for signing a statement condemning the climate of conformity, fear, and mutual surveillance that has descended upon public intellectual life.” For many, the apparent irony of “cancelling” a letter decrying so-called “cancel culture” was simply too much to avoid ridiculing.

But those complaints miss the point.

Even if we set aside the unusual (and potentially deceptive) way that Harper’s Magazine collected signatures for the letter in the first place, it is not hard to understand why someone could initially agree to sign the letter, then change their mind after seeing the final product in all of its context. Put differently: it is not ridiculous for someone to make (or affirm) a public statement, then cancel that erstwhile claim in light of new information learned later.

Of course, everything turns on what you mean by ‘cancel’ here. In its broadest strokes, “cancellation” is a kind of collective public shunning of an individual, typically as a backlash to something for which that person was responsible: examples could include the ways that the reputations of entertainers like Louis CK and Shane Gillis have been damaged as a result of their past sins coming to light. “Cancel culture,” then, is a social force that promotes the cancellation of individuals; Barack Obama recently compared it to “casting stones” instead of actually “bringing about change” and its critics argue that its unforgiving expectations are unrealistic. In its more academic forms, cancellation is akin to censorship and involves critics deploying various social pressures to throttle conversations of which they do not approve — this seems to be the target of the warnings trumpeted by the Harper’s Letter (which denounced a perceived cultural trend towards “an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty”). In its most public forms, cancel culture manifests as protests or other coordinated actions to sanction thinkers for the ideas that they choose to express, such as the recent, unsuccessful petition by over 600 academics asking the Linguistic Society of America to remove (Harper’s Letter–signer) Steven Pinker from its list of “distinguished fellows.”

Instead of “cancelling” thinkers who espouse distasteful ideas, the critics of cancel culture typically argue that those ideas should be freely and openly discussed and considered — if they truly are distasteful, then conversations held in good faith will render such judgments in due time (see also Desmonda Lawrence’s explanation of John Stuart Mill’s views on free speech). By instead targeting the person encouraging an idea’s discussion, a would-be canceller is unfairly shifting the focus of a conversation from “what is said” to “who is saying it.”

Or so the complaint goes.

I’ve written here before about the importance of distinguishing between the semantics and pragmatics of a sentence — in short, between a speech act’s “propositional content” and the complex manner in which that content is deployed in a given conversation. Most simply, this is the difference between what a speaker says and what they mean by saying it. Often, these two features align, such as when my wife asks me if the coffee is ready and I reply, “No, we ran out of filters” — the content of this sentence is twofold (we have neither coffee nor coffee filters) and my intention in speaking it is simply to inform my audience of these tragic facts. But if I instead reply “Someone forgot to buy coffee filters,” I still mean to report that we have no coffee, but I do so in a way that simultaneously highlights  “someone’s” choices during our most recent shopping trip (which is the target of my reply’s semantic content). In his book The Language Instinct, Pinker explains how “It is natural that people exploit the expectations necessary for successful conversation as a way of slipping their real intentions into covert layers of meaning. Human communication is not just a transfer of information like two fax machines connected with a wire; it is a series of alternating displays of behavior by sensitive, scheming, second-guessing, social animals.”

This feature of natural language — the socially-embedded pragmatic applications of our speech acts — is something that the Harper’s Letter (and critics of so-called “cancel culture” writ large) overlook by focusing primarily on the abstract propositions within a discursive exchange. By saying that various public criticisms and professional consequences have resulted in an “intolerant society” concerned to “steadily narrow the boundaries of what can be said without the threat of reprisal” (emphasis added), the Letter seems to pretend like the semantic content of an article, speech, tweet, or what have you is the only meaningful element to consider about a conversation. But often, what would-be cancelers are also concerned with is what is meant by what is said, including what is meant by even having the conversation in the first place.

For one example: Recently, when Tom Cotton, the junior senator from Arkansas, wrote an editorial in The New York Times calling for the deployment of the American military against American citizens, critics not only condemned Cotton’s ideas, but also the paper’s Editorial Board for allowing those ideas to be spread. The concern was not simply about Cotton’s meaning, but about what The New York Times at least tacitly intended by lending the legitimacy of the “newspaper of record” to Cotton’s violent hopes. The important thing to notice here is that even though The New York Times wasn’t “saying” anything in Cotton’s article (because only the senator was responsible for the article’s semantic content), it must have been the case that the Editorial Board meant something by allowing it to be released (insofar as The New York Times must have had a reason for approving its publication) — that meaning is fully eligible for assessment on its own terms. Contra the Harper’s Letter, criticizing the Board’s approval of Cotton’s article is far from a “restriction of debate” that “invariably hurts those who lack power” — instead, it seems a legitimate critique within standard norms of discourse.

And while each incident of alleged “cancellation” must be considered individually, most Americans agree that speakers should be held accountable (and potentially experience “social consequences”) as a result of the positions they defend; for example, a recent POLITICO survey found that fewer than one-third of respondents actually agreed that “There should not be social consequences for expressing unpopular opinions in public, even those that are deeply offensive to other people because free speech is protected.” Because “expression” includes both semantic and pragmatic forms of a speaker’s meaning, it again seems quite normal to expect that a given speech act can have all manner of consequences.

Crucially, the attention needed to interpret what all a conversation expresses is far more complicated than the “literal/non-literal” binary recently suggested by Agnes Callard. In her defense of Aristotle’s lasting value, despite his problematic moral track record, she says that “The answer is to take him literally — which is to say, read his words purely as vehicles for the contents of his beliefs” — in so doing, she says, we can come to see Aristotle’s full-throated defenses of sexism or slavery as being free of any anachronistic “messaging” relevant to contemporary political debates and can instead simply see Aristotle taking an “empirical” approach to the world he knew. According to Callard, “‘Cancel culture’ is merely the logical extension of what we might call ‘messaging culture,’ in which every speech act is classified as friend or foe, in which literal content can barely be communicated, and in which very little faith exists as to the rational faculties of those being spoken to.”

This seems mistaken; “cancel culture” is often just the natural manifestation of the pragmatic effects of a speech act (or a pattern of speech acts) — something easily divorceable from political signaling or social “messaging.” As Bryan Van Norden explains, not only are there legitimate questions of pedagogical priorities within the time constraints of a given syllabus, but the pragmatic effects of both Aristotle’s beliefs and of teaching Aristotle’s beliefs are additional facts that responsible philosophers cannot ignore: “To primly insist that we (and they) treat [Aristotle’s] views as merely ‘empirical’ hypotheses or focus only on the ‘literal content’ of what he says is to leave out too many important — and philosophically interesting! — issues.” Notably, as Van Norden also points out, talking about teaching Aristotle’s beliefs can have additional pragmatic effects, as shown by Callard’s NYT article rippling into more explicitly partisan publications (whose ideological goals are notably different from both Callard’s and Aristotle’s).

In short, what I mean to say is that it is simply wrong to pretend like expressed propositions can be fully analyzed “literally” in isolation from the social contexts of their expression. Furthermore, it’s quite unremarkable that those social contexts often include effects on things like a speaker’s reputation. So, if someone’s speech act redounds upon their opportunity to make additional speech acts of a similar kind at a later date, this is ultimately just a function of how societies organize themselves. Certainly, it is a far cry from any sort of “political weapon” wielded by nefarious agents (as Donald Trump has recently asserted): it is instead an epiphenomenal manifestation of public opinion, collectively organized. (This also explains why it is, by definition, impossible for an individual to “cancel” another in the way described here.)

So, what all does this mean for the people who retracted their initial support for the Harper’s Letter? Although she may well have approved of the semantic (or, perhaps, “literal”) content of the Letter in isolation, when Jennifer Boylan learned how that Letter was actually deployed and the likely sorts of interpretations that its many pragmatic features could engender — in particular, features arising from the reputations of multiple other signatories — she could easily reconsider and even retract her endorsement in light of those new facts without violating any moral or rational norms. In fact, Boylan’s cancellation of her support for the Letter is ultimately not that different from the cancellation of a Gricean implicature: she was simply clarifying what she did and did not actually mean.

This is all captured best by Lucía Martínez Valdivia, an English professor at Reed College who also signed the Harper’s Letter: in a tweet posted on July 8th (the day after the Letter’s publication), Martínez Valdivia shared a screenshot of an email where she wrote,

“The presence on that list of those who have shown through their actions that they do not in truth agree with the principles outlined in the letter turned it from a sincere presentation of a difficult but valuable ideal into an entirely different and hypocritical text…it saddens me that a statement that could have fought for the common good and equal protection of everyone…was instead poisoned and perverted by the insincere voices of people who have wielded their considerable influence, platforms, and resources to silence those who would disagree with or criticize them.”

That is to say, the pragmatic implications of the list of signatories means something different from the literal, semantic content of the Letter itself, and approving of the latter does not equate to supporting the former. Martínez Valdivia concluded her email by also withdrawing — or cancelling, in the Gricean sense — her endorsement.

How Words Translate to Action: The Ramifications of Trump’s Rhetoric

photograph of packed arena at Trump rally

“[The coronavirus] has more names than any disease in history,” President Donald Trump said at a campaign rally in Tulsa, Oklahoma on Saturday. “I can name kung flu. I can name 19 different versions of names.”

Saturday’s rally was not the first time Trump used racist rhetoric to divert criticisms toward his administration for its mishandling of the coronavirus crisis. Since March, the president has cast China as the “invisible enemy” and bragged about his early ban on Chinese travelers in almost every public appearance. In addition, he repeatedly used the phrase “the Chinese virus” despite concerns from public health experts, and again referred to the coronavirus as “the China virus” in a self-congratulatory tweet in May.

Critics of Trump have argued that his words have contributed to the rise of hate crimes against Asian Americans. From March to April, the New York Police Department documented 25 hate crimes against Asian Americans, marking a stark increase from a total of 3 incidents in 2019. Meanwhile, STOP AAPI HATE — a database that San Francisco State University and Asian advocacy groups created in late March — has recorded more than 1,700 incidents ranging from verbal assaults to stabbing. Still, the president has insisted that his words are anything but racist: “It’s from China. That’s why. It comes from China. I want to be accurate,” he said at a press briefing. How, then, could his words have translated into real hateful and discriminatory actions?

Although the president argues that he only intended to convey his disapproval of China’s pandemic response, literature on the philosophy of language elucidates the connection between Trump’s words and hateful actions. With the benefit of hindsight, we can study such language — and the phrase “the Chinese virus” in particular — and learn how to respond to similar rhetorical moves as the president escalates his attacks on China and on other minorities.

When Trump justified the phrase “the Chinese virus” in March, he took advantage of the ambiguity of language. Compound nouns — like “spa water,” “arm pillow” and “Chinese virus” — are ambiguous because the relationship between the two nouns, like “spa” and “water,” is unclear. Although Trump claimed he meant that the disease originates from China, “the Chinese virus” could also signify ‘a virus carried by Chinese people’ or ‘a virus of Chinese people.’ The president acted as if the intention of the speaker — which he promised was not racist — controls how words are understood.

Contrary to Trump’s defense, however, many philosophers of language argue that the meaning and effect of words are also governed by how they are used in society. Of course, in regular conversations, words communicate a speaker’s transparent intent. However, should Trump’s press conferences and tweets — or any politician’s speech, for that matter — be considered in the context of a typical conversation? Often in political discourse, words affirm belief systems and the communal practices in which they are embedded.

Specifically, when one uses words that have been shaped by social practices, one legitimizes the connotations and value systems attached to them. One can insist that they only meant the inside of a city when using the phrase “inner city,” but the racist ideology associated with that term persists nevertheless. “There are tools like a hammer or a screwdriver which can be used by one person; and there are tools like a steamship which require the cooperative activity of a number of persons to use,” philosopher Hilary Putnam writes in his paper “The Meaning of ‘Meaning’.” “Words have been thought too much on the model of the first sort of tool.”

Philosopher Lynne Tirrell offers a relevant example in her 2012 paper “Genocidal Language Games.” According to Tirrell, for years preceding the Rwandan genocide, the Hutu majority called their Tutsi counterparts “cockroaches (inyenzi)” and “snakes (inkoza).” These were mindless slurs at first, Tirrell explains, intended to insult an individual rather than to convey the ethnic inferiority of the Tutsis. But these words were said in the context of a culture where snakes are public health dangers and cutting the heads off snakes is considered a rite of passage into manhood. When the conflict between the two groups intensified, these slurs helped connect murdering the Tutsis to a celebrated act of killing snakes. In retrospect, a Hutu calling his Tutsi neighbor a “snake” or “cockroach” was participating in a linguistic practice embedded in ethnic discrimination and legitimizing hatred toward the Tutsis. “What we do with our speech acts often outstrips our own mastery, and in cases in which the social functions of speech have been co-opted, we can see that participants might not see the full scope of the games that they are playing,” Tirrell explains.

Tirrell’s account of the Rwandan genocide is instructive not because Asian Americans are at risk of being massacred, but because it illuminates how words can activate longstanding discriminatory sentiments and help authorize actions. Like the insults hurled against the Tutsis, Trump’s attacks on China are embedded in a context of oppression against minorities. His administration’s nativist agenda has rekindled centuries of discrimination against Asian Americans, dating back to the Chinese Exclusion Act of 1882.

In addition, the phrase “the Chinese virus” draws on a history of nativist attempts to scapegoat immigrants for public health crises. During a smallpox outbreak in 1900, the government exclusively imposed a quarantine on San Francisco’s Chinatown and called it a “laboratory of infection.” In English, metaphors are often used to compare a nation to a body — such as “head of state,” “body politic” and “arm of the government” — and Trump has frequently equated immigrants to an illness penetrating it. They bring “tremendous infectious disease,” “communicable disease” and a “tremendous medical problem coming into a country,” Trump has said.

“Like the ordinary farmer in Rwanda who did not think that calling his Tutsi neighbors ‘snakes’ and ‘cockroaches’ would help authorize the killing of his neighbors, people who repeat the phrase ‘the Chinese virus’ may not realize its pernicious impact,” Tirrell explains. “I don’t think we should assume that there is a war planned against the Chinese in America but I do think that it sows the seeds of discrimination by connecting Chinese people with the virus.”

By rebaptizing the coronavirus as “the Chinese virus” with the authority of a president and insisting on the phrase, Trump has affirmed the racist and anti-immigrant narratives behind it. Calling coronavirus “the Chinese virus” had the effect of connecting practices one would take against the spreaders of a deadly virus — such as shunning them, kicking them out and even attacking them — to those who appear Chinese. One might argue that this rhetoric convinced people to rationalize discriminatory and hateful actions against Asians as fighting the virus.

The power of words can seem mysterious and insignificant, particularly in light of a rapidly spreading disease that has taken more than a hundred thousand lives. However, literature on the philosophy of language shows that words do make things happen. Though Trump’s coronavirus rhetoric cannot — and most definitely should not — be censored, we must acknowledge and discuss the damages inflicted by his anti-Chinese narrative.

Figleaves, Bothsidesing, and the Ethics of Implication

cartoon image of group of confused people

This is an article about the Plandemic video that made waves online at the beginning of May, but, before I can say what I mean to say about it (and how people have interacted with it online), we need to talk about a little philosophy of language.

Imagine that we’re walking down the street one evening when we pass by a panhandler asking for change. After another few minutes, I say, “Did you see that fellow on the corner back there? Do you think anyone would notice if he went…missing?” Hopefully, you’d be both surprised and troubled to discover that such thoughts were on my mind. Moreover, those concerns would not be dispelled if I continued by saying, “What? I’m not saying that we should kill anyone! I was only asking questions! What’s the big deal with that?”

When thinking about how people communicate their ideas to each other, philosophers of language often make a principled distinction between the semantics and the pragmatics of a sentence. The former (which is sometimes called the sentence’s “propositional content”) is simply a matter of how the words in a sentence are defined and what they mean when combined together, while the latter is determined by how a speaker in a given context intends that sentence to be understood by an audience. Put differently, the semantic content of a sentence is “what is said” by a speaker, while the sentence’s pragmatics are a matter of “what is meant” by the speaker.

Often, a sentence’s semantic and pragmatic meanings are the same thing: when I tell  you “Many apples are red,” I mean to communicate simply that many of the pieces of fruit we both know to be apples are colored red. So, of course, the more interesting cases are when these two things come apart: let’s say that you ask me whether or not I’m planning to attend a party that our mutual friend is hosting and I reply with the sentence “I have to work that night.” Technically, the semantics of my reply only mean “I am expected to work a shift at my job on that night,” but, in context, I am still clearly answering your question—even though I didn’t say “No, I can’t go to the party,” that is what I meant, nevertheless.

In his essay “Logic and Conversation,” H.P. Grice used the term ‘implicature’ to describe this curious (and common) feature of how we communicate. In unpacking several different kinds of implicatures, Grice pointed out that they can take many forms. Imagine that you’re hiring someone for a job and receive a reference letter that simply says “Mr. X’s command of English is excellent, and his attendance at tutorials has been regular. Yours, etc.” Even though this letter says only positive things about the applicant, it still clearly means that the letter-writer does not think that Mr. X should get the job (or else the reference letter would be far more substantial and informative). We can even mean things without actually saying anything at all, as when a regular patron in a shop wordlessly places their usual amount of money on the counter and waits for the familiar shopkeeper to hand them their usual purchase.

And implicatures can certainly take the form of questions, too. Imagine I’m walking towards the kitchen and you ask me “Do we have beer in the fridge?” If I simply answer “Yes,” I will have responded fairly to the semantics of your question, even while likely ignoring the pragmatic reasons that you had for asking it (namely: you would like me to bring you a bottle). Or think about someone asking you “Are you really planning on wearing that outfit tonight?”—although the question alone might seem to seek a yes/no answer, it also implicates that the questioner already has a strong opinion about your outfit.

So, a question like “Do you think anyone would notice if that person went missing?” might, on the semantic level, be relatively boring—it’s just a question about the number of people who might notice if the panhandler was no longer around. But, depending on what I mean by asking it (that is, what my pragmatic intentions are), this question could cover a wide range of other interpretations, many of which could be terrible. Notably, my saying “I’m just asking questions!” does nothing to negate the operation of this sort of implicature – what I’m saying is just a question, but what I mean by the question is what really matters.

It’s true that Gricean implicatures are “cancellable,” by which Grice meant that we can clarify what we mean by what we say in a way that explicitly clarifies what we’re trying to communicate. Consider if my initial question about the panhandler instead went like this: “This might sound odd, but do you think anyone would notice if that panhandler went missing? I’m not saying that they should go missing! I’m just wondering if people would care.” Such a construction doesn’t seem problematic at all – the potentially concerning implicature (which could suggest that I was thinking of myself causing him to go missing) was cancelled by me making my meaning explicit.

Simply saying “it’s just a question,” however, is insufficient to actually cancel the implicature; instead, such a phrase functions like what Jennifer Saul has called a “figleaf” by merely “providing a bit of cover for something that is unacceptable to display in public.” Figleaves are like a conversational distraction that purport to shield a speaker from responsibility for what they say: Saul offers the example of someone saying “I’m not racist, but…” before going on to express something racist—the “I’m not racist” figleaf might make it seem (to some) like the speaker has done nothing blameworthy without actually excusing or addressing the problem of the subsequent racist expression. In a similar way, saying “It’s just a question” might make it seem like a question’s implicature is being cancelled without genuinely clarifying or correcting what the speaker really means.

And the ethics of this kind of speech can be important to consider. At their core, manipulative conversational moves like gaslighting and sealioning can both be fueled by seemingly-innocent questions that mask exploitative (or otherwise immoral) implicatures. Imagine an abusive boyfriend trying to twist his girlfriend’s emotions by perpetually questioning her memories and perceptions: not only is this boyfriend doing something wrong, but it is likewise wrong to pretend like he is innocent simply because he’s “just asking questions.” Furthermore, reckless usage of this kind of figleaf can easily contribute to manifestations of the Dunning-Kruger effect (where someone incorrectly believes themselves to be more knowledgeable about a topic than they actually are): if experts agree that a certain course of action is best, my self-confidence to criticize their consensus by asking probing questions suggests that I consider myself equally informed on the matter. So, even in conversations where the main victim is the truth of the matter, we have reasons to be suspicious that a figleaf like “I’m just asking questions” sufficiently covers one’s epistemic obligations to be clear.

In the case of the Plandemic video, the creators repeatedly question numerous elements of the nature, origin, and significance of the coronavirus pandemic, as well as the motivations of figures like Anthony Fauci and Bill Gates. Repeatedly throughout the 26-minute production, both the host and the interviewee cast doubt on the general consensus that the pandemic is both real and significant, often by asking questions that suggest nefarious (though unstated) answers. For example: towards the beginning of the video, the interviewer asks “How can a man [Fauci] who’s giving—any person who’s giving global advice for health own a patent in the solution of the vaccine?” While this certainly suggests that Fauci’s expert opinion has been corrupted by his financial interests, it does not explicitly say this, nor does the interviewer’s response (which simply names it a “conflict of interest”) fully capture the immorality of Fauci implied by the question. Similarly to how the fallacy of the loaded (or “complex”) question can rhetorically force an interlocutor into a hopelessly bad-looking conversational position, Plandemic repeatedly deploys such tricks to suggest that competing sources of information are not to be trusted. Despite its idiosyncratic posture, Plandemic paints itself as though it is on an epistemic par with (or even superior to) the public position, simply by self-confidently presuming it has the evidential authority to ask the questions that it asks.

And when it comes to the online response to Plandemic, the same problem recurs. When someone defends their choice to share a link to the video on their social media feed by saying “I’m just asking questions,” this sort of figleaf hides their assumed belief that they have the background knowledge necessary to ensure that the questions being asked are relevant and fair. (Obviously, everyone has the political right to ask questions, but that is different from the epistemic right to do so, which requires an informed understanding of the matter up for debate.) Brute “bothsidesism” is unhelpful enough for creating a respectful, functioning political system; it is even worse for people trying to understand what others actually mean to say.

Meaning in Political Discourse

image of "Marxist" black-and-white label

Political polarization has become an important issue in recent years. On matters of public policy it seems like there is little room for rational conversation when people of different political stripes cannot agree on the same basic facts. Solutions to the problem of polarization are likely to be as complex and as plural as the causes of the problem, but there is one issue that may be an important starting point: meaning. A lack of clarity in the meaning of certain terms in political discussion only weakens our ability to engage in fruitful dialog. It fragments our political culture and encourages us to continue to talk past one another. If refining the meanings of our words helps to improve our political discussions, then the issue takes on a moral importance as well as a logical one.

For the first example, let’s consider what the term “socialist” means. The issue has become important as presidential candidate Bernie Sanders has at times used terms like “socialist” or “democratic socialist” to describe his positions. Yet according to some, he is not a socialist at all; others say he is a democratic socialist rather than a socialist proper, or perhaps not a democratic socialist but a social democrat; still others insist that he is not merely a socialist but an all-out Marxist. Why is it so difficult for us to decide what Sanders is? Debates about the finer points separating these different views are not new; political ideologues have argued about these distinctions for years. However, much of this debate has always centered on articulating a position relative to an entire political and metaphysical philosophy.

For example, the historical materialism of Marx and Engels was the philosophical driving force behind Marxism. It held that history and society are largely structured by matter, by the control of material forces, rather than by ideas or ideals. Borrowing from Hegelian thought, historical materialism holds that historical change is driven by class conflict and will culminate in the end of history with the elimination of class: communism. Such philosophical views affected not only Soviet economic thinking, but Soviet thinking about everything else. For example, Soviet science initially rejected genetics, since the notion that inherited traits made some organisms better than others ran counter to the Marxist understanding of history and nature.

But terms like socialist, communist, democratic socialist, and social democrat are, in many ways, now divorced from these larger philosophical systems. Instead they have been given new meanings and associations in contemporary contexts. This is important. For example, some try to distinguish between socialism and communism purely in terms of what governments should enact as policy: communism means the elimination of all private property, we are told, while socialism allows for the retention of varying forms of private markets. While this may be partially true, it is not how a Soviet communist in 1920 would have understood the terms. Back in 2006, Sanders defined democratic socialism in terms of making sure that income isn’t a barrier to healthcare, education, and a clean environment. Sanders tends to refer to policies, not philosophies, when situating himself on the political spectrum.

If these terms no longer refer to specific systems, then the meaning of such terms becomes fuzzy. Sanders doesn’t seem to mind using any of these terms so long as he can redefine them in terms consistent with policies that he supports; the label is of secondary importance. This may be why he continues to use terms like “socialist” and “democratic socialist” interchangeably. Little distinguishes him policy-wise from self-proclaimed capitalist Elizabeth Warren or from FDR, except additional rhetoric. But the looseness of such terms can only serve to create confusion and room for empty political name-calling. It is problematic to take concepts which had their original meaning embedded in a 19th century philosophy and place them in a 21st century context. If something sounds vaguely like what the Soviets would do, then you are labeled a “communist.” Given this tendency toward oversimplification, we must be vigilant about how we use such terms moving forward. The real question we should ask ourselves is whether favoring policies of universal healthcare or education constitute necessary or sufficient conditions for using terms like “communist” or “socialist.”

Another example of unclear meaning creating problems can be found in a term like “social construct.” Originally coming out of sociology, social constructivism examines the ways in which society and social interaction structure our understanding of reality. Like “socialism,” it emerged from academia operating under certain methodological and philosophical principles, and like “socialism,” the term can be understood in several different, complicated ways. At one end of the spectrum, social constructivism might hold that how we understand and use concepts is socially influenced, while at the other end it may hold that our entire understanding of reality is socially determined: the world is the way society says the world is.

It is often noted that gender and race are social constructs, but how is the term “social construct” being used, and why is it controversial? Those who insist that there are only two genders, for instance, will tend to argue that biology tells us that we are born with XX or XY chromosomes and that this is all that matters. Part of this political conflict is a matter of epistemic and metaphysical disagreement. A hard realist about epistemology may hold that there is only one way the world is, and that science is the path to finding objective truths about it; social factors only get in the way. On this understanding, only the two-gender theory “carves nature at its joints,” so to speak. Social influences, including all of the values, perspectives, and experiences that come with them, only lead to biased or subjective science. Thus, any discussion of more than two genders can be dismissed as mere “political correctness” (another poorly defined term) or ideology.

But one does not need to understand “social construct” in this way. Social influences and interactions, practical concerns, and even values can affect the way we understand and study a topic without pushing us into complete epistemic relativism. For example, in her book Studying Human Behavior, Helen Longino argues that different scientific approaches to studying human behavior do not all agree about what constitutes an environmental causal factor and what constitutes a biological one. To some extent, how we divide the world up into what counts as a behavior, what is environmental, and what is biological is a social construct, but that does not imply that the resulting conclusions are simply made up, or that accepting the findings of any given approach amounts to mere subjective or ideological agreement.

Yet, unfortunately, the notion persists that social construction implies subjectivity. For instance, Joshua Keating of Smithsonian Magazine notes that time is a social construct: social interaction influences our understanding of concepts like minutes and seconds, “being early,” “late,” “fashionably late,” or “workday.” As he puts it, “Those subjective views help explain why the standardization of time has often been met with reluctance if not outright resistance.” But there is no good reason to accept the conclusion that simply because a concept is (at least partially) a social construct, it is therefore subjective.

As philosopher Ray Lepley notes, our interests and desires do not confer value; they create value in interaction with reality. The way we understand and use the concept of time has (generally) proved objectively good at allowing us to navigate our environment in ways that are relevant to our society. For other societies, a different understanding of time may prove objectively good at allowing them to navigate theirs. When different societies interact, new problems arise for our established ways of tracking time that need to be worked out.

On the other hand, socially constructed terms can be merely subjective in the sense that they do not allow us to successfully interact with the world. For example, Adam Rutherford’s How to Argue with a Racist tells us that race is a social construct in the sense that it is a socially created concept that does not allow us to predict factors like intelligence or explain innate differences between peoples and populations. Empirically, it is an empty concept. So long as we avoid understanding the terms “social construct” and “reality” in mutually exclusive ways, questions like how many genders there should be work themselves out empirically over time as various societies (and sub-societies) and their environments continue to interact with each other. Thus, as long as we clarify the meaning of these terms, we can have conversations about these topics and discuss the merits of using concepts without talking past one another or worrying that one side is merely trying to impose its ideology on others.

There are countless other extremely loaded terms that can be used to attack others while avoiding serious debate and discussion. What do terms like “political correctness,” “liberal,” “democratic,” “conservative,” or “fascist” mean in their 21st-century contexts? These questions are not new. Early analytic philosophers, concerned about the rise of nationalist and racist beliefs in Europe, argued that clarifying meaning could help resolve political questions. It is still worth taking up the task today.

Considering the N-Word: To Reject or Reclaim?

A silhouetted crowd at a late-night concert.

Editor’s note: this article contains derogatory language due to the nature of its content.

Netflix’s original series, Dear White People, released its first season this year with the summary: “Students of color navigate the daily slights and slippery politics of life at an Ivy League college that’s not nearly as ‘post-racial’ as it thinks.” In the fifth episode, Reggie Green, a student of color, is at a fraternity party. One of the white students hosting the party, Addison, sings along to a rap song, including the word, “nigga.” When Reggie asks him not to say that word, Addison gets defensive. A fight breaks out, and the situation escalates. The police arrive and end up pulling a firearm out and aiming it at Reggie.