
Dungeons & Dragons & Oppression


On September 2nd, Wizards of the Coast, the company that produces official Dungeons & Dragons (D&D) materials, apologized for offensive content in its lore concerning the Hadozee race released in the most recent Spelljammer: Adventures in Space boxed set. The Hadozee are a monkey-like humanoid race known for their sailing abilities and love of exploring. They have been included in the Spelljammer series since 1990, but recent updates to the lore, as well as some of the Hadozee artwork, prompted criticism that the Hadozee evoked anti-Black stereotypes.

The updates restructured the Hadozee’s origins, stating that the race was created after a wizard captured wild Hadozee and gave them an experimental elixir that made them intelligent, more human-like in appearance, and, as a byproduct, more resilient when harmed. The wizard’s plan was to sell the enhanced Hadozee as slaves for military use; however, the wizard’s apprentices helped the Hadozee escape with the remaining elixir, which they carried back to their homelands to use on other wild Hadozee.

As Reddit user u/Rexli178 explained, the main issues players had with this lore were that “the Hadozee were enslaved and through their enslavement were transformed from animals to thinking feeling people,” and that “the Hadozee had no agency in their own liberation.” This, coupled with the fact that anti-Black stereotypes often compare Black people to monkeys, plus the well-known historical racist sentiment that enslavement was necessary for the improvement of the Black race, plus the idea that Black people don’t feel as much pain when harmed, plus Hadozee artwork that seemed to evoke the imagery of minstrel shows, plus the fact that the Hadozee were characterized in other places as happy servants of the Elves, all came together to paint a picture that many players found damning.

It is worth noting that the critique of the Hadozee lore was not that the Hadozee reminded players of Black people. The critique was that the Hadozee echo anti-Black stereotypes and narratives that have been used to oppress Black people.

Stating that something is similar to a stereotype of a group is not the same as stating that that thing is similar to the group itself; this is especially clear when the stereotype in question is plainly false and dehumanizing.

The recent Hadozee controversy is not the only misstep Wizards of the Coast has made in the past few years — the 2016 adventure Curse of Strahd contained a people called the Vistani who evoked negative stereotypes associated with the Roma people. At the same time, Wizards of the Coast is slowly trying to change how race functions in D&D, removing alignment traits (good vs. evil, chaotic vs. lawful) and other passages of lore text to allow for greater freedom when constructing a character.

What went wrong with the Hadozee storytelling?

Whether or not the parallels with real-life oppression and negative stereotypes were intentional, it seems clear that this lore pulled players out of the fantasy world and recreated negative tropes associated with anti-Black racism.

How strongly it pulled on those tropes might be a matter for debate; however, I think the more interesting philosophical question here is: How should D&D incorporate stories of oppression into its game materials, if at all? The answer to this question will likely depend upon particular histories of oppression and the details of how a given narrative of oppression is woven into the story, but I think we can say a few things to answer the general version of the question.

The first observation to make is that D&D is a fantasy series. When playing, we want to be able to escape our mundane world and experience the excitement of casting spells, fighting monsters, and just joking around with our friends. Because D&D is a fantasy series, it seems that the stories about oppression that the game facilitates should be sufficiently removed from actual histories of oppression. Even more so, it seems that they should avoid reifying oppressive stereotypes in worldbuilding.

Some historical themes can be pulled upon, but WotC should be careful not to let too many of those themes overlap.

If a story maps too closely onto the experiences of oppressed groups that still exist and are still oppressed in some ways, D&D has left the realm of fantasy, doing a disservice to its storytelling and potential for healthy escapism.

And, if a story maps too closely onto oppressive stereotypes that have been used to denigrate certain groups of people historically, that can also set off alarm bells.

It is worth noting, too, that just because a piece of lore does not bring certain players out of the game – because they do not see the parallels to real-life oppressive tropes or narratives – that alone does not mean the lore is passable. The point here is for players of all different backgrounds, including from different marginalized groups, to be able to suspend disbelief and enjoy the fantasy world of D&D. Storytelling that makes it difficult for members of certain marginalized groups to equally enjoy and participate in the game is unjust and can practically exclude people from the game.

Now, this isn’t to say that Dungeon Masters (DMs, or the person who runs the D&D game) should not be allowed to modify or reinvent D&D materials to tell stories that more closely map onto real-life instances of oppression. Some members of oppressed groups might appreciate being able to navigate oppressive frameworks in a world in which they have power and can become heroes. Whether these homebrew stories are exclusionary is another tricky question.

The point is, while players should have the freedom to adapt and change stories, the basic blueprint put out through official D&D materials should be maximally inclusive.

This brings me to my second point — Dungeons & Dragons should not shy away from storytelling that allows players to explore oppressive societies, complex social issues, and other uncomfortable situations that they may face in their real lives. The trick is that the players need to be able to make their own choices about how they want to engage with these issues. And, if the storytelling is sufficiently removed from real-life histories of oppression, that will make it a safer space for players to explore how their characters might respond in those scenarios. In order to include these elements and tell these stories well, it would be good for Wizards of the Coast to hire writers who are familiar with common oppressive tropes and narratives and who could steer problematic stories in another direction.

It’s interesting to note that the people who have reacted against the Hadozee controversy and claimed that the Spelljammer material was just fine seem to agree with these two principles. One common reaction appears to be something like “fantasy is different than reality, and we shouldn’t confuse the two.” While the bulk of those reactions insinuate that people are seeing connections between Black stereotypes and the Hadozee that aren’t there, I think that they are roughly in line with the idea that fantasy is something that is distinct from reality and should be kept that way. I take it that those who are critiquing the Hadozee lore are critiquing it for this same reason.

Another common reaction seems to be along the lines of “we shouldn’t get rid of conflict and difficult themes just to make some politically correct folks happy,” which lines up with the idea that D&D should be a space where those scenarios can be explored by all players. I imagine that those who are unhappy with the Hadozee lore would also agree with this principle, so long as players can be active in shaping their characters and experiences and the game does not exclude certain groups of players.

To be inclusive to all D&D players, however, Wizards of the Coast needs to have better representation of people with different life experiences and understandings of the world in their writing rooms. This will not only make for better storytelling, but it will also facilitate gameplay that does not alienate certain players in the room. Let’s hope that Wizards of the Coast as well as the larger D&D community start to head more in that direction.

What ‘The Rings of Power’ Criticism Really Shows


The Rings of Power, a prequel to Tolkien’s The Lord of the Rings, has come in for a barrage of criticism. Much of this is not simply about, say, the content of the series taken in isolation, but how it relates to Tolkien and – more nebulously – how it relates to current social issues.

Concerning Tolkien, Alexander Larman, writing in The Spectator, called this series “artistic necrophilia.” He seems to worry it’s expensive and lacks star power, while also suggesting that Tolkien’s Silmarillion, which this is based on, is not coherent enough. His worry, which he expresses more clearly elsewhere, is that Tolkien’s work is being diluted and we should avoid that.

Perhaps there is something to this: we might worry that too much Tolkien is a bit like producing new versions of Monet by using some AI tool; at some point, this wouldn’t have much to do with Monet’s vision and would lack something that his originals possess.

Though I disagree we have reached this point, I can see his concerns.

Ben Reinhard, writing in Crisis, thinks that, in the hands of these writers, “Tolkien’s moral and imaginative universe is simply gutted.” His concern is that the plot lines and characters are new – perhaps, supposedly, based on Tolkien, but failing to capture the true meaning of Tolkien. It is, he thinks, stripped of the values Tolkien cared about.

The evidence for this, though, is mixed at best. He has a problem with Nori, the Harfoot (a proto-Hobbit), transgressing boundaries and showing a disdain for her staid and conservative society. (Well, he might want to meet some of the Hobbits in Tolkien’s trilogy.) And is Galadriel just some modern Girl Boss for those whose political engagement goes about as far as having a Ruth Bader Ginsburg bobblehead? Maybe. But we can’t judge that from a few episodes. He complains that she isn’t the serene vision she is in The Lord of the Rings, but it shouldn’t surprise us that a character has to age into such grace (the show is, after all, set five thousand years earlier).

Perhaps the most contentious criticism concerns race and other social justice issues – and how these should relate to Tolkien’s original work.

Brandon Morse, in a couple of pieces, alleges that this show is just another example of something being “ripped out from the past in order to be revamped and remade for modern times, and this always includes an injection of woke culture and social justice values.” He wrote this based on the trailer, which appears to be “woke” simply because it features a female warrior and people of color.

Morse’s claim that when diversity is the focus, the storyline suffers amounts to sheer speculation – three episodes in and there is certainly a story developing. And I have no idea how anyone could determine how good the story might be from a few minutes of trailer.

But these complaints haven’t been taking place just in the pages of magazines on the right of the political spectrum. Plenty of mainstream ink has been spilled about the relationship between this show and social justice issues – some of it more worthy of discussion than Morse’s screed. At CNN, John Blake has documented the culture wars breaking out over the show, surveying many of the opinions I discuss here. But even his framing of the debate is more contentious than it need be: “Does casting non-White actors enhance the new series, or is it a betrayal of Tolkien’s original vision?”

Why does enhancement need to be the issue: why can’t we just cast non-White actors and expect them to be no more or less enhancing than White actors?

Here are some other ways of putting the question. Ismael Cruz Córdova plays an elf in the new adaptation. He said he wanted to be an elf, but people told him “elves don’t look like you.” But is there any reason why elves shouldn’t look like him? They should be tall, they should be elegant and enchanting, but why would they need to be white? Even if they are white in the books, does that whiteness play any particularly important role?

Some think so. Louis Markos thinks we lose our ability to suspend disbelief when we see a non-white elf. It somehow jolts us out of the story. But I’m not sure why this should be true, beyond a personal view that this is what elves should look like.

We all face issues about what characters should look like – we read a book and have an image in our mind, then we see the character on screen and they look very different. For many of us, most of the time, we can easily adapt.

(More pointedly, Mark Burrows, also cited in the CNN article, is confused by people who can accept walking tree-people but who think “darker skinned dwarves are a bit far-fetched.”) It seems to me that if we don’t think whiteness is essential to elves being elves, then we shouldn’t have any problem with non-white actors playing elves. Add to this that representation is important – a kid who looks like Córdova, too, can dream of being an elf – and the argument doesn’t get us far.

And if we do think elves are essentially white, we might face bigger issues: is Tolkien, in presenting elves as superior, a racist? There is certainly an argument to be made here, but we would like to hope not, and we would like to hope that even if this were the case, his art needn’t be bound to those attitudes.

Part of my concern here is with knee-jerk responses to a show that’s just getting started. As Adam Serwer of The Atlantic notes, we’re beginning to see “reflexive conservative criticism of any art that includes even weakly perceptible progressive elements.” And our own A.G. Holdier has demonstrated how this conservative nostalgia – for a whiter media – can lead to moral risks.

Reinhard admits that his more “paranoid and conspiratorial” tendencies – which he does his best to keep down – show him “Luciferian images and undercurrents.” I wonder whether, if he could keep those thoughts at bay, he, and other critics, might try to watch the show in a slightly more generous mood. When all you have is a hammer, everything might look like a nail – which is why those who go into this show expecting to see wokeness everywhere might not have all that much fun. Better to suspend both disbelief and your commitment to the culture wars; you might enjoy watching it a bit more.

Why Some Fear “Replacement”


On Saturday, May 14th, yet another mass shooting occurred in the United States. Ten people were killed, and three more injured. This was not a random act of violence. The shooter drove about three hours to reach a grocery store in Buffalo, NY rather than a location closer to his home in Conklin, NY. He claims he chose this area because it had the highest percentage of Black residents among his potential target locations. Why target a Black neighborhood? The shooter apparently believes white Americans are being “replaced” by other racial and ethnic groups.

The once fringe idea of “replacement” has become mainstream.

This is the conspiracy theory that some group is working to ensure the decline of the white population in the U.S. and Western Europe, in order to “replace” them with people of other races and ethnicities. Originally presented as an anti-Semitic conspiracy, “replacement” has entered into American politics in a different form; some Republican politicians and media pundits claim that Democrats want increased immigration for the purpose of “replacing” white, conservative-leaning voters with those more likely to vote blue.

It is very easy to dismiss the idea of “replacement.” Indeed, much recent reporting immediately labels it racist without much explanation (never mind that the account is factually mistaken). But given the trend of claiming that left-leaning individuals call any idea they do not like “racist,” it’s worth spelling out exactly why fearing “replacement” relies on racist assumptions.

First, it is worth noting that “replacement” for political gain would be a poor plan. Immigrants are not a monolith. For instance, Donald Trump actually gained support among Hispanic voters between 2016 and 2020. In general, the relationship between demography and political outcomes is not so clean cut. Further, the plan would take a long time to develop – you must be a legal resident for five years before qualifying for citizenship, not including the time it takes to apply for and receive a green card, provided one even qualifies. Of course, this may dovetail with other conspiracies.

Second, there is something antidemocratic about feeling threatened by “replacement.” It is impossible for an electorate to remain static. Between each election, some children reach voting age, some voters die, events happen which change our views and which motivate us to get out the vote or simply stay home. Just as Heraclitus suggested we can never step in the same river twice, we can never have the same election twice. Provided that elections are fair, open, and secure, objecting to a changing electorate because you perceive that your favored political goals will be threatened is to deny the starting premise of democracy – that every citizen’s political preference counts equally.

To fear changing demographics out of concern for the impact on elections is to value your preferred outcomes over the equality of your fellow citizens.

So perhaps some find the idea of “replacement” frightening because they fear its impacts on culture. They might view it as a kind of cultural genocide; the decreasing portion of the white population threatens to destroy white, American culture and replace it with something else.

In 1753, Benjamin Franklin expressed anxieties about German immigration into the colonies. He claimed that, although some Germans are virtuous, the majority of the new immigrants were the “most ignorant or stupid sort of their nation.” He bemoaned that they do not bother to learn English, instead creating German language newspapers and street signs in both English and German. He feared that, unless German immigration was limited, “they will soon so outnumber us, that … [we will not] be able to preserve our language, and even our Government will become precarious.”

In 2022, Americans eat bratwurst and frankfurters with sauerkraut. We send our children to Kindergarten. The most popular American beers originated from Adolph Coors, Adolphus Busch and Frederick Miller. Franklin’s concerns about German immigration echo those we hear today about immigrants from different places. But Germans did not replace Americans or topple the government.

Instead, these immigrants altered our culture. Like our electorates, our culture is never static. It is constantly changing, in response to global events and in response to new knowledge and traditions that immigrants bring. As our culture changes, who we label as outsiders changes; two hundred years ago, it was non-Anglos and non-Protestants.

If Franklin was wrong to fear German influence on American culture, it’s hard to see any relevant difference with fearing the effects of contemporary immigration.

Some fear “replacement” for a different reason, claiming that changing demographics will result in new majorities exacting revenge. The idea being that, after white citizens become a political minority, the new political majority will engage in retributive measures for past injustices.

This view of the dangers of “replacement” indicates that a majority can use our political institutions in ways that unjustly harm minorities. In fact, it seems to even acknowledge that this has occurred. So, why leave that system intact? The far better response would be to reform or maybe even replace current systems that allow a majority to perpetuate injustices against a minority.

And we now see clearly why fear of “replacement” stems from racism. Being afraid of changing demographics requires denying that all citizens of a nation deserve an equal say in how it is run. It means conceiving of a particular culture as superior to another. And, ultimately, it involves thinking our institutions ought to be designed in ways that allow a majority to commit injustices against a minority. In these ways, the person who fears “replacement” endorses a hierarchical worldview where some deserve to count for more, are superior to, and deserve power over, others. It is only through this lens that a change in racial and ethnic demographics can be worrisome.

But given all this, why would anyone find the idea of “replacement” a compelling one? Finding an answer to this question is crucial if we are to counteract it. The U.S. is still very segregated. This is due to the interaction of numerous historical, political, and economic factors, at both the local and national levels. I grew up in a suburb of Buffalo, called Hamburg. According to 2021 data, the population of Hamburg is 96.1% white. The 2021 census estimates that 95.7% of the population of Conklin is white. These figures are remarkable given that the U.S. as a whole is 57.6% white.

To live in a place like Hamburg or Conklin is to live in a white world. You can complete an entire day in town – a trip to the grocery store, a doctor’s appointment and a deposit at the bank – and only encounter white people.

It is no wonder that some may feel threatened by the idea of “replacement”; a world where people of color are increasingly visible is not their world. They have little exposure to a world that is not (nearly) entirely white, so the prospect of it triggers the fear of the unknown. Hence “replacement” is frightening – it threatens to “destroy” their world.

So, responding to terrorist acts like those in Buffalo requires a lot more than athletes telling us to choose love or teaching President Biden about “Buffalove.” It requires significant institutional change. To truly eliminate the grip that ideas like “replacement” have on some, we must work to counteract the injustices that leave many of us living in separate worlds. Given the increasing frequency of racially-motivated terrorist acts in the U.S., this task is only becoming more pressing.

Life Imitates Art (and So Does the News)


There is an old saw that life imitates art. But what exactly does it mean? Is it not the other way around – that art imitates life?

Many answers have been given to this question, but here’s one that I find plausible: life imitates art insofar as it reveals truths about us and our world. Such truths are not true because we find them corroborated by personal experience or the annals of history. The truths of art are true because they frame how we understand ourselves and our history in the first place. We might say, then, that life imitates art insofar as the truths of art help us make sense of life. They help us make sense of our human condition and what we value in it.

Take Homer’s Odyssey. According to one classicist, the great epic poem tells us “something true about life…It’s about homecoming…It’s about the bonds that connect family members over many years despite time and distance.” This is platitudinous, but nonetheless correct. The poem still speaks to us today partly because it transfigures our conceptions of what home and family are. That is, the poem compels us to understand homes and families differently, including our own. And we can appreciate such truths even when we have never left home, much less been to war.

If life imitates art, then so does the news. And there is one little-known artwork that seems to ring especially true given the current state of our union. The work I have in mind is “Stars in My Crown” (1950), a small-budget western film directed by Jacques Tourneur. The film tells the story of Walesburg, a small, predominantly white town in the postbellum South. Their story is strikingly similar to ours. Or we could say that our story imitates theirs.

Like our country right now, Walesburg is sick in body and soul. The town is not only plagued by an epidemic, but also struggling with the scourge of racism. The nature of these ills, as well as the town’s responses to them, are telling.

The racial troubles start – at least in the film – one lazy afternoon. An orphan named John Kenyon is fishing with his dear friend, a former slave named “Uncle” Famous Prill. John is a wide-eyed and well-mannered boy who is deeply loyal to Famous, and with good reason. Famous is a humble old man with a heart of gold. He has long been a guiding light in the community. As John tells us: “I don’t guess there was a boy or man in Walesburg who hadn’t had him for a teacher.”

While John and Famous are sitting along the creek beside their fishing rods, Lon Backett pulls up on his buckboard. Lon runs the general store, as well as a small mining operation outside of town. He wants to speak with Famous because the mica vein his workers have been mining runs under Famous’s property, and Lon wants to buy him out.

Lon makes several offers, but Famous graciously declines each one: “I got a long-tailed coat for Sundays. A house, got a bed. And I gets my vittles three times every God’s day, don’t I? Mr. Backett, what does I want with $16?” Lon drives off in a huff.

A few minutes later, Parson Josiah Gray comes along. The three discuss what had just transpired. They try to calm Famous down, assuring him that he is entitled to his land. After all, he is a free man under the law. But Famous knows better: “just saying a good thing don’t make it so.” The parson gets it. He acknowledges that no matter what assurances he gives, Famous will not have it easy: “I guess Lon Backett will have to kick up an almighty big stink before he learns his lesson.” This is a terrible understatement. Lon’s “stink” will nearly cost Famous his life.

While Lon drums up hostilities against Famous, the citizens of Walesburg start falling deathly ill with “slow fever.” Typhoid. Eventually they will discover that it is from the contaminated school well. Until then, the town goes into a lockdown. School closes and the church is shuttered. The graveyard begins to fill. The doctor and parson work double-time to serve the sickly and dead. (It is only then, by the way, that the doctor becomes integrated into the community. He was an educated elite from the big city and with a disdain for small town life. Townspeople sensed it, and for a long while they distrusted him. Sound familiar?)

During the epidemic, the threats against Famous intensify. Lon’s men are out of work and angry. One night they tear up his corn crop, destroy his winter food stores, and set loose his livestock. They come back another night as Night Riders, clad in white hoods and brandishing torches. They leave a burning cross in front of the porch and pin on Famous himself a note demanding that he give up his land or suffer the consequences.

When the note reaches Parson Gray the next day, he storms into the saloon where Lon and his Klansmen hang: “Haven’t you seen one poisoned well spread grief and trouble through half the town? Don’t you realize the poison in that well was catlap compared to this?” The men are unmoved. If the parson wants a fight, they will give it to him.

Later that night the lynch mob surrounds the home of Famous, rope in hand, and orders him to come out. The parson intervenes. He asks that he be permitted to read Famous’s will before the dreadful deed is done. As the parson reads the will, he names each of the hooded men one by one. Famous intends to bequeath something to each of them: a razor for Bill Cole, who had wanted a beard since he was “knee-high to a hop toad,” an axe for Matt Gibson, his dog to Justin Briley, and even the mica vein for Lon Backett, since he seems to want it “powerful bad.” The men realize that they cannot go through with their plans. Not against Famous.

The film closes with a scene from church the next Sunday. The parson and his flock are singing:

“I am thinking today of that beautiful land
I shall reach when the sun goeth down;
When thro’ wonderful grace by my Savior I stand,
Will there be any stars in my crown?”

The camera pans the room, showing many of the townsfolk we have come to know. Most of them have been regularly attending services, but some have come for the first time. Everyone in the town seems to be there, celebrating together. The camera trains on Lon, with his hands piously clasped as he pours himself into the hymn. He looks as though he has, finally, learned his lesson and is now praying that there may still be stars in his crown.

This wholesome final scene has all the trappings of a feel-good Hollywood ending. A community looks healed and the credits will soon scroll. But then, just behind Lon through a church window, we catch a quick glimpse of Uncle Famous walking down the road, alone. The shot is easily missed. It is a subtle indication that the devastating effects of the peculiar institution continue, and often in ways that go unnoticed by those not suffering from them. The joyous churchgoers are unaware. And to the extent that we viewers believed everything in the town to be turning out alright, we, too, were complicit in the self-deception.

Today we face a similar situation. Coronavirus vaccinations promise an end to this terrible pandemic. Yet while our body politic has a path to health, there is no easy inoculation for the racism that has been poisoning our collective soul. And while most of us acknowledge the difficulties of combating racism, “Stars in My Crown” presents those difficulties in an especially perspicuous way.

First, the film shows how deep-rooted racism is often sustained because it advances the interests of the rich and powerful. This is not to say that racism is reducible to economic or class warfare. The point is rather that racist beliefs and practices are often reinforced because they serve the privileged. Lon Backett foments racial tensions in Walesburg because it advances his business interests. In America today there are many such people who sow racial division for their own gain. The billionaire businessman Charles Koch recently confessed that he and his political associates had “screwed up by being so partisan.” Koch seems well-intentioned. He seems to believe in equality and justice for all. But he and his Koch Network (now named, and not without irony: Stand Together) have invested millions of dollars in the very political messaging that has helped bring racial tensions in America to a fever pitch. This is hardly a new problem. And it persists because those who stand to benefit from systemic racism, however well-intentioned they may be, are easily blinded to the unjust reality they help create.

The film also shows the complexities of the human heart and how it so readily accommodates brotherly love, racial resentment, and economic anxiety. The Night Riders are undeniably racist, and their hate is further stoked by fears that without mining work they will be unable to feed their families. Yet however much racial hate they have, and however much that hate is exacerbated by worries about money, their enmity is nevertheless counterbalanced by a love and respect for Famous. “Sinners also love those that love them” (Luke 6:32). This is a complicated psychology, but not an uncommon one. What makes it complicated is that we cannot easily determine root causes. What is the real reason behind the Night Riders acting as they did, and what was mere pretense? Was their economic anxiety heightened by racial resentment? Or vice versa? Was their brotherly love genuine, or just racism suffering from weakness of will? These very sorts of questions are being intensely debated right now (see, for example, here, here, and here).

Above all, the film reminds us how easily we ourselves are prone to overlook these challenges. When "Stars in My Crown" first debuted, The New York Times praised it: "The true spirit of Christmas – Peace on Earth, Goodwill Toward Men – is reflected both in word and deed in the heartwarming Metro-Goldwyn-Mayer picture." How far from the truth. The film does not warm our hearts, but rather warns us about our hearts. The film enjoins us, as Ta-Nehisi Coates does, to "resist the common urge…toward fairy tales that imply some irrepressible justice."

Now some readers might be saying to themselves: "I've read Coates, and I've thought quite a bit about these issues. I doubt that I really need to watch some B-western made nearly a century ago by an aristocratic Frenchman." This may very well be true. Or it may not be. As Famous tells us in the film, just saying a good thing don't make it so. We may think we understand what's going on around us and in the news, and yet we may also be sorely mistaken.

More Than Words: Hate Crime Laws and the Atlanta Attack

photograph of "Stop Asian Hate" sign being held

There’s an important conversation happening about how we should understand Robert Aaron Long’s murder of eight individuals, including six Asian women (Daoyou Feng, Hyun Jung Grant, Suncha Kim, Soon Chung Park, Xiaojie Tan, Yong Ae Yue) last week. Were Long’s actions thoughtless or deliberate? Is the attack a random outburst at an unrelated target, or “a new chapter in an old story”? Is the attack better explained as a byproduct of anti-Asian American sentiment left to fester, or merely the result of a young, white man having “a really bad day”? Behind these competing versions lies a crucial distinction: in judging the act, should we take on the point of view of the attacker or his victims?

In the wake of the tragedy, President Biden urged lawmakers to endorse the COVID-19 Hate Crimes Act aimed at addressing the rise in violence directed at Asian Americans. The bill intends to improve hate crime reporting, expand resources for victims, and encourage prosecution of bias-based violence. As Biden has emphasized, “every person in our nation deserves to live their lives with safety, dignity, and respect.” By publicly condemning the Atlanta attack as a hate crime, the president hopes to address the climate of fear, distrust, and unrest that’s set in.

Unfortunately, hate crime legislation has proven more powerful as a public statement than as a prosecutorial tool. The enhanced punishments attached to criminal offenses motivated by the offender's biases regarding race, religion, or gender are rarely sought. Part of the problem stems from the legal difficulty of demonstrating motive. This requires going beyond mere intent — assessing the degree to which one meant to cause harm — and instead considering the reasons why the person acted as they did. We're encouraged to judge the degree to which prejudice might have precipitated violence. Establishing motive, then, requires us to speculate as to the inner workings of another's mind. Without a confession, we're left to try to string bits of information together into a compelling narrative of hate. It's too flimsy a thing to withstand scrutiny beyond a reasonable doubt.

This trouble with motive is currently on clear display: Long has insisted that race and gender had nothing to do with the attack, and the police seem willing to take him at his word. On Thursday, FBI director Christopher Wray deferred to the assessment by local police saying that “it does not appear that the motive was racially motivated.” Instead, Long’s actions have been explained as the consequence of sex addiction in conflict with religious conviction; Long’s goal has been described as the elimination of temptation.

How this explanation insulates Long’s actions from claims of bias-inspired violence is not clear. As Grace Pai of Asian Americans Advancing Justice suggested, “To think that someone targeted three Asian-owned businesses that were staffed by Asian American women […] and didn’t have race or gender in mind is just absurd.” The theory fails to appreciate the way Long’s narrative fetishizes Asian American women and reduces them to sexual objects. Rather than avoiding the appearance of bias, the current story seems to possess all the hallmarks. Sure, it might prove a bit more difficult to establish in a court of law, but as Senator Raphael Warnock argued, “we all know hate when we see it.”

So what makes politicians run toward, and law enforcement run from, the hate crime designation? In addition to the difficulty in prosecution, hate crime laws have a shaky record as a deterrent, made worse by the fact that they are rarely reported, investigated, or prosecuted. Despite all but three states now having hate crime laws on the books, rates of bias-inspired violence and harassment over the past several years have remained relatively high. (Many attribute this trend to the xenophobic and racist rhetoric that came out of the previous White House administration.)

But perhaps the value of hate crime legislation can't be adequately captured by focusing on deterrence. Maybe it's about communication. Perhaps the power of these laws is about coming together as a community to say, in a show of solidarity, that we condemn violence aimed at difference. We want it known that these particular individuals — these particular acts — don't speak for us. Words matter, as the controversy regarding the sheriff's office's explanation of the attacker's state of mind makes clear. Making the public statement, then, is a crucial step even if political and legal factors mean the formal charge is not pursued. It's a performance directed at all of us, not at the perpetrator. The goal is restoration and reconciliation. Failing to call out bias-inspired violence when we see it provides cover, allowing roots to take hold and grow unchecked.

Still, the importance of signaling this moral commitment doesn't necessarily settle the legal question of whether hate crime legislation can (and should) play the role we've written for it. Hate crime laws are built on our belief that bias-inspired violence inflicts greater societal harm. These crimes inflict distinct emotional harms on their victims and send a specific message to particular members of the community. Enhanced legal consequences are justified, then, on the basis of this difference in severity and scope. Punishment must fit the crime.

Some critics, however, worry that hate crime laws reduce individuality to membership in a protected group. In a way, such laws are guilty of a harm similar to that perpetrated by the attacker: they render victims anonymous. They rob a person of her uniqueness, strip her of her boundless self, and collapse her to a single, representative label. Because of this, hate crime laws seem at once both necessary for securing justice for the victim — they directly address the underlying explanation of the violence — and diametrically opposed to that goal — the individual victim comes to be defined first and foremost by her group identity.

The resolution to these competing viewpoints is not obvious. On the one hand, our intuitions suggest that people’s intentions impact the moral situation. Specifically targeting individuals on the basis of their gender or ethnicity is clearly a different category of moral wrong. But the consequences that come from the legal application of those moral convictions have serious repercussions. Ultimately, the lasting debate surrounding hate crime legislation speaks to the slipperiness in pinning down what precisely justice demands.

Acknowledging a Violent Past: Disney’s Racist Fairy Tales

photograph of Walt Disney Statue with Disney Castle in background

After months of protests by the Black Lives Matter movement in the wake of George Floyd's death, some white people in the U.S. began to notice that perhaps the world is not as equal as they once thought. They also began to notice that this inequality was perpetuated by their lack of education on race in the U.S. This became obvious as book sales about race began to skyrocket from May to June, with Robin DiAngelo's White Fragility topping the list. This gave rise to the Anti-Racist movement, in which white people take it upon themselves to unlearn the racist behaviors they have been educated in since childhood. People began to acknowledge that the history we learn in school is drenched in the long legacy of white supremacy that this country was built on.

Debate was sparked about how to teach a history that highlights, rather than hides, the violent and racist past of the U.S., and how to deliver this material in a form suitable for children. It's necessary to have these conversations early on, as studies show that children as young as three begin to associate certain races with negative stereotypes, while most adults tend to think they should wait until their children are at least five to begin discussing race. Since children pick up not only on physical racial differences but also on the behaviors attributed to different racial groups, it is important to start teaching children about the history of race, rather than shying away from it or leaving it up to a school curriculum that will likely only perpetuate racist stereotypes and histories.

This debate has arisen again recently with the decision by streaming platform Disney+ to restrict the ability of children to watch certain films. Disney+ placed a warning of racist depictions on certain older films last year, but has now blocked those films from Kids Profiles, which serve children aged seven and under. Some of the films include classics like "Dumbo" (1941), because of its depictions of racist minstrel shows, and "Peter Pan" (1953), which includes racist stereotypes of Native Americans. On its website, Disney acknowledges the role that stories play in shaping perspectives in the world and makes a pledge to review the films it provides in an attempt to spark conversation on history. Certainly, these moves by Disney are a step in the right direction, but perhaps the billion-dollar company can afford, and even has a responsibility, to do a lot more than take just a step towards conversation.

Disney is considered a blockbuster powerhouse by the film industry, and it certainly has an enormous cultural impact in America, as well as internationally. Children for generations have grown up watching its dark-folk-stories-turned-romantic-fairy-tales, with young girls longing to be princesses in search of their long-lost knight. Now that screens have become even more accessible, children can sit with their own iPads at home for hours watching these films. A summer trip to Disneyland or Disney World is considered a rite of passage for tens of millions of families. Disney has literally become its own world, with all of its theme parks combined taking up as much land as the entire city of San Francisco. Now that Disney has also developed a streaming service, its reach only widens, as 55 percent of its subscriptions belong to families with children. Disney obviously plays a large part in many American children's lives by way of the education it provides through storytelling.

Given the formative power Disney wields, when these stories contain racist histories it is necessary to acknowledge and discuss that history. While Disney mentions “negative depictions” and “mistreatment” in their advisory statement that appears before certain films, they never once mention racism or white supremacy. Instead, it seems like they are trying to walk a fine line of appeasing new voices critical of not-so-hidden racism and a consumer base that is unwilling to believe that such a thing as white supremacy still exists in the U.S., or believe children are old enough to read its signs. Considering both the enormous fan base and amount of content made by Disney that children consume, they should be more aggressive in their policies towards rooting out their white supremacist past by using educational tools on their streaming platform.

Realistically, there are two different types of education regarding race that happen in America. For white children, race is evidently something that they notice at an early age, and then they begin to unknowingly recognize, learn, and perpetuate racism, perhaps without even noticing what they are doing. For children of color in America, especially black children, race is something that they become aware of through macro- and microaggressions they experience as a result of the white supremacy that encumbers and constructs life in America. When students of color start their schooling, they are immediately placed in an environment that is built against them.

If Disney is willing to acknowledge that stories matter, then they perhaps need more than just an alert regarding "negative depictions" in order to address the problematic actions and behaviors shown in their films. By recognizing that stories matter, they must also recognize the influence they have in teaching children, often without any parental oversight, as Disney is most often considered a kid-friendly source. They owe the children of color watching these films more than an acknowledgement of the harm they perpetuated for so long. They could use their platform as an educational opportunity to spread anti-racist awareness to the millions of children and even adults who use it.

Starting these conversations is a helpful step, but Disney has both the money and influence to help spread awareness and education through a far more extensive system. It is important to remember that it is not Disney's sole burden to undo the racist history perpetuated in history books and through word of mouth, nor would it even be possible for the company to do so, as that is a task people have struggled with and will be struggling with for decades. Ideally, schools and parents would be able to have truthful conversations, as unbiased as possible, about the racist history of America, but realistically this does not seem possible for most American children. As can be seen from the protests and politics of 2020, white adults struggle to talk about race or even to accept that racism is systemically ingrained in American life. If adults struggle to talk about racism with each other, how can they be expected to have productive conversations with their children?

If talking about racism in America is normalized for this generation of children, then perhaps a productive cycle of reckoning with a white supremacist past can begin for future generations. This is not yet the reality; in the meantime, Disney's reach and connection with families give it a better opportunity than most to recount accurate histories of the peoples of America. And it may have a moral obligation to do so, given its not insignificant contribution to the problem.

Under Discussion: Dog Whistles, Implicatures, and “Law and Order”

image of someone whispering in an ear

This piece completes our Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Law and Order.

For the last several days, The Prindle Post has explored the concept of “law and order” from multiple philosophical and historical angles; I now want to think about the phrase itself — that is, I want to think about what is meant when the words ‘law and order’ appear in a speech or conversation.

On its face, ‘law and order’ is a term that simply denotes whether or not a particular set of laws are, in general, being obeyed. In this way, politicians or police officers who reference ‘law and order’ are simply trying to talk about a relatively calm public state of affairs where the official operating procedures of society are functioning smoothly. Of course, this doesn’t necessarily mean that ‘law and order’ is always a good thing: by definition, acts of civil disobedience against unjust laws violate ‘law and order,’ but such acts can indeed be morally justified nonetheless (for more, see Rachel Robison-Greene’s recent discussion here of “substantive” justice). However, on the whole, it can be easy to think that public appeals to ‘law and order’ are simply invoking a desirable state of peace.

But the funny thing about our terminology is how often we say one thing, but mean something else.

Consider the previous sentence: I said the word ‘funny,’ but do I mean that our terminology is designed to provoke laughter (or is humorous in other ways)? Certainly not! In this case, I’m speaking ironically to sarcastically imply not only that our linguistic situation is more complicated than simple appearances, but that the complexity of language is actually no secret.

The says/means distinction is, more or less, the difference between semantics (what is said by a speaker) and pragmatics (what that speaker actually means). Often, straightforward speech acts mean precisely what a speaker says: if I ask you where to find my keys and you say "your keys are on the table," what you have said and what you mean are roughly the same thing (namely, that my keys are on the table). However, if you instead say "your keys are right where you left them," you are responding with information about my keys (such as that they are on the table), but you also probably mean to communicate something additional like "…and you should already know where they are, dummy!"

When a speaker uses language to implicitly mean something that they don't explicitly say, this is what the philosopher H.P. Grice called an implicature. Sarcasm and irony are paradigmatic examples, but many other figures of speech (such as hyperbole, understatement, metaphor, and more) function along the same lines. Regardless, all implicatures communicate what they actually mean in a way that requires (at least a little) more analysis than simply reading what appears on their face.

In recent years, law professors like Ian Haney López and philosophers like Jennifer Saul have identified another kind of implicature that explicitly says something innocuous, but that implicitly means something different to a subset of the general audience. Called "dog whistles" (after the high-pitched whistles that can't be heard by the human ear), these linguistic artifacts operate almost like code words that are heard by everyone, but are only fully understood by people who know the code. I say "almost" like code words because one important feature of a dog whistle is that, on its face, its meaning is perfectly plain in a way that doesn't arouse suspicion of anything tricky happening; that is, everyone — whether or not they actually know the "code" — believes that they fully understand what the speaker means. However, to the speaker's intended clique, the dog whistle also communicates a secondary message surreptitiously, smuggling an implicated meaning underneath the sentence's basic semantics. This also makes dog whistles frustratingly difficult to counter: if one speaker uses a dog whistle that communicates something sneaky and another speaker draws attention to the implicated meaning, the first speaker can easily deny the implicature by pointing to the explicit content of the original utterance as what they really meant.

Use of dog whistles to implicitly communicate racist motivations in government policy (without explicitly uttering any slurs) was, infamously, a political tactic deployed as a part of the Republican “Southern strategy” in the late 20th century (for more on this, see Evan Butts’ recent article). As Republican strategist (and member of the Reagan administration) Lee Atwater explained in a 1981 interview:

“You start out in 1954 by saying, ‘[n-word], [n-word], [n-word].’ By 1968 you can’t say ‘[n-word]’—that hurts you, backfires. So you say stuff like, uh, forced busing, states’ rights, and all that stuff, and you’re getting so abstract. Now, you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites.…”

Of course, terms like 'forced busing' and 'states' rights' are, on their faces, concepts that are not necessarily associated with race, but because they refer to things that just so happen, in reality, to have clearly racist byproducts or outcomes — and because Atwater's intended audience (Republican voters) knew this to be so — the terms are dog whistles for the same kind of racism indicated by the n-word. When a politician invokes 'forced busing' or a Confederate apologist references 'states' rights,' they might be saying something about education policy or the Civil War, but they mean to communicate something much more nefarious.

Exactly what a dog whistle secretly communicates is still up for debate. In many cases, it seems like dog whistles are used to indicate a speaker’s allegiance to (or at least familiarity with) a particular social group (as when politicians signal to prospective voters and interest groups). But other dog whistles seem to signal a speaker’s commitment (either politically or sincerely) to an ideology or worldview and thereby frame a speaker’s comments as a whole from within the perspective of that ideology. Also, ideological dog whistles can trigger emotional and other affective responses in an audience who shares that ideology: this seems to be the motivation, for example, of Atwater’s racist dog whistles (as well as more contemporary examples like ‘welfare,’ ‘inner city,’ ‘suburban housewife,’ and ‘cosmopolitan elites’). Perhaps most surprisingly, ideological dog whistles might even work to communicate or trigger ideological responses without the audience (and, more controversially, perhaps even without the speaker) being conscious of their operation: a racist might dog whistle to other racists without any of them explicitly noticing that their racist ideology is being communicated.

This is all to say that the phrase ‘law and order’ seems to qualify as a dog whistle for racist ideology. While, on its face, the semantic meaning of ‘law and order’ is fairly straightforward, the phrase also has a demonstrable track record of association with racist policies and byproducts, from stop-and-frisk to the Wars on Drugs and Crime to resistance against the Civil Rights Movement and more. Particularly in a year marked by massive demonstrations of civil disobedience against racist police brutality, politicians invoking ‘law and order’ will inevitably trigger audience responses relative to their opinions about things like the Black Lives Matter protests and other recent examples of civil unrest (particularly when, as Meredith McFadden explains, the phrase is directly used to criticize the protests themselves). And, crucially, all of this can happen unconsciously in a conversation (via what Saul has called “covert unintentional dog whistles”) given the role of our ideological perspectives in shaping how we understand and discuss the world.

So, in short, the ways we do things with words are not only interesting and complex, but can work to maintain demonstrably unethical perspectives in both others and ourselves. Not only should we work to explicitly counteract the implicated claims and perspectives of harmful dog whistles in our public discourse, but we should consider our own words carefully to make sure that we always mean precisely what we think we do.

Under Discussion: The Multiple Ironies of “Law and Order”

photograph of a patch of the confederate flag

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Law and Order.

You hear a person running for office described as a “law and order” candidate. What, if anything, have you learned about them and their policies? The answer is either “nothing” or “nothing good.” The only wholesome association with the phrase is the infinitely replicating and endless Law and Order television franchise. Otherwise, this seemingly staid phrase misleads — and that is exactly the intention. As we all are routinely reminded, “law and order” is a deliberate verbal irony. When people don’t heed these reminders, it becomes a tragic irony.

In 1968, two conservative candidates were running for President of the United States: Richard Nixon and George Wallace. Nixon, the Republican nominee, won the election. However, Wallace won the electoral votes of five southern states and garnered 13% of the popular vote. Both candidates ran on explicit law-and-order platforms and articulated them as such. During the course of that campaign, Nixon was often challenged to distinguish himself from Wallace on the issue of law and order. During a televised interview on Face the Nation, Nixon demonstrated the slipperiness of the term "law and order." He said that all three candidates in the 1968 presidential election — Hubert Humphrey, George Wallace, and himself — supported law and order; the difference was what they meant by it and how they would achieve it.

Each presidential candidate in 1968 presented a different vision of law and order, during a period of significant unrest. Wallace gave full-throated support to a segregationist and populist message, couched in terms of the rights of states to shape their culture free from heavy-handed federal meddling. Hubert Humphrey was an advocate of civil rights legislation and nuclear disarmament. Though his anti-war credentials were tarnished by his role as Lyndon B. Johnson’s vice-president during the Vietnam conflict, Humphrey’s view of law and order was broadly one of egalitarianism and peace. Nixon’s avowed interpretation of law and order was the rule of law, and freedom from fear. Here, the irony begins.

The details of the strategy by which Nixon and the Republican Party won over voters in the states of the U.S. South are now well-known. The practically named "Southern Strategy" first took the presidential stage with the 1964 campaign of Barry Goldwater. He took a pronounced stance against civil rights legislation that garnered him the few electoral votes he received in his presidential run — all from southern states (and his home state of Arizona). Opposing civil rights legislation, and any other federally mandated policies of integration and egalitarianism, was the core of the strategy. This was not done in an explicitly racist manner, but under the banner of preserving the sovereignty of individual states, as Republican strategist Lee Atwater laid bare in a 1981 interview.

This is deliberate verbal irony: the strict meaning of the words actually uttered differs from the meaning intended by the speaker. Atwater confirms that when Republican candidates for office say "preserve states' rights," what they mean is "preserve the white southern way of life." Nor is this an idiosyncrasy of Atwater's. The intellectual basis for the Southern Strategy comes from William F. Buckley's 1957 editorial in the National Review, in which he argues that the white community in the South is entitled to ensure it will "prevail, politically and culturally, in areas in which it does not predominate numerically." Law and order, but only for white people. Freedom from fear, but only for white people. This is the Southern Strategy.

This direct verbal irony entails more irony at the level of political philosophy and general jurisprudence (i.e., theories of the concept of law). Predominant theories of general jurisprudence, especially among conservatives, see law as being generally applicable: that is, every person is subject to the same laws in the same way. This is the meaning of the phrases “rule of law” and “equal under the law.” However, talk of states’ rights in the context of the Republican Southern Strategy stands for exactly the opposite proposition: the law should apply in one way to white people and a different way to non-white people. The legal legerdemain achieved is profound in its pernicious effect. When the law is articulated in a sufficiently abstract fashion, it will not say that one group will be disparately, negatively affected. Because it doesn’t say it, many people will be convinced that it doesn’t actually affect people differently. This allows people to shift blame onto those whose lives are made more difficult, or ruined, by the law.

Disparate impact, however, has become one of the U.S. Supreme Court's trademark tests for unlawful discrimination. The test arose from Griggs v. Duke Power Co., in which Black employees sued their employer over a practice of using IQ tests as a criterion for internal promotion. Previously, the company had directly forbidden Black employees from receiving significant promotions. However, after the passage of the Civil Rights Act of 1964, formally discriminatory policies were illegal. The Griggs court expanded the ambit of the Civil Rights Act to policies that were substantially discriminatory in their effect, even if they were non-discriminatory in form. This rule was later limited by the Supreme Court in Washington v. Davis, in which the court required proof that substantially discriminatory policies were adopted with the intent to achieve that discriminatory effect.

The Supreme Court, the ultimate authority on U.S. law, holds that laws which have disparate impact are bad law. But disparate impact, as it is defined by the Court, is exactly what the Southern Strategy aimed at. Say one thing, which is superficially acceptable, but mean another thing, which is expressly forbidden. Hence the “law” of the Southern Strategy’s “law and order” is not law at all.

How much of this dynamic any particular law-and-order candidate, much less the people that vote for them, is aware of is an open question. Here, the deliberate verbal irony becomes tragic irony. Anyone who has learned the lessons of history knows what will happen, while those who have not learned do not.

‘Bon Appetit’ and the Politics of Food

photograph of halved fruits and vegetables arranged around yellow plate in the center

In the same way that the #MeToo movement encouraged women to speak out about sexism in their workplaces, the return of the Black Lives Matter movement to the forefront of mainstream consciousness has given BIPOC a platform to start a conversation about racism in their fields. Notably, one such conversation is currently unfolding in the food industry. In early June of 2020, Adam Rapoport stepped down from his position as the editor-in-chief of Bon Appétit magazine when a photo of Rapoport wearing brownface at a party surfaced on Twitter. In the last few years, Bon Appétit has been steadily amassing an online following through its YouTube channel, which has helped the magazine present itself as an inclusive and diverse brand to its massive twenty-something audience. In an article for Vox, Alex Abad-Santos describes how

“A dramatic part of Rapoport’s resignation was watching the wall tumble between what he was presenting to the outside world—socially conscious, thoughtful, empathetic—and his real-life actions, which according to staffers included microaggressions, underpaying staff, and taking advantage of his assistant. The ousting of a man who wrote about the killing of George Floyd and standing in solidarity with immigrants and minorities while he was, at the same time, treating his black and brown staffers inequitably, feels a lot like justice.”

However, many former employees have pointed out that Bon Appétit's problems cannot be solved merely by firing Rapoport. The magazine (and the food industry at large) is still built on a foundation of structural racism, a foundation which is obfuscated by gestures towards multiculturalism. Despite these hollow gestures, BIPOC within the industry have been undermined by their editors in insidious ways. Assistant editor Sohla El-Waylly, for example, claimed in an Instagram post that she would be "pushed in front of video as a display of diversity," and that only white editors were paid for their video appearances on the magazine's YouTube channel. Former employees like Alex Lau felt pressured to only make food from their own culture, and were told by their editors that "ethnic" food would not be interesting enough to the magazine's audience. Nikita Richardson, a former black employee, struggled with the emotional toll of working in such a toxic work culture, explaining how "You see your coworkers every day of your life, and to go into work every day and feel isolated is misery-inducing . . . Nowhere have I ever felt more isolated than at Bon Appétit."

It is especially important that this interrogation of white hegemony is happening within the food industry. We tend to think of food as apolitical, one of the few neutral grounds where all people can meet without cultural or ideological baggage. There’s a reason that cooking shows are a safe bet for major networks hoping to attract the largest possible audience. Cooking shows are generally innocuous and uncontroversial, and because food makes up such a large part of our daily lives, it’s nearly impossible for viewers not to relate on some level. However, food is a deeply moral and political subject. The foundational story of Christian moral philosophy, the story of Adam and Eve from the book of Genesis, is, after all, a story about eating, a reminder of how long food has served as a central symbol in moral and philosophical discourse.

Food is political chiefly because it connects us to the world and reveals our place within larger structures of power. As scientist Louise Fresco explains in her book Hamburgers in Paradise,

“Every mouthful we eat connects us with those who long ago started to domesticate plants and animals, with the migrants and traders who spread them across the world . . . with the farmers who are proud of their land and their work, and with the laborers who pick beans and mangoes and pack them and in some cases endure appalling working conditions.”

Food is such a potent way of conceptualizing how social networks function under capitalism that in the 2017 play Young Marx, a fictionalized version of Marx uses the ingredients of his breakfast to explain how capitalism (and the things produced by it) alienate us from other people. He says, “Before capitalism I could see my brother’s hand in the labor content of my breakfast,” pointing out the division between factories that produce food and the tables those items eventually end up on. “A sausage could explain my life,” Marx exclaims, because food (as a young Engels chimes in) “maps your social relations.”

This relationship between food and consumer becomes even more muddled when we consider the online cooking-as-entertainment industry, which Bon Appétit participates in. Even when produced for the sake of entertainment and not consumption, food doesn’t lose its ability to map social relations. Media critic Dan Olson points out in a video released shortly before Rapoport’s resignation that

“Cooking entertainment can’t avoid [food politics] . . . any show is going to inherit those meanings and symbols purely by virtue of the kinds of food the show considers normal, what it considers exotic, and what it assumes the viewer is familiar with or has access to.”

Olson explains that spectacle is generally the main element of online cooking shows. The spectacle can be the chef’s outrageous or charismatic personality (popular celebrity chef Gordon Ramsay has built his entire brand on this) or outlandish ingredients (donuts draped in gold-leaf or five hundred-dollar steak dinners, to name a few examples). Bon Appétit’s most popular series, Gourmet Makes, is about a pastry chef who attempts to recreate processed snack foods like Twinkies using high-quality ingredients, a spectacle which draws in millions of views per video.

But the spectacle can also be an “exotic” dish or regional cuisine unfamiliar to American viewers. Travel food shows, both on television and on the internet, often participate in this not-so-subtle racism. A white foodie will visit a non-Western culture and “discover” dishes unfamiliar to Westerners, emphasizing how new or outlandish such dishes are. So-called “superfoods” often rely on the same racist assumptions. Labeling goji berries or acai a superfood gives those products a veneer of the unfamiliar, even imbuing them with magical properties. Bon Appétit has specifically come under fire for this practice. An apology released by the magazine on June 10 in the wake of Rapoport’s resignation acknowledges that “Our mastheads have been far too white for far too long. As a result, the recipes, stories, and people we’ve highlighted have too often come from a white-centric viewpoint. At times we have treated non-white stories as ‘not newsworthy’ or ‘trendy.’” Non-white labor has historically been invisible in white kitchens and restaurants, which is why the tokenization of non-white food and culture for the sake of a magazine spread is especially wrong.

It’s difficult to say if Bon Appétit will actually follow through on its promise to be better. Matt Hunziker, a white video editor who has vocally challenged the racism his colleagues experienced at Bon Appétit, was suspended from the company on June 25, supposedly because of his willingness to speak out against the company. If Bon Appétit is unable to change its ways, one possible response would be to decenter massive media conglomerates like Condé Nast (the company that owns Bon Appétit, as well as Vogue, The New Yorker, Vanity Fair, and GQ) by investing more material resources in BIPOC chefs and food writers working outside of mainstream food discourse. Paying lip service to non-white food without giving its chefs material advantages will only perpetuate an unequal and immoral system.

COVID-19 and Systemic Racism

photograph of "No Justice No Peace" sign at protest

As more information about COVID-19 and its effects comes to light, it is clear that the impacts of the disease are not the same everywhere or for everyone. Some communities are hit harder than others. In many cases, COVID-19 hot spots highlight systemic problems that existed before “coronavirus” was a household word. The public action that a society takes when things get rough reflects its values, in this case, its judgments about who and what is really important. Unsurprisingly, the circumstances of marginalized groups are not sufficiently taken into account in the construction of social programs and systems. When these social programs serve as the circulatory system of a nation during a pandemic, marginalized groups are the hardest hit. One lesson that this great tragedy should teach us is that we must recognize and embrace the diversity in our communities. Respect and appreciation for our cultural differences can help us to construct preemptive, life-saving policies.

If we’re willing to collectively put forth the work, the multiple tragedies we’ve recently gone through as a nation could give rise to transformative action. The murder of George Floyd and the subsequent protests to amplify the message that Black Lives Matter have cast the issue of racial justice onto center stage. The disproportionate effect of COVID-19 on communities of color can and should help people to understand what it means for racism to be systemic. One barrier to meaningful dialogue about racism is that some people think that for an action to be racist, it must be done with an explicit, hateful, discriminatory intention. Certainly, there are cases in which these conditions are met—some people are explicit, hateful racists. Systemic racism, however, has the potential to be even more pernicious and impactful. Understanding systemic racism requires us to think more holistically. We need to ask ourselves: How do we design our cities? Where do we put institutions that generate pollution and waste and why do we put them where we do? What social programs do we provide and to whom? What steps are we taking to see to it that upward mobility and human flourishing are attainable for all members of society? When answers to these questions suggest that people of color are consistently more negatively impacted by our practices, we have problems of systemic racism to fix. We find ourselves in just that situation when it comes to our response to COVID-19.

One critical component of emergency response is the transmission of information. Across the country, there have been huge challenges to information dissemination, created by a cluster of assumptions. Chief among these assumptions is the idea that everyone can speak English or is in regular contact with someone who can. For instance, meatpacking plants have been among the hardest hit institutions worldwide. As I have written in a previous article, conditions in slaughterhouses create a perfect storm for the spread of coronavirus. People work shoulder-to-shoulder doing strenuous activities that cause them to sweat and breathe heavily. Many employees at these facilities are immigrants and refugees who don’t speak English. Even if health and safety materials about COVID-19 are being created and widely disseminated, if a person can’t understand that material, they are in a poor position to help themselves or those around them. In crafting public health policy, we need to take into account the diverse nature of our communities. We need to provide information in more than one language. What’s more, we need to find ways of being proactive with these communities. We shouldn’t assume that everyone has access to television or the internet.

Florida governor Ron DeSantis made headlines last week for blaming his state’s spiking COVID-19 cases on migrant farm workers. This is a common move from the emerging coronavirus playbook—blame an outbreak on one event or group of people and imply that the spike is, therefore, somehow not real. Far from being exculpatory, increased cases among migrant farm workers are evidence of failure in governmental strategy. Florida public policy officials are aware that migrant farm workers exist in their state. However, in thinking about public health and the economy, concern for what might be happening on the margins came too little and too late.

Racial injustice often leads to a snowball effect of harms. Consider the case of Louisiana’s infamous “Cancer Alley,” an 85-mile stretch of land along the Mississippi River that is home both to a majority-Black population and to roughly 150 petrochemical plants. The pollution in this area causes a range of health issues for those who live there. According to the EPA’s 2014 National Air Toxics Assessment, residents of this area are 95% more likely than most Americans to develop cancer from air pollution. These communities were already disenfranchised; pollution makes it worse. Pollution also causes pre-existing conditions, so, unsurprisingly, COVID-19 has ravaged communities in Cancer Alley. At one point in April, a community in the area had the highest per capita COVID-19 death rate in the country.

The Navajo Nation has also been disproportionately affected by COVID-19—at one point it had the unfortunate distinction of having the highest per capita infection rate in the United States. The Navajo community has enacted strict lockdown and prevention measures, which appear to have flattened the curve, at least for now. Help was slow to arrive. The CARES Act set aside $600 million to assist the Navajo Nation in its fight. To combat such an infectious disease, assistance is needed urgently. However, in order to receive the money to which they were entitled, the Navajo Nation had to sue the U.S. Treasury. By this point, people were already dead. Given the relationship in which the United States government stands to Native peoples, swift assistance should have been a top priority.

When we say that Black Lives Matter and when we say that the lives of people of color matter, we take on responsibilities. We need to be reflective and active not just about our criminal justice system, but about the broad social and economic systems that give rise to inequity and injustice.

Black Lives Matter: Australia

Protest in Australia; two signs are visible: one reads "lest we forget the frontier wars, black lives, white lies" and one shows a black and red image of Australia with the word "genocide" written on it

Our public discourse [is] full of blak [sic] bodies but curiously empty of people who put them there. Alison Whittaker

This weekend, Black Lives Matter protestors in Australia took to the streets in contravention of COVID-19 health warnings, joining worldwide protests sparked by the murder of George Floyd to highlight police violence against people of color and to once again raise the issue of Aboriginal deaths in custody.

The statistics and the stories of Black deaths in custody are a vexed issue in Australia, and a national disgrace. In the 30 years since a royal commission was conducted, successive governments have failed to implement many of its key recommendations; and in that time 432 Aboriginal Australians have died in police custody. Despite the manifest violence, negligence, and displays of overt racism around these deaths, charges against police are rarely brought, and there has never been a conviction for an Aboriginal death in custody in Australia.

Indigenous activists and families of victims have been trying, with only incremental and limited success, to elevate the issue in the wider Australian public. The names and stories of most of these people remain unknown to most Australians.

In a piece for The Conversation, Alison Whittaker, law scholar, poet, and Indigenous Australian activist, writes,

“Do you know about David Dungay Jr? He was a Dunghutti man, an uncle. He had a talent for poetry that made his family endlessly proud. He was held down by six corrections officers in a prone position until he died and twice injected with sedatives because he ate rice crackers in his cell. Dungay’s last words were also “I can’t breathe”. An officer replied ‘If you can talk, you can breathe.'”

The statistics for Aboriginal incarceration in Australia are staggering. In some areas of the country, Aboriginal people are the most incarcerated people on earth; they make up roughly 3.3% of the overall population but account for 28% of the prison population. Aboriginal women represent 34% of the overall national female prison population.

The 460 deaths in custody since 1990 are a terrible number, and to each belongs a story – a life, and then a death of indignity, of violence, of neglect. As in the US, this violence in Australia belongs to a historical legacy of rapacious, brutal colonial expansion.

May 27 to June 3 is Australia’s National Reconciliation Week. These dates mark two significant milestones for Aboriginal people. One is the 1967 referendum, which for the first time recognized Aboriginal Australians as citizens. The other is the High Court native title decision known as Mabo, which overturned the legal doctrine of ‘terra nullius’ – the principle by which the Crown acquired sovereignty of the continent in 1788, on the basis that the lands were lands ‘belonging to no one.’ 

But there is still a long way to go for Australians to come to terms with the history of frontier wars, which morphed into state-maintained forms of oppression and violence, and then into official government policy of forced removal of Aboriginal children from their families. This history is not visible enough to, nor unflinchingly acknowledged by, wider Australia. Nor are the tendrils visible which reach through that history into the present, holding Aboriginal people in all sorts of disadvantage, disadvantage that is reflected in the statistics. As the Uluru Statement from the Heart says:

“Proportionally, we are the most incarcerated people on the planet. We are not an innately criminal people. Our children are aliened from their families at unprecedented rates. This cannot be because we have no love for them. And our youth languish in detention in obscene numbers. They should be our hope for the future. These dimensions of our crisis tell plainly the structural nature of our problem. This is the torment of our powerlessness.” 

What, at this time, now, can be said and done about the work of reconciliation? In 2000, 300,000 people walked across Sydney Harbour Bridge to show their support for reconciliation. This year, then, marks the twentieth anniversary of ‘the bridge walk’. Yet material change has been frustratingly slow, and in some indicators, things are going backwards. 

The 2018 Close the Gap report on Indigenous health and education targets and outcomes found child mortality at twice the rate for Aboriginal children, school attendance rates declining, and a persistent life-expectancy gap of almost a decade between Indigenous and non-Indigenous people. 

Perhaps reconciliation has had its moment. It was maybe only the first word Australians have learned in the lexicon of change and of justice. Recognition of the nation’s shameful history is a starting point on the long road to equality and justice. But perhaps it has become a platitude, a way for white Australians to settle the ledger of their guilt, a way to paper over deep-seated systemic injustice that is thwarting real progress for Aboriginal lives and that continues to create privilege for settler Australians.  

The problem, as many voices have been saying (for a long time, but especially in the weeks since the BLM protests broke out in the US following the murder of George Floyd), is that white and settler oppression of Black and Indigenous people is thoroughly baked into the system: baked into the system of colonial expansion – which included slavery and dispossession under terra nullius, both mechanisms used to dehumanize people for the purpose of wealth creation – and baked into its neoliberal iterations.

Perhaps the problem, rather, is that we have been reconciled to these things, to the reality of Indigenous disadvantage and risk of police violence and incarceration, for too long. 

How, then, can we reimagine and re-engage the concept, the work of reconciliation, or do we need to move beyond it to another stage? The national conversation in Australia has been painfully slow to get going. 

National Sorry Day, marked on May 26th, began in 1998, the year after the release of the Bringing Them Home report; in 2008 the Australian Government formally apologized to Aboriginal people who were forcibly removed as children from their parents under a government assimilation policy.

Australian philosopher Raimond Gaita writes that the findings of the report “[were] a source of deep shame for many Australians, and for some a source of guilt” (A Common Humanity, 1999, p. 87). While, as Gaita observes, many people feel shame and guilt, many also resist such feelings, feeling that they are being asked to take responsibility for past wrongs they played no part in.

The refusal of shame sometimes takes the form of national pride, in which being proud of one’s nation is mutually exclusive with acknowledging its brutal history and recognizing the remnants of that history. 

Those who hold this conception of national pride take the view that a history in which racial injustice is afforded a more central place in our story and our journey to self-understanding is overly bleak. Such a history is known by its detractors as the ‘black armband view of history,’ and they argue that we should be focusing on trying to fix current inequalities rather than looking backwards into a troubled past. This obviously ignores the fact that these current inequalities, created by that past, are able to continue precisely because that past has never been reckoned with.

This corrupted, shallow conception of national pride can therefore do nothing but let the deep national wounds fester. To be authentic in our attempts to reconcile, we should not set our national truth-telling against our national interest, and reconciliation cannot be about ‘moving on’ until the appalling statistical gaps between white and Black Australia are well and truly closed.

But the injustice is not just expressed in the material conditions (by these gaps), or even the systemic problems. Simply moving forward means that there is no proper acknowledgement that those who suffered — and continue to suffer these injustices — are wronged, and that to be wronged is itself a distinctive and irreducible form of harm.

Jacqueline Rose, reflecting on the 2018 conference on ‘Recognition, Reparation and Reconciliation’ in Stellenbosch, South Africa, wrote: “thinking was not enough. Not that ‘feeling’ will do it either, in a context where expressions of empathy – ‘I feel your pain’ – are so often a pretext for doing nothing.”

Guilt and shame are part of a pained acknowledgement of wrongs we have committed or in which we are in other ways implicated. But they must also be part of what forces us to change the system and ourselves. 

As protests in response to George Floyd’s murder and in support of the Black Lives Matter movement against systemic racialized violence and oppression raged across the US last week, a Sydney police officer was filmed handcuffing and then sweeping the legs out from under a sixteen-year-old Aboriginal boy who had just issued a vulgar verbal threat; the officer slammed the boy’s face into the pavement. 

Shortly afterwards the New South Wales police minister defended the officer, saying he was provoked and threatened. The minister, in public remarks, expressed far more outrage at the verbal abuse from the teenager than at the officer’s brutal response. 

How can reconciliation occur if such blatant power differentials cannot even be recognized, if the historical weight of wrongs done to a people and the humiliation and disadvantage they continue to suffer is totally invisible? Nothing, then, has been reckoned with. 

The worst thing about this story from Sydney is the grim, horrific moral equivalence being drawn between a lippy teenager and an officer of the law whose duty to ‘protect and serve’ was discharged with brutal and retributive force.

When a teenager can be face-slammed for giving a mouthful of foul language to a police officer and this act can be defended by his superiors as a response to a threat, we are nowhere. 

When We Forget Our Dignity

Young person sitting on cement wearing a mask and holding a sign, turned away from camera. More people also sitting and holding signs are visible in the background.

The death of George Floyd should not have happened. An independent autopsy requested by the family concluded that Floyd died of asphyxiation from sustained pressure, disputing the Hennepin County medical examiner’s conclusion that he died from the combined effects of being restrained, underlying conditions, and possible intoxication. Based on footage now widely circulated, it is clear that Derek Chauvin unnecessarily knelt on the neck of a nonviolent offender who used a counterfeit $20 bill at a convenience store. According to the criminal complaint against Chauvin, the sustained pressure continued for 3 minutes after Floyd stopped moving and 2 minutes after another officer failed to find a pulse. 

Chauvin has been arrested and was charged with 3rd-degree murder and 2nd-degree manslaughter, which has now been elevated to 2nd-degree murder. Protests ensued soon after Floyd’s death, engulfing many American cities. Many protesters are not simply mourning the wrongful death of George Floyd but are also targeting their demonstrations against the systemic racial injustice that permits regular police brutality against people of color.

“The protests are not necessarily about Floyd’s killing in particular, but about the savagery and carnage that his death represents,” Charles M. Blow writes. “It is an anger over feeling powerless, stalked and hunted, degraded and dehumanized.”

This anger over degradation and dehumanization has manifested in peaceful protests, destructive riots, and reciprocal violence. Just as one video revealed Derek Chauvin’s disregard for Floyd’s pleas for air and his sustained pressure on the unconscious man, other disturbing clips posted on social media reveal violence by police against demonstrators and by demonstrators against other civilians and police officers. Viral clips are prone to misinterpretation because they exclude proper context and flatten the complexity that often accompanies the captured event. Opinions can be formed on erroneous or partial recordings of events. Even so, one thing is clear: the violence captured by these videos displays violations of human dignity.

Such an observation may seem so banal, so obvious that it is not worth even mentioning. But at a moment when protesters are lashing out against racial injustice and violence is increasingly justified as an appropriate response, the assumption of human dignity is no longer obvious. Therefore, it is worth contemplating what respect for human dignity entails, how it is violated, and how it can be protected.

Human dignity is defined as “the recognition that human beings possess a special value intrinsic to their humanity and as such are worthy of respect simply because they are human beings.” It is thought to be inherent, indivisible, and inviolable. The dignity of each human being is a basic foundation of Christian social thinking and enjoys broad consensus in many cultures and philosophical traditions. While this use of the term “dignity” is often traced to the Enlightenment, the notion the term conveys predates the Enlightenment by many centuries. Philosophers such as Thomas Aquinas and Cicero imply the inherent value of human beings in their writings on natural law.

It is this assumption of the inherent value of human beings that underpins human rights as a part of international law; dignity transcends state boundaries and is the fountain from which other rights flow. The concept features in the preamble of the Charter of the United Nations: “We the peoples of the United Nations determined […] to reaffirm faith in fundamental human rights, in the dignity and worth of the human person, in the equal rights of men and women”. Human dignity is the first article of the European Union’s Charter of Fundamental Rights: “Human dignity is inviolable. It must be respected and protected.” The constitutions of countless countries contain some reference to dignity. Of course, simple observation demonstrates that mere codification of this ethical concept does not ensure its protection.

“[T]hat same human dignity is frequently, and deliberately violated all over the world,” Professor Paul van Tongeren observes. “When people are murdered, tortured, oppressed, or traded it is indeed a flagrant violation of their dignity.” Other violations arguably include humiliation, instrumentalization, degradation, and dehumanization.

In response to the death of George Floyd and the resulting demonstrations, Robert P. George, an American legal scholar who has written about human dignity, wrote the following in a statement released on behalf of Princeton’s James Madison Program: “What unites us—what makes us ‘out of many, one’—is our shared commitment to principles we believe to be essential to the full flourishing of human beings, the principles of the Declaration and the Constitution. If we were to distill those principles to a core idea, it is, in my opinion, this: the profound, inherent, and equal dignity of each and every member of the human family. When we truly embrace that idea, we know that racism and racial injustice are unacceptable and must be resolutely opposed.”

Racism and racial injustice could then be understood as one of the many abhorrent effects of a failure to embrace the core idea of human dignity. The degradation and dehumanization of people of color observed by Charles M. Blow is another. Unjust killing is another. So, what can be done?

While institutional reforms are being demanded, social crises, such as the one the U.S. is enduring, also reveal the need for something more basic, more fundamental: ethics education. But this need must contend with the decline of philosophy, the relative absence of ethical training for students in academia, and the growing irreligiosity of America. The traditional reminders of human dignity are slowly dying and their death ought to be mourned, if not reversed. The U.S. is ablaze; a man was unjustly killed; peaceful protesters are met with force, tear gas, and rubber bullets; rioters exert physical violence towards their fellow civilians; a legacy of racism endures. Because this is what happens when we forget our dignity. 

Call It What It Is: On Our Legal Language for Racialized Violence

photograph of Lady Justice figurine with shadow cast on wall behind her

This week, as seems to be the case every week in the U.S., we have seen Black people threatened with harm and killed for no other reason than their race.

I use “see” purposefully. The increased use of cameras to document the context surrounding the harassment, assault, and murder of Black people has raised awareness in recent years beyond the communities that have been experiencing this violence continuously. The incredible and outsized use of force and aggression towards non-white people is laid bare by the traumatic videos capturing these violent acts.

White supremacist violence from the past week includes the murder of George Floyd, an unarmed man, by four Minnesota police officers and the racist, false police report of Amy Cooper against Christian Cooper, a bird watcher in Central Park.

While both the police officers and Amy Cooper are no longer employed and may face further consequences, their actions form part of a much broader system of oppression and violence — a system we seem to lack sufficient moral language, or the proper legal framework, to fully capture.

Amy Cooper can face misdemeanor charges for making a false report under New York law. The penalty for such misdemeanors in the state is up to one year in jail, and a fine of up to $1,000. The reasoning behind such statutes is that by making a false report you have done harm to the criminal justice system itself. The aspects of the law include a mens rea element (a state-of-mind aspect), requiring that you knew the report to be false, and an actus reus element (the conduct constituting the violation), which is actually reporting the crime to the relevant authorities. Cooper meets both of these elements pretty straightforwardly.

However, as this statute targets the harm done to the justice system and peace officers, it is easy to see that there is more to the moral and legal context of her behavior than simply tying up police resources. As is clear in the video Christian Cooper recorded, Amy Cooper focused on Christian Cooper’s race both in her threats to him personally off the call, and by heightening her vocalization of distress while describing him as “African American” to the operator. She thus put Cooper at significant risk, harnessing her power as a white woman and targeting him as a Black man, by directing police attention on him. The shared understanding of the danger that Christian Cooper experiences in the world is necessary for her threats to land and her harassment to be effective. As Christian Cooper said when explaining his filming of her harassment, “We live in an age of Ahmaud Arbery, where Black men are gunned down because of assumptions people make about Black men, Black people, and I’m just not going to participate in that.”

In order to capture the racist motivations behind Amy Cooper’s behavior, we could look to the legal category of hate crimes. These crimes involve targeting victims because of their membership in particular groups. The identities protected by hate crime legislation include race, gender, religion, age, disability, and sexual orientation, among others. “Reckless endangerment,” for example, is one of the crimes that can be classified as a hate crime when it targets an individual based on their race. To include Amy Cooper’s behavior under this category would be expanding the current understanding of reckless endangerment, but it could be a route to adequately identifying the power being wielded and the threat being made.

This crime possesses both mens rea and actus reus elements; Cooper shows disregard for the foreseeable consequences of her action, and her behavior imposes a substantial risk of serious physical injury to another person. Note that the accused person is not required to intend (aim explicitly at) the resulting or potential harm in order to qualify as reckless endangerment. If, however, a case could be made that Amy Cooper was, in fact, intending for Christian Cooper to be harmed by her actions, the crime would qualify as some degree of attempted assault. Regardless, the distinction that is important here is that Amy Cooper is aware of the risk of harm she is placing Christian Cooper in by drawing police attention to him, but she is either disregarding that risk or marshaling that risk (bringing us into the realm of intentionality).

It’s not hard to imagine a potential objection claiming that Amy Cooper can’t know the danger her phone call places Christian Cooper in, and therefore can’t be held responsible for the harm that might ensue. But that would suggest at least culpable negligence, given the many recorded and shared instances of police violence towards Black people and the fact that Amy Cooper pointedly racializes the interaction.

To appeal to negligence, Amy Cooper would have to claim to not have recognized that her actions drawing police attention to Christian Cooper in Central Park would create a substantial risk of physical injury — she would be claiming to be unaware of the systemic violence that she is wielding.

To paraphrase Christian Cooper, we live in the world of Ahmaud Arbery, whose death again showed that assumptions made about Black men mean that even jogging while Black can be a serious risk. We also know that relaxing in the comfort of one’s own home can put Black people at risk of serious threat when confronted by police (#BreonnaTaylor, #BothemSean and #AtatianaJefferson). So can asking for help after being in a car crash (#JonathanFerrell and #RenishaMcBride), having a cellphone (#StephonClark), playing loud music (#JordanDavis), cashing a check (#YvonneSmallwood), or merely taking out a wallet (#AmadouDiallo).

We have also seen that the assumptions made by white people put Black people at risk of death when they sell CDs (#AltonSterling), sleep (#AiyanaJones), walk from the corner store (#MikeBrown), play cops and robbers (#TamirRice), go to church (#Charleston9), or walk home with Skittles (#TrayvonMartin).

It is dangerous for a Black person to be at his own bachelor party (#SeanBell), to party on New Year’s (#OscarGrant), to decorate for a party (#ClaudeReese) or simply leave one to get away (#JordanEdwards), to lawfully carry a weapon (#PhilandoCastile), to shop at Walmart (#JohnCrawford), and to be a 10-year-old walking with his grandfather (#CliffordGlover).

When police confront Black people, they are at serious risk to their life when they have a car break down on a public road or have a disabled vehicle (#CoreyJones and #TerrenceCrutcher), get a normal traffic ticket (#SandraBland), or if they read a book in their own car (#KeithScott).

Police officers use excessive and lethal force when confronted with Black men that run (#WalterScott), ask a cop a question (#RandyEvans), or are in custody (#FreddieGray) or breathe (#EricGarner).

The list goes on. And it should feel overwhelming. The extent of the violence, and the context of the activities that put these individuals at risk, make any claim Amy Cooper has to being unaware of the danger she was placing Christian Cooper in dismissible.

In response to public outcry, Amy Cooper claims to have been scared, not to have been motivated by race, and not to have intended for any harm to come to Christian Cooper. However, in both our moral and legal evaluations of actions, whether someone intended the harm or potential harm is not the only standard we have.

Consider the following set of examples. Imagine I have friends over for a bonfire and am excited to use a new purchase, “Rainbow Fire.” These packets, when added to a fire, make the flames appear in multiple colors (very exciting). However, because the packets rely on chemicals to achieve the colorful result, they end up causing harm to those in close proximity to the fire. In effect, my adding the packets to the fire has caused harm to come to my friends. Our moral (and, roughly, legal) evaluation of my behavior is more nuanced than a simple judgment as to whether I intended to cause them harm or not.

Even if I wasn’t harming intentionally, I still was engaging in behavior that DID cause harm. It was a risk I should have been aware of. Packets that make fires colorful, after all, are pretty likely to be full of chemicals, and if I didn’t check the packets, I neglected dangers I should have attended to, and my friends have every right to be upset that I failed to take precautions and appreciate risks.

A step up from this kind of negligence is being aware of risks, but choosing to disregard them. If I read the packet but decided to proceed anyway, I behave “recklessly.” My friends will have moral (and legal) grounds to blame me.

But if I know that the packets will harm you (it’s a guarantee, not just a degree of likelihood), this goes beyond assessing risk or being ignorant of them. Acting knowingly is just short of intentionally, because though I might not be plotting your lung damage and was aiming at something else, I was aware that the lung damage was going to be a result of my behavior, not merely a risk disregarded. This is a level of mental engagement that we take more seriously, morally and legally.

So, even if we take seriously Amy Cooper’s denial of intentionally causing harm, we still have moral and legal concepts with which to evaluate her responsibility. She put Christian Cooper at risk, which is morally and legally problematic no matter what her mental state. And, unlike in my fire example above, she has weaponized race in a way we should hope to be able to acknowledge in our legal framework somehow. We need a means of capturing the unique abuse of power and the violence Amy Cooper threatened Christian Cooper with on the 25th of May.

I hope to have at least offered suggestions of standards that could be used. The mental state, behavior, and power structure that harnesses the racial targeting are all relevant to the legal evaluation of Amy Cooper’s actions. These considerations can give us further tools to establish the particular features of the racist harms in other violent behaviors being recorded every week.

 

1 We can note that false report violations can often give rise to civil suits. Civil suits, in contrast to criminal suits, are between citizens rather than between a citizen and the state. They focus on the damage one citizen caused another, rather than on crimes (such as assault, theft, or trespassing) over which the state has asserted its own authority. Civil cases include emotional distress, defamation, etc., and a successful result is typically financial restitution. Civil cases treat potential harms and exposure to risk differently, and so aren’t apt for this scenario.

Racial Health Disparities and Social Predispositions

photograph of Surgeon General Adams at podium during coronavirus briefing

Remarks made by U.S. Surgeon General Jerome Adams at last week’s coronavirus press briefing have sparked a heated debate. Most of the commentary surrounding those remarks has focused on accusations of patronizing language or, alternatively, the ever-expanding grip of PC culture. But the real controversy lies elsewhere. The true significance of the Surgeon General’s words rests in parsing ambiguous language; we need to know what is meant by the observation that people of color are “socially predisposed” to COVID-19 exposure, infection, and death.

The Surgeon General’s comments were aimed at addressing a troubling trend. Statistics continue to pour in underscoring racial health disparities: the population of Chicago is 30% Black, but Black people make up 70% of the city’s coronavirus deaths. Similarly, in Wisconsin’s Milwaukee County, African Americans make up 25% of the population but 75% of the confirmed deaths. In Louisiana, Black people make up 33% of the population but account for 70% of deaths.

What could explain these figures? Adams highlighted several of the underlying factors placing Black Americans at greater risk: they are more likely to have complicating conditions such as diabetes, high blood pressure, and heart disease, as well as being more likely to lack access to health care. All of these factors mean that Black Americans are “less resilient to the ravages of COVID-19.”

What is more, people of color, generally, are also more likely to be exposed to infection in the first place. They are more likely to live in multi-generational homes, reside in high-density housing, and make up “a disproportionate share of the front-line workers still going to their jobs.” As Jamelle Bouie explains,

“Race […] still answers the question of ‘who.’ Who will live in crowded, segregated neighborhoods? Who will be exposed to lead-poisoned pipes and toxic waste? Who will live with polluted air and suffer disproportionately from maladies like asthma and heart disease? And when disease comes, who will be the first to succumb in large numbers?”

Skeptics continue to contend that it is reductionist to blame racism for these inequities, and offer in its stead the familiar trope of private behavior and individual choice. But casting the problem as one of personal responsibility not only overlooks the history of systemic racism and structural socioeconomic oppression that defines things like one’s housing and job opportunities (which in turn determine one’s relation to this disease), it perpetuates the false narrative that the sufferer is responsible for her suffering.

And that is the problem with the language of “social predisposition,” and the subtle claim that word choice makes with regard to responsibility for racial health disparities. (If you were biologically or genetically predisposed to infection, how much responsibility would you bear for contracting it? How much responsibility do you bear by being “socially predisposed”?)

On the one hand, “social predisposition” can be read as vaguely acknowledging the history of institutional racism and the consequences it has wrought (and continues to work). Structural forces have conspired (consciously and unconsciously) to disadvantage minorities and enshrine differential access to goods and opportunities on the basis of skin color. On the other hand, “social predisposition” can just as easily be understood as gesturing at social habits, predilections, and weaknesses.

Does such fine analysis of the Surgeon General’s comments make a mountain of a molehill? John McWhorter of The Atlantic, for example, describes this type of criticism of Adams’ remarks as overblown. It is inappropriate and impractical, McWhorter argues, to insist that every talking head reference the prescribed origin story whenever a racial disparity arises. “Members of a certain highly educated cohort,” McWhorter writes, “consider it sacrosanct that those speaking for or to black people always and eternally stress structural flaws in America’s sociopolitical fabric past and present as the cause of black ills.” What’s worse, “writers and thinkers give an impression that their take is simple truth, when it has actually devolved into a reflexive, menacing brand of language policing.”

But the Surgeon General’s remarks cannot themselves be regarded as neutral. The message behind “social predisposition” is ambiguous without context. But when it gets coupled with a plea aimed directly at people of color to change their habits concerning drugs and alcohol because “we need you to step up,” it starts to sound a lot less ambiguous. It threatens to transform the claim about “social predisposition” from a statement about constraining factors to a question of volition. It moves from the language of preexisting conditions to elective tendencies. It reduces structural injustice to a matter of choice.

It also changes our conversation about the link between race and health outcomes from one of correlation to one of causation. Thus, it seems only fair that other potential “causes” should get a hearing. It may not be within the Surgeon General’s purview as a public servant for national health to comment on the root cause of social injustices, but then it can’t be within his purview to subvert that project either. Even if his intention was merely to offer “wise counsel in hard times,” it matters how that advice gets heard and who all hears it.

“Chinese Virus”? On the Ethics of Coronavirus Nicknames

image of World Health Organization emblem

The recent Coronavirus outbreak has undoubtedly affected the physical and economic well-being of many Americans and people across the world. With total Coronavirus cases over 300,000 and counting, economies have plunged and hospitals are overloaded with patients. However, amid this crisis, a new controversy has emerged concerning the various Coronavirus nicknames.

Recently, President Trump referred to the Coronavirus as the “Chinese Virus.” Other nicknames for the virus have emerged such as “Wuhan Virus” and “Kung Flu.” These nicknames have drawn criticism from the left, mainstream media, and Asian Americans while the right has called these nicknames appropriate and has criticized the political correctness of the left. To determine the morality of Coronavirus nicknames, it is first necessary to see the current context of Asian discrimination.

Without a doubt, Coronavirus has furthered racist discrimination toward Asians and Asian Americans. In schools, Asian students face xenophobic comments like “stop eating bats” and the infamous “go back to your country.” Additionally, Asian businesses, particularly restaurants, had seen significant drops in sales even before the quarantine began. There have also been countless cases of Asians being harassed over Coronavirus. In the media, Fox News commentator Jesse Watters went as far as to say, “I’ll tell you why it started in China. They have these markets where they eat raw bats and snakes. They are a very hungry people.” Not only are these blatantly racist generalizations of Chinese people, but it is also a myth that diseases come only from so-called exotic animals. Diseases come from all kinds of animals, such as pigs (swine flu), cows (mad cow disease), and chickens (bird flu), and two of those diseases had their first cases in the Western world. But these facts have held little sway as Asian discrimination has spread across social media, with many comments seizing on Watters’ (and even Senator John Cornyn’s) words to claim that all Asians eat bat soup and snakes.

In this context of widespread Asian racism, many people have started to call nicknames such as “Chinese Virus” and “Wuhan Virus” racist. However, critics say there is an established record of naming diseases based on their original location. Just to name a few, there is Ebola fever (Ebola River), Lyme disease (Lyme, Connecticut), West Nile virus, Lassa fever (Lassa, Nigeria), and St. Louis encephalitis. Diseases have also been tied to nationality, such as German measles, Japanese encephalitis, and the Spanish flu (though the Spanish flu started in Kansas). Therefore, many conservatives have argued that President Trump’s comments merely follow an established pattern of naming diseases.

Even though diseases have historically been named for their place of origin, it is important to recognize that popularity does not equate to morality; just because we have named diseases by origin in the past doesn’t mean that we should continue doing so. In fact, naming diseases by origin is now frowned upon by the medical community. The World Health Organization has set guidelines stating, “Terms that should be avoided in disease names include geographic locations.” This guideline was issued in 2015, well before the Coronavirus pandemic. Contrary to critics’ claims that it remains common scientific practice, geographic locations are no longer used by medical organizations in disease names. The morality of the nickname “Chinese Virus” can’t be based on the popularity of past disease-naming customs; it must instead be judged by the negative impact it has on society at large. Any gain in accuracy about the virus’s place of origin can’t outweigh the further perpetuation of Asian discrimination. Given that Chinese and Asian people are already unfairly associated with Coronavirus and other negative stereotypes, attaching “Chinese” to the Coronavirus would be a dangerous path to take.

To see this, it might be helpful to consider the shift in attitude if a virus were to be named “America Virus” in the midst of a global pandemic where Americans were discriminated against while dying by the thousands. It wouldn’t be well-received by the many Americans doing everything they can to save fellow American lives. This is what is happening in China: selfless doctors tirelessly work overtime and overwhelmed nurses rush from bed to bed, all of them giving their heart and soul to save human lives. These doctors and nurses sacrifice their time to save their fellow countrymen, only for the U.S. to slap their nationality on the very virus they are fighting. Using the term “Chinese Virus” not only risks furthering Asian discrimination; it is also disrespectful to the Chinese nurses and doctors risking their lives to save their fellow countrymen.

Prejudice in the NFL?

painting of lamar jackson in NFL game

The NFL is over for the next six months. The Super Bowl has been won, all the player and coach accolades have been handed out, and teams are busy looking to build on the 2020-2021 season in free agency and the upcoming draft. But in today’s contemporary media environment, the NFL can’t be just about football. Over the past few seasons, the NFL has endured a series of serious media crises: player safety, television ratings, and scandalous players (mostly Antonio Brown). But one issue that continues to linger is diversity and the impact of racial issues on the game. This is no surprise to anyone, as the diversity issues were the subject of host Steve Harvey’s monologue at this year’s NFL 100 Awards ceremony. Indeed, the small pool of minorities who sit in front offices and on coaching staffs, as well as recent decisions regarding players of color, raise the question of who’s to blame for the NFL’s diversity issues and who’s responsible for finding solutions to them.

70% of NFL players are black: the linemen, the running backs, the defense, the receiving corps. But one position in particular is not reflective of the majority demographic: the quarterback. Per The New York Times, 12 black quarterbacks started during the NFL 2019-2020 season, one short of tying the record for most black quarterbacks starting in a single season. There has even been some controversy regarding black quarterbacks in the last few seasons, the most recent concerning 2019 NFL MVP Lamar Jackson. The Ravens quarterback was unanimously voted the league’s most valuable player, but his talents weren’t always recognized. Many sports analysts, league owners, and coaches considered Jackson a running back disguised as a quarterback. Some even suggested that he move to wide receiver. On one hand, comments about Jackson’s game could be based purely on what he demonstrated at the combine. But on the other hand, a black man being judged predominantly by white men hints at something deeper. Maybe it wasn’t just Jackson’s performance at the combine; it was that he didn’t fit the traditional image of an NFL quarterback, like Joe Montana, Dan Marino, or Tom Brady (whom Jackson happened to beat last season). By the same token, however, Super Bowl champ Patrick Mahomes and Texans QB Deshaun Watson are also reshaping the traditional image of a quarterback through their style of play.

Lamar Jackson isn’t the only black quarterback who has received pushback for what he does on the field. There’s Colin Kaepernick, the former San Francisco 49ers QB who exited the league after kneeling on the sidelines during the national anthem to protest police brutality against African Americans. Team GMs, owners, and even the President of the United States condemned Kaepernick for his actions. Now, are the comments from NFL GMs and owners indicative of prejudice? As with Lamar Jackson, Kaepernick’s critics were mostly white men. That they opposed his speaking out against police brutality, however controversial the topic might be for the league, is questionable. But at the same time, once Kaepernick left the league and couldn’t sign with a team, the main reason he couldn’t get a job was that he was considered a PR nightmare. Regardless of whether teams agreed with Kaep’s kneeling, no team wanted the news stories that would come from signing him. If so, then the issue of prejudice would be about the fans’ bias if they condemned Kaepernick for kneeling. To complicate matters even further, Dak Prescott, QB of the Dallas Cowboys, said that Kaepernick’s protests had no place in the league, despite being a black man himself. Either way, some of the sentiment surrounding Jackson and Kaepernick might go beyond what they do on the field.

Jackson and Kaep are only the most recent cases, though. Since black men were first allowed to play quarterback in the league, they have often been considered not smart enough to run offenses or read defenses. Marlin Briscoe, the first black quarterback to start in the league, threw 14 touchdowns during his rookie season with the Denver Broncos. John Elway, a legendary Broncos QB, threw only half as many touchdowns as Briscoe during his rookie season. Despite that performance, Briscoe never played quarterback again. Warren Moon, the only black quarterback in the NFL Hall of Fame, was MVP of the 1977 Rose Bowl and still wasn’t invited to the NFL Combine. He didn’t play in the NFL for six seasons after he left college. Like Jackson, Moon was also told to switch to running back or wide receiver.

The same negative sentiment didn’t only apply to players, either. Although 70% of the players in the NFL are black, only 9% of the managers in the league’s front offices are, and 0% are CEOs or team presidents. There is only one black general manager, and of the 32 NFL teams, only 3 have black head coaches. Back in 2003, the league introduced the Rooney Rule, a policy aimed at addressing the lack of diversity at the head coaching level. Per the Rooney Rule, teams are required to interview at least one minority candidate for head-coaching positions and front office jobs. But per a study by the Global Sport and Education Lab at Arizona State University, the Rooney Rule didn’t improve minorities’ chances of being hired. According to The Atlantic, in the past three years 19 head coaching positions opened up and only 2 were filled by black coaches. Some black coaches are rarely given a chance to make an impact on a team, either. Former Detroit Lions coach Jim Caldwell was fired after back-to-back 9-7 records in the 2017 and 2018 seasons. Bob Quinn, the Lions’ GM, said that Caldwell wasn’t meeting expectations. But Quinn then went on to hire former New England Patriots defensive coordinator Matt Patricia, who went 9-22 in his first two seasons as head coach. Last season, the Lions’ record was 3-12-1.

It could be argued that rather than prejudice, the NFL’s diversity issues are purely “best man for the job” decisions. Teams look for the best quarterbacks who fit their offense and can lead a team. Team owners and GMs bring in coaches who can draw up plays suited to their team’s culture. But at the same time, race is a driving force behind many if not all of the United States’ issues. Politics, advertising, music, fashion, literature, and every other medium that can be thought of is influenced by race in some form or fashion. Is it so far-fetched to think that sports are no different? Perhaps some personnel decisions are based purely on skill and compatibility. But at the same time, the league has been around for decades, and some of the racist sentiment of the past century may have seeped into the present.

When Is Comedy Over the Line? The Departure of Shane Gillis from SNL

photograph of Radio City Music Hall

Earlier this month, the famous sketch comedy program Saturday Night Live announced that Shane Gillis would be joining the troupe. The comedian was allegedly cast in an attempt to appeal to more conservative potential viewers. In recent years, the show has been perceived by many to have a liberal bias, and its creators wanted to draw more politically diverse viewership. Several days later, however, SNL announced that Gillis would not be joining the cast after all. The show’s representatives acknowledged that they cast Gillis on the basis of the strength of his audition, but failed to adequately vet him before offering him the job. In the days immediately following the casting announcement, comedic material surfaced that many found appalling. A good number of the offensive remarks came from a podcast co-hosted by Gillis in which he makes unambiguously racist, sexist, homophobic, and transphobic remarks. There are also recordings of Gillis making rape jokes and mocking people with disabilities.

This is not the first time a comedic institution has decided to part ways with Gillis over the nature of his comedy. The Good Good Comedy Theater, a prominent Philadelphia Comedy Club, tweeted the following,

We, like many, were very quickly disgusted by Shane Gillis’ overt racism, sexism, homophobia and transphobia – expressed both on and off stage – upon working with him years ago. We’ve deliberately chosen not to work with him in the years since.

This event had an impact on the national stage more broadly. On one of his podcasts, Gillis referred to presidential candidate Andrew Yang using a series of racial slurs.

Yang replied to Gillis on Twitter, saying:

Shane — I prefer comedy that makes people think and doesn’t take cheap shots. But I’m happy to sit down and talk with you if you’d like. For the record, I do not think he should lose his job. We would benefit from being more forgiving rather than punitive. We are all human.

It appears that Yang opted to take a measured and forgiving approach during a politically challenging time. Not everyone agrees with his strategy, but plenty of people also disagree with the choice made by SNL.

Some support for Gillis was grounded in concerns about free speech. To the extent that these are concerns about Gillis’ constitutional rights, they are misguided. Our first amendment rights to freedom of speech are rights we have against governmental restrictions of or punishment for speech, not rights we have against private individuals or institutions. SNL is not constitutionally obligated to retain any particular cast member, especially if they believe that cast member will damage their product.

Charitably, however, even if the concern is not a constitutional matter, one may still think that there are moral issues dealing with freedom of speech more broadly. Some of these considerations have to do with comedy specifically. Comedy plays a special role in society. Comedians shine a light on power dynamics within cultures, challenge our existing paradigms, and provide us with a cathartic outlet for dealing with our frustrations.

A third set of free speech concerns has to do with call out culture. Contemporary generations live in a world that is far removed from the one occupied by their ancestors. Our past speech is no longer lost to memory—if we say something online, it’s there forever. Some argue that we should have some freedom to make mistakes, especially in youth, that won’t spell ruin for our careers later in life. We are all human, after all, and forgiveness is a virtue. That said, it’s worth noting that many of the problematic comments made by Gillis were made earlier this year.

Others argue that SNL did the right thing. It is certainly true that we all make mistakes, and that all of us have said things that we later wish that we hadn’t. Nevertheless, Gillis’ behavior does not seem to be behavior of that type. The offensive jokes he made were not aberrations that it would be appropriate to view as juvenile mistakes. These behaviors were routine, habitual, part of his comedy style. What’s more, Gillis only appeared to demonstrate remorse for the content of these jokes when he was in the national spotlight, called out in public space to do so. Many viewed his apology as insincere.

Many critics of Gillis would agree that comedy serves an important social function. But, they might argue, there is a difference between pushing the comedic envelope and being the equivalent of a schoolyard bully. If your child started a YouTube channel dedicated to mercilessly mocking his peers, you’d be likely to punish him and/or get him counseling rather than praise his creativity.

Critics may argue further that SNL tends to be a collection of the best comedic talent this country has to offer. People work for years to develop a background that makes them qualified to be a cast member. If a person wants a job with a high level of prestige and public attention, that person needs to be attentive to their character development generally. Impressive opportunities should be reserved for impressive people. Or, at the very least, genuinely apologetic people.

What’s more, inclusion of Gillis in the program doesn’t do conservatives any favors, and it doesn’t honor the viewership that SNL is attempting to attract. Reasonable, ethical Republicans will certainly object to the characterization of Gillis’ brand of humor as “conservative.”

A further controversy has to do with the way in which presidential candidate Andrew Yang handled this issue. Yang has attempted to brand himself as a candidate from outside of traditional politics, stressing a message of civil discourse intended to have broad appeal. Some view his engagement with Gillis to be tone deaf when it comes to race. Many feel that the message that should be sent to Gillis is that his comedy isn’t funny, it’s offensive. No one is trying to censor or stifle his speech. Gillis is free to work in the kinds of venues for which such behavior is not a deal breaker. He can say what he wants, but if what he wants to say is cruel, perhaps society will not be willing to pay him lots of money in support of those kinds of messages.

Implicit Bias and the Efficacy of Training

colored image of a human brain

On September 12th, California’s state legislature passed a series of measures designed to reduce unconscious biases among medical practitioners and other public servants; under the new laws, doctors, nurses, lawyers and court workers will be expected to undergo implicit bias training as a regular continuing education requirement. A number of advocacy groups argue that it is these unconscious biases that strongly contribute to wage gaps, differential education outcomes, criminal justice proceedings, and healthcare results – such as the fact that pregnant black women are three to four times more likely to die from complications during labor and delivery than are pregnant white women. Bias training is supposed to be a tool for chipping away at the generations of crystallized racism encasing our society.

The only problem is that implicit bias training probably doesn’t work – at least not in the way that people want it to. 

At this point, the data seem clear about two things: 

    1. Unconscious biases are pervasive elements of how we perceive our social environments, and
    2. Unconscious biases are exceedingly difficult to permanently change.

Since Amos Tversky and Daniel Kahneman first drew attention to the phenomenon of cognitive biases in the early 1970s, researchers have explored the varieties of mental shortcuts on which we daily rely; tricks like ‘confirmation bias,’ ‘the halo effect,’ ‘the availability heuristic,’ ‘anchoring,’ and more have been explored by everyone from psychologists and philosophers trying to understand the mind to marketers trying to convince customers to purchase products.

One of the more surprising things about implicit biases is how they can sometimes conflict with your explicit beliefs or attitudes. You might, for example, explicitly believe that racism or misogyny is wrong while nevertheless harboring an implicit bias against minority groups or genders that could lead you to naturally react in harmful ways (either behaviorally or even just by jumping to an unfounded conclusion).  You can explore this sort of thing yourself: implicit association tests (IATs) purport to be able to peel back your natural assumptions to reveal some of the underlying mental shortcuts that operate behind the scenes of your normal thought processes. In general, implicit bias training aims to highlight these cognitive biases by making the implicit processes explicit, with the hope that this will allow people to make conscious choices they actually endorse thereafter.

However, a study published this month in The Journal of Personality and Social Psychology indicates that the demonstrable effect of a variety of implicit bias training modules was, at best, a short-term affair that did not contribute to lasting changes in either explicit measures or behavior. By analyzing evidence from nearly 500 separate studies, researchers discovered that, although implicit bias training seminars, workshops, classes, or other short-form lessons could provoke short-term shifts in mood or emotions, there was next-to-no evidence that these shifts would ultimately translate into different patterns of actual behavior.

This fits with a general pattern of research casting doubt on the efficacy of intensive bias training; in fact, by focusing on implicit problems (rather than the manifest explicit issues), some have argued that implicit training simply distracts from the systemic issues underlying the real problem. Some evidence even suggests that mandatory training (as opposed to voluntary exercises) might make said biases stronger. Overall, this is likely intuitive: the notion that biased attitudes built up over decades of a person’s life could somehow simply be broken apart by a single day’s training is, at best, naive.

If there is one consistent beneficiary of implicit bias training, it’s the companies mandating them. Consider what happened after a video of two black customers being racially profiled at a Starbucks in Philadelphia went viral: the coffee company closed its stores nationwide for several hours so that its workforce could undergo bias training. By appearing decisive, Starbucks was able to address (and generally sidestep) an intensely damaging PR incident at the cost of a few hours of profit. The fact that the bias training was not likely to effectively change the racist environment that precipitated the video was beside the point. As Brian Nosek, one of the psychologists who helped develop the IAT, put it, “I have been studying this since 1996, and I still have implicit bias.” Nonetheless, Starbucks apologized and the news cycle moved on.

So, it remains to be seen what the future holds for the state of California. Certainly, the move toward action regarding the problems of implicit bias is a step in the right direction. However, that sort of training by itself, without systematic redress of the institutional problems that promote oppressive environments (intentionally or otherwise), will ultimately be powerless.

Is It Wrong to Be a Nationalist?

photograph of Trump hugging flag on stage

When President Trump declared himself a “nationalist” last autumn, some wondered if that was good or bad for the country. One writer pointed out that “for many Americans, mention of the word summons up visions of Hitler and Nazism.” Michael McFaul, the ambassador to Russia during the Obama administration, tweeted: “Does Trump know the historical baggage associated with this word, or is he ignorant?” Shortly after Trump’s declaration, President Macron of France warned against “chaos and death,” calling nationalism “the betrayal of patriotism.” 

The largely negative reaction to President Trump’s self-identification as a nationalist presents an opportunity to examine timely ethical questions: What does it mean to be a nationalist in 2019? Is being a nationalist morally wrong? Is nationalism inherently noxious and inevitably violent or is it merely warped and twisted to justify noxious and violent acts?  The distinction is important in uncovering whether the political force should be condemned outright or tolerated and even supported. 

Examples of nationalism’s marriage with racism, ethnic cleansing, and genocide punctuated the last century. Ethnonationalism, and its entanglement with religion, plagued the Balkans, most recently in the 90s when Yugoslavia splintered under the pressure of civil war. A desire for Hutu ethnic supremacy in Rwanda led to the mass murder of hundreds of thousands of Tutsi Rwandans. The extreme, racialized fascism espoused by the Nazis resulted in the Holocaust. Sensitivity to the ‘nationalist’ label is understandable. 

Opponents of President Trump’s embrace of nationalism may be nobly motivated to prevent those moral evils from recurring. But to conclude that the mere expression of nationalism entails the tolerance of or advocacy for such evils is premature. To automatically conflate nationalism with the acts it has dubiously been used to justify neglects the intellectual complexity of the concept. The fundamental question is: Can nationalism exist without the violence with which it is so often associated? Or does the prioritization of a nation’s interests at the expense of all others represent incitement?

To answer this question, one must define nationalism and parse through its different varieties. The “nation” has been called “an imagined community” of strangers because most individuals will never know the majority of their fellow compatriots. When using this definition of nation, it is clear that a strong force is required to bind these strangers and foster a sense of shared community. 

Ethnicity is often used as this binding force. Ethnic nationalism is based on promoting a singular culture, religion, and language and securing its dominance in defining national identity. The potential for violence is obvious: preferring one culture over all others leads to the relegation or exclusion of the rest and can sour into the aforementioned evils of the 20th century. It points to homogeneity and establishes clear in-groups and out-groups.

Civic nationalism, on the other hand, avoids cultural preferences, and with them the potential for violence, and bases national identity on shared liberal, democratic values. One example of this form of nationalism is the Scottish National Party, whose raison d’etre is independence for Scotland; it defines the country’s national identity not by race or ethnicity but by democracy and self-determination. The United States of America, lacking any formal endorsement of a national religion or language, is another prominent example of civic nationalism, even if some may endeavor to define the country’s identity through a racial or cultural lens. The mere existence of these different forms of nationalism suggests that it can indeed exist without violence.

But even if concerns about the historical baggage associated with the term “nationalist” are assuaged, there remain other reasons to be critical of it. Placing the question of nationalism within the context of globalization and an increasingly interconnected world reveals as much. President Macron, who has called for strengthening the powers of the EU, characterized nationalism as “our interests first, who cares about others.” While his condemnation appears unconditional, his phrasing highlights the threat nationalism poses specifically to a globalized world.

Rising nationalism and populism in Europe have been reflected in the elections of anti-establishment parties, support for Eurosceptic leaders, and, most notably, Brexit. And it is perhaps the erosion of commercial borders caused by globalization and the cession of governance to more distant political bodies that have led to this resurgence of nationalism; a resurgence driven by a fear of “losing” one’s country.

If the goal is to further the interdependence of countries, to strengthen international bodies, and to encourage the free movement of people and goods, and with them, culture, nationalism is certainly an obstacle. But if the goal is to support localized governance and ensure nations retain their sovereignty, nationalism is inevitable.

It is important to recognize then that to criticize nationalists is to criticize the concept of the nation, too. For those who oppose nationalism, the only possible implication of their opposition is that the nation is not worth supporting with such fervor or pride, a lost cause running counter to the progress of a globalized world. But for as long as the nation exists and is the predominant base upon which the modern state is structured, promotion and prioritization of one’s nation should strike no one as inherently wrong.

The Political Response to Racism: Trump vs. the Squad

photograph of "Welcome Home Ilhan" sign held by supporter with others gathered at MSP

While Donald Trump tweeting something awful may barely qualify as news these days, his tweets on July 14th were awful enough to be considered by many to be crossing a line. Addressing “progressive democrat congresswomen” – in other words, Alexandria Ocasio-Cortez, Ilhan Omar, Rashida Tlaib, and Ayanna Pressley, sometimes referred to as “the squad” – Trump told them to “go back and help fix the totally broken and crime infested places from which they came.” Trump’s remarks have widely been condemned as racist, bigoted, and xenophobic, although many Republicans have not yet openly denounced them.

There has, unsurprisingly, been a flurry of reactions to Trump’s tweets. While many politicians both local and international have expressed their disapproval, some have stopped short of calling Trump a racist, perhaps because of the moral condemnation the term implies, or perhaps because they are busy arguing about the semantics of the term. There is, however, little room for mental gymnastics here: telling four women of color to go back to where they came from is unambiguously racist. The question is not whether Trump’s remarks are morally reprehensible (they are), but instead what should be done in the aftermath.

Journalists have suggested a number of different answers to this question. For instance, those at Fox News took the unsurprising stance that Trump’s remarks shouldn’t be taken seriously, and that they were at most an “unforced error.” Others have called for some form of punishment, most notably in the form of the House passing a resolution to officially censure Trump for his remarks. While such a censure would not impede Trump in any significant sense, it would at least be a symbolic gesture that put him in the company of the last president to officially be censured, Andrew Jackson, all the way back in 1834.

While it seems clear that Trump’s remarks deserve condemnation, and that Trump himself ought to be held accountable for them in some way or another, some journalists have urged caution in deciding the best next course of action. The thought is something like the following: to Trump’s most diehard fanbase, racism is not a deal breaker (Trump’s history of racist remarks and actions is, after all, well-documented). Calling for official censure, then, will only succeed in riling up his loyal supporters. Furthermore, by drawing attention to Ocasio-Cortez, Omar, Tlaib, and Pressley, one associates the Democratic party with these four specific women, who have tended to be unpopular amongst those on the right. Really, then, condemning Trump only ultimately helps his cause of rallying the Republican troops.

Instances of this take are not hard to find. For example, Jonathan Freedland at The Guardian writes:

It’s race-baiting, no doubt about it. But it might also be effective, as Trump’s 2016 campaign proved. The result is a dilemma for Democrats. Do they try to win back those white, low-income voters who supported Trump last time or do they use the president’s hateful behaviour, including his attacks on the squad, to drive up turnout among those appalled by it – especially black voters and young people?

Or consider Republican strategist Ford O’Connell, who stated that Trump’s remarks were “very smart from an electoral strategy perspective” and that he is helping to make the squad “the face of the 2020 Democratic Party.” While not saying that Trump should escape condemnation for his remarks, Ocasio-Cortez, Omar, Tlaib, and Pressley have themselves urged caution about letting Trump’s remarks serve as a distraction from a number of pressing issues, and warned Americans that they should not “take the bait.” Other commentators have echoed this sentiment. Consider finally the following from journalist Vinay Menon:

There is no point in trying to shame a shameless man. If by now his fans do not see his profound failings as a human, those blinders can’t be removed. Trump is not a politician. He is a cult leader who is bending his party to his will.

And the more awful he is, the more his base rejoices.

This summer, critics should pretend he no longer exists. Put the focus elsewhere. Stop taking the bait. Cease giving him a power he has failed to earn.

We have, then, something of a dilemma, in that those who wish to condemn Trump risk helping him in the long run by doing so; or, as Freedland puts it,

The result is that Democrats face a choice between doing what is morally right and what is politically smart. When you’re dealing with an amoral bigot in the White House, those two things are not always the same.

Unfortunately, this is not the first time that we have faced this dilemma during Trump’s presidency: for instance, some argued that beginning impeachment proceedings after the release of the findings of the Mueller report would only ultimately help Trump’s cause, as it would be perceived as whining on behalf of the democratically controlled House. Indeed, it has been a consistent refrain any time Trump does something awful: censuring him in one form or another only encourages his base, so why bother trying to punish him?

So what ought one do in this situation? If there is a risk that implementing a punishment for Trump’s morally egregious acts would actually help him in the long run, is this reason not to pursue that punishment?

I don’t think there’s an easy answer here. But I do think it is worth keeping in mind that many of those writing on the situation are doing so from something of a distance. In other words, it is easier to take the position that the risk of political backlash warrants inaction when one has not oneself been directly impacted by Trump’s behavior. For instance, consider the headline “Before they can beat Donald Trump, his foes must learn to ignore him – even his racism.” This might sound like decent political advice, but it is harder to swallow when one is the target of the kind of racism that Trump is inciting. Indeed, the fact that crowd members at a recent Trump rally chanted the racist slogan “send her back” indicates that ignoring his racism may not be the best course of action.

Jacinda Ardern, Christchurch, and Moral Leadership

Jacinda Ardern, leader of the NZ Labour Party, at the University of Auckland Quad on September 1, 2017.

Shortly after the Christchurch massacre on March 15, in which a white supremacist gunned down worshipers at two Mosques in the New Zealand city killing fifty people during Friday prayer, the NZ Prime Minister Jacinda Ardern spoke with US President Donald Trump, who had called to condemn the attack and offer support and condolences to the people of New Zealand.

Ardern later told a press conference that “[Trump] asked what offer of support the United States could provide. My message was: ‘Sympathy and love for all Muslim communities.'”

Following the attack the connection between casual racism in public discourse,  often serving populist political ends, and an emboldened white supremacist movement prepared to commit violent acts was widely discussed (as explored in my previous article).

Yet all of Donald Trump’s past behavior, public remarks, tweets, and policies indicate that such a request would have been incomprehensible to him; indeed, the weekend following the massacre and Ardern’s request for “sympathy and love” for Muslim communities, Trump fired off a tirade of tweets in support of Fox News’s Jeanine Pirro, who had been reprimanded by the network for making racist remarks about Ilhan Omar, one of the first Muslim women elected to the U.S. Congress.

Ardern asked for “sympathy and love.” And sympathy and love was at the core of her response in the agonizing aftermath of this massacre. For that response and the leadership she extended, Ardern has been internationally lauded. Indeed, Ardern has shown what moral leadership looks like by bringing a natural and genuine love and sympathy to the affected community and the country. She set the tone and spirit of the nation’s response with language of inclusivity that refuted and negated the perpetrator’s attempt to create division.

Fronting the press immediately after the attack, looking visibly shaken, Ardern said of the Muslim community, and of all migrants and refugee communities in New Zealand “they are us,” and of the perpetrator she said “he is not us.” The simple language of inclusion of “they are us” and its sympathy and compassion, immediately disavowing the othering of Muslims, was a rejection of any suggestion that those who had been targeted are outsiders in the community of Christchurch and in the society of New Zealand.

Ardern visited Christchurch to support the affected community. As she met people, Ardern placed her hand on her heart, a traditional Muslim gesture, and said “Asalaam alaykum” (peace be with you). She wore a hijab as a gesture of solidarity with the Muslim community, showing that ‘they are us’ does not mean ‘they’ are the same as ‘us’, but that the category of ‘us’ is inclusive of different religions, ethnicities, and cultures; and that New Zealand is proud of being a multicultural, open and inclusive society. And she held survivors and grieving families in her arms and cried with them. Ardern’s leadership — her words and her actions — visibly helped the whole community feel connected to the victims and gave non-Muslims a cue for identifying with the Muslim community. The following Friday, exactly one week after the massacre, the call to Friday prayer was broadcast on public radio and television networks.

There is no doubt that Ardern’s response to this tragedy stood out across the world as exemplary leadership, strength, compassion, and integrity. We should be able to expect good moral leadership from our political leaders, but in this era of populism, defined as it is by the characteristic tactic of appealing to the lowest common denominator, such leadership is rare.

Love, or the virtues that might pertain to or emerge from it, such as compassion and sympathy, is not always obviously operative in our moral philosophy, ethical systems, or political sphere. Contemporary analytic moral philosophy tends to work with concepts like right and wrong more than concepts like love.

Love of one’s neighbor is of course a central tenet of the moral teachings of Christ, and the spirit of universalization that maxim evokes is present in some form in most ethical systems from religious to secular. There is a version of universalization present in the familiar systems of moral philosophy: in utilitarianism we treat the interests of stakeholders equally, and we do not favor those closest to us – in proximity, or in family, culture, religion etc. The Kantian categorical imperative, too, is based on a method of universalism, so that one finds one’s moral imperative only in what one could will to become a universal law.

In these systems, both of which attempt to make moral judgements objective, a concept like love would appear sentimental, and these systems of moral philosophy are designed specifically to remove elements of sentiment that might have been thought to confuse, distract or subjectify moral thinking. Yet for the philosopher Iris Murdoch, love was an important concept for ethics. She writes: “We need a moral philosophy in which love, so rarely mentioned now by philosophers, can again be made central.” (Iris Murdoch, The Sovereignty of Good, Routledge, London, 1970, 2010, p45.)

Murdoch wrote about love in morality as being related to acts and efforts of attention – of attending to the humanity of others. Indeed, as against the blindness of racism, the notion of a ‘loving and just attention’ for Murdoch is part of the capacity to deeply acknowledge others as one’s fellow human beings. This is precisely what racism cannot do. Racism is radically dehumanizing. It is a moral failure to see the other as ‘one’s neighbor’ – that is, to see them as one of us, as part of the human family, or as sharing in a common humanity. (See Raimond Gaita, A Common Humanity, Text Publishing, Melbourne, 1999.) Murdoch observed that an effort to see things as they are is an effort of love, justice, and pity.

Ardern’s response was to refute, and to deny, this racist denial of humanity – without entering into dialogue with it. That ‘loving and just attention’ of which Murdoch speaks is visible in the context of Ardern’s response in the way she attended to the victims and their families. This includes the focus of her attention on the suffering of those who were affected, and also the quality of that attention, which brought out their humanity at a time when someone had sought to deprive them of it – not just by murdering them and their loved ones, but by proudly justifying it as ideology.

The refutation of the ideology of racism is the affirmation of the humanity in each other. It is not clear that this affirmation can be fully realized in arguments that, morally, have as their object right and wrong; but Ardern has demonstrated that the moral sense of a common humanity can be realized through sympathy and love.

Meanwhile, this week the White House escalated its assault on the Muslim American congresswoman Ilhan Omar after Donald Trump repeatedly tweeted video footage of the September 11 attacks and accused Omar of downplaying them.

What Does It Mean to Be an Ally?

Photograph of protesters holding "Stop Police Brutality" banner

Despite the social progress the United States has made, it still has many shortcomings. Among its many flaws, racism is one that persists – specifically regarding the treatment of black people. As long as black people are judged and disenfranchised because of the color of their skin, race will remain unresolved in the United States – a mass of prejudice, discrimination, and injustice that dates back centuries. But some progress has been made, and some of the country’s racial tension has been assuaged. After all, white Americans have shown their support for black people as they struggled with police brutality, the killings of black men, and the Black Lives Matter movement. But is the support shown enough? The term “ally” has often been used to refer to an individual who supports a marginalized group of people. Regarding the injustices done to African Americans, many white people have declared themselves avid supporters in their struggle. What form does this support take, and to what extent? What does it really mean to be an ally?

A year ago, DePauw University invited actress Jenna Fischer to speak as an Ubben Lecturer, serving as part of the Ubben Lecture series where notable public figures are invited to campus to interact with students and give a public address. Only about a week or two prior to Fischer’s arrival, a series of racially charged incidents occurred on DePauw’s campus. Racial slurs had been written on campus bathrooms and some large stones in DePauw’s nature park had been rearranged to spell out the n-word. Students of color on campus were infuriated and felt as if the campus administration was not doing enough to ensure the safety of students of color. The campus was filled with racial tension, and it finally erupted during Fischer’s lecture. In the middle of her address, one by one, students of color began standing up and declaring “we are not safe.” Eventually, the whole auditorium was filled with “we are not safe” chants. The protest left campus on high alert as tension between students rose. The protest also brought media coverage, with black students standing at the forefront.

Not too long after the Ubben Lecture protest, white students on DePauw’s campus began to show their support for students of color by taking to social media. All across Instagram and Facebook, white students declared that they stood in solidarity with students of color. On a tree in the middle of campus, white students also made a sign declaring that they stood with students of color. But even as white students expressed their support, was it enough? Did their actions embody allyship? Heather Cronk, co-director of Showing Up for Racial Justice, a network of activists that organizes white people to fight racial injustice, stresses that allyship, in terms of supporting black people, needs to consist of trusting black leadership to direct white allies in ways that are helpful to the movement. Cronk goes on to explain that allyship also means building deep relationships with black people and other people of color. The willingness to discuss controversial topics such as police brutality and educate oneself are other integral components to allyship as well. Simply being a physical presence also represents allyship. In photos of Black Lives Matter protests, white people can be seen marching with their black counterparts holding up the Black Lives Matter banner.

With allyship with regard to the Black Lives Matter movement in mind, did white students on DePauw’s campus really demonstrate allyship? Taking to social media, posting something on one’s Instagram story, and hanging a banner in the middle of campus can have meaning and influence, but the extent of that meaning and influence is questionable. When people post to their Instagram and Facebook stories, the posts last for 24 hours and then disappear. It is likely that both the people who viewed the story and even the person who posted the content eventually forget about it. Did the same thing happen when white students at DePauw posted about standing in solidarity with students of color? The social media posts are a form of support, but it could be argued that the white students who made them were simply trying to deflect any criticism from themselves. Perhaps the question of allyship comes down to the old saying “actions speak louder than words.” It’s easy to declare one’s support, but how does one demonstrate it? What if the same white students on DePauw’s campus who declared their allyship passively watch as their friends use racial slurs and disregard the struggle of students of color? What if the same white students who showed support to students of color don’t understand the importance of recognizing the difference between Black Lives Matter and All Lives Matter? Is that still allyship?

If more white students on DePauw’s campus stood in black spaces and were willing to have tough conversations with students of color, would that be enough? Possibly. But the support that white students demonstrated through social media and the use of the banner cannot be ignored. Regardless of what the true definition of allyship is, perhaps it can be agreed that in order for racial issues to be resolved in the United States, black and white bodies must come together.