
McKamey Manor: The House of No Consent

black-and-white photograph of silhouetted figure behind glass

Since 2005 Russ McKamey has been running McKamey Manor, an extreme horror attraction. When patrons sign up for the tour, they are signing up to be physically and psychologically mistreated. Before participating, patrons must go through extensive interviews, undergo a medical examination, and sign a long legal waiver. However, some participants complain that the experience is too extreme and that the legal waiver does not excuse the Manor's behavior. The nature of the attraction raises a host of issues concerning the nature and extent of consent.

A waiver is a voluntary surrender of a right or of the opportunity to enforce a right. Many horror attractions require patrons to sign a waiver before entering, in which participants acknowledge that they are knowingly taking on the risk of various losses and relinquish the right to seek compensation for damages they may suffer while attending the attraction. For example, if a person with a heart condition suffered a heart attack during their visit, they would not be able to sue the attraction for any medical expenses incurred as a result. In the case of McKamey Manor the waiver is reportedly about 40 pages long. In addition to the waiver, potential patrons are required to watch videos of other people’s experiences at McKamey Manor. The participants in these videos all ask to have their experience ended prematurely, and advise potential participants that they “don’t want to do this.”

But does it follow that potential participants, duly informed of what may happen to them, truly consent to be buried alive, forced to ingest their own vomit, held under water, cut, struck, and verbally abused? Not necessarily. Not even a signed legal form or other explicit signal of consent automatically creates genuine consent. There are several conditions that render apparent consent void, such as when no genuine choice is available to participants or when the participant is offered something that undermines their ability to make rational decisions. McKamey Manor offers participants $20,000 if they can survive the entire experience (which is of variable length, ranging from 4 to 10 hours). Even in the longest scenario a successful participant would stand to make $2,000 per hour of their time, an inducement that may well undermine a person’s ability to think clearly.

While recent McKamey attractions allow participants to create safe words to automatically end their horror experience, this was not always the case. And McKamey patron Amy Milligan claims that even when she begged the actors to stop, they continued to torment her. If a person cannot end the experience at will, if they are at the mercy of the actors creating the experience, then that person has been robbed of their autonomy, even if only for a limited time. This creates another type of situation in which the explicit consent signal, in the form of the waiver, is a legal fiction. It is not possible for a person to fully waive their autonomy, as doing so would be to essentially sign themselves into slavery.

The idea that such “voluntary slavery” could exist is discounted as a possibility by philosophers with views and methodologies as different as Jean-Jacques Rousseau and John Stuart Mill. Rousseau argued that once a person becomes a slave by losing all autonomy, they cease to be a moral agent at all. As such, to consent to being a slave would be to consent to no longer being a moral or legal person. Mill argued that voluntary slavery was an exception to his harm-to-others principle, which holds that any person may do as they please so long as they do not harm someone else. He claimed that although a person attempting to sell themselves into slavery may not be causing harm to anyone but themselves, the sale nonetheless stands in contradiction with the whole point of the harm-to-others principle: to maintain maximum individual liberty.

Though McKamey Manor patrons do not sign themselves away into permanent slavery, they do “waive” their autonomy for a limited amount of time. Importantly, the effective duration of this “waiver” is determined not by the participants, but rather by the actors. Moreover, some of the experiences patrons are subjected to are essentially torture. Here again the substance, or at least the relevance, of patrons’ consent is dubious. Consider waterboarding, a form of simulated drowning. (McKamey contends that no participants are waterboarded, but admits that they will be made to feel like they are drowning — a spurious distinction.) The problem with military detainees being waterboarded is not that they weren’t asked for their permission first. Indeed, lack of permission is not the sole moral shortcoming of any form of torture. The problem is instead the nature of the activity and the relationship it creates between people: a relationship in which one person is inflicting suffering on another for enjoyment or profit.

McKamey and his defenders claim that the screening and waiver process creates a situation in which McKamey Manor patrons consent to a prolonged period of physical and emotional abuse. However, there are some things for which no waiver, no matter how lengthy and legalistic, can create consent. A person’s autonomy is inalienable. This doesn’t just mean that it cannot be taken away; it also means that it can’t be given away.

The Killing Joke: The Ethics of ‘Joker’

photograph of joker graffiti on wall

Batman and his archnemesis the Joker have been battling for almost eighty years. Since the Joker’s first appearance in Batman #1, the Batman versus Joker rivalry has been taken from comic book pages and blown up on the big screen. From Cesar Romero’s slapstick take on the clown, to Jack Nicholson’s off-putting rendition, to Mark Hamill’s comically creepy voice acting, to Heath Ledger’s version, and finally Jared Leto’s, the Joker character has equally creeped out and engaged audiences for decades. Now, the clown has made his return to the big screen in director Todd Phillips’ Joker. But this isn’t your typical Batman versus Joker story. It’s all about the homicidal clown’s backstory and how he takes over Gotham City. While the film has received great reviews, there’s a narrative of discontent attached to it. In the wake of a surge of mass shootings in the United States, some moviegoers have called Joker insensitive for how the film handles the character. The controversy surrounding the film raises the question: Should Joker have even been released at the time that it was?

The obvious answer here, at least for a business person or really anyone who can count, is yes. After all, the film earned $849 million globally, including $47.8 million internationally in a single weekend, on a budget of $64 million. But money isn’t the issue here; it’s what the movie means and how its message has translated to audiences.

It all started with the premiere of Joker at the Venice Film Festival. The story of mentally ill Arthur Fleck, a struggling comedian in Gotham who has everything taken from him and descends into madness, resonated with the audience in Venice. So much so that the film was awarded the Golden Lion for best film. But on the other hand, critics pointed out that the disturbing story of Arthur Fleck hit too close to home regarding some of the recent events that have occurred in the United States. In Joker, at the peak of Fleck’s misery, he commits murder and realizes that he enjoys it. Finally, at the high point of the movie, Fleck “becomes” the Joker as he commits murder in front of a studio audience.

In response, critics explained that the Joker’s character inspires angry, misogynistic young men who’ve been responsible for far-right and white supremacist violence. Indeed, some of the most recent mass shootings have been committed by white men. For example, in August, Patrick Crusius entered a Walmart in El Paso, Texas and killed 22 people. Later, it was revealed that his motive was to kill as many Latinx people as possible. Nikolas Cruz, the gunman who murdered 17 students at Stoneman Douglas High School in Parkland, Florida, was known to have a “desire to kill people.” Self-proclaimed white supremacist Dylann Roof entered a church and killed 9 African American worshippers in hopes of starting a race war. With these mass shootings in mind, it’s understandable why Vanity Fair’s Richard Lawson would say that Joker “may be irresponsible propaganda for the very men it pathologizes.” He might have a point. In the film, Fleck’s life automatically garners sympathy, as the opening shot of the film is him getting beaten up in a clown suit. Misfortune after misfortune, it’s almost as if Fleck has no choice but to become the Joker. And at the same time, the film suggests that maybe, just maybe, if a few lies weren’t told and Fleck was loved a bit more, he wouldn’t have become what he did. Now, with this in mind, how many more Patrick Crusiuses and Nikolas Cruzes are out there? What are the chances that they see Joker and identify with the character to such an extent that they feel inspired by him? Even the background of Adam Lanza, the gunman who killed 20 children and 6 adults at Sandy Hook Elementary School, mirrors Arthur Fleck’s in Joker, as both have behavioral issues, mental health problems, and detrimental relationships with their mothers.

But Lawson wasn’t the only one with these concerns. Families of the victims of the Aurora shooting in 2012, where a gunman opened fire on moviegoers watching The Dark Knight Rises, penned a letter to Warner Bros., the studio that made Joker, calling on it to use its platform to fight gun violence. In response, Warner Bros. said that Joker is not an endorsement of any real-world violence. Todd Phillips then went on to say that the movie is more about a lack of compassion in the world than anything, and Joaquin Phoenix, the actor who plays the Joker, remarked that viewers should simply take the film for what it is. Maybe Phillips and Phoenix have a point. Phillips added that art can be complicated, and it’s often meant to be complicated. Maybe that’s what Joker should be taken as: art. As a movie. Just because the film is relevant to some real-world events shouldn’t mean that it can’t be released, or that it should be criticized for reflecting real-world issues. The tragic shootings that have happened will always be a part of U.S. history, so what difference would it make if the film came out 5 or 10 years from now? No matter when this movie came out, the real-world events that have happened would be associated with it.

But then, there’s another side to this Joker controversy. Protesters in Beirut demonstrating over the country’s financial crisis have started to paint their faces like the Joker. Photos of people in Joker masks and face paint have been popping up in Hong Kong and Chile as individuals protest against their respective governments. Internationally, it’s as if the Joker has become a symbol of revolution, not a twisted justification for violence. But if the Joker has become this symbol for protest, can the film still really be seen as just art, as just a movie? It seems that the film has surpassed box office expectations, not in terms of money, but by becoming a global phenomenon. In the same vein, the film’s international influence almost prevents it from being contained within itself. Its sheer influence brings it into the real world. So maybe the film did need to be released, and the world needed to see the Joker on the big screen again. Because either way you look at it, the film proposes an idea, be it terrorism or revolution. Since the film’s release there haven’t been any mass shootings, but perhaps the reason the film shouldn’t have made it to theaters is the fear of what someone who thinks those two ideas are synonymous would do.

The DOJ vs. NACAC: Autonomy and Paternalism in Higher Ed

black and white photograph of graduation

Last month, the National Association for College Admission Counseling (NACAC) voted to remove three provisions from its Code of Ethics and Professional Practices. These changes will now allow schools to offer early-decision applicants special considerations like priority housing and advanced course registration. Schools are also now allowed to “poach” students already committed to other institutions. And, finally, the May 1st National Candidates’ Reply deadline will no longer mark the end of the admissions process, as schools can continue to recruit into the summer. Together, these changes threaten to drastically alter the college recruitment landscape, and it’s unclear whether those changes will be positive or even who the beneficiaries might be.

NACAC’s move to strike these provisions was motivated by a two-year inquiry by the Department of Justice into antitrust claims. The prohibition on universities offering incentives to early-decision students and wooing already-committed recruits was deemed anti-competitive and a restraint of trade. NACAC was given a straightforward ultimatum: strike the provisions or engage in a legal battle whose only likely outcome was dissolution by court order.

As Jim Jump suggests, the DOJ appears to see NACAC as a “cartel” — coordinating behavior, fixing prices, and cooperating so as to insulate member institutions from risk. From the DOJ’s point of view, NACAC is merely acting in the best interests of institutions, preventing students from getting the best economic deal possible on their education. By prohibiting certain kinds of recruiting and incentives, NACAC limits competition between institutions, to the industry’s gain and students’ loss.

The DOJ’s perspective is purely economic: the price of attending college has been increasing eight times faster than wages. Demand for education is at an all-time high, the need for student services is ever-increasing, and state funding hasn’t kept pace with growing student numbers and institutions’ swelling size. Rather than increase government subsidy of higher education, the hope is that increasing competition between providers may drive costs down for consumers. The DOJ’s position is simple: “when colleges have to compete more openly, students will benefit.”

In response to these allegations, NACAC supporters claim that the rules are designed to safeguard students’ autonomy. By prohibiting institutions from poaching or offering better early-decision incentives, NACAC’s provisions shield impressionable high-schoolers from manipulation and coercion. Should colleges be permitted to offer priority housing or advanced course registration to early applicants, over-stressed teenagers will only be more likely to make their college choices prematurely. Should universities be allowed to court newly-matriculated students only just adjusting to college life, susceptible youths will always be swayed by the promise of greener pastures. In the end, these paternalistic measures are intended merely to preserve the possibility of effective student agency.

But, to many, treating prospective college students as vulnerable on the one hand, and competent and self-sufficient on the other, seems disingenuous. The average student debt is $38,000; if applicants are old enough to incur such large financial burdens, then surely they are old enough to navigate the difficult choices between competing financial and educational offers. As consumers of such high-priced and valuable goods, it should not be within others’ purview to doubt the truth, rationality, or sincerity of prospective students’ expressed preferences.

What the DOJ ruling may be missing, however, is the particular value for sale that makes the marketplace for colleges unique. As DePauw’s Vice President for Enrollment Management, Robert Andrews, argues, “There are real drawbacks to making your educational decisions like you would make your purchasing decisions around less-intricate commodities.” By reducing a college education to a simple dollar amount, we ignore the larger value of a college education and the formative role it can play in students’ lives. It’s difficult to accurately assess in retrospect (and certainly to predict beforehand) the meaning “an undergraduate education and the developmental experiences that occur when 18-22 year-olds live and learn on a college campus” will have, as well as all the factors that made that experience possible. As such, relative cost should perhaps not be billed as the crucial factor. Unfortunately, Andrews argues, striking these NACAC guidelines prioritizes the wrong thing:

“Students may be enticed by larger scholarship and financial aid packages and choose a school they had previously ruled out for very valid reasons, (i.e. size, academic offerings, availability of student services, etc.) thus putting their successful educational experience in serious jeopardy. Will saving $5,000 more per year mean anything if it takes a student 5-6 years to graduate when they could have made it out in 4 at the “previous” institution?”

At bottom, the disagreement between the DOJ and NACAC centers on whether consumers know best their own interests. In particular, the question is whether NACAC is better-positioned to anticipate students’ needs than the students themselves. Folk wisdom claims that “You cannot harm someone by giving them an option,” and we must decide whether prospective college students represent a vulnerable population that needs to be protected from choice. Is the very possibility of new financial and educational incentives enough to undermine and override students’ true preferences? Does a policy of general prohibition on financial incentives support or frustrate those core preferences?

It remains to be seen whether the removal of NACAC’s guidelines will deliver positive or negative consequences for students, institutions, and higher education in general. Prophecies are in no short supply, and college administrators are desperately trying to anticipate how the new “Wild West” will play out.

On the Question of Strategic Voting

photograph of "voting" sign on a wall



On October 21st, Canada elected a new parliament. In this election the issue of strategic voting became prominent. Six political parties were considered capable of electing members to parliament. Three of those parties are commonly grouped as “progressive”: the Liberal Party, which won a plurality of seats in the election; the center-left New Democratic Party (NDP); and the environmentally focused Green Party. Because of this competition, voters had to weigh voting for the party that is their first choice against strategically voting for a less-favored party that is more likely to win, in order to avoid a victory for a party they oppose more strongly. This tactic has been discussed and debated in the media and in the academy. Strategic voting is an ethical issue because it can affect the quality of democracy; moreover, even the language used to discuss the issue reveals something about how we make value judgments.

In Canada certain electoral ridings tend to be traded back and forth between the Liberal Party and the Conservative Party. If a voter prefers the NDP, for example, they are confronted with a choice: they can vote according to preference, even though it may be very unlikely their candidate will win, or they can vote tactically. While they may not prefer that the Liberal candidate win, they may want the Conservative candidate to win even less; the voter may then strategically switch their vote to the Liberal Party in order to avoid a Conservative victory. The effect is that the vote share that would normally go to the NDP or the Green Party is suppressed.

This phenomenon is not foreign to American voters. Political scientist Gar Culbart has analyzed data from four presidential elections and found evidence that primary voters tend to select candidates more likely to win the presidential election, rather than their first-choice preference. But strategic voting can apply beyond the nomination stage as well. In the last presidential election left-leaning voters (particularly Sanders supporters) faced a difficult decision between not voting at all, voting for a third-party candidate like Jill Stein, or, despite not liking her candidacy, voting for Hillary Clinton in order to prevent a Trump victory.

The issue of strategic voting has become a controversial topic. On the one hand, if a voter wishes to prevent a certain candidate from winning, and this is more important to them than voting for their first-choice candidate, the strategic vote seems to express a sincere preference, and for some, casting it may even be a moral obligation. Pundits like Bill Maher have been fiercely critical of those who do not vote strategically. Drawing attention to issues like climate change and the Supreme Court, Maher has criticized voters who opted for Jill Stein, or who did not vote at all, instead of voting for Clinton because she was “the lesser of two evils.” Similar criticism followed the 2000 election, in which 537 votes separated George W. Bush from Al Gore in the state of Florida. Had left-leaning Nader supporters voted strategically, Gore would have won the state and the presidency. In other words, failure to vote strategically can lead to negative consequences.

On the other hand, arguments have been made that strategic voting is wrong. In 2006, in response to pressure placed on NDP supporters by the Liberal Party to vote Liberal to stop a Conservative victory, Jack Layton noted that it is “frankly offensive” for Liberals “to tell Canadians they are limited to two choices, that they are limited to a choice between corruption and conservatives.” Indeed, strategic voting can lead to complacency amongst the political class: if parties can use the specter of the other side winning in order to secure votes, then, knowing that voters lack a better option, they don’t need to be as responsive to what voters want. This can lead to disengagement and frustration with the democratic process.

While the choice to vote strategically is an ethical issue, the way strategic voting is characterized can also raise ethical concerns. Strategic voting involves value judgments, and the language and rhetoric surrounding those judgments is often problematic and misleading, even amongst academic writers. In the wider public discussion, voting strategically has been described as “voting against” something rather than voting for something. To discourage a strategic vote, some politicians will suggest that voters “vote their conscience” rather than engage in prudential reasoning. Academics studying the matter will contrast strategic voting with “sincere” voting, or will describe a strategic vote as a voter not choosing their “preferred candidate.”

But such language is misleading. As philosopher John Dewey notes, value judgments are always specific. He argues that “A decision not to act is a decision to act in a certain way; it is never a judgment not to act, unqualifiedly.” Thus, if one does not wish to elect a politician, they are never merely voting against something. Instead, they are deciding that an election is worth boycotting, or that another politician is worth supporting (if they weren’t, one would have no reason to be strategic). Thus, it is never merely the case that we vote against things.

Dewey further argues that in forming a value judgment there is a difference between what we like and what we would prefer. Indeed, I may like the idea of eating only donuts for the rest of my life. However, I consider both the means required to do this and the effects it would produce problematic, and so I reject the idea. As Dewey sees it, “reflection is a process of finding what we want, what, as we say, we really want, and this means the formation of a new desire, and a new direction for action.” Does it make any sense, then, to claim that if my diet includes things other than donuts, I am not eating sincerely? Am I not, after careful reflection, eating my preferred diet?

The debate regarding strategic voting is complicated enough without connotative language suggesting that a strategic vote is not “sincere,” not a vote “for something,” or that it means one is not following their preferences; all of these have the potential to unfairly call the legitimacy of a vote into question and drag the debate in an unhelpful direction. By the same token, calling a non-strategic vote a “wasted” vote is not helpful either, since that vote may be intended to avoid the long-term problem of an unresponsive political class. Perhaps the best way to examine the ethics of strategic voting is to clarify our language and to examine the issue carefully in terms of what voters are trying to achieve by making such value judgments, and whether their judgments deliver results they expect and are comfortable with.

The Black Wall Street Massacre, Contributory Injustice, and HBO’s Watchmen

black and white aerial photograph of Tulsa Race Riot

On October 20th, the latest adaptation of Dave Gibbons and Alan Moore’s ground-breaking 1987 graphic novel Watchmen premiered on HBO; its opening scene featured the Tulsa Race Massacre, potentially “the single worst incident of racial violence in American history,” where thousands of buildings were burned and hundreds of black Oklahomans murdered in the Spring of 1921. Also known as the Black Wall Street Massacre, it was sparked when tensions escalated after a local black shoeshiner was accused of accosting a white elevator operator; because there was talk of an impending lynching, the black community protested, leading to an exchange of gunfire.

For many HBO viewers, the most surprising thing about the scene was not its graphic violence, but the later realization that the Massacre was, indeed, a historical event – an especially bloody episode in American history which, by and large, goes undiscussed in American schools.

Consider now the message that President Donald Trump, embroiled in an impeachment inquiry about multiple cases of corruption and misconduct, tweeted on October 22nd:

“So some day, if a Democrat becomes President and the Republicans win the House, even by a tiny margin, they can impeach the President, without due process or fairness or any legal rights. All Republicans must remember what they are witnessing here – a lynching. But we will WIN!”

Immediately, Trump was criticized for comparing the constitutionally-outlined impeachment process to the lawless brutality of lynching, a form of domestic terrorism almost exclusively used to reinforce racist oppression throughout the country by torturing and murdering black men. For anyone to draw (or defend) such an analogy requires, at best, an embarrassing level of ignorance or insensitivity about the actual history of racial abuse in the United States.

In different ways, both of these cases evidence what Ta-Nehisi Coates has called “patriotism à la carte” – a selective awareness of our national history that highlights certain favorable elements (or, at least, elements favorable to a particular subset of Americans) while quietly ignoring others. To Coates, such an approach to history is dishonest and, when it prevents some groups of Americans from being able to fully understand and engage with their current social situation, oppressive. Rather than cherry-pick the stories which we collectively magnify into cultural icons, Coates argues that an honest treatment of history will include multiple perspectives – even, and especially, if some perspectives emphasize that the U.S.A. (and its heroes) has not always been heroic for everyone: “If Thomas Jefferson’s genius matters, then so does his taking of Sally Hemings’s body. If George Washington crossing the Delaware matters, so must his ruthless pursuit of the runagate Oney Judge.”

Furthermore, both the general ignorance about Black Wall Street and the specific ignorance about the cruelty of lynching demonstrate various forms of what Kristie Dotson, professor of philosophy at Michigan State University, has dubbed “third-order epistemic injustice” or, more simply, “contributory injustice.” In general, epistemic injustice relates to the ethical implications of how society mistreats knowledge claims from various parties. If a woman accuses a man of sexual assault, but her testimony is, as a matter of principle, treated with skepticism, then she may be the victim of first-order epistemic injustice, often called “testimonial injustice,” because her testimony is unjustly discredited. Cases of second-order injustice – also known as hermeneutical injustice – result when a person is not only unable to communicate their experiences, but is prevented from even privately conceptualizing their own experiences, such as in the case of harassment or assault victims prior to the coinage of terms like “sexual harassment,” “date rape,” or “marital rape.”

Contributory, or third-order, epistemic injustice comes about as a matter of what Dotson calls “situated ignorance” which prevents the voices of marginalized groups from contributing to the wider cultural conversation. By “maintaining and utilizing structurally prejudiced hermeneutical resources,” perpetrators of contributory injustice define what “counts” as “real” history; the fact that audience members of HBO’s Watchmen were surprised to learn about the violent mistreatment of the actual residents of Greenwood, Oklahoma may well stem from the systemic “à la carte” approach to America’s racial history that Coates decried. Importantly, those guilty of maintaining dominant perspectives may not consciously realize that they are silencing marginalized groups, but – whether such actions are intentional or not – such silencing remains and, therefore, remains a problem.

And when Donald Trump or others try to dilute the severity of America’s racist past by comparing professional accountability (and potential prosecution for legitimate crimes) to the painful history of the illegal and immoral lynching of innocent people, this also evidences Dotson’s concern to highlight the role that social power plays in maintaining the process of contributory injustice. As she points out, hermeneutical injustice entails that both a speaker and an audience are unable to understand the thing in question; in a case of contributory injustice, the marginalized group can fully conceptualize their own experience, but differential social positions prevent the confused people in power from attending to the less-powerful perspective – it is a lopsided confusion propped up by the ignorance of the powerful.

Interest in philosophical considerations of epistemic injustice, and the wider field of “social epistemology” as a whole, is growing; it remains to be seen just how long it might take for its insights to substantively contribute to the broader public conversation.

California’s “Deepfake” Ban

computer image of a 3D face scan

In 2018, actor and filmmaker Jordan Peele partnered with Buzzfeed to create a warning video. The video appears to feature President Barack Obama advising viewers not to trust everything that they see on the Internet. After the President says some things that are out of character for him, Peele reveals that the speaker is not actually President Obama, but is, instead, Peele himself. The video was a “deepfake.” Peele’s face had been altered using digital technology to look and move just like the face of the president.

Deepfake technology is often used for innocuous and even humorous purposes. One popular example is a video that features Jennifer Lawrence discussing her favorite desperate housewife during a press conference at the Golden Globes. The face of actor Steve Buscemi is projected, seamlessly, onto Lawrence’s face. In a more troubling case, Rudy Giuliani tweeted an altered video of Nancy Pelosi in which she appears to be impaired, stuttering and slurring her speech. The appearance of this kind of altered video highlights the dangers that deepfakes can pose to both individual reputations and to our democracy more generally.

In response to this concern, California passed legislation this month that makes it a crime to distribute audio or video that presents a false impression about a candidate standing for an election occurring within sixty days. There are exceptions to the legislation. News media are exempt (clearing the way for them to report on this phenomenon), and it does not apply to deepfakes made for the purposes of satire or parody. The law sunsets in 2023.

This legislation caused controversy. Supporters of the law argue that the harmful effects of deepfake technology can destroy lives. Contemporary “cancel culture,” under which masses of people determine that a public figure is not deserving of time and attention and is even deserving of disdain and social stigma, could potentially amplify the harms. The mere perception of a misstep is often enough to permanently damage a person’s career and reputation. Videos featuring deepfakes have the potential to spread quickly, while the true nature of the video may spread much more slowly, if at all. By the time the truth comes out, it may be too late. People make up their minds quickly and are often reluctant to change their perspectives, even in the face of compelling evidence. Humans are prone to confirmation bias—the tendency to consider only the evidence that supports what the believer was already inclined to believe anyway. Deepfakes deliver fodder for confirmation bias, wrapped in very attractive packaging, to viewers. When deepfakes meet cancel culture in a climate of poor information literacy, the result is a social and political powder keg.

Supporters of the law argue further that deepfake technology threatens to seriously damage our democratic institutions. Citizens regularly rely on videos they see on the Internet to inform them about the temperament, behavioral profile, and political beliefs of candidates. It is likely that deepfakes would present a significant obstacle to becoming a well-informed voter. They would inevitably contribute to the sense that some voters currently have that we exist in a post-truth world—if you find a video in which Elizabeth Warren says one thing, just wait long enough and you’ll see a video of her saying the exact opposite. Who’s to say which is the deepfake? The results of such a worldview would be devastating.

Opponents of the law are concerned that it violates the First Amendment. They argue that the legislation invites the government to consider the content of the messages being expressed and to allow or disallow such messages based on that content. This is a dangerous precedent to set—it is exactly the type of thing that the First Amendment is supposed to prevent.

What’s more, the legislation has the potential to stifle artistic expression. The law exempts deepfakes made for the purposes of parody and satire, but there are countless other kinds of statements that people might use deepfakes to make. In fact, in his warning video, artist Jordan Peele used a deepfake to great effect, arguably making his point far more powerfully than he could have using any other method. Peele’s deepfake might have produced more cautious and conscientious viewers. Opponents of the legislation argue that this is precisely why the First Amendment is so important: it protects the kind of speech and artistic expression that gets people thinking about how their behavior ought to change in light of what they have viewed.

In response, supporters of the legislation might argue that when the First Amendment was originally drafted, we didn’t have the technology that we have today. It may well be the case that if the constitution were written today, it would be a very different document. Free speech is important, but technology can now cause harm in an utterly unprecedented way. Perhaps we need to balance the value of free speech against the potential harms differently now that those harms have such an extended scope.

A lingering, related question has to do with the role that social media companies play in all of this. False information spreads like wildfire on sites like Facebook and Twitter. Many people use these platforms as their source for news. The policies of these exceptionally powerful platforms are more important for the proper functioning of our democracy than anyone ever could have imagined. Facebook has taken some steps to prevent the spread of fake news, but many are concerned that it has not gone far enough.

In a tremendously short period of time, technology has transformed our perception of what’s possible. In light of this, we have an obligation to future generations to help them navigate the very challenging information literacy circumstances that we’ve created for them. People have always had good reason to believe that they can trust their senses; deepfakes undermine that assumption. Our academic curricula must change to make future generations more discerning.

The Ethics of Homeschooling

photograph of young girl doing school work in room

The National Home Education Research Institute has labelled homeschooling one of the fastest-growing forms of education in the US, with an estimated two to eight percent annual rise in the number of homeschooled children in recent years. Although home-based learning as a concept is an old practice, it is now being adopted by a diverse range of Americans. This trend extends to countries around the globe, including Brazil, the Philippines, Mexico, France, and Australia, among other nations.

One of the commonly cited motivations for homeschooling children is parents’ concern for their child’s safety. Homeschooling provides children with a safe learning environment, shielding them from exposure to possible harms such as physical and psychological abuse, bullying from peers, gun violence, and racism. Exposure to such harms can lead to poor academic performance and long-term self-esteem issues. Recent research suggests that homeschooled students often perform better on tests than other students. Additionally, homeschooling can also provide an opportunity for an enhanced parent-child bond, and is especially convenient for parents of special needs children requiring attentive care.

Homeschooling was legalized throughout the US in 1993, but the laws governing homeschooling vary from state to state. States with the strictest homeschool laws (Massachusetts, New York, Pennsylvania, Rhode Island, and Vermont) mandate annual standardized testing and an annual instruction plan. But oversight in the least restrictive states (Texas, Oklahoma, Indiana, and Iowa) borders on negligence. Iowa, in particular, has no regulations at all, and considers notifying the district of homeschooling merely optional.

Even though homeschooling is legal and gaining traction in the US today, it is not immune to skeptics who view it as an inadequate and flawed form of education. The prevailing critique concerns homeschooled children’s lack of social interaction with peers, an important aspect of a child’s socialization into society. Because most of a homeschooled child’s social interactions are limited to adults and family members, the child may later struggle to deal with individuals of different backgrounds, belief systems, and opinions. Homeschooling advocates counter this critique by contending that the environment at home is superior to the environment children are exposed to at school, but that response raises the question: at what cost?

Another point of contention is the qualifications of parents who choose to homeschool their children. Teachers accumulate years of classroom experience and develop instructional approaches that work best with students; the same cannot be said for most parents, who are not teachers by profession. So while homeschooling parents may have the best intentions for their children, they may be ill-equipped to provide the standard of education offered in public or private schools. Furthermore, the learning facilities parents can offer at home may not be on par with those available in schools.

An additional issue that must be taken into consideration is that homeschooled children in states with lax regulations are at increased risk for physical abuse that goes unreported and undetected, as a result of being sequestered in their homes. Approximately 95% of child abuse cases are communicated to authorities by public school teachers or officials. By isolating the homeschooled child, unregulated homeschooling allows abusive guardians to keep their abuse unnoticed. Isolating children at home also poses a public health risk: schools require students to be immunized, but only a few states legally require this of homeschooled children. Unimmunized children are not only vulnerable to a multitude of diseases themselves, but also put other children and adults at risk of contracting illnesses.

Parental bias is an added complication that homeschooled children must deal with. Parental bias refers to the dogma a homeschooled child may be exposed to on account of being raised solely on their parents’ belief systems. For example, most homeschooled children come from pious, fundamentalist Protestant families. Elaborating on the possible repercussions of unregulated homeschooling, Robin L. West, Professor of Law and Philosophy at Georgetown University Law Center, writes in her article “The Harms of Homeschooling”: “[…] in much of the country, if you want to keep your kids home from school, or just never send them in the first place, you can. If you want to teach them from nothing but the Bible, you can.” Parental bias can therefore give an individual a skewed understanding of the world and can pose problems in the individual’s life outside the home, when they are exposed to ideologies at odds with their own. If a homeschooled child is raised in an environment with a homogeneous view on political, social, or cultural issues, and that is the only outlook the child is exposed to, adjusting to an outside world with a plethora of opinions and values could cause internal dissension.

Given that one’s early experiences in life shape one’s persona as an adult, going to a regular school instead of being homeschooled can serve as a primer for handling the “real world.” Furthermore, with the rising demand for homeschooling, it becomes essential to ask whether a child is better off learning about the world while sheltered by their guardians. If homeschooling is indeed the superior option, perhaps constructing a standard curriculum for homeschooling could address the concerns raised by critics of home-based learning.

Pia Klemp and The Ethics of Migrant Taxiing

photograph of seawatch3

Pia Klemp made waves over the summer for rejecting the Grand Vermeil Medal. Paris had intended to honor the German boat captain for her bravery in rescuing thousands of migrants in the Mediterranean, but Klemp refused to accept the award, stating, “We do not need authorities deciding about who is a ‘hero’ and who is ‘illegal’.” (Klemp is currently awaiting trial in Italy and could face up to 20 years in prison for aiding the illegal immigration of African migrants across the Mediterranean to Italy.)

The case of Pia Klemp is the culmination of several geopolitical factors. The European migrant crisis began in 2015 when refugees fleeing political instability and violence in the Middle East and Northern African countries began arriving on European soil. In particular, many migrants hailing from sub-Saharan Africa have recently undertaken journeys to the coast of Libya in an attempt to board a raft and cross the Mediterranean in search of safety and the promise of a better life in Europe.

Over the past few years, Pia Klemp has allegedly aided over 6,000 migrants in their crossing of the turbulent Mediterranean Sea by locating their ill-equipped and overcrowded rafts off the coast of Libya, helping them aboard one of the various search-and-rescue ships she has captained, and taxiing them to the shores of southern Italy.

Klemp and her supporters contend that the migrants are legitimate asylum-seekers who are willing to risk their lives in attempting to cross the Mediterranean on what are often inflatable pontoon boats. Some migrants have been quoted as saying that they would rather die than go back. Klemp argues that the migrants have compelling reasons to flee their home countries, but that they are being forced to come to Europe via the Mediterranean because European countries have closed their borders and there is no other legal way of getting there.

Another relevant concern stems from the principle of non-refoulement, a cornerstone of international law that states that no one should be returned to a country where they face persecution or danger. The Libyan Coastguard is currently under orders to return migrant-carrying vessels to Libya for processing. Recent investigations have shown that in some cases, the Libyan Coastguard has brought rescued migrants back to the Tripoli Detention Center where they experienced a lack of food and water, and beatings by armed guards with pipes and ropes. It was also reported that while some people were released to their country of origin, others were sold to a captor who tortured them and attempted to extract ransom from their families.

Klemp claims that the Italian government is wrongfully putting on a “show trial” and that “the worst has already come to pass […] Sea rescue missions have been criminalized.” However, there is another side to the story. Detractors of Klemp’s actions argue that engaging in migrant taxiing is wrong for two main reasons.

First, migrant taxiing does nothing to solve the root cause of the problem – political instability in sub-Saharan African countries. In fact, it may even contribute to increased rates of flight from these countries. Additionally, increased emigration could, in turn, lead to harsher penalties imposed on citizens who are caught attempting to leave their home nation.

Second, detractors of Klemp argue that although her actions may be driven by altruism, they have had dire consequences in reality. The United Nations Refugee Agency estimates that the death rate of refugees attempting to cross the Mediterranean has risen sharply, from 0.3% in 2015 to 1.95% in 2018. Some contend that this rising death rate is a byproduct of the increased presence of rescue vessels, such as Klemp’s Iuventa, in the Mediterranean. The argument rests on the assumption that the greater the number of rescue vessels present (or believed to be present), the greater the chance migrants believe they have of being rescued at sea, and therefore the greater the number of migrants willing to risk their lives crossing the Mediterranean. It may also be true that the presence of NGO rescue boats encourages migrants to board vessels incapable of crossing the Mediterranean on their own; if such vessels are not spotted, their passengers are doomed to drown.

Evidently, several ethical concerns loom large in the case of Pia Klemp and migrant taxiing. Klemp and her supporters are firmly rooted in their belief that they have a moral obligation to rescue migrants at sea, while critics contend that migrant taxiing fails to address the root cause of the problem and may be a contributing factor in the higher rates of deaths of migrants attempting to cross the Mediterranean.

While Klemp’s heart may be in the right place, her actions in taxiing refugees across the Mediterranean could actually have adverse consequences of increasing the number of migrants and their deaths at sea. Perhaps a better, albeit more challenging, long-term solution would be to redirect and increase aid to the troubled regions of Africa from which refugees are fleeing. Such an approach would address the root cause of the problem by aiming to stabilize their political systems and reduce the number of desperate migrants seeking to make the dangerous voyage across the Mediterranean.

EEE and the Eradication of Mosquitoes

closeup photograph of mosquito

Mosquitoes have continuously posed a threat to humanity because of their ability to transmit dangerous diseases such as dengue, Zika, and yellow fever. Eastern equine encephalitis (EEE) is the latest viral outbreak to hit the United States. EEE has actually been around for years, with an average of 5-10 people per year contracting the disease. This year, however, there has been an increased number of cases, with 12 known deaths so far, the most recent being a resident of Elkhart County, Indiana.

EEE is spread by the mosquito species Culiseta melanura, which feeds almost exclusively on birds, which is why human cases have been so rare. Transmission to humans requires a “bridge” species that will bite humans, like the commonly known Aedes family, responsible for Zika virus transmission. Symptoms of EEE set in approximately 4-10 days after exposure and include headache, fever, chills, and body and joint aches. Typically, the immune system can fight off the infection on its own; however, roughly 1 in 20 cases develop the brain infection encephalitis, which can result in tremors, seizures, paralysis, and possibly death. There are no treatment options for this disease to date.

The virus has predominantly affected the Midwest and Eastern regions of the United States. Government officials and environmental specialists are attempting to reduce the community’s risk of the disease by taking preventative steps. For the public, they suggest wearing long-sleeved clothing, staying in around sunset, and using bug spray. Unfortunately, these methods are only somewhat effective. Mosquitoes will continue to be out at high density until the first frost. Further, the Connecticut Agricultural Experiment Station recently found data suggesting that the virus can survive over the winter, even if the mosquitoes won’t. This means the outbreak will not be limited to this year; next summer we could face another outbreak with more severe consequences. As the data suggest, it is more urgent than ever to find a way to protect people from the terrible diseases mosquitoes spread. Thus the question arises: what is the best way to do this?

Scientists have been researching ways to modify the species directly to create a more effective form of protection. A recently published study, conducted over 2016-2017 on islands in the city of Guangzhou, China, was able to eliminate 94% of the local Asian tiger mosquito population. The study combined two methods: sterilizing the female mosquitoes and infecting the male mosquitoes with a bacterium that hinders the insects’ ability to reproduce and spread disease. Other genetic-modification efforts have looked at detecting specific species of mosquitoes by wing beat and at making mosquitoes resistant to the parasites that cause human diseases. These methods are a promising step towards protecting future generations from EEE and other outbreaks.

There are still limits to these methods of genetic modification. None of them has yet proven 100% effective, and most require releasing millions of modified insects over an area, which makes them hard to scale up to entire continents. Although the Guangzhou approach was effective, translating it into a technique for larger regions will take much more work. And if we genetically modify these species so that they cannot reproduce, and deploy that modification at wide scale, the long-term consequence points towards full eradication.

When we look to the past, one of the most effective disease control measures was the eradication of the Variola virus, which was responsible for smallpox. Is eradication of mosquitoes a justifiable method of disease prevention to protect people from epidemics like the recent outbreak of EEE?

Mosquitoes do have many negative qualities that would support eradicating the species as a whole. According to Vox, mosquitoes are responsible for killing 52 billion of the 102 billion people who have ever lived on earth. They carry yellow fever, malaria, Zika, dengue, West Nile, and now EEE, all of which have taken many lives. Mosquitoes are found nearly everywhere, spread more disease than any other animal, and have been deemed “masters of evolution” because of their resistance to pesticides and previous prevention methods. Not to mention that with climate change on the rise, mosquitoes are proliferating, increasing the risk of disease spread. Eliminating them would protect many people, especially in developing countries, which are most commonly the targets of outbreaks.

On the other hand, not all mosquitoes are harmful. Only the female mosquitoes bite and spread disease. And although mosquitoes don’t excrete waste or aerate soil, both females and males are pollinators, feeding on plant nectar. They are also food sources for many birds, bats, fish, and frogs. Eliminating all mosquitoes could thus have bottom-up effects on the food chain. Some say that this niche would be quickly filled, but Phil Lounibos, an entomologist at the University of Florida, says that this is an even greater risk: mosquitoes would likely be replaced by an insect that is “equally, or more, undesirable from a public health viewpoint.”

While these are all valid reasons to protect the species, what really stands in opposition to full eradication is the moral argument that eradication is simply wrong. Our justification for eradication is that this species is dangerous to our own, yet we are dangerous to so many other species in the world. What kind of precedent does it set when we wipe out a species entirely? Who decides which species remain or die?

According to biologist Olivia Judson, eradication of disease-causing mosquitoes would save approximately 1 million lives and would decrease the genetic diversity of mosquito families by only 1%. Although this outcome may sound ideal, the long-term consequences of these actions are unknown. With diseases like EEE advancing, the pressure is on for scientists to find a way to contain disease transmission.

Morality on the Side: Peter Handke and the Nobel Prize

photograph of Peter Handke

Debates about separating the art from the artist are not waning. In the last couple of years, we have seen the rise and fall of many artists. With some of them, though, the fall seemed less imminent, and perhaps even unlikely. The world was quick to condemn the actions of Kevin Spacey, praised for his role in House of Cards, while Woody Allen’s work seems to experience little hindrance despite allegations of sexual abuse. This is certainly not an isolated case, as Michael Jackson and Johnny Depp prove. But how do we decide which art and which artists to condemn, and which art we ought to separate from the questionable morality of its creator? Can we appreciate art in isolation from moral considerations?

The Nobel Committee awarded the 2019 Nobel Prize in Literature to Peter Handke, the Austrian novelist, playwright, translator, poet, film director, and screenwriter. What followed was an explosion of moral outrage, with a vast number of individuals and institutions condemning the Committee’s decision. Handke is widely known for speaking at Slobodan Milosevic’s funeral, espousing strong nationalist views, and refusing to acknowledge the crimes of the Milosevic regime during the war in the Balkans. When NATO decided to react to the ongoing massacres, Handke wrote an open letter stating:

“Mars is attacking and since March 24th, Serbia, Montenegro, the Republika Srpska and Yugoslavia are the fatherland of all those who have not become martial, green butchers.”

Thus, it ought not come as a surprise that many consider Handke a person of deplorable morality. Rafia Zakaria described his work as “represent[ing] so well the right side of the political divide in Europe, the one that seeks to revitalize an imagined past and perceive the present as a moment of victimization by encroaching others, women, minorities and particularly the irksome migrants.” Many commentators do not find value in separating Handke’s literary works from his politics. Can we even understand an artist’s work in isolation from the views they publicly espouse? Should we?

There are several ways to answer that question. First, the simplest answer would be: yes, one must separate the art from the artist. The reasoning for such an argument might be based on the claim that truly great art is transcendent, that it “can stand on its own outside of history and speak to anyone from any place and time,” and if it cannot, “it’s not really great.” In the case of Handke, this would mean that we first need to look at his work in isolation from the author, and if the work has some kind of value for us, then appreciation is warranted.

Another potential answer derives from Roland Barthes’ claim that the author is dead. This viewpoint holds that it is the reader who assigns meaning to the text by engaging with it, rather than the author’s intentions; the author is irrelevant to understanding the text and “has no control over … final interpretation.” As Swift and Hayes-Brady put it, “engaging critically with a work of art is completely different from endorsing the morality of the artist.” Applied to Handke’s work, one might say that it is fundamentally the understanding we assign to a work that matters, not some universal meaning assigned by its author.

In addressing the question of whether we can separate art from artist, we might also ask, as Constance Grady puts it, whether the “work of art is asking me as a reader to be complicit with the artist’s monstrosity.” In appreciating the art, are we also forced to acknowledge traces of the artist’s deplorable morality and actions in the art itself? Is the art asking us to be complicit in, or to adopt, the artist’s particular worldview? Depending on one’s answer, engagement with the work might need to change.

Lastly, by engaging with the art of morally questionable artists, one might be providing economic benefit to the artist. We have reason not to show this kind of support and to avoid public sponsorship (for discussion, see Benjamin Matheson and Alfred Archer’s “Should We Mute Michael Jackson?”). By buying Handke’s books, we are rewarding work that is based on the perpetuation of immoral viewpoints.

Regardless of the ongoing debate on separating art from the artist, Peter Handke’s statements, worldview, and actions, which serve to minimize a genocide that occurred within living memory and even go so far as to deny the existence of concentration camps, must be condemned. There is no debate or question regarding that. As such, one must wonder how we should regard the Nobel Committee’s controversial decision. As Handke takes the stage in Stockholm to receive the prize, it will not only be him who is awarded but also his professed views. At that point one is no longer talking about separating the artist from their art, but about supporting and condoning the artist in their totality. When Albert Camus accepted the 1957 Nobel Prize in Literature, he proclaimed that a writer has two roles: “the service of truth and the service of liberty.” Handke does service to neither. Not only did Handke himself show a willingness to defend a war criminal; his art did, too. In A Journey to the Rivers: Justice for Serbia, “he went out of his way to give credence to mass murder and, in this context, as importantly, to lies.” Handke even offered to testify on Milosevic’s behalf at The Hague. Hence, Handke is both a person of dismal morals and an artist who uses his art to profess his viewpoints. The Nobel Committee erred badly this time, choosing to silence the voices of genocide victims while giving credibility and encouragement to an apologist for genocide.

MAGA, Morality, and the Paradox of Tolerance

photograph of "Coexist" bumper sticker in back window of a BMW

In the last week, three incidents across the country have highlighted the central tension in the structure of the principle of tolerance. A crucial aspect of a liberal society (one that aspires to allow for a plurality of perspectives on what constitutes a good life) is that these perspectives must respect one another’s right to pursue their different visions of a good life. For a society to permit many valid ways of living, some form of toleration of those different lives must be a basic principle. When one value system considers a good life to include a restriction (on diet, type of relationship, clothing, or career options, say), those who disagree can live without the restriction while still acknowledging its place within that group. If another value system prioritizes a certain sort of pursuit (a ritual, career, relationship goal, etc.), value systems that disagree can passively allow its adherents to get on with their valued pursuit and simply not join in. Liberal societies assume that many different views on such matters are reasonable (and inevitable), and so the need for tolerance naturally arises. However, some conflicts between value systems don’t allow for passive acknowledgement and coexistence.

There are two potential reasons for these limitations. First, one could claim from a purportedly objective perspective that a value system is unreasonable and therefore doesn’t qualify for respect or tolerance (say, because it causes undue harm to its members or is based on empirical claims the consensus rejects). Second, and of particular relevance this past week, society could be concerned that the value systems of some threaten either others’ pursuit of a good life or the continuation of society itself. This second concern leads us to the Paradox of Tolerance, and three recent events highlight how it arises.

On October 9th, a student at the University of Wisconsin-Madison put up signs on the windows outside the College Republicans’ meeting room. The student used painter’s tape to label Trump an ignorant sexist, racist, homophobe, and bigot. She calmly continued to put up the signs after a university employee approached and seemed to say, “Yeah, I’m sorry, you can’t do that.” The G.O.P. Badgers posted a video of the exchange on Twitter calling the protesting student an example of the “intolerance from the left.” The student attempting to highlight Trump’s intolerance of women, non-cis people, non-straight people, and non-white people was herself labeled as morally problematic for being intolerant towards those supporting Trump. (The College Republicans made a statement standing behind the president in response to the protest.) UW tweeted about the incident, noting that its policies ban the posting of unapproved signs, and saying both that the university supports students’ right to free speech and civil discourse around political issues, and that the Office of Student Conduct and Community Standards is reviewing the incident and will follow up as appropriate.

After a Trump rally in Minneapolis on October 11th, protesters removed red MAGA hats from attendees’ heads and burned them. Groups supporting anti-war policies as well as women’s and immigrants’ rights had been protesting since the afternoon. Around 9:30pm, videos show MAGA hats burning in a small fire, and around 10pm some protesters were seen chasing a man identified as a Nazi. These acts of protest against the positions represented by Trump supporters have made some people hesitant to purchase or wear the hats in public; a city employee in Madison, WI, was asked not to wear a MAGA hat to work in May. These protesters are actively attempting to make it uncomfortable to be publicly associated with positions like Trump’s or his supporters’. They are not tolerating a political position.

Last week, Ellen DeGeneres attended a Cowboys NFL game seated next to former President George W. Bush, and later defended her friendly demeanor throughout the game despite their differing political views. Bush not only advocated for the PATRIOT Act, which eroded civil rights in the US; he began wars that led to human rights abuses a majority now acknowledge were unjustified. On top of his war crimes and his actions that led to thousands of deaths and countless instances of torture, Bush was an outspoken advocate for curtailing LGBT+ rights. DeGeneres defended her friendly interaction with the former president on her show, saying,

“Just because I don’t agree with someone on everything doesn’t mean that I’m not going to be friends with them. When I say, ‘Be kind to one another,’ I don’t mean only the people that think the same way that you do. I mean, ‘Be kind to everyone, it doesn’t matter.’”

This unqualified call to kindness is in line with the principle of tolerance and the value of public civility. However, it doesn’t acknowledge that there might be any constraint on those values. We could consider the constraints to take three forms:

First, DeGeneres’ tolerance of Bush’s repeated denial to LGBT+ folks like her of the human rights that straight folks like him have enjoyed brings to mind the famous James Baldwin quote: “We can disagree and still love each other unless your disagreement is rooted in my oppression and denial of my humanity and right to exist.” This standard for the limit of tolerance is rooted in justice and human rights. A value system should not be tolerated if it doesn’t equally respect the humanity of all. Tolerance here has a substantive constraint: in order to qualify for tolerance, a value system must respect the right of humans to exist. (Some of Bush’s policies failed to do this, as do some of Trump supporters’ positions now.)

The second constraint on tolerance is perhaps the most well-known in philosophical circles. Karl Popper formulated the Paradox of Tolerance: in order to maintain a tolerant society, the society must be intolerant of intolerance. A society that tolerated intolerance, according to Popper, would end up destroyed by the intolerant party. Therefore, acting against intolerance is a collective act of self-preservation. The intolerance of Trump supporters, Bush, and Trump, according to this standard, is potentially society-destroying and cannot be tolerated.

John Rawls takes a weaker view of intolerant groups, not believing they necessarily threaten the existence of society. On his view, it is only when intolerant groups do reach this threshold of threat to the preservation of society that there is justification for not tolerating the intolerant. The principle of tolerance must be upheld, according to Rawls, in more scenarios than for Popper. Each value system or group would be judged based on its actual impact on the health of society overall. If tolerating the presence or activity of a group or individual doesn’t suitably threaten society, then we should tolerate it. In the case of the UW Madison protester and the MAGA hat burners, some have judged that it is dangerous for Trump supporters to feel comfortable in our society. Whether Rawls would agree is unclear.

What is most clear, perhaps, is that the objects of the Madison and Minneapolis protests, as well as the object of DeGeneres’ kindness, are themselves intolerant. The possibility of a purely tolerant society is off the table. When the discourse becomes about whether, or to what extent, to tolerate these intolerant views or groups, it is important to note that we are debating the application of our paradox, not simply worrying about holding an intolerant view ourselves.

American Social Media Support of the Hong Kong Protests

photograph of protest in NYC with many participants streaming on iphones

Since March of this year, protests in Hong Kong have drawn mainstream media attention and regular coverage. The protests began over a bill proposed by the Hong Kong government that would have allowed for the extradition of Hong Kong citizens to mainland China if China’s government found them guilty of some crime.

Hong Kong, a British colony until 1997, has a long history of more liberal and democratic governance than the mainland. When returned to China by the British in 1997, Hong Kongers were promised a policy of “one country, two systems.” However, many believed that this bill would erode the independence of the Hong Kong government and the freedoms of its citizens. Mainland China is known for not being friendly to antagonistic voices, jailing those who dissent and censoring speech generally. While free speech is technically guaranteed by the Chinese Constitution, people can be arrested for endangering vaguely defined “state secrets,” which allows for mass censorship. If a Hong Konger, used to the free speech protections afforded to a citizen of Hong Kong, dissented against the mainland government to such an extent that the Chinese government wished to arrest him for endangering state secrets, this proposed bill would allow him to be extradited to China. Essentially, the free speech of Hong Kong would become the “free speech” of China.

As these protests and confrontations between protestors and police grow more violent, Hong Kong is getting more attention from Western media and from Western social media. Many people on social media are calling for boycotts of the NBA and of Blizzard, a video game production company, for bowing to China by silencing employees supporting the Hong Kong protests. Far more are simply expressing support for the Hong Kong protests, a fact the Hong Kong protestors have taken advantage of. During the protestors’ occupation of the Hong Kong airport in August, signs saying “Sorry for the inconvenience. We are fighting for the future of our home” made the rounds on social media. Importantly, the message on the sign was written in English, as are many of the signs used in the protests. While English was the official language until the 1970s, far more people know Cantonese, the local dialect of Chinese, than know English.

Clearly, the purpose of writing these signs in English is for people to photograph them and spread them around on English-speaking social media, rather than for other Hong Kongers or even mainland Chinese to read them. English-speaking nations and their people are typically very supportive of the sorts of liberal democratic values for which Hong Kongers are fighting. However, one has to wonder to what extent English speakers, particularly Americans, should be spreading these Hong Kongers’ messages around.

The United States has a long history of intervention in the affairs of foreign nations. Some people believe that this period of intervention should end, that Americans and the American government should focus on domestic affairs instead of sticking their noses into the affairs of others. People point to the chaos in the Middle East, or the historic meddling of the US in Latin America to demonstrate the common proverb that “the road to hell is paved with good intentions.”

As China would have people see it, the Hong Kong protests are an internal affair (for discussion see Tucker Sechrest’s “The Hong Kong Protests and International Obligation”). Rather than seeing them as fighting for freedom, mainland Chinese people and a portion of Hong Kongers see protestors as damaging social stability. Indeed, in response to the Houston Rockets’ general manager Daryl Morey’s tweet in support of the protests, the Chinese consulate in Houston said that “anybody with conscience would support the efforts made by the Hong Kong Special Administrative Region to safeguard Hong Kong’s social stability.” If Americans have anything to say about the protests, China says, it should be in support of normal governmental processes working to resolve the conflict and maintain stability. Supporting the protestors, no matter one’s personal beliefs on the issue, clearly disrupts the social order. Crowds are sparse and hotel rooms are cheap as tourists decline to visit. Fights between protestors and police are regular. Typically, when the US destabilizes another country’s governmental authority, collapse and chaos follow.

At the same time, while there are clear examples of US intervention going wrong, especially when it is militaristic and government-backed, it is not clear that a bunch of Americans tweeting in support of the protests will cause the same damage. For a long time, people’s social media posts in support of this or that social issue, especially with regard to protests, were labeled examples of “slacktivism” and “virtue-signalling.” The idea is that the posts people make on social media do not foment any real social change but are selfish attempts to make themselves look like good people. In essence, some claim that people posting about the protests do not care enough to actually support the protestors, but are simply “making it about themselves.”

Ultimately, however, this analysis falls apart when social networks are analyzed. Research out of NYU and the University of Pennsylvania shows that “occasional contributors,” that is, people who are not political activists constantly posting about such issues, are vital for information about the protests to spread. Importantly, this pattern of dependence on occasional contributors was not found in other large-scale social media discussions, such as those about the Oscars or the minimum wage. Hong Kong protestors recognize this fact, as demonstrated again by their use of English in their protests. To get real change, even a huge number of protestors on a small island off the coast of China cannot act alone. Rather, if Hong Kong protestors want their government to be pressured, they need to get the attention of the powerful English-speaking nations of the world. Social media posts bubble upward, with even world leaders eventually taking heed of them. Donald Trump has even suggested talks with Chinese President Xi Jinping as a result of this social media attention, tweeting about it himself.

Whether the United States, its government or its people, should be commenting on or intervening in the domestic affairs of other nations is an open question. However, it is undeniable that the Hong Kong protestors, if they are to maintain their liberal democratic society, need the support of other nations. And that support is greatly influenced, in nations with free speech, by the most common avenue of political speech today: social media. As is often said, “the revolution will not be televised,” but, as we see today, it might be tweeted.

Ellen, George W. Bush, and the Duty to Be Kind

photograph of Ellen Degeneres relaxing on couch

In early October, Ellen DeGeneres – host of the eponymous daytime talk show Ellen – stirred up controversy when she was seen sitting next to, and sharing a few laughs with, former President George W. Bush at a football game. That the two would be sitting next to one another and acting congenially came as a surprise to many, not only because Bush is a controversial figure, but especially because of his public stance that same-sex marriage should be illegal, one that DeGeneres vehemently opposes. While it was odd to see the two together, what some found even odder was DeGeneres’ explanation, which she gave on her talk show as follows:

“Here’s the thing, I’m friends with George Bush. In fact, I’m friends with a lot of people who don’t share the same beliefs that I have. We’re all different and I think we’ve forgotten that that’s okay that we’re all different…When I say, ‘Be kind to one another,’ I don’t mean only the people that think the same way I do. I mean be kind to everyone.”

As an example, DeGeneres stated that while she does not think that people ought to be wearing fur, she is still friends with people who wear fur, despite their difference in views. Here, then, is one lesson we might think we should draw from DeGeneres’ explanation: disagreement on principles should not preclude one’s obligation to be kind to others.

Is DeGeneres right? Is it the case that we ought to be kind to everyone, and perhaps especially to those who disagree with us?

There is something very appealing about this line of thinking, especially when there appears to be so much division across political lines in America. One might think that fostering a general spirit of kindness towards people with differing viewpoints would help counteract some of this divisiveness, and that our default stance towards others should just be this kind of kindness. What’s more, all of us have likely had experiences where we thought the best course of action was to be kind to those who disagree with us, especially when it comes to family and friends. A principle of universal kindness – be kind to everyone, regardless of your disagreements – might then sound pretty good.

Presented as a general principle, though, the claim that we ought to be kind to everyone seems clearly false. Cases in which I don’t have any obligation to be kind to others are easy to come up with: I don’t owe you kindness if you’ve consistently been a complete jerk to me, for example, nor does it seem like I’m obligated to be kind to you if you’re a genuinely awful person. As some commenters have suggested, Bush’s actions during his presidency may very well put him in the latter category, and thus he deserves little in the way of kindness. One worry with showing kindness even to those who have done morally egregious things is that we might be giving off the message that those transgressions should be forgiven, or at least that they are not that big of a deal.

We might also think that there are problems with saying that one ought to show as much kindness to those with whom one has minor disagreement as to those who have done morally egregious things. Molly Roberts at The Washington Post summarizes the concern as follows:

“Owning a mink…is different from orchestrating a historic foreign policy failure punctuated by a secret torture program and hundreds of thousands of civilian deaths. There exists a sliding scale of badness determining who deserves complete and total cancellation. You can probably hang out with the fur person on one end and you absolutely can’t hang out with neo-Nazis at the other. George W. Bush falls somewhere in between.”

Of course, some might want to argue about the nature of Bush’s presidency: how morally culpable we should find him, and whether we should, overall, classify him as a genuinely awful person. And we still do, of course, have the problem that a failure to show kindness to one’s political and moral opponents could be seen as fostering further divisiveness: if Ellen were to snub Bush’s offer of friendly banter, for example, that might be seen as a more general snub towards Bush supporters and Bush-friendly Republicans. Indeed, Chris Cillizza at CNN writes that chumming around with Bush was actually well in-line with DeGeneres’ left-wing politics:

“What DeGeneres is advocating there is sort of anti-Trumpism in its purest form. Because what this President represents, more than any issue stance or policy position, is the idea that people who disagree with you are to be mocked, to be villainized, to be bullied. If you disagree with Trump on, well, anything, you are his enemy. The only way to be in his good graces — and therefore, in the good graces of those who support him — is to agree with him on absolutely everything.”

According to Cillizza, then, it is a form of anti-Trumpian defiance to show kindness towards one’s opponents, as Trump would never show such kindness to those who disagree with him. Failing to show such kindness, then, would again risk siding oneself with the forces of divisiveness.

Failing to show kindness, however, does not require the kind of mockery, villainizing, and bullying that Cillizza ascribes to Trump: the options that one has when interacting with those one disagrees with are not limited solely to either acting like old friends or viciously attacking them. While there are many potential courses of action in between, one obvious action one could take when presented with someone of morally questionable character with whom one fundamentally disagrees is simply to ignore them.

That is not to say that this is what one should always do – there may indeed be many cases in which showing kindness to one’s opponents is the best course of action. However, it will certainly not always be the case that a failure to show kindness will be equivalent to sowing the seeds of divisiveness, or mockery, villainizing, or bullying. Perhaps, then, Ellen should have just sat somewhere else.

The Murder of Botham Jean and the Ethics of Forgiveness

photograph of one hand in another

On Tuesday, October 1, 2019, Amber Guyger was sentenced to ten years in prison for the murder of Botham Jean. Guyger, a former Dallas, TX, police officer, was off duty when she shot Botham in his own home. She claims to have mistaken his apartment for hers and, believing him to be an intruder, shot him. At her sentencing, Botham’s brother, Brandt, announced that he forgave Guyger for her crime, and proceeded to hug her in court.

Brandt Jean forgiving his brother’s killer occasioned critical remarks. Some argue that Brandt Jean, like other black victims who forgive white attackers, was systemically coerced into forgiveness because public anger from black people and communities is not acceptable to white society. Likewise, some argued that Brandt Jean’s forgiveness does nothing, and signifies nothing, about the large-scale problem of violence and discrimination against black people in the justice system of the United States.

What exactly is forgiveness and under what conditions is it appropriate to give it? To answer this it is helpful to look at three separate answers: that forgiveness can be obligatory, that forgiveness can be forbidden, and that forgiveness is always optional. What would it mean for the Jean case for any one of these answers to be true? If forgiveness can be obligatory under some conditions, then what needs to be determined is whether those conditions obtained in the Jean case. If forgiveness is forbidden then Jean’s forgiveness might be inappropriate. Of course, if forgiveness is optional then it is entirely up to Jean whether he decides to forgive Guyger or not.

One prominent tradition committed to an obligation (under certain circumstances) to forgive is the Talmudic scholarship of the philosopher Maimonides. In the Mishneh Torah he argues that forgiveness is required when the person who has done wrong is sincere in their contrition, has made amends, and has asked for forgiveness. In the Jean case, Guyger expressed regret in court for killing Botham and will begin serving her sentence soon. These two facts make at least a provisional case that she qualifies under Maimonides’ criteria: that is, that those whom Guyger has wronged are obligated to forgive her. Botham’s brother himself expressed a sentiment similar to the criteria in the Mishneh Torah, saying, “If you are truly sorry—I know I can speak for myself, I forgive you.” Moreover, he expressed the wish that Guyger not serve any jail time at all. This is an act of what Maimonides calls mechilah, which is forgiveness in the sense of removing a debt.

Importantly, Brandt Jean’s statement implies that there are more people from whom Guyger needs to seek forgiveness. He speaks only for himself, and he was not the only one wronged. The Talmudic tradition is clear that a wrongdoer must seek forgiveness from each and every person that they have wronged. Moreover most views of forgiveness agree that only those who were wronged are in a place to forgive in the first place, meaning that forgiveness is a fundamentally interpersonal thing. This touches on an aspect of many critical remarks surrounding Jean’s forgiveness of Guyger. It should not be mistaken as general absolution for the pattern of police violence against black people, nor put forward as a model of how all victims of police violence should behave. Forgiveness, even if it can be obligatory, is a case-by-case thing. 

An alternative to the sort of response found in Maimonides comes from the Roman Stoic philosopher Seneca. He argues that if a person’s deeds are genuinely worthy of punishment or of incurring a debt, then to forgo that punishment or debt is unjust. As such, Seneca would vehemently object to Brandt Jean’s expressed wish that Guyger not face any jail time at all. Guyger’s action is clearly one that is genuinely worthy of punishment: she killed Botham in his own home. Seneca would view as more apt the reaction of Botham Jean’s father, Bertrum Jean, who said that though he forgave Guyger he wanted to see her receive a longer sentence. This expresses a different form of forgiveness, what Maimonides refers to as selichah: rather than removing a debt, it expresses an understanding of the wretchedness of a wrongdoer and their situation. However, this is not the form of forgiveness that is obligatory in Maimonides’ view—only mechilah can be obligatory. Selichah remains optional but represents a significant moral achievement on the part of the forgiver.

Viewing forgiveness as an optional but laudable achievement is to say that forgiveness is a supererogatory act: that is, an act which is morally good but not morally required. The paradigmatic supererogatory act is something heroic—jumping in front of a bullet, for example. When someone does something supererogatory they have “gone beyond the call of duty.” The concept of selichah Maimonides puts forward fits the bill, and generally it’s clear why forgiveness might be treated as supererogatory. Just as it would be overly demanding to require people to risk their lives to save strangers, it would be overly demanding to require a person to forgive someone who caused them tremendous harm or trauma. If victims can bring themselves to forgive a person who has wronged them—as is the case with the Jean family—this can be seen as a sign of a honed moral sensibility and significant effort.

If there are any grounds for thinking Brandt Jean’s forgiveness of Amber Guyger inappropriate, it could only be that it is unjust to let deserving offenders go unpunished. While Bertrum Jean’s statements are unexceptionable on any of the views of forgiveness presented here, the critical remarks concerning the whole episode also ring true. In the end, as forgiveness is an interpersonal phenomenon, no general lessons or absolution are in the offing.

Should POC Forgive Kanye?

Drawing of Kanye West in profile

Kanye West is arguably one of the most polarizing figures of the 21st century. The “Ultralight Beam” rapper has made headlines for the better part of two decades and continues to impact the entertainment industry. Whether that impact is positive or negative is up for debate. In the past few years, West has been amid both political and entertainment controversy, drawing support and criticism alike. In the wake of a series of disputes with fellow rappers Jay-Z and Drake, Yeezy pledged his support to controversial President Donald Trump, and in an interview with TMZ, he said that slavery was a choice. West’s words were a gut-wrenching blow to fans, especially to his black and brown fanbase. How could someone who for so long rapped about black life say something that so ludicrously diminished an institution whose implications still impact the country today? The “All Falls Down” rapper had gone from saying former President Bush didn’t care about black people, to rapping about “the white man” getting paid from black consumerism, to aligning himself with a president who has been widely accused of racist and misogynistic commentary. But lately, Yeezy seems to have found himself paving a road for redemption. If redemption is indeed the case, the question that lingers is: should people of color forgive Kanye?

Everyone says they “miss the old Kanye.” They miss the rapper who would rock pink polos, put a starry-eyed teddy bear on the cover of all of his early discography, and heavily incorporate gospel music and themes in his music. A self-proclaimed “Christian in Christian Dior,” West saw his gospel/hip-hop smashup “Jesus Walks” become one of the many hits that changed his career. Since then, people have said they miss the old Kanye so often that he even made a song about it.

Now, such sentiment might not have as much currency as it once had. In October 2018, after a long stretch of supporting Trump, West vowed that he was done with politics. The statement was surely a sigh of relief to POC hip-hop fans who couldn’t completely condemn West but didn’t agree with his comments on slavery and support of Trump.

Then, in early 2019, Yeezy gathered fellow A-list celebrities in an undisclosed location for what has been dubbed “Sunday Service.” In videos covering the event, Ye is shown standing with a choir dressed in all white, performing old hits such as “Jesus Walks,” and even what appears to be new music. The response to Sunday Service has been overwhelmingly positive, so much so that West brought the performance to Coachella.

Only a few months later, Yeezy brought a rendition of Sunday Service to comedian Dave Chapelle’s Gem City Shine benefit for the lives lost after a mass shooting in Dayton, Ohio.

Shortly afterward, Ye brought the performance to his hometown of Chicago, performing with fellow South Side native Chance the Rapper. Tickets for the free event were made available and all of them were claimed. In a video that has since gone viral, Yeezy is shown telling a security guard, “Watch this. This is my city.” Ye then makes his way through the crowd, parting starstruck attendees in a way that unintentionally but understandably has drawn biblical references. It just so happens, too, that many people in the crowd, awestruck by Kanye, were black and brown.

It seems as if Kanye is doing what everyone has wanted: making music and gearing it toward something positive that everyone can enjoy. With that said, one might think that the information presented answers the question of whether POC should forgive Kanye. But the question is whether Kanye should be forgiven, not whether they do forgive him. If the question were framed as the latter, there would be no point in evaluating Ye’s actions. The buzz and support surrounding Sunday Service indicate that many people have willingly overlooked Kanye’s past comments and allegiances. But if you look online, you can still find photos of him in a MAGA hat cocked to the side. You can still find photos and videos of him standing with Donald Trump and screenshots of his tweets supporting the President. Is this something that POC should overlook? To some, the MAGA hat is a symbol of bigotry and misogyny. Some wearers of the hat are often seen condemning religions such as Islam and berating sexualities such as homosexuality. While these actions affect all U.S. citizens, many POC identify with these groups. Kanye has stopped talking politics, but does that mean he’s changed his beliefs as well? Does he still have that hat in his closet? Who’s to say Ye doesn’t still secretly meet with Donald Trump and talk shop? Or that he won’t make outlandish comments about the current state of society that would infuriate the public? If that were the case, should POC still accept what Ye has been doing?

On Instagram, Yeezy’s wife Kim Kardashian teased the tracklist for his new album “Jesus Is King.” With Sunday Service in full effect and the public in Ye’s favor, the album seems as if it will be reminiscent of his past work but with something new: not quite the backpack rapper with the pink polo, and not quite the controversial artist that people both love and hate, but something in between. And maybe that’s the best way to answer whether or not Kanye should be forgiven. Some will say his past actions should be overlooked and others will say he should still be condemned. All that can be done now is to watch and see how the rest of his legacy unfolds.

Busch Light and Carson King: The Good and the Bad of Cancel Culture

Image of "CANCELLED" stamp in red

Two weeks ago, Carson King, after soliciting money with a sign that read “Busch Light Supply Needs Replenished” and his Venmo username, received more than one million dollars, most of which was donated to the University of Iowa Stead Family Children’s Hospital. In response to the attention King received, Anheuser-Busch promised to match the donation as well as send him a year’s supply of personalized beer with King’s face on it. However, after racist tweets posted by King seven years ago resurfaced, Anheuser-Busch rescinded the latter part of its offer, and many have decided to boycott King as a means to shame him for his past problematic behavior, a phenomenon termed ‘cancel culture.’ King did issue a public apology after his tweets were brought to the public eye, saying “I am embarrassed and stunned to reflect on what I thought was funny when I was a 16-year-old.” (In an interesting twist, the reporter who dug up King’s racist tweets was also found to have posted multiple offensive tweets in the past, and now no longer works for the paper.)

With social media platforms contributing to rapid global communication, many believe that one use to which this technology ought to be put is educating others about their seemingly problematic behaviors (i.e., actions that are racist, homophobic, transphobic, etc.). Others believe that unsolicited shaming is often unnecessarily harsh and incapable of fostering meaningful moral dialogue or even establishing clear, universal boundaries of unacceptable conduct. While the ideal of “calling someone out” intends to promote the public expression of ethical beliefs and dissuade problematic behavior, many still think that this fad is actually counterproductive to the ends it aims to achieve (for discussion see Byron Mason II’s “Cancel Culture“).

Cancel culture has an obvious advantage, namely that calling someone out and “cancelling” them for problematic behavior holds them accountable for unethical conduct. King himself claimed that he was unaware of his past racist beliefs and behavior until the seven-year-old tweets resurfaced. Oftentimes, however, targets of “cancelling” are unmoved by public backlash, which seems to suggest that cancel culture does not actually hold individuals accountable for their actions.

Cancel culture is also believed to further develop the moral beliefs of people who witness the backlash against problematic behavior by promoting discussion about the underlying moral principles behind such behavior. In “cancelling” King, individuals sent the message that public figures, and people in general, should be held accountable for their past actions, and that tweets like King’s were morally unacceptable. Using the public shaming and “cancelling” of King as a platform to dissuade racist jokes, individuals involved in cancel culture expanded the space to discuss moral issues in general. This aim of promoting moral discussion and fostering a more morally conscious community is achievable only if the calling out does not leave the “cancelled” individual unwilling to be held accountable for his actions, and does not shut down dialogue as a whole because of a self-righteous, overwhelming barrage of values imposed on the public. Perhaps cancel culture can never escape these problems.

Many still support King and continue to donate in spite of his problematic tweets. It may seem unfair to call out King in such an aggressive way as to “cancel” him, largely because of the lack of context about King’s background. It may be unfair to label someone racist or morally reprehensible because of a single action from their past. That is, it may seem wrong to judge King based solely on two tweets he posted at the age of sixteen, because that single act of tweeting fails to fully capture who King was and who King is now. However, some might argue that King should still be criticized because any action, no matter how minuscule or temporally distant, affects his character as a whole.

Under the guise of moral discussion, however, cancel culture itself seems to be problematic. In addition to ignoring context, intent is also often irrelevant to those “cancelling.” After posting a picture of herself in a qipao – a traditional Chinese dress – at prom, eighteen-year-old Keziah Daum similarly faced the backlash of cancel culture. Daum has stated that she meant “no harm” by wearing said dress, and some would say she received unnecessarily harsh consequences for her appropriative behavior.

Additionally, it seems as though cancelling is ineffective at changing public opinion, especially if one grants that those “cancelling” usually belong to a group with niche moral intuitions that the general public has not yet caught up to. As Damon Linker of The Week explains, cancelling may not be fostering the kind of moral dialogue we might hope for. When “activists … demand that transgressors against … nascent norms be cast out,” they impose their own morality onto a culture that lacks a moral consensus and has not yet fully accepted the views of activist ethics. In such cases, those “cancelled” are more than likely to be unresponsive to such “cancelling.”

Cancel culture appears to have many advantageous consequences, and, in its ideal form, strengthens moral beliefs in general. But when applied poorly, cancel culture can have many unforeseen consequences largely because of the generalization of a single, isolated action to the humanity of an individual being “cancelled.” Perhaps cancel culture is only permissible in its ideal form, and cannot be applied practically. Whether or not one believes “cancelling” is morally permissible, it may be imprudent to say cancel culture always fosters a more morally-conscious society and always holds the person being “cancelled” accountable. Rather, cancel culture is only good when moral conversation is promoted in a civil, positive, and productive way.

The Castle Doctrine and the Murder of Botham Jean

photograph of entrance to a castle

On October 1st, former police officer Amber Guyger was convicted of murder for the shooting of her neighbor, Dallas-area accountant Botham Jean. According to Guyger’s defense, she was returning home from work when she entered the wrong apartment; finding Jean inside and believing him to be an intruder, she shot him in (claimed) self-defense. The prosecution, however, argued that Guyger’s actions were intentional, that her contentious history with her victim was suspicious, that it was unlikely she could have been mistaken about her location, and that her training as a police officer should have better prepared her to think rationally under pressure. After only an hour of deliberation, jurors sentenced Guyger to ten years in prison.

A key component of Guyger’s defense was her stated belief that she was in her own home when she attacked Jean. Under Texas law, a defendant can be justified in using deadly force against an assailant if, among other conditions, the person “knew or had reason to believe that the person against whom the deadly force was used…unlawfully and with force entered, or was attempting to enter unlawfully and with force, the actor’s occupied habitation, vehicle, or place of business or employment.” Often dubbed the “Castle Doctrine” (after the adage that someone’s “home is their castle”), this concept is similar to so-called “Stand Your Ground” laws elsewhere in the country.

Passed in 2007, the Texas statute is designed to shield a defendant from legal penalties for killing a threat to their person. However, unlike many criminal proceedings, defendants making a self-defense claim must provide evidence that they were reasonably threatened and reacted rationally in the moment. Over the last decade, applications of the Castle Doctrine have ranged from homeowners fighting off armed robbers to the operator of a taco truck shooting and killing a man who had stolen and fled with a jar of about twenty dollars in tip money.

Asserting the Castle Doctrine is no guarantee that one’s defense will succeed – Raul Rodriguez, for example, was found guilty of murder after shooting his neighbor in 2010 during an argument over loud music – but several unusual cases exemplify the potentially problematic nature of the law, including Ezekiel Gilbert’s acquittal after killing a sex worker in 2009 and Joe Horn’s infamous 2007 killing of two men in his front yard just months after the rule’s passage (Horn was neither arrested nor indicted by a grand jury). Since the law’s creation, homicide rates in Texas have increased, with many cases evidencing racial bias against non-white defendants.

Philosophically, considerations of how one is allowed to protect oneself tend to emphasize two key factors for justifying an act of self-defense: proportionality and necessity. Proportionality captures the sense that self-defensive actions are only allowed to meet, but not exceed, the degree of threat posed to an agent: so, if someone is about to flick your nose with their fingers, it would violate proportionality if you shot them with a gun. Necessity, meanwhile, concerns whether any other option is available to the person considering lethal action; if you’re attacking me and I could either fight you or easily escape, necessity would require me to flee.

The interplay of these (and other) concepts results in several intuitively familiar principles: for example, if a person is able to run away from a threat, then they have a Duty to Retreat (because of necessity); or the Imminence restriction, which allows lethal force (because of proportionality, constrained by necessity) only in cases where threats are clearly about to result in harm.

The Castle Doctrine amounts to a denial of the necessity principle if certain other facts are true. Even if a defendant could feasibly escape from their attacker, defenders of the Castle Doctrine argue that, because of their property rights (for one example), they should not have to flee. Details beyond this vary from case to case; some argue that castle-defenders are also allowed to do whatever they want to a trespasser (on the notion that intruders forfeit all rights by breaking into a home), while others maintain that home-based self-defensive actions are still constrained by proportionality considerations.

This returns us to the case of Amber Guyger, who (reportedly) thought she was in her own home, but was actually not. Many were surprised to learn that the presiding judge explicitly allowed jurors to consider the Castle Doctrine when deliberating over the case’s verdict; Guyger’s claim that she mistakenly entered the wrong apartment may have seemed unlikely enough to disqualify this as a potential legal shield. Nevertheless, a key element of the ethics of self-defense is often the perceived facts of the case, not necessarily the actual facts – given that self-defense often (though not always) happens in a momentary reaction without the opportunity for much reflection upon the available evidence. If Guyger genuinely believed that she was in her own home, then there may indeed have been a legal case for applying the Castle Doctrine here.

However, the fact that there was considerable evidence to doubt the authenticity of Guyger’s belief regarding her location was clearly sufficient for the jury to rule against her claim of self-defense.

 

1 My thanks to Blake Hereth, Adam Blehm, and Stephen Irby for discussions regarding the ideas underlying this article.

On the Legitimacy of Moral Advice from the Internet

Cropped, black-and-white headshot of a white woman with dark hair, pearl earrings wearing a white blouse and dark blazer.

The subreddit “Am I The Asshole?” describes itself as a “catharsis for the frustrated moral philosopher in all of us.” It is a forum in which users can post descriptions of the actions they took in the face of sometimes difficult decisions, so that members of the community can evaluate whether they did, in fact, act in a morally reprehensible way. Recent posts describe situations ranging from complex relationships with family – “AITA for despising my mentally handicap sister?” – to more humorous situations with friends – “AITA for wearing the “joke” bikini my friend got me?” – all the way to relatively minor inconveniences – “AITA for not wanting to pick up my wife from the airport at 12:30 a.m.?” Verdicts can include “NTA” (not the asshole), “YTA” (you’re the asshole), “ESH” (everyone sucks here), and claims that there’s not enough information to make an informed judgment. Sometimes there is consensus (in the first case above, the verdict was that the poster was not the asshole; in the third, that they were) and sometimes there is disagreement (the jury continues to be out on the second case).

Seeking moral advice from strangers is nothing new: the newspaper advice column “Dear Abby,” for example, has been running since 1956. But it is worth asking whether these are good places to get moral advice from. Can either an anonymous collective like Reddit, or a pseudonymous author like Dear Abby, really give us good answers to our difficult moral questions?

One might have some concerns about appealing to the aforementioned subreddit for moral advice. For instance, one might question the usefulness of soliciting the opinions of a bunch of random strangers on the internet, people whom one knows nothing about, some of whom may very well be moral exemplars, but some of whom will almost certainly be complete creeps. Why think that you could get any good answers from such a random collection of people?

There is, however, one significant benefit that could come from asking such a group, which is that one can reap the benefits of cognitive diversity. Here’s the general idea: you can often solve problems better and more efficiently with a lot of different people bringing different strengths and different viewpoints to a solution than with a group of people who all have the same skills and think in the same kinds of ways. This is why it’s often helpful to get a new set of eyes on a problem that’s been stumping you, or why sometimes the best solutions come from outsiders who haven’t thought nearly as much about the problem as you have. So we might think that seeking advice from a massive online community like Reddit can offer us the same kind of benefits: there will be a diversity of views, with different people drawing on different life experiences to offer a variety of perspectives, and so any consensus reached will then be a good indication of whether you really are morally culpable for your actions.

Unfortunately, while the community might give off the impression of being diverse, a recent study from Vice suggests that it is a lot more homogeneous than one might like. For instance, the study reported that:

“Over 68 percent of the internet’s asshole-arbiters are in North America and 22 percent are in Europe, while over 80 percent are white. The survey also found that 77 percent of AITA subscribers are aged between 18 to 34 years old, with over 10 percent aged under 18 and only 3.4 percent aged 45 and over.”

These numbers do not exactly represent the kind of variety of life experience that would allow for the full value of diversity. One particularly telling consequence is the subreddit’s reputation for advising that fights in one’s marriage should almost always result in a divorce, advice that might be different if it weren’t the case that, according to the Vice study, about 70% of the responding users are unmarried.

This is not to say that you will never get any good moral advice from Reddit. It is, however, to say that perhaps the advice you seek from there should be taken with a grain of salt, and perhaps run by a few different types of people before coming to any conclusions. So if an anonymous mass of online users isn’t good enough, then where should one turn instead?

There are no doubt people we’ve come across who we think are good sources of moral advice – family members, perhaps, or close friends – and we might have reason for seeking out advice from these people rather than others – perhaps they are generally able to provide good reasons to support their advice, or maybe good things tend to happen when you listen to them, or maybe they just seem really wise. We might worry, though, whether a friend or family member is the best source of moral advice. Maybe what we really want is an expert. In the same way that I would prefer to get medical advice from my doctor (a medical expert), or advice about how to fix my car from a mechanic (an expert in car repair), perhaps what I should do in seeking out moral advice is to find a moral expert.

How do we find such an expert? Philosophers have debated extensively about what it would take to be a moral expert (as well as if moral experts exist at all), and while these are still open questions, we might think that, in general, a moral expert has to know a lot about the kinds of difficult situations you find yourself in and be able to convey that knowledge when needed in order to address problems that come up. Often when seeking out moral advice, then, we look to thoughtful people who have been through similar situations before, and have come out well as a result. These people might then display some moral expertise, and might be a good source of moral advice.

That we seek out moral experts can explain why people have been writing into advice columns like Dear Abby for so long: Abby is purportedly an expert, and so we might think that her advice is the best available. Abby has, however, seen her share of criticism in the past, and in some recent cases has offered up some real stinkers in terms of advice. While it would be nice, then, if there were a moral sage who could offer us the perfect advice in all circumstances, something that we might take away from the problems with seeking out advice from Reddit and Dear Abby is that the best moral advice might come from not just one source, but rather a variety of viewpoints.

Climate Emergency and the Case for Civil Disobedience

photograph of "to exist is to resist" mural

In Plato’s Republic, during a sustained dialogue on the nature of justice and the structure of a just society, Socrates remarks that we are talking of no small matter, but of how we should live. If that question remains central to moral philosophy, any contemporary answer to the question of ‘how we should live’ must acknowledge that to ask it in ‘our’ time is fundamentally different from asking it at any other time in history. The question of what a good human life is in an age of environmental crisis cannot be answered without considering our individual and collective responsibility to mitigate the damage which no longer lies ahead of us, but which is happening now.

Governments, policy makers, corporate institutions, and others have failed to respond to decades-long warnings from scientists that CO2 emissions from industrial and domestic activities pose serious risks to human life and human society, to the world’s ecosystems, and perhaps ultimately to much of life on Earth. Those scientists, conservationists, and activists who have understood this have nevertheless failed to effect the change necessary to prevent an ecological and climate emergency. There are complex reasons for these failures, and though it is vitally important that we try to fully understand them, I will not speak to them here.

I want to focus on the urgent question ‘what do we do now?’ by considering the response emerging from new and quickly growing environmental mobilizations such as Extinction Rebellion, in which people are beginning to resort to techniques of disruption and civil disobedience in the face of governmental and systemic inaction. Are these measures necessary, are they morally justified, and are they perhaps even morally required?

Civil disobedience (which I shall assume is necessarily non-violent) has historically played an important part in effecting change, as for instance in the suffragette and the civil rights movements. In one of the most famous endorsements of civil disobedience, Henry David Thoreau (in 1849, after refusing to pay taxes to a government which legally sanctioned slavery) wrote:

“All men recognize the right of revolution; that is, the right to refuse allegiance to and to resist the government, when its tyranny or its inefficiency are great and unendurable.”

Thoreau’s point is simple and obvious: morality or justice does not necessarily line up with the law.  Are we reaching a point now at which the inefficiency of governments and the tyranny of corporate interests have become unendurable; where the refusal to adequately address the climate emergency can no longer be tolerated?

A brief (and incomplete) survey of where we are paints a sobering picture: the latest report published by the Intergovernmental Panel on Climate Change (IPCC) warns that countries must triple their emissions reduction targets to limit global heating to below 2C. Even a 2C increase is not safe, but on the current trajectory heating is likely to result in an increase of between 2.9C and 3.4C by 2100. This will bring about catastrophic climate change globally. The social and geopolitical outcomes of such a scenario are deeply frightening. Rising seas will displace billions of people. Not only will coastal habitations be inundated, but arable land will be poisoned by salinity and made barren by drought. The effect will be devastating, widespread famine, which, along with water scarcity, will almost certainly cause political instability and conflict. It is likely that humans cannot adapt to an increase of 4C.

Clearly, urgent and serious action is needed. Two of the things that most threaten the possibility for action lie at opposite ends of the spectrum of responses to these predictions. The first is climate denialism – including the views that climate change is not real, is not caused by human activity, or that the likely effects are being wildly exaggerated. The other is climate defeatism – the view (espoused by Jonathan Franzen in a recent article) that we are already too late. However, many argue that there is still cause for hope because there is still a window in which to act to keep warming below 2C. Scientists and activists, including Tim Flannery and Naomi Klein, are calling for radical action because that window is small, and vanishing quickly.

The question of what kinds of radical action we need brings us back to the question of what role acts of disruption and civil disobedience can play, and how those actions are to be morally reckoned with, given the situation we face.

Civil disobedience can be defined as “a public, non-violent and conscientious breach of law undertaken with the aim of bringing about change in laws or government policies.” The main objection to engaging in civil disobedience is that in a stable, functioning democracy there are effective and non-disruptive pathways to change through campaigning and electoral process. Indeed, the democratic system itself is based on the principle that citizens hold a type of sovereign power in that governments receive their legitimacy through ‘the will of the people.’

But what if democracy is not functioning properly? What if politicians, rather than representing the views and interests of their constituents, seek to dictate those views and interests? And what if they do so to advance their own views and interests? For democracy to function properly, for a citizenry to be self-determining, citizens need the opportunity to make informed choices about their own welfare. People can only make informed choices if they are in fact informed, and governments have a responsibility, which they are currently abrogating, to tell the truth.

The Australian government is not telling the truth about the climate emergency, and has absolutely no intention of addressing the problem. It is resisting and frustrating renewable energy investment while actively pursuing new fossil fuel projects, of which coal is a major part. In Australia (as elsewhere) the powerful vested interests of the fossil fuel lobby have direct lines to government. The country’s policies and laws under these circumstances do not represent the best interests of the people but rather, at their expense, advance the interests of the few. This triple whammy of government obfuscation, policy inaction, and active support of heavy carbon emission activities is creating intense anger and frustration for climate realists from across the political and social spectrum, and support for disruptive, direct action is rapidly growing.

There is, of course, the question of how creating disruption by, for example, blocking bridges, swarming intersections, and surrounding government buildings or corporate offices would achieve the desired results. On one hand, it is unlikely that the government will cave to the demands of protestors. On the other hand, Extinction Rebellion’s sustained protests across London in April 2019 resulted in the UK Parliament declaring a climate emergency. Some dismiss this as merely symbolic, as indeed without meaningful policy change it is – but nevertheless, it is not nothing, and it has given impetus and hope to the movement for solving the climate crisis.

Those engaging in acts of civil disobedience do not know with any certainty whether these tactics can or will work, but they do know that ordinary, legal forms of protest cannot now be effective quickly enough. In this sense civil disobedience is a resort taken by people to express their anger and frustration at a destructive and intransigent system. Disruptive action has a cost – to the individuals risking arrest by disobeying the law and also to society. Those taking such action recognize that the stakes are very high, and that the costs of inaction are far greater.

I do not think it is difficult to make a case that under these circumstances civil disobedience is morally justified. Can we, though, defend the stronger claim that it is morally required?

Ahead of the September 20 School Strike for Climate, an open letter endorsing and supporting Extinction Rebellion and its activities was published by over one hundred Australian academics from a variety of disciplines and universities. The letter concluded that:

“When a government willfully abrogates its responsibility to protect its citizens from harm and secure the future for generations to come, it has failed in its most essential duty of stewardship. The ‘social contract’ has been broken, and it is therefore not only our right, but our moral duty, to rebel to defend life itself.”

This statement clearly makes the move from acts of civil disobedience being justified to their being required – as a moral duty. Though I agree with the claim, its defense is trickier.

For example, exactly whose duty is it? Who is morally required to engage in civil disobedience? Even if someone feels that they, morally, have no choice – are they justified in making that demand of others? Our moral intuitions would suggest that there are reasons for rejecting that inference. And this appears to put it – as a moral duty – into conflict with one of the fundamental features of moral duties, which is that they are universal. If I recognize something as a moral duty for myself, then, all things being equal, I recognize it as such for others as well.

I do not see this as an insurmountable problem for the claim that we have a moral duty to rebel against a fundamentally unjust system in the face of looming existential catastrophe. Perhaps one way of fleshing out an answer would be to return to the ‘all things being equal’ clause. Perhaps also there is a way to acknowledge that while each person must freely choose – and be free not to choose – to take such action, there is still a collective responsibility governing the moral musts. These are difficult philosophical issues and they require further reflection.

I began by saying that at the core of ethics is the Socratic question of what it means to live a good human life. Humanity is at a crossroads, and how we understand Socrates’ question, and how we choose to respond to what it asks of us, needs to be reassessed in light of where we are. It seems clear that, given the current situation, living a good human life cannot mean going about one’s business as if the world might not be ending.

The Ethics of Climate Change Protest: Should Protest Be Funny?

climeme protest sign

The Global Climate Strike, which took place last September and involved over 150 countries, counted nearly 4 million young people among its numbers. This admirable show of support perhaps seems less shocking given the increasing prominence of young people in climate change activism. Greta Thunberg is perhaps the most famous of these, but others like Autumn Peltier and Xiye Bastida have also become important advocates for the fight to save the planet.

Because political protest itself has become increasingly visible online, signs from the climate strike inevitably went viral. The vast majority of signs spoke to the unblunted rage and helplessness inspired by political ineptitude (a perfect example, seen in the header of a Vox article on the climate strikes, simply reads “DON’T FUCKING KILL US”). However, many others drew on the language of memes and online humor to articulate frustration. In one example, a teenage girl holds up a sign with the words “THIS IS NOT WHAT I MEANT WHEN I SAID…DIE LIT” floating above a planet half-engulfed in flame. Another sign reads, “Winter is Not Coming,” a distortion of a Game of Thrones quote that has become a meme in itself. These signs, and many others like them, require fluency in the language and culture of social media, a form of literacy almost all young people possess. As Bridget Read notes in an article for The Cut, “Gen Z has a knack for incorporating its politics into its internet-inflected, ironic, and earnest self-expression so uncannily, so it’s to be expected that its IRL signs would be as funny, charming, and devastating as the best ‘climemes’.”

Read coins a startling new word in that last sentence, though climate change memes were hardly invented by the protests of last September. While “climemes” is a useful way of describing the ever-growing phenomenon of climate change memes, it should prompt us to ask what the moral ramifications of “memeifying” political protest are. Does humor have a place in our collective reckoning with the environmental catastrophe, or does it impede active and sustained engagement in social change?

On the one hand, memes are more likely to be seen by younger people who aren’t already actively engaged in environmental activism. Because they are made to be shared, memes certainly increase the visibility of issues like climate change for a diverse audience. If many people didn’t read lengthy articles about the climate protests, most at least saw images of funny protest signs on their Twitter feeds. However, memes inherently have an expiration date, and it eventually becomes blasé to share older memes. Given that climate change will have long-lasting ramifications, is such a short-lived format really best for fostering long-term engagement?

This leads into another question: whether or not memes encourage those who share them to physically participate in activism. The idea of “armchair activism,” or activism that involves nothing more than sharing information with others online, has become controversial in recent years, and one could argue that sharing memes falls under this category. However, the protestors who make such signs are by no means working against their own cause, and it is not clear that encouraging engagement is even the goal of climemes. A bitter sense of humor may be all we have in the face of looming catastrophe, a way for us to vent frustration and grief.

This issue is rooted in a much older debate about the overall purpose of humor and its relation to tragedy. Kierkegaard, for example, writes in the Concluding Unscientific Postscript that,

“The tragic and the comic are the same, in so far as both are based on contradiction; but the tragic is the suffering contradiction, the comical, the painless contradiction […] the comic apprehension evokes the contradiction or makes it manifest by having in mind the way out, which is why the contradiction is painless. The tragic apprehension sees the contradiction and despairs of a way out.”

His argument is that both tragedy and comedy are rooted in contradiction. This could be the contradiction between appearance and truth on which much of comedy hinges, or the contradiction between desire and reality which is often at the center of tragedy. Contradiction is precisely what climate change protests are pushing back against: the contradiction between grim reality and the insulated world in which many politicians are living, and the contradiction between the urgency of the situation and the lack of response to it.

On this definition, the comic is painless. However, the kind of humor utilized by protestors has a painful edge. As Kierkegaard observed, tragedy and comedy are closely linked, but as climate change alters every aspect of life on earth, the lines between them become indistinguishable. This is evident in climemes, and whether or not circulating them is fully ethical, their existence speaks volumes about the modern-day tragedy of environmental destruction.

Are Green Burials an Ethical Good?

image of burial mound in field

Roughly 7,000 years ago a group of hunter-gatherers in Chile began to mummify their dead. According to Helen Thompson, the evidence suggests that this change was locally driven rather than introduced from elsewhere. In fact, this cultural practice may have been influenced by climate change, which has spurred other cultural developments in the past as well. With climate change now a major concern, there are those who argue we have good ethical reasons to rethink how we deal with the dead. Several environmentally friendly alternatives have developed in recent years, and this raises a moral question about what we should be doing with dead bodies.

Generally, dead bodies are dealt with in one of two ways: they are either buried or cremated. Cremation has become far more popular over the last century, and in some countries it is the more common method. In Canada, for instance, cremation occurs roughly 65% of the time. In the United States the rate of cremation is lower (47%), but this is an increase from only 25% in 1999. One of the reasons cremation is popular is that it is fairly cost-effective. In especially populated regions the difference between the cost of a burial and the cost of cremation can be several thousand dollars. Cremation can also be less wasteful, since it doesn’t inherently require cemeteries, headstones, or concrete burial vaults.

However, arguments have been made for the moral superiority of burial over cremation. In an article published in the journal The New Bioethics, Toni C. Saad argues that cremation deprives a local community of a shared memory of those who were once a part of it and made the community what it was. He notes,

“of course, gravesite maintenance and location might become tiresome, but the continuing possibility of family memory-pilgrimage is not negligible. Additionally, since the memory of private loved ones is permanently tied to a public physical location, there remains a visual reminder to all, not merely relatives, of the significance of this person who is now dead.”

He suggests that private cremation contributes to a privatization of memory, whereas a public cemetery allows us to connect to our local ancestry and to better process the idea of death and mortality.

Both standard burial and cremation have become socially ingrained, and there may be an argument that both are morally important parts of our culture. However, there is a growing argument that these practices, as typically performed, are not environmentally friendly. Every year 90,000 tons of steel, 1.6 million tons of concrete, and 800,000 gallons of embalming fluid are used to bury the deceased. In addition, cemeteries take up large amounts of land and require pesticides for their upkeep. A single cremation requires two SUV tanks’ worth of fuel, and it can release substances like dioxin, hydrochloric acid, sulfur dioxide, and carbon dioxide into the atmosphere. Such practices contribute to climate change, and if we have moral obligations to reduce the threat of climate change, then we may be morally obligated to reconsider our rituals surrounding death.

In the last few years several eco-friendly alternatives have been presented. For example, instead of an expensive wood casket, biodegradable caskets are now available, ensuring that bodies decompose over time and become part of the local ecosystem. Instead of burial in a traditional cemetery, burial is now available in more natural landscapes, and instead of a headstone, a tree may be planted over the burial site. Similar options exist for those who are cremated: ashes can be placed in a biodegradable urn that contains a seed, or placed underwater as part of an artificial reef.

Even the embalming process offers new possibilities. Instead of formaldehyde, natural and essential oils may be used to preserve the body. In place of standard cremation, one alternative uses pressure and chemicals to dissolve the body; this process, called alkaline hydrolysis, uses 90% less energy than traditional cremation. There are newer technological possibilities as well. Promession involves freeze-drying a corpse with liquid nitrogen and then breaking the body apart. Mercury fillings and surgical implants are removed, and the powdered remains are buried in a shallow grave. This allows water and oxygen to mix with the remains and turn them into compost.

The fact that these alternatives exist, and that they may be more environmentally friendly, does not necessarily mean that they are more ethical. However, given the climate crisis, there may be ethical reasons to adopt such new practices. In an article for the Journal of Agricultural and Environmental Ethics, Chen Zeng, William Sweet, and Qian Cheng argue that green burials reflect a number of ecological values, including a harmonious relationship with nature, recognition of nature's worth, the rights of all living things, and the limits of resources. They note,

“Green burial offers a way to minimize ecological pollution during the process of funeral, interment, and related religious rituals; it offers a means by which the affected environment can return to its prior, ‘natural’ state in a short time. Thus, the practice of green burial manifests a positive environmental and ethical attitude towards life.”

This only raises more questions. If it is more ethical to adopt eco-friendly practices than traditional ones for dealing with the dead, should we carefully study which practice is the least harmful to the planet, and if so, are we then morally obliged to adopt that practice uniformly? As noted at the outset, climate change has shaped the way humans deal with death before. But how exactly should climate change today affect how we deal with death? Are we obliged to change our usual practices regarding death, and would it be morally wrong not to?

When Is Comedy Over the Line? The Departure of Shane Gillis from SNL

photograph of Radio City Music Hall

Earlier this month, the famous sketch comedy program Saturday Night Live announced that Shane Gillis would be joining the troupe. The comedian was allegedly cast in an attempt to appeal to more conservative potential viewers. In recent years, the show has been perceived by many to have a liberal bias, and its creators wanted to draw more politically diverse viewership. Several days later, however, SNL announced that Gillis would not be joining the cast after all. The show’s representatives acknowledged that they cast Gillis on the basis of the strength of his audition, but failed to adequately vet him before offering him the job. In the days immediately following the casting announcement, comedic material surfaced that many found appalling. A good number of the offensive remarks came from a podcast co-hosted by Gillis in which he makes unambiguously racist, sexist, homophobic, and transphobic remarks. There are also recordings of Gillis making rape jokes and mocking people with disabilities.

This is not the first time a comedic institution has decided to part ways with Gillis over the nature of his comedy. The Good Good Comedy Theater, a prominent Philadelphia comedy club, tweeted the following,

We, like many, were very quickly disgusted by Shane Gillis’ overt racism, sexism, homophobia and transphobia – expressed both on and off stage – upon working with him years ago. We’ve deliberately chosen not to work with him in the years since.

This event had an impact on the national stage more broadly. On one of his podcasts, Gillis referred to presidential candidate Andrew Yang using a series of racial slurs.

Yang replied to Gillis on Twitter, saying:

Shane — I prefer comedy that makes people think and doesn’t take cheap shots. But I’m happy to sit down and talk with you if you’d like. For the record, I do not think he should lose his job. We would benefit from being more forgiving rather than punitive. We are all human.

It appears that Yang opted to take a measured and forgiving approach during a politically challenging time. Not everyone agrees with his strategy, but plenty of people also disagree with the choice made by SNL.

Some support for Gillis was grounded in concerns about free speech. To the extent that these are concerns about Gillis’ constitutional rights, they are misguided. Our First Amendment rights to freedom of speech are rights we have against governmental restriction of or punishment for speech, not rights we have against private individuals or institutions. SNL is not constitutionally obligated to retain any particular cast member, especially if its producers believe that cast member will damage their product.

Charitably, however, even if the concern is not a constitutional matter, one may still think that there are moral issues dealing with freedom of speech more broadly. Some of these considerations have to do with comedy specifically. Comedy plays a special role in society. Comedians shine a light on power dynamics within cultures, challenge our existing paradigms, and provide us with a cathartic outlet for dealing with our frustrations.

A third set of free speech concerns has to do with call-out culture. Contemporary generations live in a world far removed from the one their ancestors occupied. Our past speech is no longer lost to memory: if we say something online, it’s there forever. Some argue that we should have some freedom to make mistakes, especially in youth, that won’t spell ruin for our careers later in life. We are all human, after all, and forgiveness is a virtue. That said, it’s worth noting that many of the problematic comments made by Gillis were made earlier this year.

Others argue that SNL did the right thing. It is certainly true that we all make mistakes, and that all of us have said things that we later wish that we hadn’t. Nevertheless, Gillis’ behavior does not seem to be behavior of that type. The offensive jokes he made were not aberrations that it would be appropriate to view as juvenile mistakes. These behaviors were routine, habitual, part of his comedy style. What’s more, Gillis only appeared to demonstrate remorse for the content of these jokes when he was in the national spotlight, called out in public space to do so. Many viewed his apology as insincere.

Many critics of Gillis would agree that comedy serves an important social function. But, they might argue, there is a difference between pushing the comedic envelope and being the equivalent of a schoolyard bully. If your child started a YouTube channel dedicated to mercilessly mocking his peers, you’d be likely to punish him and/or get him counseling rather than praise his creativity.

Critics may argue further that SNL tends to be a collection of the best comedic talent this country has to offer. People work for years to develop a background that makes them qualified to be a cast member. If a person wants a job with a high level of prestige and public attention, that person needs to be attentive to their character development generally. Impressive opportunities should be reserved for impressive people. Or, at the very least, genuinely apologetic people.

What’s more, including Gillis in the program doesn’t do conservatives any favors, and it doesn’t honor the viewership that SNL is attempting to attract. Reasonable, ethical Republicans will certainly object to the characterization of Gillis’ brand of humor as “conservative.”

A further controversy has to do with the way in which presidential candidate Andrew Yang handled this issue. Yang has attempted to brand himself as a candidate from outside of traditional politics, stressing a message of civil discourse intended to have broad appeal. Some view his engagement with Gillis to be tone deaf when it comes to race. Many feel that the message that should be sent to Gillis is that his comedy isn’t funny, it’s offensive. No one is trying to censor or stifle his speech. Gillis is free to work in the kinds of venues for which such behavior is not a deal breaker. He can say what he wants, but if what he wants to say is cruel, perhaps society will not be willing to pay him lots of money in support of those kinds of messages.