Santa Clarita Diet and Moral Imperfectionism

aerial photograph of suburbs

Note: This piece contains spoilers.

At first glance, Santa Clarita Diet appears to be just another light-hearted, zombie-in-suburbia romp. But at its heart, the show rivals other forward-thinking series like Black Mirror and The Good Place in tackling extraordinary ethical scenarios. Santa Clarita Diet even sheds new light on familiar contexts through its “suburban white woman as zombie” conceit.

Drew Barrymore plays Sheila Hammond, a strait-laced, thin-lipped “realator” (her idiosyncratic pronunciation of “realtor” becomes a running gag in the show). We are given glimpses of pre-undead Sheila: a woman who quietly fumes at her abusive boss, blows off fun-seeking neighbourhood moms, and is immune to the appeal of spontaneous sex offered by her hopeful spouse Joel (Timothy Olyphant).

This version of Sheila exemplifies the constraint characteristic of the competing demands of suburban white womanhood. She muses wistfully: “I’d like to be 20% bolder. No, more, 80%. No, that’s too much.” Spoiler alert: things change.

Through a mysterious (and gross) transformation, Sheila becomes undead. Sheila, Joel, and their daughter Abby come to a slow realization of Sheila’s new abilities with the help of Eric Bemis (the nerd kid next door) and under the gaze of dispassionate drugstore clerk Ramona, treated as oracle and therapist in turn by the stressed Joel, Abby, and Eric.

Sheila transforms from a constrained personality – someone who was beholden to unspoken rules – to someone who throws herself into life with joy and abandon.

Feminist themes are among the ethical perspectives that pervade the show, particularly through symbolism. Sheila develops her first taste for human flesh, ironically, when a coworker (played by Nathan Fillion) is attempting to coerce her into sex. The show has some on-the-nose moments (Sheila later attacks a misogynist at the very moment he declares his own victimhood) but avoids heavy-handedness thanks to Barrymore’s tart and gleeful execution. The theme of bodily integrity recurs for woman and man, living and undead. These issues are treated thoughtfully and yet with a light touch.

It is easy to interpret various moments in the story as stages in a cis woman’s life. The bodily fluids excreted by Sheila recall menstruation, treated in many cultures as a transformative moment. The raw power of her subsequent personality recalls dueling conceptions of post-menopausal women: the typically negative Western view, focused on changes to emotions, libido, and impulse control, and the view typical of other cultures, of a time of increased freedom and power. The show also deals with ageing: Sheila’s body is more prone to deterioration in its undead state, but at the same time she has never been more in touch with her physical energy and gusto. This mirrors the stages of life that women (and men too) can experience as times of simultaneous inconvenience and liberation.

At first, Joel Hammond struggles with his wife’s brute strength (at one point he pleads, “I want Abby to grow up thinking men can kill, too”). But as he himself indicates on the cusp of an unwanted barfight, Joel does not need to prove himself. He has carved out a much-valued life that does not depend on macho posturing. While his mixed emotions are sometimes played for laughs, they reveal his character’s fundamental values of open-mindedness, self-awareness, and maturity, which leave us rooting for him and his relationships.

While Sheila initially embarks on an impulse-ridden spree, seeking adventure and smiting problematic people with equal gusto, even zombified Sheila, with Joel’s help, quickly comes to realize that her actions have consequences. This leads us to the central ethical problem: what are the actions of a good person? The twist (and joke) of course lies in the premise that said person can only survive by eating fresh human flesh. After breaking one of the most fundamental taboos, why bother with moral niceties? The progression of the episodes’ breezy titles reflects the tug-of-war between existing moral imperatives and concessions to these radical circumstances: “We Can’t Kill People” to “We Can Kill People”; “We Let People Die Every Day” and “Moral Gray Area.”

The fact is, as the show teases, we deal with issues of moral magnitude all the time. Abby, the Hammonds’ normal teen daughter, is emboldened by her parents’ biting off more than they can chew. After joining a school environmental club, she decides to blow up a nearby fracking site. Unintended consequences quickly follow, with the FBI investigating her best friend, who now risks decades in prison. But, in a keen moment of speaking truth to power, one of her classmates declares, under the baleful eye of a Skittles-wielding FBI agent, that the real crime is the destruction of the planet.

As moral philosophers like Peter Singer point out, many people across the planet face death in situations in which we are all implicated – global inequality and extreme environmental degradation. The Hammonds see their place in a world bigger than themselves. When Sheila and Joel discover the cause of her condition, they hasten, at personal risk and cost, to eliminate its spread. Abby is a chip off the old block in seeing her role in a world bigger than her private circle.

In this light, the Hammonds’ quandaries and escapades take on a different hue. Rather than eschew morality altogether in an extreme situation, they mostly take care to accomplish the most good and effect the least harm. As such, the Hammonds’ ethic is shaped by utilitarianism. They also regularly exemplify care ethics, by being motivated and informed by relationships with others.

I use the term “moral imperfectionism” to describe the show’s vision because it represents a coherent ethical position, one that can be contrasted with “moral perfectionism” or an “objective,” deontological account of the good. The show emphasizes epistemic uncertainty and the impossibility of perfect decisions in the face of enormous moral stakes – an ethical approach that is existential, humble, and optimistic. At no point do the Hammonds’ impossible positions and patchy outcomes lead them to adopt nihilism or, conversely, to assume rigid, unchanging rules. The Hammonds constantly evaluate what they owe to the world: they don’t want to be “assholes.” They show their commitment to growth, depend on input, support, and new information from others, and dole out care of their own. They treasure the people in their lives, dead or undead, bipedal or eight-legged. For a show that deals in so much death, it has a lot to say about how to live one’s life.

The Ethics of Brand Humanization

close-up photo of Wendy's logo

Brand humanization is becoming increasingly common in all arenas of advertisement, but it’s perhaps most noticeable on social media. This strategy is exactly what it sounds like: corporations create social media accounts to interact directly with customers and try to make their brand seem as human and relatable as possible. It’s ultimately used to make companies seem more approachable and more customer-oriented. The official Twitter account for Wendy’s, for example, has amassed an audience of nearly three million followers. Much of its popularity has to do with its willingness to interact with customers, as when the account famously roasted other Twitter users, or when it posts memes to reach a younger demographic. The goal is to make the brand itself feel like a real person, to remind the consumer of the human being on the other end of the interaction.

In an article advising brands how to humanize themselves in the eyes of consumers, Meghan M. Biro, a marketing strategist and regular contributor to Forbes, describes how a presence on social media allows companies,

“to build emotional connections with their customers, to become a part of their lives, both in their homes and—done right—in their hearts. The heart of this is ongoing, online dialogue. Both parties benefit. The customer’s idiosyncratic (and sometimes maddening) needs and wants can be met. The company gets increased sales, of course, but also instant feedback on its products—every online chat has the potential to yield an actionable nugget of knowledge.”

The tactic of presenting ads as a mutually beneficial conversation between consumer and brand has become increasingly prominent in recent years. Studies have shown that millennials hate being advertised to, so companies are adopting strategies like the one Biro recommends to restructure the consumer-company interaction in a way that feels less manipulative. However, not everyone believes this new arrangement is truly mutually beneficial. In an article for The New Inquiry, Kate Losse takes a critical view of conversational advertising. “The corporation,” she notes, “while needing nothing emotional from us, still wants something: our attention, our loyalty, our love for its #brand, which it can by definition never return, either for us individually or for us as a class of persons. Corporations are not persons; they live above persons, with rights and profits superseding us.” On the subject of using memes as marketing, she says, “The most we can get from the brand is the minor personal branding thrill of retweeting a corporation’s particularly well-mixed on-meme tweet to show that we ‘get’ both the meme and the corporation’s remix of it.” In this sense, the back-and-forth conversational approach is much more one-sided than it seems.

There is, however, a difference between traditional marketing strategies and the tactics employed by social media accounts to gain popularity. If you follow Wendy’s on Twitter, it’s because you choose to follow them, because you want to see their content on your feed. For those who don’t want to be directly advertised to, it’s as simple as not following (or, if you want to be more thorough, blocking) corporate Twitter accounts. Responding to transparent advertising with a sarcastic meme, an increasingly common and often funny response to these kinds of tweets, only gives the brand more exposure online, so the best strategy is not to engage at all.

Furthermore, a 2015 study on brand humanization conducted at the Vrije Universiteit Amsterdam adds another dimension to this issue. When studying the positive correlation between social media presence and a brand’s reputation, the researchers observed that “the fact that exposure to corporate social media activity is, to a large degree, self-chosen raises the question whether these results reflect a positive effect of exposure on brand attitudes, or rather the reverse causal effect–that consumers who already have positive brand attitudes are more likely to choose to expose themselves to selected brand content.” No extensive studies have been done on this yet, but it might provide valuable insight into the actual impact of corporate Twitter accounts.

Using a Facebook page to take questions or criticism from consumers seems like a harmless and even productive approach to marketing through social media. Even a corporate Twitter account posting memes, while not as beneficial to the consumer as companies like to present it, is hardly unethical. But brand humanization can steer companies into murky moral waters when they try too hard to be relatable.

In December of 2018, the verified Twitter account for Steak-umm, an American frozen steak company, posted a tweet that produced significant backlash. The tweet reads, “why are so many young people flocking to brands on social media for love, guidance, and attention? I’ll tell you why. they’re isolated from real communities, working service jobs they hate while barely making ends meet, and are living w/ unchecked personal/mental health problems.” A similar tweet from February of 2019, posted by the beverage company Sunny-D, reads cryptically, “I can’t do this anymore.” Both of these messages demonstrate two things. First, there is the strategy employed by modern companies of speaking to customers in the more humanizing first person, moving away from the collective corporate “we” to the individual (and therefore more relatable) “I.” The voice of corporations has changed; once brands were desperate to come across as serious and professional, but now brands marketing to a twenty-something demographic want to sound cool and detached, and to speak with the voice of an individual rather than a disembodied conglomerate of shareholders and executives.

Second, these brands are now appropriating and parroting millennial “depression culture,” which is often expressed through frustration at capitalism and its insidious effect on the individual. To quote Kate Losse again, “It isn’t enough for Denny’s [another prominent presence on the social media scene] to own the diners, it wants in on our alienation from power, capital, and adulthood too.” There is something invasive and inauthentic about this kind of marketing, and furthermore, something ethically troubling about serious issues being used as props to sell frozen food. The point of the Steak-umm tweet may be salient, but the moral implications of a corporate Twitter account appropriating social justice issues to gain attention left many uneasy. As John Paul Rollert, a professor of business and ethics at the University of Chicago, said in an interview with Vice, “It can’t say anything good about society when depressed people feel their best outlet is the Twitter account for Steak-umm.”

The Ban on Trans Service Members and Injustice of Healthcare Cost Disparities

close-up photograph of the boots of four servicepeople

President Trump has banned trans people from serving openly in the military and from enlisting. The reasoning behind the ban is that their inclusion would result in higher medical costs and lower troop cohesion. On January 22nd, SCOTUS lifted an injunction against enacting the ban; lower courts will proceed with evaluating it, while the military will be freer to follow it.

As a Vox report articulates, there are multiple dimensions along which this ban is offensive: “Trump’s ban could lead to some very ugly consequences: trans service members staying in the closet, even when it’s dangerous for their service and their personal health and safety; trans troops being discharged or abused; and trans Americans more broadly receiving yet another signal that society still doesn’t accept or tolerate them.”

Besides issues of discriminatory injustice, this ban has significant practical effects: over 134,000 American veterans are transgender, and over 15,000 trans people are serving in the military today. The US has been at war for decades, so it is unclear why barring willing people from serving would be a wise strategy, especially for this demographic, as it’s been reported that “twenty percent of transgender people have served in the military, which is double the percentage of the U.S. general population that has served.”

The strongest support offered for the ban comes from RAND Corporation research indicating that including openly serving trans people in the military would amount to “a 0.04- to 0.13-percent increase in active-component health care expenditures.” However, evidence from countries that allow people to serve openly according to their gender identity, including the UK, Israel, and Canada, suggests that inclusion carries no cost to military preparedness and creates no problems for the military’s budget.

The supposed extra cost of healthcare has been used as a tool of discriminatory practices both inside and outside of the military. Before Obamacare, it was allowable practice for women’s health insurance to be more costly than men’s, for instance. Even harsh critics of the law admit, “The Affordable Care Act enacted pricing rules that largely prohibited charging women higher health-insurance premiums than men, and the Republican plan would relax some of those restrictions, which probably would result in women’s paying higher premiums.”

Debates over whether being a woman should play the role of a “preexisting condition” bring to light questions about how healthcare should be conceived of and distributed. It is true that women pay more over their lifetimes for healthcare than men, on average, despite, again on average, taking better care of themselves.

Health is a human good that is unevenly distributed by a natural lottery – both at birth with conditions that make health needs vary and later in life in the form of health-altering events such as accidents and disease. That some individuals may need more assistance in order to maintain health does not undermine its status as a fundamental human good.

There isn’t evidence that being trans interferes in any way with one’s ability to serve in the military – the inclusive policies of other nations serve as evidence to the contrary. The proposed ban on openly trans service members is thus at best a matter of medical discrimination, but that justification is thin, given the diverse medical needs of diverse populations. In reality, the ban is a barely veiled instance of putting transphobia into policy.

Bad Behavior During Political Primaries

photo of empty studio with debate podiums

The new presidential election cycle brings with it both a sense of hope for the future and cause for frustration over bad behavior in an increasingly hostile political environment. As primary candidates emerge, it’s worth pausing for reflection on what appropriate behavior during the primary season and beyond looks like.

This may be interpreted as a pragmatic question. If we understand it in this way, the question amounts to something like: how should members of a political party behave if they want their party’s candidate to ultimately win the general election? Notice that this is not necessarily a moral question. It may turn out to be the case that the best way to get a candidate elected is to behave as morally as possible, but recent elections don’t lend much support to that view. It may turn out that playing fast and loose with facts and spreading misleading or outright false information on the internet is useful for getting a candidate elected, but such behavior is likely unethical. On the other hand, some argue that what really matters at the end of the day are the consequences of the election. According to this view, the ends justify the means. Though there may be something to the view that consequences matter most, one significant consequence of this kind of behavior worth taking into account is that it contributes to the decline in critical thinking skills of the population at large, and it diminishes the trust that we have for one another. This could potentially result in an irredeemably broken political system.

One of the most visible issues during the primary season is the way that voters treat candidates running against their preferred candidate. There is nothing wrong with passionately supporting a candidate; in fact, caring deeply about politics is, at least on its face, a virtue. Politics matter, and many political choices are moral choices—people suffer to a lesser or greater degree depending on what kinds of policies are implemented. It makes sense to support the candidate that you believe will maximize well-being. But what does this entail about how the other candidates in the field should be treated?

Now that so many of our behaviors and comments are recorded and easily accessed decades after the fact, there are many more considerations that can be brought to bear on the decision of which candidate to support. The past behavior of a potential candidate matters. We need to take a look at how a candidate has voted in the past, the ways in which that candidate reliably treats other people, and the virtues and vices that might be easily observable in their character. But we need to use good critical thinking practices when we make these judgments. First of all, we should make sure that we are employing consistent standards across the field of candidates. No person exhibits perfect behavior in every circumstance. It will always be possible to point to some bad decision making on the part of any candidate. Like offenses should be treated in similar ways. We should avoid treating behaviors as disqualifying in an opposing candidate that we wouldn’t treat as disqualifying in the case of our own preferred candidate. It’s also important to recognize that some bad behavior is worse than others, and we need reasons beyond our political preferences for treating a particular instance of bad behavior as disqualifying.

A further question worth considering is the standard to which it is appropriate to hold candidates for political office. We often treat our family and close friends with empathy and compassion. We recognize that people grow and evolve and make mistakes in the process. As a result, we are frequently willing to forgive those to whom we are close. How much forgiveness should we be willing to offer candidates for office if they express contrition for past bad behaviors?

We also need to resolve the question of how to react to various changes both in people and in political, social, and ethical climates. There is some language that it is arguably inappropriate to use in any context, but it is also important to recognize that language is dynamic and changes over time. Should we judge comments made by candidates according to the social standards of the current environment, or should we view them in the context of the environment in which they were expressed?  

The same considerations apply to a political candidate’s voting record if they have previously served as a legislator. This is a real challenge, because it is undeniable that bad legislation exists and we shouldn’t minimize that fact. On the other hand, very few people follow politics closely enough to be fully aware of the political context in which particular decisions are made, especially when those decisions are decades old. Hindsight is 20/20, and often the folly of past political decisions is weaponized. One proposal for the way we should look at a candidate’s record is in terms of what their reliable dispositions seem to be. If a candidate routinely, through the course of a career, makes decisions that, for example, put poor people at a disadvantage, then it is appropriate to conclude that the candidate in question is bad for poor people. It is unproductive to use isolated political decisions out of context to score points against a candidate we dislike.

There is a big picture to keep in mind. Some primary candidates may be better than others, and it may turn out that a charismatic candidate wins over a candidate with more productive, substantive policy proposals. If we want ideals that resemble our own to prevail because we think those ideals aren’t just political but are, fundamentally, moral ideals, it would be useful to have a theoretical framework in mind in advance for what kinds of behaviors count as disqualifying, and to treat candidates accordingly.

Second-Victim Phenomenon

close-up photograph of physician's face with mask and headlamp

In 2000, Albert W. Wu introduced the term “second-victim phenomenon” to describe the emotional distress physicians experience when they make a medical error that results in harm to a patient. Since then, the term has been cited over 400 times in the Web of Science, and PubMed lists over 100 publications with “second victim” in the title or abstract. The purpose of the term, as described by Wu, was to draw attention to the need for emotional support for doctors who are involved in a medical error. Since then, however, it has been debated whether the phrase is appropriate terminology, given the severity of its implications.

Doctors and nurses ride an emotional rollercoaster every day, with high-pressure jobs and people’s lives in their hands. Mistakes in the field can have lethal, legal, and emotionally distressing effects for the provider as well as for the patients and families involved. Doctors have also been under mounting pressure recently, with growing productivity requirements and ever more required documentation. The nature of the job, combined with these additional pressures, is driving an increase in physician burnout.

Hospitals analyze mistakes made by physicians in a few different ways. They hold Morbidity and Mortality conferences and critical incident debriefs, and even run peer review systems. Hospitals aim to find the root cause of the problem. They want to identify any safety nets that were missed and find ways that mistakes can be prevented. However, these initiatives typically fail to address the emotional distress that the physician can feel after the accident.

One way that hospitals have attempted to incorporate a mental health support system for these cases is called Schwartz Rounds, named after Kenneth Schwartz, a healthcare attorney who died of lung cancer and who was touched by the simple acts of kindness he experienced in his last days of care. During a Schwartz Round, a doctor talks about the care of patients in the hope of increasing compassion and collaboration between physicians and patients. Dr. Elaine Cox, the Chief Medical Officer of Riley Children’s Health, says, “The overwhelmingly positive response to such rounds underscores the desire for connection and support that all of our clinicians are thirsting to find. This type of intervention, and others that accomplish the same end, may be the most pressing need to support those who work in health care.” Unfortunately, very few resources exist to further these initiatives.

There are dangers in not addressing the emotional traumas that professionals are left with after a medical mistake. Physicians can react in detrimental ways as a means of protecting themselves: they respond to their own mistakes with anger or projection of blame, or by acting defensively and blaming the patient or other members of the healthcare team. The long-term ramifications of this emotional distress can include burnout, substance abuse, or even suicide. On average, 400-700 physicians take their own lives per year. “Second victim” terminology was designed to draw attention to these serious consequences of medical mistakes that are so often overlooked.

However, at the root of the medical professional’s job is the oath to “do no harm.” Some believe that describing doctors as ‘victims’ diminishes their responsibility and accountability for the mistake. Calling physicians ‘victims’ seems to imply that the medical error was a random event, a piece of bad luck, or an unpreventable occurrence. It is reasonable for patients and their families to expect providers to be accountable for their actions. Physicians have, foremost, an ethical responsibility to tell their patients of an error, especially if the error has caused harm. This is to respect the patient’s autonomy, as they have the right to be told the details of their medical care.

Management of patient care is carried out by a combination of institutional systems and the professional actors within it, and healthcare professionals can be agents of harm, even if unintentional. The need for transparency is pressing. Without responsibility and accountability, the effectiveness of patient safety and care is undermined.

Patients represent the central focus of healthcare. While physician health is an important piece, referring to doctors who have committed medical errors as “second victims” obscures the difference between the two in terms of agency. The emotional distress suffered by physicians should be addressed, as it can affect their quality of care with their patients, but we might question whether referring to both patients and providers alike as “victims” is the best way to address the situation.

From Picking Fruit to Buying It: The Health of California Farmworkers

Photograph of 5 figures in a field, wearing hats and bending over to reach for fruit

Picking fruit in temperatures reaching upwards of 100 degrees Fahrenheit is a reality for California farmworkers. However, most Americans scarcely think about the implications of this hard labor as they purchase fruit on their weekly grocery trip. Raising awareness of the degree of heat exposure farmworkers experience is imperative, particularly considering how climate change will raise average temperatures and contribute to work conditions with lasting health consequences for these individuals. Accessible healthcare, strictly enforced safety policies, and proper compensation are steps that need to be implemented in order to address this growing crisis.

The New York Times article “Long Days in the Fields, Without Earning Overtime” by Joseph Berger outlines the unequal pay characteristic of these workers’ livelihoods, which can be attributed to a power dynamic created by the workers’ immigrant identities. Berger interviews one worker who describes the long work days and lack of compensation: “Sometimes we don’t get a day of rest…This week my boss told me I don’t have a day off.” The long hours are expected and no overtime pay is provided — in fact, eight dollars an hour is all these workers make.

Stories like this one, describing grueling work schedules with minimal payoff, are numerous. Beyond long days in the sun, workers describe the enormous strain fruit picking has had on their health, and the lack of medical attention they receive. An article published in High Country News introduces Maria Isabel Vásquez Jiménez, a nineteen-year-old who was working in grapevine fields in 95-degree weather. After a few hours she collapsed next to her fiancé. The water cooler was a ten-minute walk from their location, and farm management did not even immediately take her to the hospital. Maria went into a coma and died two days later. The neglect of farmworkers has become an issue so grave that individuals are risking their lives to work these jobs. Arturo Rodríguez, president of the United Farm Workers, made this statement after Vásquez Jiménez’s death: “The reality is that the machinery of growers is taken better care of than the lives of farmworkers. You wouldn’t take a machine out into the field without putting oil in it. How can you take the life of a person and not even give them the basics?”

Although California’s labor policies are stricter than those of many other states, significant issues remain with the enforcement of its laws, and rising temperatures due to climate change need to be addressed. Research from the University of California projects that “The average annual temperature in the Central Valley region is projected to increase by 5 to 6 degrees during this century… heat waves will be longer, more intense, and more frequent than they were a decade or two ago.” As we move towards a future where farm work will become even more dangerous, it is imperative that states introduce stricter regulations which prioritize safety over productivity.

Understanding that many of these workers lack access to healthcare coverage due to undocumented status is an important facet of this crisis. Investigative research into the health of California farmworkers by anthropologist Sarah Horton exposed these injustices by following individuals’ stories over many years, documenting the struggles of seeking out help while working with undocumented status. For example, Silvestre, a corn harvester who had been working in the fields since he was 16, began experiencing extreme nausea and vomiting during work. Silvestre eventually took a day off work to see a doctor, who told him to return in order to undergo tests to determine the cause of the nausea. Horton writes, “Undocumented and unable to pay for the tests, Silvestre worked for an additional month and a half, retching each morning.” The danger of not being able to afford immediate medical attention has put undocumented farmworkers in deadly and entirely preventable situations. The primary assistance granted to these workers relies on proof of disability, as Horton explains: “The government’s disproportionate weighting of applicants’ age in determining eligibility automatically disqualifies many middle-aged farmworkers with severe chronic disease. The price of such delayed assistance is seen in aggravated chronic illness and a diminished quality of life.” Overall, the lack of preventative medical attention available to working-age undocumented farmworkers contributes to a larger crisis of dangerous heart and kidney complications, often resulting in permanent disability or even death.

Overall, examining the source of the fruit in our grocery stores unveils an important ethical dilemma concerning which individuals are often forgotten in the fight for labor rights. Pay reform, along with a national discussion about providing preventative healthcare to these workers, is critical to reducing the number of deaths from repeated heat exposure.

Lil Nas X vs. Billboard: The Charting Conundrum of “Old Town Road”

Photograph of a typewriter with the words "Lil Nas X" having just been typed

In December of 2018, Montero Hill, a.k.a. Lil Nas X, dropped his song “Old Town Road” on TikTok, a video-sharing app from China. The song was originally used as a meme on TikTok, but it blew up after videos sampling it amassed over 67 million views. “Old Town Road” has since reached the top of Spotify’s United States Top 50 as well as global Apple Music charts. It also sits at #15 on the Billboard Hot 100 chart. In addition to the Hot 100, “Old Town Road” made it onto the Hot R&B/Hip-Hop Songs chart as well as the Hot Country Songs chart. But something odd happened shortly after the song caught fire: Billboard removed Lil Nas X’s single from the Hot Country Songs chart. Social media exploded after the discovery, with users confused and disappointed at Billboard’s decision. But what prompted Billboard to remove Lil Nas X’s song? Is the song country music at all? Could it be that the song truly didn’t fit some criteria that Billboard has for music genres, or could the whole ordeal be rooted in racial undertones?

Though “Old Town Road” blew up as an internet meme, there’s more to the song than meets the eye. Lil Nas X weds the rap/hip-hop and country music genres by using rap beats and country-influenced lyrics. In a deep, country-style voice over a bass-boosted instrumental, Lil Nas X sings:

I got the horses in the back

Horse tack is attached

Hat is matte black

Got the boots that’s black to match

Ridin’ on a horse

You can whip your Porsche

I been in the valley

You ain’t been up off that porch now,

When Billboard removed “Old Town Road” from the Hot Country Songs chart, they explained that it was because the song did not embrace enough elements of today’s country music to be charted as a country song. Perhaps it is the rap/hip-hop elements of the song that prevent it from being classified in the country genre of music–a combination of the instrumental and the song’s lyrics. For instance, in the song’s second verse, Lil Nas X says:

Ridin’ on a tractor

Lean all in my bladder

Cheated on my baby

You can go and ask her

My life is a movie

Bull ridin’ and boobies

Cowboy hat from Gucci

Wrangler on my booty  

Unlike the first verse of the song, the second is reminiscent of standard rap/hip-hop elements. Lean, a combination of prescription cough medicine and soft drinks, has been a staple in contemporary rap. When combined with opioids, lean can seriously impair one’s motor functioning, an effect the user seeks. The mentions of adultery and “boobies” in the verse align with rap music’s frequent disregard for women as well. These largely negative motifs of rap could be a hint as to why Billboard removed “Old Town Road” from the Hot Country Songs chart. But even then, it raises the question: what makes country music? The answer might involve matters of race. Billboard has denied that considerations of race influenced its decision to take “Old Town Road” off the country songs chart. However, race seems like an unavoidable factor in a situation where a black artist is being excluded from a predominantly white music genre.

In addition, Billboard’s argument that “Old Town Road” does not incorporate enough country elements appears weaker when one acknowledges that white country music artists such as Florida Georgia Line and Sam Hunt incorporate hip-hop elements in their songs. Some country music oversexualizes women just as rap and hip-hop do. For example, in Sam Hunt’s song “Body Like a Back Road,” he sings, “The way she fit in them blue jeans, she don’t need no belt, but I can turn them inside out, I don’t need no help.” Even with hip-hop elements and misogynistic lyrics, these artists’ music is still considered country. Why is Lil Nas X’s music in question?

Perhaps it is this double standard between rap/hip-hop and country that suggests a racial component. Shane Morris, a former executive at a Nashville country music label, said that Billboard cast the removal of Lil Nas X’s song from the chart as a compositional problem because it didn’t know how to justify the decision without sounding racist. Whether Morris’s account is true has yet to be determined, but the situation with Lil Nas X – a black artist working within a predominantly white genre – is not the first of its kind. Per The New York Times, Charles Hughes, the director of the Lynne & Henry Turley Memphis Center at Rhodes College, explained that black artists have been influencing country music for a long time. Despite their contributions, black artists haven’t enjoyed the benefits their white counterparts have.

Hughes went on to compare Billboard’s removal of Lil Nas X from the Hot Country Songs chart to country radio stations’ ignoring Ray Charles’ 1962 album “Modern Sounds in Country and Western Music.” “Old Town Road” and Ray Charles’ music are almost sixty years apart, but seem to possess some interesting similarities. Like Ray Charles before him, Lil Nas X offers a modern take on country music, combining the genre with a contemporary sound. In addition, just as radio stations ignored Ray Charles’ music, country radio stations have not been supporting “Old Town Road.” This past week, the song was played only five times on country radio stations.

The lack of radio stations playing “Old Town Road” could provide some insight as to why Billboard removed the song from the country songs chart. In a statement released to the Washington Post, a Billboard spokeswoman explained that when categorizing genres for chart inclusion, Billboard incorporates audience impression and airplay in addition to musical composition. The statement went on to explain that “Old Town Road” charted on the Hot Country Songs chart because its rights owners tagged the song as country and it hadn’t been vetted for Billboard’s criteria. The song had 63 plays on Billboard’s R&B/Hip-Hop airplay chart and zero plays on the Country airplay chart. Therefore, it was removed from the Hot Country Songs chart. The data provided makes Billboard’s decision understandable, as it appears as if listeners are deciding what is charted and what is removed. However, doesn’t the artist get some say in how their music is categorized? Lil Nas X himself said that the song was country rap, suggesting that it should be on both the R&B/Hip-Hop and Country charts. If an artist intends for a song to fit in with a certain genre, who is to say otherwise?

Recently, as if to challenge Billboard’s decision to remove “Old Town Road” from the Hot Country Songs chart, Lil Nas X released an “Old Town Road” remix featuring country legend Billy Ray Cyrus. The “Achy Breaky Heart” singer hopped on the song for a verse, and the remix has been well received on social media. The question now is, if the song gains enough popularity, how will Billboard chart it? Is the song still not to be considered country despite a famous country singer being featured on it? It remains to be seen. The impact of race on the song’s categorization also remains to be seen, but it can’t be ignored, given the long history of racial tension around black music. As for “Old Town Road” being considered country music, its plays seem to determine that. But perhaps the vision of the artist who creates the song should be considered as well. Regardless, Lil Nas X has shifted contemporary music and how it is classified. Boundaries are hard to distinguish because artists like Lil Nas X are destroying them by combining genres. The whole ordeal is just a testament to the fact that music is evolving, and those who monitor, document, and evaluate it should evolve with it.

Cultural Value, Charitable Giving, and the Fire at Notre Dame

The Notre Dame cathedral in Paris photographed in 2015 from the side

On Monday, April 15, viewers looked on in horror as Notre Dame Cathedral was devastated by fire. Onlookers hoped that the flames would be fought back before too much damage was done, but the cathedral’s spire came crashing down, taking much of the roof with it. The extent of the damage remains to be seen.

Construction on the stunning piece of gothic architecture began in 1163, and the wood out of which it was built was taken from trees that were hundreds of years older. Among other noteworthy events, Henry VI of England was crowned King of France inside of Notre Dame in 1431, and it was also inside its walls that Napoleon Bonaparte was crowned emperor in 1804. Notre Dame is perhaps most famously known for the attention drawn to it by Victor Hugo in his 1831 novel The Hunchback of Notre Dame. The now classic novel raised awareness of the dilapidated condition of the cathedral at that time, and led to restoration and greater appreciation for the historic site.

The landmark is valuable for many reasons. Arguably, it has both instrumental and intrinsic value—that is, it has value both in light of the joy it brings to people, and value in its own right. Some argue that cultural artifacts that have stood the test of time have intrinsic value in light of their continued existence. The more important events take place within a structure, and the more tribulations that structure withstands, the greater its value in this respect. Key historical figures participated in sacred rites in its chapels. This sets the cathedral apart from most other buildings with respect to significance.

The cathedral is also one of the most beautiful buildings ever constructed. All things being equal, it is a tragedy when a thing of beauty is destroyed. Art and architecture have the potential to represent the heights of human creativity. The building is not simply beautiful to look at; it expresses something both existential and essential about the human ability to bring monumental, almost inconceivable visions into reality. As a result, when a building like this is destroyed, it hurts us all in a way that is difficult to fully articulate. The building was a testament to our values, our resilience, and the transcendent ability we have to express appreciation of those things we take to be deserving of our best efforts.

Notre Dame speaks to us all in the way that all great works of art do. It is especially significant, however, to French citizens. The art and landmarks of a country are a tremendous source of pride for its citizens, and the destruction of Notre Dame no doubt changes how it feels to be French.

Finally, Notre Dame has substantial religious value for many people. Pilgrimages are made to Notre Dame frequently—an experience at the Cathedral is often a profound one. Notre Dame is home to artifacts that many consider to be relics, including a crown of thorns purportedly placed on the head of Christ, a piece of the “true cross,” and a nail from that cross on which Christ was executed. These relics were salvaged from the flames, but the fact that the Cathedral was the sacred home to such important artifacts in Catholic history highlights the gravity of the loss of the structure.

In light of all of these considerations, our hearts rightly ache at the thought that Notre Dame will never be exactly what it once was. There seems to be no question about whether renovations will occur. In the immediate aftermath of the destruction, Emmanuel Macron, the President of France, vowed to rebuild the beloved landmark, commenting that, “We will rebuild Notre Dame even more beautifully and I want it to be completed in five years, we can do it. It is up to us to change this disaster into an opportunity to come together, having deeply reflected on what we have been and what we have to be and become better than we are. It is up to us to find the thread of our national project.” Restoration experts anticipate that the project will take closer to ten to fifteen years. Before construction can even begin, the site must be secured—a substantial task on its own.

To advance the objective of renovating the building, both individuals and private organizations have, in the immediate aftermath of the fire, donated hundreds of millions of dollars to renovate the cathedral. The combined donations of the L’Oréal cosmetics company, the Bettencourt Meyers family, and the Bettencourt Schueller foundation came to $226 million. On Tuesday, the CEO of Apple, Tim Cook, pledged a donation of an unspecified amount to restoration efforts. The University of Notre Dame in the United States pledged $100,000 to the cause. Many individuals and institutions understandably don’t want the iconic building to remain a skeletal shell of its former glory.

This event raises interesting philosophical and moral questions about the causes that motivate us to come together to donate resources. The reasons one might want to donate to the renovation are clear. In addition to the recognition of the value of Notre Dame across a variety of domains, people want to continue to have meaningful experiences at the site, and they want future generations of people that they care about to be able to have such experiences as well. It is unsurprising that we should feel motivated to donate money to preserve things that help provide meaning to our lives.

These are moments in which it is appropriate to be reflective about what charitable giving should look like. One important question to ask is, “is charitable giving supererogatory?” That is, is it the case that donating money, time, and effort to the world’s problems is something that it is good to do, but not bad not to do? Or is charitable giving, when one has discretionary resources, the kind of thing that we are morally obligated to do, such that we would be remiss, morally speaking, if we failed to do it?

A situation like this might also give us cause to reflect on the motivation and reasoning behind charitable donation. Should we pull out our pocketbooks whenever we feel a tug at our heartstrings? Should we be primarily motivated to donate to those causes that are near and dear to us, such as local causes or causes to which we otherwise feel a close personal connection? It may be the case that feeling satisfaction in response to making a donation of a certain type plays an important role in motivation to donate again in the future. For example, the public’s passion for donating to renovate Notre Dame has motivated people to donate to rebuild three churches in Louisiana, located in historically black neighborhoods, that were seriously damaged by arson. People recognize the value that churches often have for communities, and this tragedy has put them in the giving spirit.

An alternative theory about how our charitable funds should be directed is that we should give our resources to those causes where our money would do the most good. Imagine that your money could either go to a cause that prevented 1 unit of suffering, or it could go to a cause that prevented 5 units of suffering. Intuitively, we should prevent more suffering when we can, so the rational choice is to donate to the second cause. Are we morally obligated to make our decisions in this more calculated way?

Hundreds of millions of dollars could go a long way to prevent needless suffering in the world. Millions of people die of preventable diseases every year. Countless people don’t have reliable access to food, shelter, clean drinking water, and basic medical care. In a world in which people collectively have hundreds of millions of dollars to spare, is it morally defensible for that money to be spent on the restoration of a building, no matter how beautiful or historically significant that building was?

Some might think that the answer to this question is yes. There are some human cultural achievements that we simply must preserve, if we are able. If we accept this conclusion, however, we must also be willing to admit that the preservation of some art seems to be more important to us, as a human family, than the suffering of our fellow beings.

Power, Pollution, and Golf

Photograph of a golf course showing a pond in the foreground, a distant person with a bag of clubs, and trees in the background

Despite the closure of over 800 golf courses in the last decade and the fact that young people have virtually no interest in the sport, golf may be the emblematic pastime of the 21st century. So many of the key issues our society must grapple with in the next hundred years or so, from environmental change to the concentration of wealth and political power in the hands of an elite few, play out on the vast stretches of meticulously maintained green. Given the ethical ramifications of those issues, it’s pertinent to ask whether the continuation of the sport of golf is itself ethical, and what the prevalence of this sport might say about our future.

The first and most pressing objection to golf is its environmental impact. Apart from the impact of pesticides, environmental scholars note that “Golf course maintenance can also deplete fresh water resources [… and] require an enormous amount of water every day,” which can lead to water scarcity. A golf course can take up nearly 150 acres of land and can displace the area’s native flora and fauna in favor of an artificial and homogenized landscape. Furthermore, the impact of a golf course can be felt beyond the land it physically occupies. From 2017 to 2019, a teenage diver found over 50,000 golf balls underwater off the coast of California, the byproduct of five nearby golf courses. This is especially concerning to environmentalists because, as the NPR reporter who covered the story noted, “golf balls are coated with a thin polyurethane shell that degrades over time. They also contain zinc compounds that are toxic.” They eventually break down into microplastics, an especially insidious form of pollution.

However, some argue that golf courses enclose and protect fragile ecosystems rather than damage them. One often-referenced paper, “The Role of Golf Courses in Biodiversity Conservation and Ecosystem Management,” was written by Johan Colding and Carl Folke and published in 2009. After examining the effect of golf courses on local insect and bird populations, Colding and Folke concluded that “golf courses had higher ecological value relative to other green-area habitats” and “play [an] essential role in biodiversity conservation and ecosystem management.” They argue that golf courses can be a refuge for wildlife that has been pushed out of other areas, and that golf courses can foster biodiversity by working hand-in-hand with conservationists. However, this paper was published by Springer Science+Business Media, a global publisher of peer-reviewed scientific literature that had to retract 64 scientific papers in 2015 after it was discovered that the articles hadn’t actually been peer reviewed at all. Seen in that light, this research (and the conclusion it draws) becomes questionable. Another study, “Do Ponds on Golf Courses Provide Suitable Habitat for Wetland-Dependent Animals in Suburban Areas? An Assessment of Turtle Abundances,” published in The Journal of Herpetology in 2013, examined the potential for golf courses to contain turtle habitats, with mixed results. The researchers noted that turtle habitats within golf courses did have the potential to foster wildlife but were negatively impacted by residential development projects, which many golf courses today contain. To summarize, there is no clear consensus on this issue, though researchers uniformly note that the very act of building a golf course in the first place disrupts wildlife, whether or not conservation efforts are made after the fact.

Golf may have an ultimately negative impact on the environment, but its continuance also has ethical implications for our social and political landscape. Golf has long been considered an elite pastime, and President Trump’s fondness for the sport is often used to demonstrate his insufficiencies as a leader. Rick Reilly, a contributing writer for ESPN’s SportsCenter and ABC Sports, released a book in early April of this year entitled Commander in Cheat: How Golf Explains Trump. In an article for The Atlantic explaining how Trump has sullied the reputation of golf through his propensity to cheat and his tasteless displays of wealth, Reilly laments,

“[The situation] stinks because we were finally getting somewhere with golf. It used to be an elitist game, until the 1960s, when a public-school hunk named Arnold Palmer brought it to the mailmen and the manicurists. Then an Army vet’s kid named Tiger Woods brought it to people of color all over the world. We had ultracool golfers like Woods, Rickie Fowler, and Rory McIlroy, and pants that don’t look like somebody shot your couch, and we’d gotten the average round of golf down to $35, according to the National Golf Foundation.”

However, it’s difficult to stand by Reilly’s assertion that golf has entirely outgrown its elitist roots. In an interview with Golf Digest, Trump remarked,

“First of all, golf should be an aspirational game. And I think that bringing golf down to the lowest common denominator by trying to make courses ugly because they want to save water, in a state that has more water […]

I would make golf aspirational, instead of trying to bring everybody into golf, people that are never gonna be able to be there anyway. You know, they’re working so hard to make golf, as they say, a game of the people. And I think golf should be a game that the people want to aspire to through success.”

Replace the word “golf” with “power,” and you’ve got an almost eerily succinct and transparent summary of capitalist conservative dogma (in which the playing field is never intended to be even, the environment is devalued in favor of aesthetics, and the American dream is only illusory for the masses). Furthermore, Trump’s comment encapsulates many of the elitist attitudes and expectations that still attend golf today, regardless of the price of a single round at a public course. The resorts and country clubs frequented by Trump and his ilk are beautifully manicured arenas of power, places where politicians and businessmen can solidify ties and network over club sodas. When he was attacked for misogynistic remarks about women, Trump’s defense was that he’d heard Bill Clinton say worse things about women on the golf course. Trump has gone so far as to call Mar-a-Lago, the resort attached to a golf course he owns and frequents, “the Southern White House.” The words “golf course” have become shorthand for private spaces of leisure for powerful men, a place for unethical behavior sheltered from the public eye and from more traditional structures of power by miles of dense greenery.

Unlike sports with less white and monolithic cultures, like basketball and football, contemporary golf is not fertile ground for political or cultural resistance. Golfers are notably quiet about politics. As golf correspondent Lawrence Donegan points out, many famous pro golfers are pressured to play golf with the president and show almost uniform deference to him out of fear of losing corporate sponsorships. This deferential attitude is taken up by most elites who play golf. Donegan says,

“The acquiescence of golf’s leading figures and governing bodies [to the Trump administration] is amplified […] down the sport’s hierarchy, especially in the (sometimes literally) gilded country clubs of states such as Florida, New Jersey, New York, and Texas, which depend on a narrow, and narrow-minded, membership of wealthy, white couples who pay their subscriptions as much for the social cachet as for the sport. Within the confines of the club, they are free to rail against minorities, free to declare Trump the greatest president since Lincoln, free to act like the genteel segregationists they prefer to be.”

The fact is that golfers tend to be wealthy, and that the golf course is a place where hierarchy and prestige are not only respected but built into the very foundation of the culture.

Many agree that golf is both a waste of resources and a symbol for the mechanisms of capitalism, but these two issues have become intertwined in recent years. Golf, some have argued, has been yoked in the service of capitalism and corporate “greenwashing.” Rob Millington explores this idea in his paper “Ecological Modernization and the Olympics: The Case of Golf and Rio’s ‘Green’ Games,” published in the Sociology of Sport Journal in 2018. He defines ecological modernization as “the idea that capitalist-driven scientific and technological advancements can not only attend to the world’s pending environmental crises, but even lead to ecological improvement, thus allowing sustainability and consumption to continue in concert.” This idea is promoted by corporations who want to greenwash themselves, or to appear green to consumers without changing their essential business models. It is very similar to the conclusion drawn by Colding and Folke, who argue that environmental destruction in the name of leisure and consumerism can take place alongside conservationist efforts without contradiction.

Millington notes that “In response to the growing tide of environmental opposition since the 1960s, the golf industry took up an ecological modernist approach to promote golf as a natural, green, and environmentally friendly sport that allows people to connect with nature.” According to Millington, this is precisely what happened at the 2016 Olympic Games in Rio de Janeiro, for which a golf course was built on environmentally protected land in the spirit of ecological modernization. The design of the course was presented as enhancing rather than fighting the natural landscape, despite the fact that any incursion into a natural space can disrupt the ecosystem. In this sense, the continuing relevance of golf can be employed for neoliberal ends, under the guise of environmentalism or unity between nations.

In “Is Golf Unethical?”, a 2009 article published in The New York Times, writer Randy Cohen covers the basic environmental impact and bourgeois ethos of golf. On the question of whether or not the sport itself is ethical, he concludes that “perhaps the only moments of grace and beauty and virtue in any game occur during actual play, and we should not look too closely at its broader culture and implicit ethics without expecting to be dismayed.” This is just one defense of the sport: that the skill that goes into mastering it outweighs any moral scruples we might have. Another common defense is that golf, like any sport, builds bridges and creates a sense of fellowship across the world, giving us a common language in which to communicate our values and abilities across international lines. But does it actually build bridges between nations, or does it just export elite bourgeois culture and sources of pollution to other parts of the world? The act of swinging a golf club has no objective moral value attached to it, but the trappings of golf – the privilege and waste and unnecessary consumption of resources – certainly do.

Jacinda Ardern, Christchurch, and Moral Leadership

Jacinda Ardern, leader of the NZ Labour party, was at the University of Auckland Quad on the first of September, 2017.

Shortly after the Christchurch massacre on March 15, in which a white supremacist gunned down worshipers at two mosques in the New Zealand city, killing fifty people during Friday prayer, New Zealand Prime Minister Jacinda Ardern spoke with US President Donald Trump, who had called to condemn the attack and offer support and condolences to the people of New Zealand.

Ardern later told a press conference that “[Trump] asked what offer of support the United States could provide. My message was: ‘Sympathy and love for all Muslim communities.'”

Following the attack, the connection between casual racism in public discourse, often serving populist political ends, and an emboldened white supremacist movement prepared to commit violent acts was widely discussed (as explored in my previous article).

Yet all of Donald Trump’s past behavior, public remarks, tweets, and policies indicate that such a request would have been incomprehensible to him. Indeed, the weekend following the massacre and Ardern’s request for “sympathy and love” for Muslim communities, Trump fired off a tirade of tweets in support of Fox News’s Jeanine Pirro, who had been reprimanded by the network for making racist remarks about Ilhan Omar, one of the first two Muslim women elected to the US Congress.

Ardern asked for “sympathy and love.” And sympathy and love were at the core of her response in the agonizing aftermath of this massacre. For that response and the leadership she extended, Ardern has been internationally lauded. Indeed, Ardern has shown what moral leadership looks like by bringing a natural and genuine love and sympathy to the affected community and the country. She set the tone and spirit of the nation’s response with language of inclusivity that refuted and negated the perpetrator’s attempt to create division.

Fronting the press immediately after the attack, looking visibly shaken, Ardern said of the Muslim community, and of all migrant and refugee communities in New Zealand, “they are us,” and of the perpetrator she said “he is not us.” The simple language of inclusion of “they are us,” with its sympathy and compassion, immediately disavowing the othering of Muslims, was a rejection of any suggestion that those who had been targeted are outsiders in the community of Christchurch and in the society of New Zealand.

Ardern visited Christchurch to support the affected community. As she met people, Ardern placed her hand on her heart, a traditional Muslim gesture, and said “Asalaam alaykum” (peace be with you). She wore a hijab as a gesture of solidarity with the Muslim community, showing that ‘they are us’ does not mean ‘they’ are the same as ‘us’, but that the category of ‘us’ is inclusive of different religions, ethnicities, and cultures, and that New Zealand is proud of being a multicultural, open and inclusive society. And she held survivors and grieving families in her arms and cried with them. Ardern’s leadership — her words and her actions — visibly helped the whole community feel connected to the victims and gave non-Muslims a cue for identifying with the Muslim community. The following Friday, exactly one week after the massacre, the call to Friday prayer was broadcast on public radio and television networks.

There is no doubt that Ardern’s response to this tragedy stood out across the world as exemplary leadership, strength, compassion, and integrity. We should be able to expect good moral leadership from our political leaders, but in this era of populism, defined as it is by the characteristic tactic of appealing to the lowest common denominator, such leadership is rare.

Love, and the virtues that pertain to or emerge from it, such as compassion and sympathy, are not always obviously operative in our moral philosophy, ethical systems, or political sphere. Contemporary analytic moral philosophy tends to work with concepts like right and wrong more than with concepts like love.

Love of one’s neighbor is of course a central tenet of the moral teachings of Christ, and the spirit of universalization that maxim evokes is present in some form in most ethical systems, from religious to secular. There is a version of universalization present in the familiar systems of moral philosophy: in utilitarianism we treat the interests of all stakeholders equally, and we do not favor those closest to us – in proximity, family, culture, religion, etc. The Kantian categorical imperative, too, is based on a method of universalism, so that one finds one’s moral imperative only in what one could will to become a universal law.

In these systems, both of which attempt to make moral judgements objective, a concept like love would appear sentimental, and these systems of moral philosophy are designed specifically to remove elements of sentiment that might have been thought to confuse, distract or subjectify moral thinking. Yet for the philosopher Iris Murdoch, love was an important concept for ethics. She writes: “We need a moral philosophy in which love, so rarely mentioned now by philosophers, can again be made central.” (Iris Murdoch, The Sovereignty of Good, Routledge, London, 1970/2010, p. 45.)

Murdoch wrote about love in morality as being related to acts and efforts of attention – of attending to the humanity of others. Indeed, as against the blindness of racism, the notion of a ‘loving and just attention’ is for Murdoch part of the capacity to deeply acknowledge others as one’s fellow human beings. This is precisely what racism cannot do. Racism is radically dehumanizing: it is a moral failure to see the other as ‘one’s neighbor’ – that is, to see them as one of us, as part of the human family, or as sharing in a common humanity. (See Raimond Gaita, A Common Humanity, Text Publishing, Melbourne, 1999.) Murdoch observed that an effort to see things as they are is an effort of love, justice and pity.

Ardern’s response was to refute, and to deny, this racist denial of humanity – without entering into dialogue with it. That ‘loving and just attention’ of which Murdoch speaks is visible in the context of Ardern’s response in the way she attended to the victims and their families. This includes the focus of her attention on the suffering of those who were affected, and also the quality of that attention, which brought out their humanity at a time when someone had sought to deprive them of it – not just by murdering them and their loved ones, but by proudly justifying it as ideology.

The refutation of the ideology of racism is the affirmation of the humanity in each other. It is not clear that this affirmation can be fully realized in arguments whose moral objects are right and wrong; but Ardern has demonstrated that the moral sense of a common humanity can be realized through sympathy and love.

Meanwhile, this week the White House escalated its assault on the Muslim American congresswoman Ilhan Omar after Donald Trump repeatedly tweeted video footage of September 11 and accused Omar of downplaying the terror attacks.

Airplane Crashes and the Diffusion of Responsibility

Photograph of a Sky airplane taking off

Air travel has become steadily safer and more reliable for decades, but the second crash involving Boeing’s new 737 Max aircraft has created newfound uncertainty among potential flyers. The crash of an Ethiopian Airlines plane on March 10 has been linked to a similar crash of a Lion Air plane on October 29, 2018, pointing to a disturbing trend. In the wake of such a tragedy, we are often left looking for answers out of both pragmatic and moral motivations: we want to prevent future accidents, but we are also looking for someone to blame. Ultimately, such searches are often unsatisfying, particularly in the latter respect.

Although investigations are ongoing, early information seems to absolve the pilots, both of whom were highly experienced, and the focus has shifted to concern about the planes themselves. Software on Boeing’s new 737 Max airplanes called the Maneuvering Characteristics Augmentation System (MCAS) seems to have malfunctioned, causing the planes to angle downward and become uncontrollable. In light of this possibility, the United States Federal Aviation Administration (FAA) has grounded all 737 Max aircraft, and Boeing has slowed down—although not halted—production on what was their fastest-selling model.

If there is a problem with the airplane, who is to blame? Most fingers point to Boeing, the company which designed and manufactured the aircraft. Not only did Boeing’s software critically malfunction, but a report by The New York Times found potentially vital safety features being sold as optional extras for the airplanes, calling into question the excessively profit-oriented strategy of the company. Some, including the Ethiopian government, have criticized regulators like the FAA for their failure to enforce more stringent testing and safety requirements. It is also worth noting that airlines are responsible for safety inspections of aircraft before they fly. However, leaving the blame with any one of these organizations seems insufficient. All of these entities—manufacturers, regulators, airlines—represent vast networks of individuals, each of whom seems to bear little to no individual responsibility. What do we do when everyone does their job correctly, but things still go wrong?

This problem is exceptionally acute for companies like Boeing, which (one would assume) includes many fail-safes in its quality-control procedure. Even with careful review, it can be difficult to pinpoint exactly where in the process the error or oversight was introduced. Although these fail-safes are valuable and necessary for the safety and success of Boeing’s enterprise, they also create an ethical problem by diffusing responsibility across a wide network of individuals and systems. At worst, a network of diffused responsibility can create a bystander effect in which every individual assumes that someone else will deal with a problem, while in reality the problem goes unaddressed.

There is a tendency to reduce this problem to one of a few simple parameters. Who was the last person to inspect the aircraft or test the software? Perhaps they should have caught the problem after others had failed to do so. But one person’s position along a chain of safety checks is accidental: if the first person and the last person were switched, the result would be the same. Who has the greatest power in the organization? Often it is the CEOs and presidents of companies who are left giving statements to the media when disaster strikes, but they are so distant from the daily operations of their companies that it seems unreasonable to expect them to vet every action and decision. Neither these nor any other ways of singling someone out will ever satisfy the larger question of culpability. Individuals caught up in this game of passing the buck are faced with two bad options: to point fingers is ignoble, but to accept responsibility can be a discrediting and thankless feat.

An alternative would be to blame the system itself. Because the system allowed for the diffusion of responsibility, the system itself must be flawed. One interpretation is that the way in which Boeing goes about producing aircraft is an unethical system which fails to protect the basic right to life of its customers. We could point a finger at the architects of the system, which in this case would probably comprise some past and current executives at Boeing, but again, no architect could possibly predict every outcome of the system they design. A broader interpretation might condemn the corporate model itself: in the ruthless pursuit of profit, corporations make forgetting human lives all too easy. A cynic might look at the common initialism “LLC,” or “limited liability company,” as an indicator of the role of the corporate structure in reducing liability and responsibility among its owners.

The search for answers regarding these plane crashes will probably arrive at many ways to prevent further tragedies. In all likelihood, new systems for design and quality control will be implemented, and greater oversight and cross-checking will be mandated. While the path forward on the pragmatic side is clear, the ethical dimension of these systemic failures is more uncertain. Humans desire obvious villains on whom to place blame, but, unfortunately, true tragedies are rarely so simple. Too often, we fall into the practice of scapegoating to resolve these dilemmas, but it would be better to embrace and negotiate the inherent complexities of these situations.


The Ethics of Telling All: What’s at Stake in Memoir Writing?

Photograph of author Karl Ove Knausgard standing, holding a microphone, and reading from a book where the title "My Struggle" is visible

When Norwegian author Karl Ove Knausgaard published the first volume of his My Struggle series in 2009, it was a startling commercial success, but also a personal disaster. Knausgaard’s infamous six-part series of autobiographical novels (titled Min Kamp in Norwegian) recounts the “banalities and humiliations” of his private life. While My Struggle is classified as a “novel”, it is described by Pacific Standard as a “barely-veiled but finely-rendered memoir”. After his first two novels, Out of This World (1998) and A Time for Everything (2004), received critical acclaim in Norway, Knausgaard found that he was “sick of fiction” and set out to write exhaustively about his own life. Consequently, My Struggle reveals his father’s fatal spiral into alcoholism, the failures of his first marriage, the boredom of fatherhood, the manic depression of his second wife, and much more. “Autofiction” has become an increasingly mainstream mode of contemporary writing, but how authors should navigate the ethical dilemma of exposing the private lives of their friends and family remains unclear.

The first book of the My Struggle series, titled A Death in the Family, meticulously chronicles the slow, pitiful demise of Knausgaard’s alcoholic father. When Knausgaard first shared the manuscripts of his work with relatives, his father’s side of the family called it “verbal rape” and attempted a lawsuit to stop publication. Under the weight of bitter family and legal action, Knausgaard was forced to change the names in My Struggle, and he refers to the villainous alcoholic of the novel only as “father”. For Knausgaard, the suppression of true names weakened the goal of his novel: “to depict reality as it was.”

The issue with ‘reality’, however, is that everyone seems to have their own version. Part of the legal action against My Struggle was a set of defamation claims disputing the circumstances surrounding the death of Knausgaard’s father. In another dispute over reality, Knausgaard’s first ex-wife recorded a radio documentary, titled Tonje’s Version, in which she details the trauma of having her personal life publicly exposed. What’s striking about the documentary is Tonje’s point that her own memories came second to Knausgaard’s art. For Knausgaard, depicting reality meant his own reality. But if memory is colored by our own perspective, how much claim can he have on what’s ‘true’ and not? Hari Kunzru writes in an article for The Guardian, “But he [Knausgaard] is, inevitably, an unreliable narrator. How could he not be? We live a life of many dinners, many haircuts, many nappy changes. You can’t narrate them all. You pick and choose. You (in the unlovely vernacular of our time) curate.”

Even when people accept the ‘truth’ presented by a memoir, it can damage and destroy personal relationships. Knausgaard was married to his second wife, Linda, while writing My Struggle. After Linda read Knausgaard’s frank account of their marriage in his manuscript, she called him and said their relationship could never be romantic again. The media storm generated by the first few books of the series led to Linda having a nervous breakdown and divorcing Knausgaard. In an interview, Knausgaard admits to striking a Faustian deal with the publication of My Struggle, saying, “I have actually sold my soul to the devil. That’s the way it feels. Because . . . I get such a huge reward,” while “the people I wrote about get the hurt.” My Struggle is now an international bestseller, revered as one of the greatest literary accomplishments of the 21st century, yet on the final page Knausgaard admits “I will never forgive myself”. Critical acclaim and popular fame could not justify the damage done to Knausgaard and his family, but can anything positive emerge from the pain of writing such an unforgiving memoir?

Ashley Barnell, a contributor to The Conversation, writes in an essay, “By representing the conflicts and silences that families live with writers can introduce more diverse and honest accounts of family life into public culture.” From Instagram photos to popular humor, people work hard to hide what hurts and feign happiness. As a collective unit, families are no exception. Norway found My Struggle particularly scandalous because of its violation of family privacy, which an article in The Guardian says was “profoundly shocking to the Lutheran sensibilities of a country that is less comfortable with public confessions than the Oprah-soaked anglophone world”. Knausgaard’s reckless self-exposure does not simply leave behind the outward-facing mask individuals and families show the rest of the world; it shatters that mask altogether and instead deliberately, albeit painfully, exposes the reality of one’s life.

Thematically speaking, shame is a core aspect of My Struggle. “Concealing what is shameful to you,” Knausgaard reflects, “will never lead to anything of value.” In a piece of literary criticism, Odile Heynders writes that shame in My Struggle “. . . is connected to questions of humanness, humanity and humility. The capacity for shame makes the protagonist fragile, as it constitutes an acute state of sensitivity”. Advocates of literary fiction often cite its ability to increase one’s capacity for empathy. The shame and sensitivity of My Struggle, mixed with a self-deprecating humor, similarly accomplish this feat by bringing readers to consider their own openness about pain they have both felt and dealt. Barnell’s essay also points out that “The memoirist’s candid account of family struggles can destigmatize taboo topics – such as divorce, sexuality, and suicide.” In My Struggle, tough subjects like alcoholism, manic depression, existential dread, and broken relationships are not constructed neatly within the pages of a fictional novel, but laid bare in their honest existence.

My Struggle, which has sold over half a million copies in Norway alone, may be helpful in encouraging more candid discussions of emotional pain. Yet those whose private lives are thrust into the spotlight through nonfiction writing can be deeply disrupted. I think Knausgaard would argue that, to move past pain, it must be addressed in its most raw, authentic form. However, not everyone may be looking for such a public reconciliation. Authors working in the powerful mode of the tell-all memoir should weigh the wellbeing of those immediately affected by publication before the work’s potential benefit to the rest of the world.

The Ethics of Philosophical Exemptions

photograph of syringe and bottle of antibiotics

While every state in America has legislation requiring vaccinations for children, every state also allows exemptions. For instance, every state allows a parent to exempt their child from vaccinations for legitimate medical reasons: some children with compromised immune systems, for example, are not required to be vaccinated, since doing so could be potentially harmful. However, many states also allow for exemptions for two other reasons: religious reasons, and “philosophical reasons.” While religious exemptions are standardly granted if one sincerely declares that vaccinations are contrary to their religious beliefs, what a “philosophical reason” might consist in varies depending on the state. For example, Ohio law states that parents can refuse to have their children immunized for “reasons of conscience”; in Maine a general “opposition to the immunization for philosophical reasons” constitutes sufficient ground for exemption; and in Pennsylvania “[c]hildren need not be immunized if the parent, guardian or emancipated child objects in writing to the immunization…on the basis of a strong moral or ethical conviction similar to a religious belief” (a complete list of states and the wordings of the relevant laws can be found on the National Conference of State Legislatures website).

Of course, not all states grant exemptions on the basis of any reason beyond the medical: California, Mississippi, and West Virginia all deny exemptions on the basis of either religious or philosophical reasons. And there seem to be plenty of good reasons to deny exemption except in the most dire of circumstances, since vaccinations are proven to be overwhelmingly beneficial both to individuals and to the community at large, by contributing toward crucial herd immunity for those who are unable to be vaccinated for medical reasons.

At the same time, one might be concerned that, in general, the law needs to respect the sincere convictions of an individual as much as possible. This is evidenced by the fact that many states provide religious exemptions, not only for vaccinations, but in many other areas of the law. Of course, while some of these exemptions may seem reasonable, others have become the target of significant controversy. Perhaps most controversial are so-called “right to discriminate” provisions that, for example, have been appealed to in order to justify unequal treatment of members of the LGBT community.

While there is much to say about religious exemptions in general, and religious exemptions to vaccinations in particular, here I want to focus on the philosophical exemptions. What are they, and should they be allowed?

As we saw above, the basis for granting philosophical exemptions to vaccinations seems to simply be one’s sincere opposition (how well-informed this opposition is, however, is not part of any exemption criteria). In practical terms, expressing philosophical opposition typically requires the signing of an affidavit confirming said opposition, although in some cases there is the additional requirement that one discuss vaccinations with one’s doctor beforehand (Washington, for example, includes this requirement). In general, though, it is safe to say that it is not difficult to acquire a philosophical exemption.

Should such exemptions exist? We might think that there is at least one reason why they should: if sincere religious conviction is a sufficient basis for exemption (something that is agreed upon by 47 states), then it seems that sincere moral or philosophical conviction should constitute just as good a basis for exemption. After all, in both cases we are dealing with sincere beliefs in principles that one deems to be contrary to the use of vaccinations, and so it does not seem that one should have to be religious in order for one’s convictions to be taken seriously.

The problem with allowing such exemptions, of course, is the aforementioned serious repercussions of failing to vaccinate one’s children. Indeed, as reported by the Pew Research Center, there is a significant correlation between those states that present the most opportunity to be exempted – those states that allow both religious and philosophical grounds for exemption – and those that have seen the greatest number of measles outbreaks. Here, then, is one reason why we might think that there should be no such philosophical exemptions (and, perhaps, no exemptions at all): allowing such exemptions results in significant and widespread harm.

The tension between respecting one’s right to act in a way that coincides with one’s convictions and trying to make sure that people act in ways that have the best consequences for themselves and those around them is well-explored in discussions of ethics. The former kind of concern is often spelled out in terms of personal integrity: whether an action is in line with one’s goals, projects, and general plan for one’s life seems to be a relevant factor in deciding what ought to be done (for example, it often seems like we shouldn’t force someone to do something they really don’t want to do for the benefit of others). When taking personal integrity into account, then, we can see why we might want there to be room for philosophical exemptions in the law.

On the other hand, when deciding what to do we also have to take into account what will have the best overall consequences for everyone affected. When taking this aspect into consideration, it would seem that there should be only the bare minimum of possible exemptions to vaccinations. While it often seems that respecting personal integrity and trying to ensure the best overall consequences are both relevant moral factors, it is less clear what to do when these factors conflict. To ensure the best consequences when it comes to vaccinations, for example, would require violating the integrity of some, as they would be forced to do something that they think is wrong. On the other hand, taking individual convictions too seriously can result in significantly worse overall consequences, as what an individual takes to be best for themselves might have negative consequences for those around them.

However, there is certainly a limit on how much we can reasonably respect personal integrity when doing so comes at the cost of the well-being of others. I cannot get away with doing whatever I want just because I sincerely believe that I should be able to, regardless of the consequences. And there are also clearly cases in which I should be expected to make a sacrifice if doing so means that a lot of people will be better off. How we can precisely balance the need to respect integrity and the need to try to ensure the best overall consequences is a problem I won’t attempt to solve here. What we can say, though, is that while allowing philosophical exemptions for vaccinations appears to be an attempt at respecting personal integrity, it is one that has produced significant negative consequences for many people. This is one of those cases, then, in which personal conviction needs to take a backseat to the overall well-being of others, and so philosophical reasons should not qualify as a relevant factor in determining exemptions for vaccinations.

What It Means to Legalize Euthanasia

photograph of empty hospital room with flowers

Euthanasia and physician-assisted suicide are gradually gaining acceptance, both legally and socially. As of 2016, Canadians have a right to assisted suicide. New Jersey may become the next American state to allow some form of medically assisted suicide, joining a list that includes California, Colorado, Hawai’i, Montana, Oregon, Vermont, and Washington. The national debate surrounding Dr. Jack Kevorkian, dubbed “Dr. Death,” who oversaw the deaths of over one hundred people and was subsequently sentenced to 10-25 years in prison, seems like the distant past. But now it is time to shine the spotlight of public attention on euthanasia once more.

Dr. Kevorkian once explained his rationale for administering lethal injections: “My aim in helping the patient was not to cause death. My aim was to end suffering.” The arguments in favor of euthanasia (a lethal injection administered by a physician) and physician-assisted suicide (a lethal prescription taken by the patient) are in many ways difficult to combat. Who are we to deny a suffering soul an accelerated but peaceful passage to the Afterlife? Who are we to judge someone for exercising their right to a dignified death? Some argue that to deny euthanasia is to deny a permanent reprieve from suffering.

Proponents of euthanasia typically point to terminally ill patients who are enduring pain and have no prospects for improvement as exemplifying the situation in which euthanasia is the most merciful option. Few would lack sympathy for such a request, and many might therefore approve of euthanasia in this limited circumstance. But it would be useful to explore how even this narrowly defined support can actually permit scenarios beyond terminal illness.

When introduced into legislation and enforced by governments, euthanasia fails to be constrained to that limited circumstance. Some may argue that this is a misapplication of the ideal extent of permissible euthanasia. The legalization of euthanasia is a prime example of how the ideal application and the actual reality of a concept often fail to mirror each other perfectly. Indeed, even the most stringent restrictions on euthanasia allow for certain individuals to be euthanized when they ostensibly should not have been.

For example, the Netherlands – one of the first countries to decriminalize euthanasia – has very strict criteria for legal euthanasia. After the patient has been euthanized, the Regional Review Committee reviews the case against the following “due care” criteria (if the case fails to meet any of the criteria, the physician is to be prosecuted). The physician must have:

    • held the conviction that there was a voluntary and well-considered request from the patient
    • held the conviction that the patient’s suffering was hopeless and unbearable
    • informed the patient about their situation and their prospects
    • come, together with the patient, to the conclusion that there was no reasonable alternative in the patient’s situation
    • consulted at least one other independent physician, who saw the patient and gave a written judgment on the aforementioned due care criteria
    • carefully carried out the termination of life or assisted suicide.

In 2016, these seemingly restrictive criteria nevertheless allowed Mark Langedijk to be approved for euthanasia, not because of a terminal illness but because of alcoholism. The despondent 41-year-old, who had entered rehab 21 times, conceded that “enough is enough” after his marriage ended and he moved back in with his parents. It appears that Langedijk displayed hopeless and unbearable suffering, yet it is doubtful whether he could have made a voluntary or well-considered decision given the emotional strain of his alcoholism and the related desperation. This particular case is relevant because it intersects with the suicide crisis facing middle-aged men.

But Langedijk should not be considered an exception to the rule in the Netherlands. The percentage of those euthanized because of terminal illness is dropping. People enduring psychological turmoil such as social isolation or loneliness, conditions that have not traditionally been considered justification for euthanasia, are becoming a growing subgroup. And there is reason to believe that this gradual widening of the acceptable criteria is an inevitable phenomenon not unique to the Netherlands.

In Oregon, the latest report shows that over half of the people given the lethal prescription in 2016 listed “burden on family” or “financial implications of treatment” among their end-of-life concerns. Neither of these reasons relates directly to the physical suffering of the patient or their state of health, yet they are becoming increasingly accepted justifications for physician-assisted suicide.

When patient autonomy is held above all other concerns governing medicine, the physician must oblige the request of the patient, whether the patient is about to die in three weeks and is in tremendous physical pain or is simply depressed. When the vague notion of mercy for suffering individuals is held above all other concerns, one must accept the necessary conclusion that non-physical, non-chronic forms of suffering – such as psychological turmoil – fall under the expansive list of reasons that would permit euthanasia. In fact, it seems that with either justification, there would be few instances in which euthanasia would not be permissible. This trend should not come as a surprise. Therefore, those claiming that the legalization of euthanasia is only for terminally ill patients in physical pain are being disingenuous. The Mark Langedijks of the medical world are not anomalies of legalized euthanasia; they are, or will soon become, normalized.

This trend in and of itself may not be morally troubling for some, namely those who value patient autonomy or dignified deaths. But the implication of the legalization on the government-civilian relationship is morally significant. What does it mean when your government allows euthanasia? How does that shift the nature of the relationship between the state and the civilians?

Perhaps the legalization of euthanasia indicates that your government values your personal liberty. Perhaps it indicates that your government is progressive, on the cutting-edge of medical ethics. But it might also indicate that your government no longer values the unconditional protection of innocent human life. Maybe it shows that your government values certain lives over others.

When a state, such as the Netherlands, examines cases of euthanized people, it is indirectly making a judgment on whose life is worth living. If an individual fails to meet one of the six criteria, they cannot be euthanized and they are to be kept alive. It is the life of the individual who fails to meet the criteria that the state deems worth saving. Some argue that the converse is true: the lives that meet the criteria are the lives the state deems not worth saving.

Indeed, activists for the disabled fear that the legalization of euthanasia degrades the status of those with physical impairments. In his piece for The Guardian, Craig Wallace writes that offering euthanasia to the disabled is not “an act of generous equality” but a “fake, cruel one-way exit for vulnerable people locked out of basic healthcare and other social and community infrastructure.”

Echoing the sentiment of Wallace’s op-ed, Jamie Hale argues that people with disabilities are among the strongest opponents of assisted suicide. Hale addresses the “financial burden” concern expressed by those in Oregon and says that it would be felt by the disabled, too. “People who are disabled, ill or elderly are constantly taught that funding our health and social care is a burden – that we are inherently a burden,” she writes. “I am given so little of the social care that I need that I am forced to rely on unpaid care from friends to survive.” With physician-assisted suicide as an option, insurance companies and socialized healthcare may be incentivized to pursue the far cheaper lethal prescription over actual treatment.

In California, this reordering of priorities has already occurred. “Stephanie Packer was informed by her insurance company that the chemotherapy she requested to treat terminal scleroderma was not an option they were willing to provide,” writes Helena Berger. “Packer’s insurer then offered a $1.20 co-pay for a handful of life-ending prescription drugs.”

There is a specter haunting the world. But is the specter laws prohibiting euthanasia or laws permitting euthanasia?

Are Rentier Economies Ethical?

A couch sits below a rental office.

This article is part two of a series on rentier capitalism. Here is part one.

The idea that ethics has something to say about economics is reaching a fever pitch of discussion amid global discontent about inequality. In my last article, I explored the meaning of economic rentiership: the private capture of unearned value. Rentier capitalism enables such capture, usually through the exploitation or contrivance of scarcity. Contemporary capitalism is rife with rent-taking institutions, among them the private ownership of land and natural resources, market monopolies, the control of platforms, and extravagant intellectual property conventions. Here I will primarily be discussing private rentiers, though state rentiers exist.


Is Shaming an Important Moral Tool?

Photo of a person behind a banner that says "Shame on Mel Rogers, CEO, PBS SoCal"

Misbehaving students at Washington Middle School last month couldn’t expect their bad behavior to go unnoticed by their peers and teachers. A list titled “Today’s Detention” was projected onto the wall of the cafeteria, making the group of students to be punished public knowledge. This particular incident made local news, but it’s just one instance of a phenomenon known as an “accountability wall.” These take different forms: sometimes they involve displays of grades or other achievements, and sometimes they focus on bad behaviors. The motivation for such public displays of information is to encourage good behavior and hard work from students.

Middle school administrators aren’t the only ones employing this strategy. Judges around the country have participated in “creative sentencing,” using shaming to motivate the reduction or elimination of criminal behavior. For example, a district court judge in North Carolina sentenced a man convicted of domestic abuse to carry a poster around town reading, “This is the face of domestic abuse” for four hours a day, seven days in a row.

The Internet ensures that the audience for public shaming will be wide in scope. Shaming behavior on social media ranges from photos of pugs wearing signs indicating that they “Ate Mommy’s Shoes” all the way to doxing—the sharing of names and addresses of people who participate in socially unpopular activities.

All of this is not entirely without warrant. Some emotions play a central role in morality—emotions like pride, guilt, and shame. We’re social beings, and as such, one of the ways that we protect against bad behavior in our social circles is to hold one another accountable. Imagine, for example, that Tom has a habit of not keeping his promises. He develops a bad reputation as an unreliable, untrustworthy member of the group. He may begin to feel guilt or shame for his behavior as a result, and he may then begin to actually do the things he has said that he is going to do. The recognition that his peers feel that he ought to feel badly about his behavior has the desired effect—it changes Tom’s behavior. It seems, then, that shame can be a powerful tool in governing the behavior of members of a social group.

Shaming might play other important social roles as well. First, it often makes the public aware of problematic behavior. It picks out people that some members of the population might want to avoid. For example, the revelation that Mike is a white supremacist who attended a white nationalist rally may prevent a potential employer from making the mistake of hiring Mike.

Second, public shaming may serve as a deterrent. If Sam, the regional manager of a small company, witnesses other people in his position being called out for sexual harassment of employees, perhaps Sam will stop harassing his employees out of fear of being publicly treated the same way.

Third, shaming might be an important way of reinforcing our community values and making good on our commitment to speaking out against unacceptable behavior. After all, some of the most egregious human rights atrocities happened because of, or were prolonged by, the silence of people who knew better, could have spoken out, but did nothing.

On the other hand, there are some pretty compelling arguments against the practice of shaming as well. Often, shaming manifests in ways that end up humiliating another person for actions they have performed. Humiliation is, arguably, inconsistent with an attitude of respect for the dignity of persons. In response, some might argue that though humiliation may be a terrible thing to experience, many of the behaviors for which people are being shamed are comparatively much worse. For example, is it bad to humiliate someone for being a white supremacist?

In practice, shaming has real legs—stories about bad behavior travel fast. The details that provide context for the behavior are often not ready at hand and, most of the time, no one is looking at the context to begin with. Even if it’s true that shaming has an important place in moral life, this will presumably only be true when the shaming is motivated by the actual facts—after all, a person shouldn’t be shamed if they don’t deserve to be.

The question of ‘deserving’ is important to the resolution of the question of whether shaming is ever morally defensible. The practice of shaming can be seen as retributive—the assumption being made is that the person being shamed for their actions is fully morally responsible for those actions. A variety of factors including environment, socialization, and biology contribute to, and perhaps, at least in some cases, even determine what a person does. If societies are going to maintain the position that retributivism is necessary for fairness, they had better be sure that they are using those retributivist tools in ways that are, themselves, fair. Similar actions don’t all have similar backstories, and being sensitive to the nuance of individual cases is important.

The motivation for shaming behavior tends to be bringing about certain kinds of results such as behavior modification and deterrence. The question of whether shaming actually changes or deters behavior is an empirical one. Given the potential costs, for the practice to be justified, we should be exceptionally confident that it actually works.

A careful look at the real intentions behind any particular act of shaming is warranted as well. Sometimes people’s intentions aren’t transparent even to themselves. Moral reflection and assessment are, of course, very important. Sometimes, however, the real motivation for shaming behaviors is power and political influence. It’s important to know the difference.

Even if the evidence allowed us to conclude that shaming adults is a worthwhile enterprise, it would not follow that what is appropriate for adults is appropriate for children. Young people are in a very active stage of self-creation and learning. Shaming behavior might be a recipe for lifelong self-esteem issues.

Finally, given that shaming has the potential for bringing about such negative consequences, it’s useful to ask: is there a better way to achieve the same result?

Racist, Sexist Robots: Prejudice in AI

Black and white photograph of two robots with computer displays

The stereotype of robots and artificial intelligence in science fiction is largely of a hyper-rational being, unafflicted by the emotions and social infirmities, like biases and prejudices, that impair us weak humans. However, there is reason to revise this picture. The more progress we make with AI, the more a particular problem comes to the fore: the algorithms keep reflecting parts of our worst selves back to us.

In 2017, research showed compelling evidence that AI picks up deeply ingrained racial and gender-based prejudices. Current machine learning techniques rely on algorithms interacting with people in order to better predict correct responses over time. Because the algorithms depend on human responses for their standard of correctness, they cannot detect when bias informs a “correct” response and when it does not. Thus, the best working AI algorithms pick up the racist and sexist underpinnings of our society. Some examples: the words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions; European American names were associated with pleasantness and excellence.
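To make the mechanism concrete, here is a minimal sketch of the kind of association test used in that line of research: comparing cosine similarities between word vectors, in the style of the Word Embedding Association Test. The word lists are illustrative, and the vectors below are random placeholders rather than real embeddings, so the printed gap is meaningless as written; with pretrained vectors (such as publicly available GloVe embeddings), gaps of this kind come out systematically skewed for gendered word pairs.

```python
import numpy as np

# Placeholder embeddings: random vectors standing in for real pretrained
# word vectors (e.g., GloVe or word2vec), which is where the learned bias
# actually lives. Swap these out for real embeddings to see the effect.
rng = np.random.default_rng(0)
words = ["man", "woman", "engineering", "math", "arts", "home"]
embeddings = {w: rng.normal(size=50) for w in words}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_a, attrs_b):
    """Mean similarity of `word` to attribute set A minus attribute set B."""
    sim_a = np.mean([cosine(embeddings[word], embeddings[a]) for a in attrs_a])
    sim_b = np.mean([cosine(embeddings[word], embeddings[b]) for b in attrs_b])
    return sim_a - sim_b

# A positive gap means "man" sits closer to career terms (relative to home
# terms) than "woman" does -- the asymmetry reported in the 2017 study.
career, home = ["engineering", "math"], ["arts", "home"]
gap = association("man", career, home) - association("woman", career, home)
print(f"career-vs-home association gap (man minus woman): {gap:+.3f}")
```

The point of the sketch is that nothing in the arithmetic is prejudiced; the skew enters entirely through the training text the vectors were fitted to, which is why a “neutral” algorithm can still return biased answers.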

In order to prevent discrimination in housing, credit, and employment advertising, Facebook has recently been forced to agree to an overhaul of its ad-targeting algorithms. The functions that determined how to target audiences for ads relating to these areas turned out to be racially discriminatory, not by design – the designers of the algorithms certainly didn’t encode racial prejudices – but because of the way they are implemented. The associations learned by the ad-targeting algorithms led to disparities in the advertising of major life resources. It is not enough to program a “neutral” machine learning algorithm (i.e., one that doesn’t begin with biases). As Facebook learned, the AI must have anti-discrimination parameters built in as well. Characterizing just what this amounts to will be an ongoing conversation. For now, the ad-targeting algorithms for these areas cannot take age, zip code, gender, or other legally protected categories into consideration.

The issue facing AI is similar to the “wrong kind of reasons” problem in philosophy of action. The AI can’t tell a systemic human bias from a reasoned consensus: both make us converge on an answer, and convergence is what the algorithm is trained to track. It is difficult to say what, in principle, the difference between a systemic bias and a reasoned consensus is. It is difficult, in other words, to give the machine learning instrument parameters to tell when there is the “right kind of reason” supporting a response and when there is the “wrong kind of reason” supporting it.

In philosophy of action, the difficulty of drawing this distinction is illustrated by a case where, for instance, you are offered $50,000 to (sincerely) believe that grass is red. You have a reason to believe, but intuitively this is the wrong kind of reason. Similarly, we could imagine a case where you will be punished unless you (sincerely) desire to eat glass. The offer of money doesn’t show that “grass is red” is true, and the threat doesn’t show that eating glass is choice-worthy. But each somehow promotes the belief or desire. For the AI, a racist or sexist bias leads to a reliable response in the way that the offer and the threat promote a behavior – it is disconnected from a “good” response, but it’s the answer to go with.

For International Women’s Day, Jeanette Winterson suggested that artificial intelligence may have a significantly detrimental effect on women. Women make up only 18% of computer science graduates and are thus largely left out of the design and direction of this new horizon of human development. This exclusion can exacerbate the prejudices inherent in the design of algorithms that will only become more critical to more arenas of life.

Christchurch: White Supremacism, Politics and Moral Evil

Photograph of candles and flowers arranged to mourn victims of the shootings

Almost three weeks ago, on Friday March 15, 2019, the world looked on in horror as news broke of a terrorist attack perpetrated by a white supremacist against a community of Muslims during Friday prayers at two mosques in Christchurch, New Zealand. The gunman, a 28-year-old Australian man, killed 50 people with a cache of weapons including semi-automatic rifles emblazoned with white nationalist symbols. He streamed film footage live on social media before and during the massacre. (Jacinda Ardern, New Zealand’s Prime Minister, has promised not to speak the terrorist’s name in public so as to deprive him of the fame he desires. Many news outlets in New Zealand and Australia have followed suit in declining to use his name, and in that spirit, this article will not use it either.)

This individual was not known to authorities or to security agencies in Australia or New Zealand, but subsequent searches show that he supported Australian far-right groups (now banned on social media) and was an active member of several online white supremacist forums. Prior to the massacre he published a 74-page “manifesto” online, titled “The Great Replacement,” in which he enthusiastically discusses various neo-fascist modi operandi, including creating an atmosphere of fear in Muslim communities. He describes himself as a “regular white man from a regular family” who “decided to take a stand to ensure a future for my people.” He said he wanted his attack on the mosques to send a message that “nowhere in the world is safe.”

The accused gunman mentioned Donald Trump in his manifesto, praising the US president as “a symbol of renewed white identity and common purpose.” Acting White House chief of staff Mick Mulvaney brushed off the association: “I don’t think it’s fair to cast this person as a supporter of Donald Trump” Mulvaney said, adding “This was a disturbed individual, an evil person.”

The notion of evil is evoked in particularly extreme and egregious circumstances. Doubtless Mulvaney is right about the gunman being disturbed, and perhaps about his being evil. Evil is a moral category that bears some examination; but statements of the ilk of Mulvaney’s, which emphasize the individual nature of the action, are challenged by another view. Since this horrific event there has been much soul-searching and a great deal of public debate in the gunman’s home country of Australia about possible causes or exacerbating factors for such an event, or at least about its possible relationship to wider public sentiments about issues like race and immigration. Many have criticized the level of public discourse in Australia, where some views espoused by mainstream media and mainstream politics seem to prefigure and presage many of the views expressed by the gunman in his manifesto.

It is being widely acknowledged that there has been a rise in anti-Muslim sentiment in mainstream political discourse; that incendiary platforms of anti-immigration and racist rhetoric have increasingly been employed not just by fringe right-wing political outfits (in Australia the One Nation party is a particularly egregious example) but also by the major political parties to drum up support and to create political advantage.

Examples are not difficult to find. In the days following the massacre Fraser Anning, a senator who entered Parliament on the One Nation ticket (Australia’s furthest-right, whitest, most nationalist minor party), was castigated for suggesting the mosque attack highlighted a “growing fear over an increasing Muslim presence” in Australian and New Zealand communities. These remarks are obviously abhorrent, and Anning will be formally censured in Parliament for them. But while Australian Prime Minister Scott Morrison was denouncing Anning, he was also explaining, or rather denying, remarks he himself is reported to have made in a strategy meeting as opposition immigration spokesman, in which he reportedly urged his colleagues to capitalize on the electorate’s growing concerns about “Muslim immigration”, “Muslims in Australia” and the “inability” of Muslim migrants to integrate.

And all this is familiar to the Australian public, who in the weeks before the massacre witnessed the government drumming up hysteria about refugees (most of whom are Muslim) by suggesting that they may be rapists and paedophiles, and that bringing them to Australia for medical treatment would deprive Australians of hospital beds. There is no doubt (even if Donald Trump denies it) that white supremacy is on the rise, that it is being fed by social media, and that the movement is feeling emboldened by the current political climate. Given this tinderbox of conditions, many believe that it was only a matter of time before that hatred again erupted in violence.

So how do we square claims about the social and political conditions that feed such hatred with claims about the individual evil of the nature and actions of the one gunman who committed this massacre?

The question must be about responsibility. Acknowledging the conditions which foment a general anxiety about race and immigration, and which embolden the already radicalized, is an important part of what we must, as a (local and global) community, come to terms with. Yet if we want to say that this was an act of evil perpetrated by an evil person, then we want it to be understood that he is fully morally culpable, not that he is simply an instrument or product of the zeitgeist. We therefore must be aware of those who want to use that view to deflect responsibility away from themselves or their vested interests, including politicians whose policies and public pronouncements too closely resemble the evildoer’s message of hate.

So how do we think about the notion of moral evil – and assess the moral usefulness of that concept here? There is a long history in philosophy of discussions of the nature of evil. Historically, evil has been a theological concept, and much philosophical discussion has tended to focus on ‘natural’ rather than ‘moral’ evil (natural evil is said to include bad events or bad things that happen over which agents have no control). Reasons for shunning the concept of evil in modern moral discourse include its air of the supernatural, and the worry that, by evoking a sense of mystery, it expresses a lack of understanding and of reason. In secular systems of philosophy, evil as a moral concept has often been eschewed in favor of the moral categories of ‘wrong’ and ‘bad.’

When people say, following such an event, that ‘it was an act of evil’, what do they mean? Even if the category of evil is evoked over and above badness or wrongness, there may be different understandings of its distinction from these categories. Is evil different in kind – that is, qualitatively different – from an act that is just morally wrong, or may be described as bad? If so, then there must be some element an evil act possesses that an act that is simply morally wrong does not. Yet it has not been easy for philosophers to pinpoint what that element is. It has been suggested, for example, by Hillel Steiner in his article “Calibrating Evil,” that the quality present in an evil act but absent from an act of ‘ordinary wrong’ is the pleasure the perpetrator derives from the act. On the other hand, it could be argued that evil is quantitatively different from acts of ordinary badness, and that as a moral category it serves to amplify our understanding of the moral terribleness of an action.

Regardless of your metaphysical commitments on these questions, one reason for turning to the concept of evil in moral philosophy is that the moral categories of 'wrong' and 'bad' are at times not enough to capture the moral significance of horrors that seem to exceed the limits of those concepts. Hannah Arendt famously wrote about the concept of evil in the context of her report on the trial of Adolf Eichmann, one of the chief architects and bureaucrats of the Jewish Holocaust. (As it happens, both her theory and her source material seem relevant here.) Arendt employed the idiom 'evil beyond vice' to name a kind of radical evil, one she saw as coming to fruition in the horrors of the Nazi death camps and the 'final solution'. She analyzed evil of that nature as a form of wrongdoing that cannot be captured by other moral concepts, that involves making human beings superfluous, and that is not done for humanly understandable motives like self-interest.

Though a great deal of philosophical ethics is normative – it gives us tools to discern, in a variety of situations, right from wrong and good from bad – following an event like the Christchurch massacre the role of ethics becomes partly descriptive: we use moral concepts to come to terms with, and to face honestly, the terribleness of such events.

The paradigm for evil since the Second World War has been the horror of the Nazi regime and the Jewish Holocaust. It is very disturbing that there is a link, and not an incidental one, between that paradigm of evil and the motivations of the Christchurch shooter. White nationalism is white supremacy, and white supremacy is neo-Nazism. There are ample pictures on the internet of the groups with which the Christchurch shooter identified, and countless groups like them, displaying swastikas and giving the Nazi salute. Even United States President Donald Trump notoriously claimed that there were 'fine people' marching with torches at a white supremacist rally in Charlottesville in 2017.

Calling this an act of evil may be meant, by some who use that designation, to distance it or cut it off from factors about which the speaker has reason to be defensive. Yet there is no reason to accept the implication that an evil act occurs in isolation from social and political forces. Matters of causality are difficult, and almost always opaque. Not every individual engaged in nationalist chat rooms or racist conspiracy theories will commit an atrocity, but the discussions in those spaces foment and galvanize hatred. And every politician's casually nationalist or offhand racist statement or policy adds to the normalization of the same sorts of messages that white supremacists promote. All of this matters because it helps create the atmosphere in which such unspeakable acts of evil take place.

Misogyny, ‘Purity,’ and Leggings at Notre Dame

Photograph of southern quad and Morrissey Hall at the University of Notre Dame

On Monday, March 25th, the University of Notre Dame's student-run newspaper The Observer printed a letter to the editor titled "The legging problem," from Maryann White, a self-described mother of four sons who had recently visited the Indiana campus. The letter scolded the university's student body for its attire, specifically criticizing the prevalence of form-fitting clothing. More specifically, it chastised the female students of Notre Dame for their clothing choices and suggested that women who wear leggings as pants are unavoidably leading men to ogle their bodies. As White explained, she was writing out of concern "For the Catholic mothers who want to find a blanket to lovingly cover your nakedness and protect you — and to find scarves to tie over the eyes of their sons to protect them from you!"

In addition to a variety of published responses (appearing in venues ranging from CNN to the Washington Post to the Today Show), more than a thousand students responded to a Facebook event organized by Irish 4 Reproductive Health, an on-campus organization promoting reproductive justice and access to sexual health resources, indicating their intent to wear leggings to class last week as a demonstration against White's misogyny. The Observer indicated that, in addition to the much-publicized controversy online, its offices received several dozen additional letters in response to the article, several of which it also published.

To be sure, there are many who might balk at my application of the word ‘misogyny’ to this story (“after all, isn’t ‘Maryann White,’ herself, a woman?”), but the term has benefited from an enriched treatment in recent philosophical work and is fitting, given White’s expectation that women at Notre Dame shoulder the burden of warding off the male gaze. (NOTE: the latent heteronormativity of White’s initial letter is also worth critiquing, but that’s an issue for another article.)

In her recent book Down Girl: The Logic of Misogyny, Cornell philosophy professor Kate Manne explains that misogyny is more than an emotional hatred of a particular gender; it is, instead, "primarily a property of social environments in which women are liable to encounter hostility due to the enforcement and policing of patriarchal norms and expectations" (19). On this view, behavior can be entirely emotionless (and not even directly intentional) yet still misogynistic if it promotes sexist ways of life; as Trent University professor Kathryn J. Norlock put it in her review of Manne's book, "If sexism offers planks, misogyny provides the nails."

Imagine instead that Maryann White had witnessed a mugging during her campus visit, then wrote a letter chastising students for not taking self-defense classes – anyone reading that newspaper would rightly complain that White had misplaced the blame for the crime onto the victim rather than the perpetrator. Even though her theoretical argument (that "if people aren't ready to defend themselves, then they can't be surprised when they're attacked") might not be, itself, a mugging, it nevertheless functions to create a context that helps muggers to mug by shifting the blame for the problem onto the victims. In reality, the only person at fault in a mugging is the mugger; so, too, with ogling or any other kind of sexual assault.

Down Girl is perhaps most famous for coining the term 'himpathy,' what Manne calls the "often overlooked mirror image of misogyny" evidenced by "the excessive sympathy shown towards male perpetrators of sexual violence" (197). If White's desire to patronizingly cover unknown women with blankets to protect their modesty is strangely misogynistic, then her felt need to find scarves to blindfold her own sons is similarly problematic. Both gestures assume that the sexualization of innocent women's bodies is a problem suffered mainly by the men doing the sexualizing, not by the women being objectified.

Of course, White’s expectations about women’s responsibilities (and men’s lack thereof) is far from unusual, particularly in an American religious context; in Linda Kay Klein’s book Pure: Inside the Evangelical Movement that Shamed a Generation of Young Women and How I Broke Free, she deconstructs what she calls the ‘purity culture’ that grew alongside American evangelical Christianity following the post-1980s ascendency of the Religious Right. In particular, Klein explores (primarily through interviews buttressed by research) how particular teachings about sexuality and gender, and particular interpretations of biblical passages (that see sexuality as a ‘stumbling block’ for one’s pre-marital ‘purity’) have led women, in particular, to feel burdened with guilt; as Klein explained of her own experience, “Intended to make me more ‘pure,’ all this message did was make me more ashamed of my inevitable ‘impurities’” (33).

So the sentiment expressed in Maryann White's letter is far from uncommon and, as Monica Hesse of the Washington Post put it, that's the real conversation this letter should provoke. Far more concerning than the specific worries of one mother is the culture-wide phenomenon of misogyny critiqued by thinkers like Manne and Klein.

Permalancing and What It Means for Work

A wooden desk holds up the equipment for an at home office

In early March, David Tamarkin, editor of the cooking website Epicurious, posted a tweet advertising an "amazing job" for an editorial assistant. While the position called for all the buzzword-worthy characteristics of a desirable employee – such as being "sharp, organized, [and] cooking-obsessed" – it also used a phrase many found confusing, uncomfortable, and possibly illegal: "full-time freelance." When asked what, exactly, it meant for a position to be both freelance and full-time, Tamarkin initially explained that it would be "Paid hourly at 40 hrs/week, no benefits." He later walked this back, stating that the position was in fact eligible for benefits. This update, however, came only after many Twitter users questioned the position's legality, some even tagging the New York State Department of Labor in their responses.


Inclusion, Artistic Expression, and the Victoria’s Secret Fashion Show

Photograph of two women in dresses and a man on a stage with a Victoria's Secret pink background

On December 3rd, 2018, Victoria's Secret put on its annual fashion show. Every year the event attracts millions of viewers; the runway-style presentation features popular entertainers and extravagant props, sets, and costumes. Yet despite the high-profile status of the participants, ratings for the event have declined over the years, and the 2018 show produced the lowest ratings in its more-than-twenty-year history.

The 2018 show faced criticism for its lack of commitment to diversity and inclusion and for comments made by the company's chief marketing officer, Ed Razek. When asked about the potential inclusion of transgender and plus-size models, Razek said:

“If you’re asking if we’ve considered putting a transgender model in the show or looked at putting a plus-size model in the show, we have …It’s like, why doesn’t your show do this? Shouldn’t you have transsexuals in the show? No. No, I don’t think we should. Well, why not? Because the show is a fantasy. It’s a 42-minute entertainment special. That’s what it is. It is the only one of its kind in the world, and any other fashion brand in the world would take it in a minute, including the competitors that are carping at us. And they carp at us because we’re the leader.”

Many viewed these comments as highly insensitive.  

A number of fairly high-profile people have responded to this conception of "fantasy" in noteworthy ways. In 2015, androgynous model Rain Dove took to social media to make a point about beauty standards. Dove's physical appearance does not conform to societal expectations – they have been hired to walk runways for both male and female fashion lines. Dove posted pictures of themself in Victoria's Secret lingerie, some with photos of Victoria's Secret models taped over their face and some without, to make the point that beauty, and fashion as art, need not comport with a binary understanding of gender.

Since 2016, supermodel Ashley Graham, well known for her activism on behalf of diversity in the fashion industry, has taken to social media to argue that the Victoria's Secret fashion show should be more inclusive. This year, she posted photos from her Ashley Graham for Addition Elle lingerie runway show, which featured models of all shapes and sizes, along with a hashtag seemingly directed at Victoria's Secret: #BeautyBeyondSize.

To many, it just seems like good common sense for brands to be more inclusive. Most people don't look like Victoria's Secret models; human beings come in a range of shapes and sizes and express their identities in different ways. It sure seems as if there is good money to be made by appealing to a broader range of people.

If the issue is considered from the perspective of what would be best for society at large, it seems fairly clear that the public good would be advanced by inclusion. Too many people look in the mirror and hate what they see. Our relationship to our bodies is an existential matter; when that relationship is unhealthy, it can feel as if we are trapped in a foreign and uncomfortable space. The hope is that this aspect of people's lives could be transformed for the better if society stopped sending the message that people can love themselves only if the body they occupy is shaped in a particular way.

What’s more, it would just be more convenient if the fashion industry were more inclusive. People would be happier if they knew they could reliably walk into a store and purchase attractive clothing that they would be comfortable wearing. As it is, people from a range of diverse groups must shop online or find specialty stores to meet their needs. This strikes many as discriminatory and unnecessary.

That said, even if we acknowledge all of these points, even if we think that change needs to happen, we still need to figure out how the change should happen, and the case isn’t as morally simple as it may appear. Fashion is a form of art. As we’ve seen, some people object to the way it gets made and to the form that it tends to take. If society objects to a form of art, does the obligation fall on the artist to stop making art of that type? Art can be a form of speech. Presumably, that’s part of the issue with the Victoria’s Secret fashion show—it sends the message that these and only these are the kinds of female bodies that are attractive. In his controversial comments, Razek essentially admitted as much—the fashion show is a fantasy and “Angel” bodies are the bodies worth fantasizing about. We might find that message ugly, but does it follow, then, that Razek should change his message or even quit speaking entirely? We rightly value free speech and freedom of artistic expression. What’s more, the ugly part of the message surely isn’t the only part of the message. Even if fashion shows aren’t your cup of tea, even if you find them objectifying, it’s difficult to deny that beauty of a certain type is being celebrated there. Is it wrong to celebrate that beauty because doing so fails to celebrate other forms of beauty?

There are several options open to consumers who would like to see the fashion industry change. First, people interested in fashion can create their own art – art that is geared toward a more diverse clientele or that is committed to celebrating the beauty of a diverse range of bodies. This suggestion is intuitively appealing, but it's also important to recognize the incredible difficulty a startup fashion line would have competing with a fashion giant like Victoria's Secret. Industries and institutions engage in gatekeeping, and those with power have little interest in sharing it when doing so doesn't serve their interests. Humans are not unlike other animals in that we engage in sexual competition for mates; we use fashion, in part, to fabricate peacock feathers as a sexual display to potential mates. The fashion industry wields tremendous power as the puppeteer guiding the motions of important human interactions, and the power players are unlikely to hand the strings to newcomers with subversive ideas about how – or even why – the puppets should move. That said, no alternative message will ever gain ground unless some artist, or set of artists, tries to advance one.

Another, more accessible approach is for consumers to speak with their wallets. Some existing and successful companies are waking up to the fact that there is a sizeable customer base that would like to see fashion change dramatically, and consumers of all shapes, sizes, and presentations can buy fashion from companies that share their values.