
On Journalistic Malpractice

photograph of TV camera in news studio


In 2005, then-CNN anchor Lou Dobbs reported that the U.S. had suffered over 7,000 cases of leprosy in the previous three years and attributed this to an “invasion of illegal immigrants.” Actually, the U.S. had seen roughly that many leprosy cases over the previous three decades, but Dobbs stubbornly refused to issue a retraction, instead insisting that “If we reported it, it’s a fact.”

In 2020, then-Fox-News anchor Lou Dobbs reported that the results of the election were “eerily reminiscent of what happened with Smartmatic software electronically changing votes in the 2013 presidential election in Venezuela.” Dobbs repeatedly raised questions and amplified conspiracy theories about Donald Trump’s loss, granting guests like Rudy Giuliani considerable airtime to spread misinformation about electoral security.

It’s generally uncontroversial to think that “fake news” is epistemically problematic (insofar as it spreads misinformation) and that it can have serious political consequences (when it deceives citizens and provokes them to act irrationally). Preventing these issues is complicated: any direct governmental regulation of journalists or news agencies, for example, threatens to run afoul of the First Amendment (a fact which has prompted some pundits to suggest rethinking what “free speech” should look like in an “age of disinformation”). To some, technology offers a potential solution as cataloging systems powered by artificial intelligence aim to automate fact-checking practices; to others, such hopes are ill-founded dreams that substitute imaginary technology for individuals’ personal responsibility to develop skills in media literacy.

But would any of these approaches have been able to prevent Lou Dobbs from spreading misinformation in either of the cases mentioned above? Even if a computer program would have tagged the 2005 leprosy story as “inaccurate,” users skeptical of that program itself could easily ignore its recommendations and continue to share the story. Even if some subset of users choose to think critically about Lou Dobbs’ 2020 election claims, those who don’t will continue to spread his conjectures. Forcibly removing Dobbs from the air might seem temporarily effective at stemming the flow of misinformation, but such a move — in addition to being plainly unconstitutional — would likely cause a counter-productive scandal that would only end up granting him even more attention.

Instead, rather than looking externally for ways to stem the tide of fake news and its problems, we might consider solutions internal to the journalistic profession: that is, if we consider journalism as a practice akin to medicine or law, with professional norms dictating how its practitioners ought to behave (even apart from any regulation from the government or society at large), then we can criticize “bad journalists” simply for being bad journalists. Questions about the epistemic or political consequences of bad journalism are important, but they come after this prior question about professional standards and practice.

This is hardly a controversial or innovative claim: although there is no single professional oath that journalists must swear (along the lines of those taken by physicians or lawyers), it is common for journalism schools and employers to promote codes of “journalistic ethics” describing standards for the profession. For example, the Code of Ethics for the Society of Professional Journalists is centered on the principles of accuracy, fairness, harm-minimization, independence, and accountability; the Journalism Code of Practice published by the Fourth Estate (a non-profit journalism watchdog group) is founded on the following three pillars:

  1. reporting the truth,
  2. ensuring transparency, and
  3. serving the community.

So, consider Dobbs’ actions in light of those three points: insofar as his 2005 leprosy story was false, it violates pillar one; because his 2020 election story (repeatedly) sowed dissension among the American public, it fails to abide by pillar three (notably, because it was filled with misinformation, as pointedly demonstrated by the defamation lawsuit Dobbs is currently facing). Even before we consider the socio-epistemic or political consequences of Dobbs’ reporting, these considerations allow us to criticize him simply as a reporter who failed to live up to the standards of his profession.

Philosophically, such an approach highlights the difference between accounts aimed at cultivating a virtuous disposition and those that take more calculative approaches to moral theorizing (like consequentialism or deontology). Whereas the latter are concerned with a person’s actions (insofar as those actions produce consequences or align with the moral law), the former simply focus on a person’s overall character. Rather than quibbling over whether or not a particular choice is good or bad (and then, perhaps, wondering how to police its expression or mitigate its effects), a virtue theorist will look to how a choice reflects on the holistic picture of an agent’s personality and identity to make ethical judgments about them as a person. As the famous virtue theorist Aristotle said, “one swallow does not make a summer, nor does one day; and so too one day, or a short time, does not make a man blessed and happy.”

On this view, being “blessed and happy” as a journalist might seem difficult — that is to say, being a good journalist is not an easy thing to be. But Aristotle would likely point out that, whether we like the sound of it or not, this actually seems sensible: it is easy to try and accomplish many things, but actually living a life of virtue — actually being a good person — is a relatively rare feat (hence his voluminous writings on trying to make sense of what virtue is and how to cultivate it in our lives). Professionally speaking, this view underlines the gravity of the journalistic profession: just as being a doctor or a lawyer amounts to shouldering a significant responsibility (for preserving lives and justice, respectively), to become a reporter is to take on the burden of preserving the truth as it spreads throughout our communities. Failing in this responsibility is more significant than failing to perform some other jobs: it amounts to a form of malpractice with serious ethical ramifications, not only for those who depend on the practitioner, but for the practitioner themselves as well.

COVID Vaccines and Primary Care

photograph of elderly man masked in waiting room

Dr. Jason Valentine, a general practitioner in Alabama, has decided to no longer treat unvaccinated patients. Starting October 1st, that is. At the beginning of August, Valentine’s clinic made the announcement, clarifying that his personal rule applied to both current patients and new patients. So long as you are unvaccinated, Dr. Valentine will not be seeing you. When asked why he was choosing not to treat unvaccinated patients, Valentine said “COVID is a miserable way to die and I can’t watch them die like that.” In Alabama, the state with the highest number of new COVID cases per day, such a sentiment is understandable. But is it ethical?

As most people know, doctors are bound by a creed called the Hippocratic Oath. The name of this oath comes from the historical figure of Hippocrates, a fifth-century Greek physician, to whom the oath is traditionally attributed (although he was likely not the original author). The Hippocratic Oath is the earliest-known source of many central ideas of medical ethics that we still hold to today: e.g., the patient’s right to privacy, the obligation of the physician to not discriminate between the poor and the rich, and, most famously, the pledge to do no harm.

Doctors today continue to take a version of the Hippocratic Oath, though the oath has undergone major alterations in the past 2500 years. Still, the pledge to “do no [intentional] harm” remains. Major debates have been carried out historically over what exactly falls under the pledge to “do no harm” — that is, under what conditions are doctors guilty of breaking their oaths? More specifically, is Dr. Valentine breaking the Hippocratic Oath by refusing to see unvaccinated patients?

One argument for thinking that Valentine is breaking his oath is that refusing to see unvaccinated patients constitutes an illegitimate act of medical discrimination. Medical doctors have, historically, been stoically determined to ignore unpalatable particulars about the individuals they were treating. For example, during the Civil War, doctors in both the Union and the Confederate armies treated soldiers injured on the battlefield, regardless of their allegiance (excluding, sadly, Black soldiers on either side). During the Second World War, British surgeons operated on Nazi prisoners of war, in many cases saving their lives. Under the Geneva Convention, doctors are bound to treat soldiers from their own army and enemy soldiers impartially — enemy soldiers are not to receive worse treatment or a lower medical priority because of their military allegiance. Surely, then, if the Geneva Convention would forbid a doctor from refusing to see patients who were Nazis, it would also forbid doctors from refusing to treat patients who had not been vaccinated against a dangerous and highly contagious disease?

But there is legal precedent that complicates this verdict, as well. Specifically, doctors are allowed to, and frequently do, refuse to see children who have not received their recommended childhood vaccines and do not have a medical reason barring them from receiving vaccines. Reasons for these policies often include the extreme vulnerability of other patients that the voluntarily unvaccinated may encounter in the office, including young children who are immunocompromised and babies who have not yet received all of their vaccines. Another consideration is that many childhood vaccines prevent infection from nearly-eradicated diseases like measles. When children are not vaccinated against these illnesses, outbreaks stand a higher chance of spreading, thereby resurrecting an almost-defeated enemy.

For these reasons, one may be inclined to praise the doctor’s choice. If people are barred from seeing their general practitioner, this might motivate the unvaccinated to get vaccinated, undoing some of the damage done by rampant misinformation regarding vaccine safety and efficacy. However, consider a (hypothetical) doctor who refused to treat patients who drank too much alcohol, or who refused to exercise. In these cases, doctors would surely be seen as refusing to do their primary job: assuring the health of their patients to the best of their (possibly limited) abilities.

Some philosophers, like Cass Sunstein, refer to actions and laws like these as “paternalism”: acts of mild coercion for the sake of protecting the coerced. Such measures are sometimes seen as acceptable — seatbelt laws and cigarette taxes are commonly accepted paternalistic policies aimed at mildly coercing safer behavior. But when the coercion becomes harmful, or potentially harmful, these measures are generally seen as morally impermissible. For example, holding someone at gunpoint until they throw away all of their cigarettes may be incredibly effective, and maybe even good for the smoker in the long run, but it is surely morally wrong if anything is. The difference between acceptable paternalistic measures and harmful coercion is usually understood as a difference in potential harm and in the degree of autonomy the coerced person retains. When laws increase the tax on cigarettes, smokers may be mildly financially harmed, but this generally will not amount to anything financially destructive. Generally, they retain the choice between taking on a small additional financial burden and giving up smoking. In the gun-to-the-head case, the smoker no longer (meaningfully) retains a free choice. She must give up smoking or face her own death. Anything less than compliance, in this case, results in the most extreme kind of harm.

Clearly there will be many instances of coercive measures that fall somewhere between these two extremes. This raises a tough question for Dr. Valentine: does refusing to treat voluntarily unvaccinated patients constitute a case of permissible paternalism, or impermissible harmful coercion? One reason for thinking that such a decision may not result in real harm is the abundance of doctors most people have access to. Surely needing to switch primary care doctors is merely an inconvenience, and not a significant harm. However, there are complicating factors. Many people have insurance plans that severely limit which doctors they can see. Additionally, if Valentine is allowed to refuse unvaccinated patients, there is nothing stopping the other doctors in his area from adopting the same rule, in which case someone could be effectively denied all medical care. An inability to access a primary care doctor seems like a more severe harm than the mild coercion of a paternalistic cigarette tax.

There is no easy ethical analysis to give of Dr. Valentine’s decision. While we can surely sympathize with the protocol, and hope it leads to increased vaccination rates, we do not want large swaths of the general public living without a primary care doctor. As with many other aspects of COVID-19, ethicists have their work cut out for them here, mapping brand-new territory.

Risk, Regret, and Sport

photograph of two soccer players competing in air for ball

The legendary soccer player Denis Law recently announced that he has been suffering with dementia for several years. Law attributes his dementia to heading soccer balls. We’ve known for decades – in 2002 Jeff Astle’s death from dementia was linked to heading – that there is a link between heading and brain damage.

Other sports face similar issues. American football’s problem with Chronic Traumatic Encephalopathy (CTE) is well documented. CTE can lead to, amongst other things, aggression, depression, and paranoia that can arise in people in their 20s; it can also bring memory loss, dementia, and eventually death. Other sports, like rugby and hockey, have their own links to CTE and their own problems with brain damage.

Broadly, people who partake in sports that involve collisions (including things like headers) are at risk of brain injury. This is true especially when playing at higher levels of competition (as opposed to playing occasional pickup games), where impacts are bigger and players spend more time playing their sport.

How should players think about this risk? Last year, Jamie Carragher, a former top-level player for Liverpool FC and current pundit, said: “If I suffer from dementia in my old age and research suggests that is because of my football career, I will have no regrets.” Carragher recognizes that we are now better informed about the risks and need to make changes to minimize the risks (here is one: fewer headers in training), but he thinks the risks are still worthwhile, and that we must keep some of the risky elements in football: players should still be able to challenge each other in ways that risk sickening head-clashes.

I think Carragher’s thoughts are widely shared. Playing soccer, or rugby, or football is worth the risk of dementia later in life, so much so that players won’t regret playing their sport. But I think this line of thought rests on some troubling assumptions.

The first is the temptation to make a false comparison between the ordinary risks of sport and brain damage. We should obviously grant that some injuries are acceptable risks. I played rugby for over a decade, and I spent several months with sprained ankles and bad shoulders. It’s no surprise that I now occasionally get the odd ache. Almost every sport carries some risk of injury, and if we grant (as I think we should) that playing sports can be a meaningful part of our lives, these risks should not get in the way of us playing. When Carragher says that “there was a danger of injury every time I played,” he is right, but he misses the point. These brain injuries are not the same as (to take his example) a broken leg. They are highly damaging – far more long-term and life-changing than a broken leg usually is.

This leads to a deeper point. Living with dementia can involve a loss of awareness, a loss of memory, and confusion; CTE can lead to personality changes. We might reasonably think of these as transformative experiences. L. A. Paul developed the notion of a transformative experience. To take one of her examples, it’s impossible to know what it is like to be a parent – what it is to love your offspring, what it is to have such a particular duty of care – before becoming a parent. We can only know what it is like to be a parent by becoming a parent. But that means that choosing to become (or not become) a parent is always shrouded in ignorance. (Her other major example is becoming a vampire: we can’t tell what it will be like to be immortal creatures of the night.)

Perhaps the decision to play a sport that might lead to a serious brain injury involves some element of a transformative experience: you can’t know what your life would be like if you had CTE or dementia – confused, with a ruined memory and a changed personality – so perhaps you shouldn’t be so keen to declare that you won’t regret it. You might not feel that way when dementia takes its grip.

Here is another problem. Carragher’s line of thought also assumes that regret lines up with justification. That is to say, if you won’t regret something, then you were justified in taking that risk – you were right to do it. But, as R. Jay Wallace has argued, this isn’t always the case. In Wallace’s example, a young girl might get pregnant. She was far too young, and both she and her child would have had a better time of it had she waited several more years. Her decision to have a child was unjustified. Yet she surely cannot regret her decision: after all, she loves this child.

It isn’t surprising that people who have dedicated decades to their sports – sports that make their lives meaningful – won’t regret what they have done. But that doesn’t mean they made the right choice. There are plenty of other meaningful options out there: like taking up sculpting, squash, or chess.

Yet thinking about regret and justification also brings up something in favor of taking these risks: some people will have nothing to regret at all because brain damage is far from guaranteed, even in football. Bernard Williams argued that we might sometimes take a risk and that risk will be justified by the results. If you abandon your wife and children to set off on a career as a painter, you might have made a grave error if you fail in your career – but perhaps it will all have been worth it if you succeed. Likewise, Carragher, if he avoids dementia, might have been perfectly justified in playing soccer. Others might not be so lucky.

Sports play a meaningful role in many of our lives, and we are all happy to live with some level of risk. But we shouldn’t just say: “I won’t regret playing, even if I get dementia.” To note that you wouldn’t regret playing just because of a broken leg is to compare chalk and cheese; we don’t really know what our lives would be like with dementia, so we shouldn’t be confident in such assertions; and even if we end up with no regrets, that doesn’t mean we did the right thing. This discussion requires serious conversations about risk management and the meaningfulness of sport – it shouldn’t be conducted at the level of glib sayings.

On Objectivity in Journalism

blurred image of crowd and streetlights


Over the past few years, a number of left-leaning journalists have publicly questioned the notion of objectivity as an ideal for journalists and journalistic practice. The discussions that ensued have generated a lot of heat, but for the most part not too much light. That’s why I was delighted by the latest episode of Noah Feldman’s podcast, Deep Background, which featured a lengthy interview with journalist Nikole Hannah-Jones, who is perhaps best known as the creator of The New York Times’s The 1619 Project. In that interview, Hannah-Jones and Feldman develop a nuanced account of the place of objectivity in journalism. I will discuss this account in due course. Before I do, I would like to unpack the multiple meanings of “objectivity” as it is used to describe journalists and their art.

The word “objectivity” is normally applied to two things: persons and facts (or truths). An objective person is one who has three attributes: neutrality, even-handedness, and disinterestedness. A neutral person has no prior or preconceived views about a particular subject; an even-handed person is disposed to give due weight to both sides in a factual dispute; and a disinterested person has no strong interests in one side or the other being the correct one. Thus, objectivity as an attribute of persons involves (the lack of) both beliefs and desires. It is in the name of promoting the appearance of this kind of objectivity that some journalists think it is improper for them to engage in political activity, or even to vote.

When applied to facts or truths, as in the oft-repeated phrase “objective truth,” the word is generally taken to mean something about either empirical verifiability or “mind-independence.” Take empirical verifiability first. In this sense, “objective” truths are truths that can be directly verified by the senses, and so are part of a public world which we share with other sentient creatures. In this sense, “objective” truths contrast with both truths about our mental states, such as that I like the taste of chocolate ice cream, and “metaphysical” truths, such as that God is all-powerful. Mind-independence is a slippery concept, but the basic idea is that mind-independent truths are truths which don’t depend on anyone’s beliefs about what is true. That it is raining in Durham, North Carolina would be true even if everyone believed it false. In this sense, “objective” truths contrast with conventional truths, such as truths about grammar rules, since such rules depend for their very existence on the attitudes, and in particular the beliefs, of writers and speakers. In this sense, however, “objective” truths include both metaphysical truths and truths about mental states. To see the latter point, consider that the fact that I like chocolate ice cream would be true even if no one, myself included, believed it to be true. Thus, truths about personal taste can count as subjective in one sense, but objective in another.

With some exceptions I will discuss shortly, criticisms of objectivity rarely cast doubt on the existence of objective truths. Instead, they target the ideal of the journalist as a neutral, even-handed, and disinterested observer. The criticisms are two-fold: first, that adopting the objective stance is impossible, since all journalists use their prior beliefs and interests to inform their decisions about what facts to include or highlight in a story, and if they have the discretion, even what stories to write. Second, since a perfectly objective stance is impossible, trying to adopt the stance constitutes a form of deception that causes people to invest journalists with a kind of epistemic authority they don’t and couldn’t possess. Better to be honest about the subjective (basically, the psychological) factors that play a role in journalistic practice than to deceive one’s readers.

In the interview with Feldman, Hannah-Jones echoed these criticisms of objectivity. She then distinguished between two activities every journalist engages in: fact-finding and interpretation. In the fact-finding phase, she said, journalists can and must practice “objectivity of method.” What she apparently means to pick out with this phrase are methods by which journalists can hope to access objective truth. Such methods might include interviewing multiple witnesses to an event or searching for documentary evidence or some other reliable corroboration of testimony; they might also include the institutional arrangements that newsrooms adopt — for example, using independent fact checkers. However, she and Feldman seemed to agree that interpretation — variously glossed as working out what facts “mean” or which are “important” — is a subjective process, inevitably informed by the journalist’s prior beliefs and desires.

Here are two observations about Hannah-Jones’s account. First, the methods used to access objective truth in the fact-finding stage tend to force journalists to at least act as if they are objective persons. For example, interviewing multiple witnesses and weighing the plausibility of all the testimony is the kind of thing an even-handed observer would do. Looking for corroborating evidence even when one wants a witness’s testimony to be true emulates disinterestedness. This doesn’t mean that one has to be objective in order to practice journalism well, but it does suggest a role for objectivity as a regulative ideal: when we want to know how to proceed in fact-finding, we ask how an objective person would proceed. And to the extent that we can emulate the objective person, to that extent is the epistemic authority of the journalist earned.

Second, it seems to me that “interpretation” involves trying to access objective truth, or doing something much like it. Feldman and Hannah-Jones used two examples to illustrate the kinds of truths that the process of interpretation is aimed at accessing: truths about people’s motives, or why they acted (as opposed to truths about their actions themselves, which are within the domain of fact-finding), and causal truths, like that such-and-such an event or process was the key factor in bringing about some state of affairs. But such truths are objective in at least one sense. Moreover, even truths about motives, while subjective in not belonging to the public world of the senses, can be indirectly verified using empirical methods very similar to those used to access directly empirically verifiable truths. These are methods lawyers use every day to prove or disprove that a defendant satisfied the mens rea element of a crime. Since interpretation involves accessing objective truths or using empirical methods to access subjective ones, and since the methods of accessing objective truths involve emulating an objective person, interpretation at least partly involves striving to be objective.

This can’t be all it involves, however: what’s important is not equivalent to what’s causally efficacious. Here is where Feldman and Hannah-Jones are undoubtedly correct that a journalist’s attitudes, and in particular her values, will inevitably shape how she interprets the facts. For example, a commitment to moral equality may cause a journalist to train their focus on the experience of marginalized groups, that value informing what the journalist takes to be important. A merely objective person would have no idea of what facts are important in this moral sense.

Thus, a journalist must and should approach her practice with a complicated set of attitudes: striving to be objective (to be like an objective person) about the facts, while at the same time inevitably making choices about which facts are important based at least in part on her values. This is part of what makes journalism a difficult thing to do well.

Conservation and the Weight of History

photograph of statues in front of Philadelphia Art Museum


In September of 2020, the National Trust, an organization that preserves more than two hundred historical sites scattered across the U.K., published a lengthy report on the material legacy of British colonialism. The report specifically identifies ninety-three sites under its purview that were built, occupied, or otherwise connected to the slave owners, bureaucrats, merchants, and politicians who drove the Atlantic slave trade. The vestiges of imperialism, the report implies, can be found not just in bombastic public monuments, but in the quaint country estates and manicured parkland. Blood money taints everything from private art collections (which contain curiosities pillaged from India and Africa) to luxury furniture (often made from tropical hardwoods like mahogany, which were invariably harvested by slaves).

Hilary McGrady, the director of the Trust, notes in a blog post accompanying the report that

history can also be challenging and contentious. It is surely a sign of confidence, integrity and pride that while we can celebrate and enjoy history we can also explore and acknowledge all aspects of it. The National Trust is at its best when we capture this complexity – when we present facts and material evidence in ways that inspire curiosity, inquiry, learning and sharing.

History has certainly proved to be contentious; Charles Moore, the former editor of the Daily Telegraph, accuses the Trust of bowing to Black Lives Matter, which he refers to as a “semi-racist political movement with extraordinary doctrines who love, among other things, knocking down statues,” and laments “that our greatest conservation body should be, as it were, taking the knee to them.”

Others believe that the Trust isn’t going far enough. Their new programming acknowledges the impact of imperialism, but it isn’t clear whether or not they’ll take the next step of repatriating artifacts. In an article on the Trust for The New Yorker, Sam Knight interviews British historian William Dalrymple, who explains,

If you were to gather a group of National Trust supporters in a room and say to them, ‘We have some examples here of looted Jewish art treasures taken by the Nazis that have ended up in our properties. Should we hold on to them? Or should we give them back to their owners, who now live in L.A.?’ There would be a hundred-per-cent vote, of course. Most British people simply are not aware, or haven’t processed. . . that this is the same thing. That this is another conquered nation, whose art treasures now sit in British museums and in British country houses.

Most visitors to Trust sites find these historical parallels difficult to process, because, as Knight argues, the National Trust fulfills “at least two large and subtly conflicting roles, as a custodian of collective memory and as a purveyor of weekend leisure. The Trust aims for total inclusion. Its slogan is ‘For everyone, for ever’ . . . The Trust hates to disappoint people. It hates, like any great British institution, to cause offense.” But is the point of historical sites to provide comforting narratives that bolster patriotism, or to display the stark and often ugly realities of history, offensive as they may be? Many of us understand history as inert, a tranquil landscape that we gaze at appreciatively from a safe distance, but we come to that landscape with baggage in hand. All conservation bodies, not just the Trust, have to reckon with what the public wants from history, how they want it to act upon them (or, in some cases, not act upon them).

Novelist Zadie Smith explored the weight of history in an essay for The London Review of Books. Smith argues that “Public art claiming to represent our collective memory is just as often a work of historical erasure and political manipulation. It is just as often the violent inscription of myth over truth, a form of ‘over-writing’—one story overlaid and thus obscuring another—modeled in three dimensions.” She’s speaking about monuments here, which are typically built with a particular narrative of the past in mind, but the way we maintain and present historical sites is another form of storytelling.

Smith, who has lived on both sides of the Atlantic, acknowledges the rampant erasure of slavery in the United States, but in the U.K., she sees a story not of

erasure but of something closer to perfect oblivion. It is no exaggeration to say that the only thing I ever learned about slavery during my British education was that ‘we’ ended it . . . The schools were silent; the streets deceptive. The streets were full of monuments to the glorious, imperial, wealthy past, and no explanation whatsoever of the roots and sources of that empire-building wealth.

Smith’s experience casts doubt on the ability of the Trust, or any single organization, to act as a definitive “custodian of collective memory” when so much of our history goes unacknowledged. Even the idea of total inclusion, which makes up half of the Trust’s slogan, feels like an attempt to smooth over division and inequalities. Smith sees a potential remedy to historical amnesia in artists like Kara Walker, whose work depicts the grotesque absurdities of slavery. Walker famously interrogated the serene public monuments of imperialism with her piece A Subtlety, an enormous sculpture of a black woman made from white sugar (a commodity that drove much of the slave trade and helped beautify those ninety-three homes identified by the Trust).

One Walker drawing, enigmatically titled “What I want history to do to me,” elicits a polyphonous response from Smith. She reflects,

What might I want history to do to me? . . . I might ask it to urgently remind me why I’m moving forward, away from history. Or speak to me always of our intimate relation, of the ties that bind—and indelibly link—my history and me. I could want history to tell me that my future is tied to my past, whether I want it to be or not. . . . I might want history to show me that slaves and masters are bound at the hip. That they internalize each other. That we hate what we most desire. That we desire what we most hate. That we create oppositions—black white male female fat thin beautiful ugly virgin whore—in order to provide definition to ourselves by contrast. I might want history to convince me that although some identities are chosen, many others are forced. Or that no identities are chosen. Or that all identities are chosen . . . All of these things. None of them. All of them in an unholy mix of the true and the false.

When we approach the past, we come with many contradictory and often submerged desires, as Smith makes clear. British historical sites will continue to draw in tourists who want to snap photos of sprawling gardens and elegant drawing rooms. We can only hope that the National Trust’s admirable recognition of colonialism will start a new conversation about the many uses and misuses of history.

Afghanistan: The Graveyard of Liberal Interventionism?

image of topographic map of Afghanistan and surrounding countries

After two decades, the war in Afghanistan (the so-called “graveyard of empires”) has been lost. The sudden defeat of the former Afghan government has sparked an outpouring of recrimination. It is time to take a step back and examine the ethical and political doctrine that was used to justify our nation-building mission in Afghanistan for so many years: liberal interventionism.

Liberal interventionists believe that countries such as the U.S. have an ethical and political duty to spread freedom across the globe — to nation-build, promote and defend democracy, political and civil rights, and international institutions. They argue that diplomatic, economic, and even military intervention can be justified in the name of liberty and humanitarianism.

The doctrine has deep philosophical roots. In his essay “A Few Words on Non-Intervention,” the liberal philosopher John Stuart Mill argued in favor of an ethical duty to intervene against “barbarous” states in order to bring liberty to the people of those states.

More recently, the philosopher Martha Nussbaum has defended some interventionism on humanitarian grounds. Her view is that intervention in a foreign country’s affairs can be justified or even ethically required if it is necessary for providing individuals with “basic capabilities,” such as the capability to live, be in good health, affiliate with others, and to maintain one’s bodily integrity.

Prior to the 2001 NATO invasion of Afghanistan, a liberal interventionist case for war was relatively clear. Under Taliban rule, Afghan citizens faced widespread human rights abuses. For example, women were forbidden from independent travel, from working, and from education over the age of eight. Women were also subject to cruel and arbitrary punishment. In 1996, a woman had the tip of her thumb cut off for the “crime” of wearing nail varnish. Given the appalling human rights abuses of the Taliban, Nussbaum’s conditions for intervention were likely met in Afghanistan.

Of course, it does no good to invade countries only to have them return to tyranny the next week. So intervention can only be justified, on the liberal interventionist view, if it produces lasting progress on human rights, democracy, and liberty. This suggests we have a duty to see our interventions through — to stay in Afghanistan, indefinitely if need be, until our humanitarian goals are secure.

If there is such an ethical duty, we have violated it in withdrawing our military forces. The liberal-democratic Afghan government was too weak to resist the Taliban. Now Afghan democracy, education, healthcare, and civil rights are all likely to be severely weakened or destroyed entirely. Before the recent withdrawal of international troops, Yamini Mishra, Amnesty International’s Asia-Pacific Director, claimed the move “threatens to undermine more than twenty years of progress for women and girls.” The shocking victory of the Taliban appears to be proving her right.

The Taliban have claimed they will abstain from human rights abuses of the past, that they will offer amnesty to those who worked with the former government, and that they will respect freedom of speech, expression, and women’s rights in a manner compatible with Sharia law. This, it has been claimed, is the “Taliban 2.0.” If these promises are kept, then withdrawal may be compatible with liberal interventionist principles. But, 25 years ago, as the Taliban seized Kabul, similar promises were made and quickly broken. To trust the Taliban to maintain the former Afghan government’s respect for human rights is likely wishful thinking. The doctrine of liberal interventionism would appear, then, to condemn Biden’s recent withdrawal.

Most Americans, however, think that withdrawal was the right choice. The U.S. fought in Afghanistan more than five times longer than in World War Two. It is a war that continued to take lives and cost billions each year, while our presence did not seem to be contributing to any progress. President Biden, during the withdrawal announcement, claimed that, “‘Just one more year’ of fighting in Afghanistan is not a solution, but a recipe for fighting there indefinitely.” Biden also appealed to the value of national self-determination, saying that, “It’s up to Afghans to make the decision about the future of their country.”

Despite facing widespread criticism in the media, Biden’s perspective is widely shared; seventy-three percent of voters approved of withdrawal while only twenty-three percent disapproved. Unusually, in this period of hyper-partisanship, a majority of both Republicans and Democrats approved of the withdrawal.

The popularity of the withdrawal might make it seem that America has lost whatever confidence it had in liberal interventionism. This conclusion may, however, be premature. Arguably, Biden’s withdrawal was ethically permissible even according to liberal interventionism.

Mill, the original proponent of liberal interventionism, argued that intervention is only justifiable in nations in which it will work. He writes:

“the answer I should give to the question of the legitimacy of intervention is, as a general rule, No. The reason is, that there can seldom be anything approaching to assurance that intervention, even if successful, would be for the good of the people themselves. The only test possessing any real value, of a people’s having become fit for [liberal intervention] is that they, or a sufficient portion of them to prevail in the contest, are willing to brave labour and danger for their liberation. […] the evil is, that if they have not sufficient love of liberty to be able to wrest it from merely domestic oppressors, the liberty which is bestowed on them by other hands than their own, will have nothing real, nothing permanent. No people ever was and remained free, but because it was determined to be so.”

Biden’s controversial recent speech hit similar notes:

“American troops cannot and should not be fighting in a war and dying in a war that Afghan forces are not willing to fight for themselves. We spent over a trillion dollars. We trained and equipped an Afghan military force of some 300,000 strong, incredibly well-equipped, a force larger in size than the militaries of many of our NATO allies. We gave them every tool they could need. We paid their salaries, provided for the maintenance of their air force. […] We gave them every chance to determine their own future. What we could not provide them was the will to fight for that future.”

Biden’s speech has been widely (and plausibly) interpreted as an ugly attempt to pin the blame for defeat on the Afghan people. But a more generous way to understand Biden is that he is making Mill’s point. If, after twenty years, we have failed to cultivate an Afghan government capable of resisting the Taliban, then there is no reason to think we would ever succeed. On Mill’s account, the blame falls not on the Afghan people, but on America and its allies for intervening while failing to foresee that nation-building would fail. On this reading, the withdrawal was simply a recognition of the painful fact that our nation-building project was never going to succeed, and was therefore never legitimate.

What the Moral Tragedy in Afghanistan Teaches

photograph of soldiers running in the desert

The current situation in Afghanistan brings to the forefront several moral issues which, taken together, create a complex situation with a great deal of uncertainty about how to proceed. With the clock ticking down until several nations, particularly the United States, pull out entirely from the country, the lives of millions hang in the balance, particularly those fleeing the Taliban for fear of reprisals, especially women. But how exactly should we understand this issue, morally speaking? What moral responsibilities are borne by the nations that actively occupied and developed the Islamic Republic of Afghanistan? Is this a moral tragedy? And if so, what should be done about it?

First, it is worth recounting what led us here. Since 2001, the United States and other NATO allies have occupied Afghanistan after toppling the Taliban regime. These were actions taken in response to the Taliban harboring terrorist groups involved in 9/11. But building and supporting a new republic has been costly. Thousands of military personnel have lost their lives since then, and the cost of occupation, development of the Islamic Republic of Afghanistan, and support in building the Afghan National Security Forces has been in the trillions.

In the meantime, the past twenty years have allowed a burgeoning democracy to exist in Afghanistan. In contrast to the Taliban rule of the 1990s, women have been given the freedom to be educated and to play a meaningful role in society, including in journalism and the judiciary. Now many of these women, in addition to countless others who aided Western powers (such as interpreters) or who run afoul of traditional Taliban beliefs, are in mortal danger. Many seek to escape the country from the airport before the Taliban is able to take complete control. An entire generation of Afghans may lose the only democracy they’ve ever known. The situation has reached a fever pitch with many now calling for a change of plans and with polls showing Americans unhappy with President Biden’s handling of the situation. In a situation as complicated as this, how do we morally make sense of it?

First, there are several moral reasons that prompted the withdrawal. The United States has committed significant resources in the area, both in troops and in money, in an effort to prop up the Afghan government for what many consider to be a ‘forever war’ that never had a chance of success.

In other words, the moral concern from people like President Biden is that while the U.S. had invaded Afghanistan to prevent terrorist attacks, the effort was not to “go to Afghanistan to nation-build.” While some believed that staying in Afghanistan would eventually lead to improvement, the sentiment from many military officials is that short of staying in perpetuity, the situation was never going to stabilize. Experts have predicted for some time that the war was ultimately unwinnable, that the Afghan government was corrupt, and that Afghan Security Forces would not succeed. Indeed, the pace at which the Afghan government fell to the Taliban after so much time, effort, and money had been spent could indicate how pointless staying ultimately would be.

All of these points suggest that, morally speaking, staying would not be worth the cost. Some, like Nikolas Gvosdev, suggest adopting a “democracy triage” mindset. As he notes, “U.S. support is not unlimited. If more is devoted to Afghanistan, or even sustained at current levels, it means less is available elsewhere for other equally deserving projects.” In other words, we cannot ignore the moral significance this scarcity of resources plays in determining the best course of action.

“With a population of about 36 million people, the question has to be asked as to whether that is the best investment of limited U.S. aid dollars versus other parts of the world where the same amount of money might lead to better outcomes for hundreds of millions more.”

On the other hand, philosopher Michael Blake emphasizes the ‘Pottery Barn rule’ (if you break it, you bought it): if you make yourself ruler over others, you are morally responsible for them. As Blake notes, “The decision to withdraw is likely to lead to enormous suffering in the years to come.” This means that the U.S. and other nations are to some degree morally responsible for the abuses that the inhabitants of Afghanistan will face. This would mean that it is incumbent on these nations to do something. To some, this means that “there is a moral case to remain and support Afghanistan against the Taliban threat” given the moral responsibility the West has to those who have embraced democracy, and because the costs of securing Afghanistan had been on the decline.

However, there is no magic undo button for the past few weeks, and removing the Taliban would only be more costly now. There remain, however, lingering moral questions about what is owed to those who helped the West, those who are being targeted by the Taliban, or those who simply want to leave. Gvosdev argues that in return for non-interference the U.S. should insist on the right to freely exit the country for those Afghans who cannot live under the Taliban order. Indeed, much of the moral concern now is not whether to leave or stay, but rather what can be done to help given the terrible situation. Blake, and many other philosophers, would call this a moral tragedy – no matter what is done, someone will be wronged – and so even the best option still involves a wrong.

But this view is not one that should inspire moral apathy either. A forced choice between bad outcomes is no excuse for doing nothing, any more than claiming that since Afghanistan is a “graveyard of empires” (historically this isn’t really true) the situation was never going to resolve itself in a positive way. Indeed, this moral tragedy reminds us that ethics is not about choosing between good and bad options, but about weighing different (and sometimes incommensurable) competing goods. Nevertheless, such lessons come hard, as we are discovering right now in Afghanistan. But they also underscore the importance, as Blake explains, “that the U.S. tries to avoid entering into such morally tragic situations in the future,” and that people be more aware ahead of time of what their moral responsibilities demand. Perhaps that lesson, learned earlier, could have avoided much of the “chaotic rush” we are now seeing, particularly when it comes to evacuations.

This introduces yet another area of moral concern: the public itself. As Tom Nichols of The Atlantic argues, “this is on you,” the voter. Even now, as Biden’s approval ratings fall owing to the fallout of the withdrawal, it must be noted that both the war’s beginning and the war’s end were popular. Minimizing moral tragedies means recognizing the role that one plays in creating them, and as Nichols notes, “Americans had no real interest in adult conversation about the reality of anti-terrorist operations in so harsh of an environment,” nor did they consider whether terrorism in Afghanistan had been effectively neutralized.

So perhaps this represents a failure of democracy as well. Just as Afghanistan taught us that there is more to democracy than voting, so too is going to war a more complicated business than a single decision. For if the public is to be the warrant for future action, it can’t be the case that a war is allowed to fade into the background. A people cannot be expected to vote for war without knowing in advance what the situation is like, and the same goes for withdrawal. As most experts have made clear this week, the unfolding situation is not surprising; only the speed at which events have unfolded is.

While there are many lessons to be learned from Afghanistan, perhaps the public needs to recognize that when they beat the drum for war, or call for the end of one, they too bear moral responsibility for the outcomes. While Americans might blame the Biden administration for its handling of the evacuations, they cannot blame the overall situation on leaders whom they elected, in part, for promising withdrawal. So, in addition to questions about the moral importance of either staying or leaving Afghanistan, and about what means and resources should be used to aid the people of Afghanistan now, perhaps the most difficult moral question that haunts us is how to prevent moral tragedies like this from occurring again.

Aesop and the Unvaccinated: On Messaging and Rationality

cartoon image of scorpion on frog’s back

Aesop shared a fable once about a scorpion and a frog. The scorpion asked a frog to ferry him across a pond. The frog was reluctant because he feared the scorpion’s sting. But the scorpion appealed to the frog’s intellect and pointed out that if he did sting the frog, the scorpion would surely drown as well. So, the frog agreed to the request. But, as expected, about halfway across the pond, the frog felt an awful pain and, before they both died, asked the scorpion why. The scorpion replied that he really couldn’t help it, saying, “it’s in my nature to sting.”

Why did the frog make that irrational decision, even though he knew better? Fables typically have a moral for us to learn, and this one is no different: make rational decisions. Unfortunately, we make irrational decisions all of the time, even if, in the animal kingdom, we are known as the rational ones.

As of this writing, about 50% of the U.S. population is vaccinated. Since it is estimated that between 70% and 90% of the population will need to be vaccinated against the COVID-19 virus to reach herd immunity, we have a long way to go. But the vaccination rate overall has slowed significantly. We watched the vaccination rate begin to plateau in late June and early July, at about the same time that the more deadly Delta variant began to ravage the unvaccinated. Now, with new cases rising each day across the country, one wonders why anyone would put off getting the vaccine.

Explanations for this phenomenon abound; some believe that vaccine hesitancy is to blame. Early on in the rollout of the three major vaccines available in the U.S., many were “hesitant” because they wanted more information about the vaccines. Were the vaccines safe? If so, like most medications, they probably were not safe for everyone, so for whom were the vaccines not safe? Where would people go to get the vaccines? What costs would be involved? These are rational questions the population was asking; they may have been gathering facts to make rational decisions. Or were they?

Humans aren’t really known for our ability to be consistent when it comes to making rational decisions. Some of those same people get flu shots every fall and make sure their children receive needed vaccinations as infants and again prior to the start of school, yet still don’t want to take the COVID vaccine. All this despite the fact that approximately 99% of deaths in America due to COVID are among the unvaccinated. It seems irrational not to avail oneself of this life-saving intervention.

Even some government officials — in those areas where the vaccination rate is low, and the spread of the variant is high — are growing more outspoken about their constituents’ health decisions. Senate minority leader Mitch McConnell (R-KY) has repeatedly urged, in public, those who can be vaccinated to do so. (His state, Kentucky, has a lower-than-average vaccination rate.) The Governor of Alabama, Kay Ivey, recently said that this is now an epidemic of the unvaccinated in her state, further stating that you just can’t teach “common sense.”

But alongside these pleas is plenty of name-calling, finger-pointing, and blaming — all of which may be smokescreens for the fact that we don’t really know how to message the vaccine’s appeal to remaining holdouts. We continue to assume that humans are consistent in making rational choices, and when we believe they have not done so, we have a tendency to throw up our hands. We think that stupid decisions are made by stupid people. The truth, however, is that we aren’t consistent in making rational choices; irrationality abounds, and it has nothing to do with stupidity. The same people who buy lottery tickets also buy insurance. Why? Cognitive science and the felicific calculus of Jeremy Bentham may both give us a peek into why we make decisions as we do, whether they are rational ones or not.

In the 18th century, Bentham formulated the “felicific calculus,” which held that an event can be assigned a value (typically numeric) as to its utility or worth. That worth was measured in terms of the amount of happiness or pleasure the event would bring people; the more happiness, the better the decision that caused it, and the more rational it would be seen to be. This mathematical algorithm measured pleasure or pain in terms of several facets; among them were the pleasure or pain’s intensity, its duration, the probability of its occurrence (and recurrence), and the number of people affected. While mathematically sound, philosophically appealing in many ways, and rational, the calculus was impractical for most day-to-day decisions. Adapting a thought experiment originally posed by cognitive scientist and mathematician Amos Tversky, however, may help us understand from a cognitive perspective why people are so inconsistent when making decisions.
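
Before turning to that thought experiment, here is a minimal, purely illustrative sketch of the kind of tally Bentham had in mind. The scoring rule (intensity times duration times probability, summed over everyone affected) and the numbers are my own assumptions for the sake of example, not Bentham’s actual formula.

```python
# A toy version of a felicific tally: each anticipated pleasure (positive
# intensity) or pain (negative intensity) is weighted by how long it lasts,
# how likely it is, and how many people it touches.

def felicific_score(effects):
    """effects: list of (intensity, duration, probability, people) tuples."""
    return sum(intensity * duration * probability * people
               for intensity, duration, probability, people in effects)

# Hypothetical numbers, purely for illustration: a vaccine shot (a brief,
# near-certain sting followed by months of likely protection) versus
# skipping it (no sting, but a real chance of a serious illness).
get_shot = [(-2, 0.01, 1.0, 1),   # momentary pain of the injection
            (+3, 180, 0.9, 1)]    # months of protection, if effective
skip_shot = [(-8, 14, 0.2, 1)]    # possible two weeks of serious illness

print(felicific_score(get_shot))   # clearly positive on this toy accounting
print(felicific_score(skip_shot))  # negative
```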

Example 1. Let’s say that your local health department has projected that 600 people will get the Delta variant of COVID-19 in your hometown of 6,000 people.

There is a proposed treatment, A, and if applied it will save 200 people. 

There is another proposed treatment, B, and if applied, there will be 1 chance in 3 that 600 people will be saved, and 2 chances in 3 that no one will be saved.

Which treatment would you choose?

When presented with the original problem, most people chose treatment A, where it is certain that 200 people will live.

Example 2. Now, let’s say that the health department again predicts that 600 people in your hometown of 6,000 will get the Delta variant of COVID-19.

There are 2 treatments, A and B.

If treatment A is applied, 400 people will die.

If treatment B is applied, there are 2 chances in 3 that all 600 will be lost, and 1 chance in 3 that no one will be lost.

Which treatment would you choose?

When presented with the original problem, most people chose treatment B.

Notice, however, that the options are identical across the two framings: treatment A guarantees that 200 of the 600 survive, while treatment B offers a one-in-three chance that all 600 survive, which works out to 200 expected survivors either way. Despite this, in the first case treatment A was chosen as the better alternative, while in the second case treatment B was chosen. Why, when the probabilities and outcomes are the same, did A get chosen one time and B the other? It’s the way the cases are presented, or framed. In the first scenario the outcomes are framed in terms of lives saved (gains), and in the second they are framed in terms of lives lost (losses). When outcomes are framed as gains, we tend to take the sure thing; when they are framed as losses, we tend to take the gamble.
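A quick back-of-the-envelope check (a minimal Python sketch using only the figures given in the two examples above) makes the equivalence explicit: in expectation, treatment A and treatment B each save 200 of the 600 people, whichever way the problem is framed.

# Expected survivors out of 600, computed from the figures in Examples 1 and 2.
total = 600

# Gain framing (Example 1): outcomes described as lives saved.
expected_A_gain = 200                        # 200 saved for certain
expected_B_gain = (1/3) * 600 + (2/3) * 0    # one-in-three chance all are saved

# Loss framing (Example 2): outcomes described as lives lost.
expected_A_loss = total - 400                            # 400 die for certain
expected_B_loss = (2/3) * (total - 600) + (1/3) * total  # two-in-three chance all are lost

print(expected_A_gain, expected_B_gain)  # 200 200.0
print(expected_A_loss, expected_B_loss)  # 200 200.0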

Currently, public messaging regarding vaccinations focuses on lives lost rather than lives saved. If we reframe that messaging to focus on lives saved (gains) instead of lives lost (losses), the lesson of Tversky’s thought experiment might get us over the hump and on our way to achieving herd immunity. Bentham’s felicific calculus applies as well; perhaps a mathematical algorithm makes more sense to us homo sapiens in this case. Think of the number of people who would experience happiness and pleasure instead of pain over a long period of time, plus the freedom from worrying that the Delta variant could infect us. Correctly framing the message seems to be one effective and scientific way to help people manage the inherent irrationality that comes with being human.

Life Imitates Art (and So Does the News)

image of movie opening title sequence

There is an old saw that life imitates art. But what exactly does it mean? Is it not the other way around – that art imitates life?

Many answers have been given to this question, but here’s one that I find plausible: life imitates art insofar as it reveals truths about us and our world. Such truths are not true because we find them corroborated by personal experience or the annals of history. The truths of art are true because they frame how we understand ourselves and our history in the first place. We might say, then, that life imitates art insofar as the truths of art help us make sense of life. They help us make sense of our human condition and what we value in it.

Take Homer’s Odyssey. According to one classicist, the great epic poem tells us “something true about life…It’s about homecoming…It’s about the bonds that connect family members over many years despite time and distance.” This is platitudinous, but nonetheless correct. The poem still speaks to us today partly because it transfigures our conceptions of what home and family are. That is, the poem compels us to understand homes and families differently, including our own. And we can appreciate such truths even when we have never left home, much less been to war.

If life imitates art, then so does the news. And there is one little-known artwork that seems to ring especially true given the current state of our union. The work I have in mind is “Stars in My Crown” (1950), a small-budget western film directed by Jacques Tourneur. The film tells the story of Walesburg, a small, predominantly white town in the postbellum South. The town’s story is strikingly similar to ours. Or we could say that our story imitates theirs.

Like our country right now, Walesburg is sick in body and soul. The town is not only plagued by an epidemic, but also struggling with the scourge of racism. The nature of these ills, as well as the town’s responses to them, are telling.

The racial troubles start – at least in the film – one lazy afternoon. An orphan named John Kenyon is fishing with his dear friend, a former slave named “Uncle” Famous Prill. John is a wide-eyed and well-mannered boy who is deeply loyal to Famous, and with good reason. Famous is a humble old man with a heart of gold. He has long been a guiding light in the community. As John tells us: “I don’t guess there was a boy or man in Walesburg who hadn’t had him for a teacher.”

While John and Famous are sitting along the creek beside their fishing rods, Lon Backett pulls up on his buckboard. Lon runs the general store, as well as a small mining operation outside of town. He wants to speak with Famous because the mica vein his workers have been mining runs under Famous’s property, and Lon wants to buy him out.

Lon makes several offers, but Famous graciously declines each one: “I got a long-tailed coat for Sundays. A house, got a bed, And I gets my vittles three times every God’s day, don’t I? Mr. Backett, what does I want with $16?” Lon drives off in a huff.

A few minutes later, Parson Josiah Gray comes along. The three discuss what had just transpired. They try to calm Famous down, assuring him that he is entitled to his land. After all, he is a free man under the law. But Famous knows better: “just saying a good thing don’t make it so.” The parson gets it. He acknowledges that no matter what assurances he gives, Famous will not have it easy: “I guess Lon Backett will have to kick up an almighty big stink before he learns his lesson.” This is a terrible understatement. Lon’s “stink” will nearly cost Famous his life.

While Lon drums up hostilities against Famous, the citizens of Walesburg start falling deathly ill with “slow fever.” Typhoid. Eventually they will discover that it comes from the contaminated school well. Until then, the town goes into a lockdown. School closes and the church is shuttered. The graveyard begins to fill. The doctor and parson work double-time to serve the sickly and dead. (It is only then, by the way, that the doctor becomes integrated into the community. He was an educated elite from the big city with a disdain for small-town life. Townspeople sensed it, and for a long while they distrusted him. Sound familiar?)

During the epidemic, the threats against Famous intensify. Lon’s men are out of work and angry. One night they tear up his corn crop, destroy his winter food stores, and set loose his livestock. They come back another night as Night Riders, clad in white hoods and brandishing torches. They leave a burning cross in front of the porch and pin a note on Famous himself demanding that he give up his land or suffer the consequences.

When the note reaches Parson Gray the next day, he storms into the saloon where Lon and his Klansmen hang out: “Haven’t you seen one poisoned well spread grief and trouble through half the town? Don’t you realize the poison in that well was catlap compared to this?” The men are unmoved. If the parson wants a fight, they will give it to him.

Later that night the lynch mob surrounds the home of Famous, rope in hand, and orders him to come out. The parson intervenes. He asks that he be permitted to read Famous’s will before the dreadful deed is done. As the parson reads the will, he names each of the hooded men one by one. Famous intends to bequeath something to each of them: a razor for Bill Cole, who had wanted a beard since he was “knee-high to a hop toad,” an axe for Matt Gibson, his dog to Justin Briley, and even the mica vein for Lon Backett, since he seems to want it “powerful bad.” The men realize that they cannot go through with their plans. Not against Famous.

The film closes with a scene from church the next Sunday. The parson and his flock are singing:

“I am thinking today of that beautiful land
I shall reach when the sun goeth down;
When thro’ wonderful grace by my Savior I stand,
Will there be any stars in my crown?”

The camera pans the room, showing many of the townsfolk we have come to know. Most of them have been regularly attending services, but some have come for the first time. Everyone in the town seems to be there, celebrating together. The camera trains on Lon, with his hands piously clasped as he pours himself into the hymn. He looks as though he has, finally, learned his lesson and is now praying that there may still be stars in his crown.

This wholesome final scene has all the trappings of a feel-good Hollywood ending. A community looks healed and the credits will soon scroll. But then, just behind Lon through a church window, we catch a quick glimpse of Uncle Famous walking down the road, alone. The shot is easily missed. It is a subtle indication that the devastating effects of the peculiar institution continue, and often in ways that go unnoticed by those not suffering from them. The joyous churchgoers are unaware. And to the extent that we viewers believed everything in the town to be turning out alright, we, too, were complicit in the self-deception.

Today we face a similar situation. Coronavirus vaccinations promise an end to this terrible pandemic. Yet while our body politic has a path to health, there is no easy inoculation for the racism that has been poisoning our collective soul. And while most of us acknowledge the difficulties of combating racism, “Stars in My Crown” presents those difficulties in an especially perspicacious way.

First, the film shows how deep-rooted racism is often sustained because it advances the interests of the rich and powerful. This is not to say that racism is reducible to economic or class warfare. The point is rather that racist beliefs and practices are often reinforced because they serve the privileged. Lon Backett foments racial tensions in Walesburg because it advances his business interests. In America today there are many such people who sow racial division for their own gain. The billionaire businessman Charles Koch recently confessed that he and his political associates had “screwed up by being so partisan.” Koch seems well-intentioned. He seems to believe in equality and justice for all. But he and his Koch Network (now named, not without irony, Stand Together) have invested millions of dollars in the very political messaging that has helped bring racial tensions in America to a fever pitch. This is hardly a new problem. And it persists because those who stand to benefit from systemic racism, however well-intentioned they may be, are easily blinded to the unjust reality they help create.

The film also shows the complexities of the human heart and how it so readily accommodates brotherly love, racial resentment, and economic anxiety. The Night Riders are undeniably racist, and their hate is further stoked by fears that without mining work they will be unable to feed their families. Yet however much racial hate they have, and however much that hate is exacerbated by worries about money, their enmity is nevertheless counterbalanced by a love and respect for Famous. “Sinners also love those that love them” (Luke 6:32). This is a complicated psychology, but not an uncommon one. What makes it complicated is that we cannot easily determine root causes. What is the real reason behind the Night Riders acting as they did, and what was mere pretense? Was their economic anxiety heightened by racial resentment? Or vice versa? Was their brotherly love genuine, or just racism suffering from weakness of will? These very sorts of questions are being intensely debated right now (see, for example, here, here, and here).

Above all, the film reminds us how easily we ourselves are prone to overlook these challenges. When “Stars in My Crown” debuted, The New York Times praised it: “The true spirit of Christmas – Peace on Earth, Goodwill Toward Men – is reflected both in word and deed in the heartwarming Metro-Goldwyn-Mayer picture.” How far from the truth. The film does not warm our hearts, but rather warns us about our hearts. The film enjoins us, as Ta-Nehisi Coates does, to “resist the common urge…toward fairy tales that imply some irrepressible justice.”

Now some readers might be saying to themselves: “I’ve read Coates, and I’ve thought quite a bit about these issues. I doubt that I really need to watch some B-western made some seventy years ago by an aristocratic Frenchman.” This may very well be true. Or it may not be. As Famous tells us in the film, just saying a good thing don’t make it so. We may think we understand what’s going on around us and in the news, and yet we may also be sorely mistaken.

Intellectual Property and the Right of Necessity

photograph of favela next to skyscrapers

Ever since the United States came out in support of waiving intellectual property protections for the COVID vaccines, we have seen renewed interest in the ethics of intellectual property over life-saving medication.

Currently, member nations of the World Trade Organization are bound by the TRIPS agreement to respect and enforce international medical patents. If a U.S. company develops a new drug and gets a patent for that drug approved by the United States, then other nations are bound by international law to also respect intellectual property rights to that invention.

There are numerous flexibilities built into the TRIPS agreement to try to ensure international access to medicine. For example, countries can issue compulsory licenses for intellectual property. These licenses allow a country to legally compel a company to make its patent available to domestic producers for a fee set by the government. Canada, for instance, could under the TRIPS agreement issue a compulsory license for the Johnson & Johnson vaccine and force Johnson & Johnson to provide its patent to Canadian vaccine producers.

However, many have argued these flexibilities are insufficient to deal with the current pandemic and so have called for an international waiver to TRIPS protections for COVID-related medications. Such a waiver would allow anyone to produce COVID vaccines or medications without violating international property law.

There are legitimate worries about such a waiver. Critics argue that current production bottlenecks are not the result of intellectual property protection, that there are better and safer ways to increase vaccine production, and that such a waiver creates a precedent that could discourage future research and development.

This back and forth has been covered in a previous Prindle post, and so I won’t tackle the ethics of a COVID waiver here. Rather, I want to take a step back and look more broadly at intellectual property rights over life-saving medications. In particular, I want to consider such property rights in light of the ethical ‘right of necessity.’

Introducing The Right of Necessity

Most agree that it is permissible for a starving man to ‘steal’ a loaf of bread in order to save his own life. However, there are two very different explanations that one can give of that permissibility.

On the one hand, you might think that while taking the bread is indeed an act of theft, that act of theft can be justified since it is necessary for the man to save his own life. On this view, the starving man violates the property rights of the baker, but such right violations are justified in order to save a life.

On the other hand, you might think that the man is justified in taking the bread because, to use Aquinas’s language, it is not even “properly speaking theft.” According to this view, it is not that you are justified in violating someone’s property rights. Rather, the other person does not have a property right over the bread in the first place. If the baker has a surplus and there are others in true need, then the baker does not have a property right against them. Philosophers who take this second view, including Thomas Aquinas, Hugo Grotius, Samuel Puffendorf, and Alejandra Mancilla, believe in a right of necessity, a right to that which is necessary to survive.

There are many different arguments that one can give for a right of necessity. One argument, inspired by Puffendorf, is that you cannot justify to everyone a system of property that allows some to starve. What justification could you give to the starving man for why they should consent to, or accept, a system of property in which they die? Being dead, they will not receive any benefits of the system.

Another argument, this one inspired by Aquinas, is that we create systems of private property so that everyone can more efficiently acquire those goods necessary for their well-being. Nature originally belongs equally to everyone, and we divide it up into private property because it enables everyone to secure their well-being more easily. However, since private property is created to enable everyone to more easily secure that natural right, private property cannot contradict the natural right of people to that which they need to survive.

The Right of Necessity and Intellectual Property

If there is a right of necessity, what implication would that have for intellectual property rights over life-saving medication?

Life-saving medication, almost by definition, is often necessary for survival. Thus, if the right of necessity justifies stealing bread from those who have extra, so too it would seem to justify stealing a vial of unaffordable medication. Similarly, if I can steal an unaffordable vial of life-saving medication to save a life, then it would be strange to think I cannot violate an international patent to create that life-saving vial.

It seems, then, that if we accept the old doctrine that there exists a right of necessity, it would have profound implications for the justice of intellectual property law. Nations, according to such reasoning, possess a natural right to break patents if it is necessary to produce life-saving medication for those who could otherwise not afford them.

(The affordability qualification is an important one. Just as it would be theft for me, who can afford to buy food, to steal a loaf of bread, so too it would be unjust to violate international patents on behalf of patients who can otherwise afford to buy the medication.)

But even with the affordability qualification in place, there is currently a huge problem of access to life-saving medications by the global poor. As such, the right of necessity suggests a standing right to break many international medical patents.

A Looming Market Problem

There is a problem, however, with using the right of necessity to break patents on life-saving medications. If we can violate patent rights for life-saving medications, but not for relatively unimportant patents, it creates a systematic market incentive for firms to invest in relatively trivial research.

Let’s first consider this worry in the context of ordinary property. A starving man can take bread if he cannot afford to pay for it. But I cannot take a Rolex just because I cannot afford to pay for it. While the starving man needs bread, I do not need a watch, and so the right of necessity only applies in the starving man’s case.

But this raises a worry. If we, as a society, recognized a right to steal necessities, then that would seem to incentivize people to only produce luxuries. If you bake bread, then your wares can sometimes be taken without payment. But if you make luxury watches, then your property rights are totally safe. So why become a baker?

We can extend the worry to the pharmaceutical case. If a drug company invents a new life-saving medication, then the company’s intellectual property rights will be systematically limited. There will be a standing right for others to violate its intellectual property protections if doing so is necessary to save lives. In contrast, if a drug company invents a non-life-saving medication, say a new form of Viagra, then there are no similar limitations on intellectual property protections. Since no one needs Viagra, companies can be secure in their property rights.

But the whole point of IP protections is to encourage innovation. We give companies patents in order to encourage them to invest in research and development of new, useful goods. If the patent protections on life-saving medications are systematically weaker, then it creates a perverse incentive for companies to divert R&D funding towards relatively unimportant medical research.

A Possible Solution

If we accept the right of necessity, it suggests a broad moral power to redistribute goods to those in need. However, we’ve also seen that the straightforward application of that moral power could have harmful long-term consequences.

One possibility is just that there is a conflict between justice and market efficiency. And indeed, I think defenders of the right of necessity must admit that it would justify inefficient market behavior. A starving man can steal bread, even if that creates a market disincentive to go into baking, which in turn drives up the price of bread even more.

However, I think there is another way we might try to reconcile the two.

The right of necessity is often illustrated with the permissibility of a starving man stealing bread. But, in principle, there is no reason why what’s taken must be directly related to the need. Suppose that the man was unable to steal a loaf of bread but could steal an expensive watch. Just as the man has a right to steal bread, so too he seems to have a right to steal the watch if it is required to be able to buy a loaf of bread.

This suggests a possible solution to the problem we have identified. While the right of necessity would justify a country in breaking international patents over life-saving medications, it would also, for instance, justify breaking other patents in order to raise the funds to purchase those life-saving medications.

If this is right, then as long as there are any who remain in desperate and undeserved need, it provides a wide-ranging potential justification for breaking apparent property rights. Put another way, certain types of injustice, such as life-threatening poverty, might be so unjust as to render most of the property claims of our entire international system of justice merely provisional.

The Politically Great and the Morally Good

photograph of Machiavelli statue

It wouldn’t be beyond the pale to assert that we’re currently having a collective crisis of faith regarding many of our world leaders and the political institutions they represent. Not only do they appear to be ineffectual in the face of emerging challenges – climate change, economic collapse, pandemics, and rising fascism, to pick a few – but several also seem to be fundamentally untrustworthy and, if one wants to be provocative, downright immoral.

For example, in the U.K., in the past year alone, several high-ranking cabinet ministers, including the Prime Minister himself, have been accused of lying, bullying, adultery, cronyism, corruption, and the questionable appropriation of public funds for personal use. Abuses of power like these, which extend far beyond the U.K.’s shores, have even been identified by the Centre for the Future of Democracy’s recent report as one of the critical causes of the broader decline of faith in democracy in many of its former bastions.

Now, concerns regarding politicians acting less than virtuously are nothing new. In the 44 B.C. treatise De Officiis, the Roman statesman Cicero wrote that there is “no vice more offensive than avarice, especially in men who stand foremost and hold the helm of state. For to exploit the state for selfish profit is not only immoral; it is criminal, infamous.” Indeed, concerns regarding politicians’ dubious dealings have been common throughout history.

Cicero’s demand that politicians not exploit the state is pretty ethically uncontroversial. However, there is a more nuanced question here than whether politicians should do their jobs morally. The question of the personal moral character of politicians, and what this means for their suitability as statesmen, is far more complex. Does it matter to us if a world leader acts immorally in private when they are good at their job of being a politician? That is, if they lead the country, protect their citizens, and communicate clearly with those they’re responsible for/to, then why should we care about their extracurricular ethical deficiencies?

To illustrate, let’s use a thought experiment. Imagine an election has been called to decide your country’s next leader. Candidate A is known to be morally admirable – they recycle, don’t cheat on their partner, give generously to charity, don’t lie, etc. On the other hand, Candidate B does the opposite – they lie, cheat, throw all their rubbish in the same bin, hoard their wealth, etc. After a brutal campaign, both candidates demonstrate their suitability for the job in equal measure. With nearly all the votes counted, there’s a tie. Yours is the only vote left to be cast. As such, you get to decide who the newest world leader is. Whom do you pick, and why?

On the one hand, you might think it’s obvious – you pick Candidate A as they are objectively better than Candidate B. In addition to doing the job, Candidate A is also a good person, and that’s valuable. After all, immorality is, by definition, bad (we might want to value it as -1). This undesirability is evident compared to the amoral (which we could value at 0) and even more so when compared to the morally good (which we can value as +1). Thus, if you end up with an effective leader regardless, why would you pick Candidate B, who comes with a negative value (-1), when you could instead have Candidate A, who comes with a positive value (+1)? In other words, why settle for less when you can have more?

Complementing this self-centered approach, those favoring the morally virtuous leader may also ground their reasoning in justice. We typically think that the immoral shouldn’t succeed at the expense of the moral; there is something right in rewarding those who act morally and punishing, or at least not rewarding, those who don’t. To actively choose Candidate B over Candidate A would fly in the face of this sense of justice. It would indicate that individual integrity is divisible from the mainstay of professional ethics, and that personal moral failings can be disregarded in decisions about who should(n’t) be rewarded in professional capacities. In short, when all other things are equal, justice demands that bad people shouldn’t succeed and good people should. Thus, Candidate A is the just choice.

On the other hand, it’s plausible to argue in favor of, or at least of not discounting, Candidate B simply because they possess some personal moral failings. After all, who among us is entirely virtuous?

As we’ve already established, the two candidates are equally qualified. Thus, we have to ask why it matters that one acts in a morally dubious manner and the other doesn’t. Could it be that we’re acting upon some ill-formed or reactionary intuition? After all, we don’t think in these terms when we consider a person’s suitability for other jobs or tasks. For example, if you need surgery, the idea that you would pick from a list of equally qualified surgeons based on whether one was faithful to their partner would seem bizarre. In this situation, what matters is that person’s capacity to fulfill the role’s requirements. Anything outside that scope is inconsequential. So, just as we would think it insignificant that a capable surgeon acts immorally when not on-call, we might feel the same about a world leader – provided that they can undertake the task of effectively leading the nation, all other considerations should fall to the wayside.

As with many thought experiments, you might consider this one’s parameters too restrictive or even implausible. The pessimists (or, some may say, realists) among you may argue that the very qualities that make one an effective head of state are the same qualities that would tarnish one’s personal ethical record. As noted in Machiavelli’s The Prince:

“And again, he need not make himself uneasy at incurring a reproach for those vices without which the state can only be saved with difficulty, for if everything is considered carefully, it will be found that something which looks like virtue, if followed, would be his ruin; whilst something else, which looks like vice, yet followed brings him security and prosperity.”

For Machiavelli, those qualities which make someone a good person are not the same as those that make them a great leader. Indeed, those qualities may often impede one’s ability to do the job of a politician. At the end of the day, the requirements of statecraft demand that one lie, deceive, subjugate, battle, and even kill – things that we would otherwise decry as vices and sins if not for the fact that the role of world leader requires them.

Where does this leave us then? Do politicians need to be moral, and are we right in expecting them to be? While it may seem obvious to say yes – to decry the idea of an immoral leader – we have to be aware that we may be moralizing in a way that we wouldn’t do for people in other roles. Furthermore, we might even be doing our country a disservice by trying to install a leader who, under any other circumstance, might be considered a good person.

LEGO and the Building Blocks of Environmental Salvation

photograph of children playing with LEGOs in the grass

Last month, the LEGO Group unveiled its first prototype recycled plastic brick. The brick — made from discarded water bottles — is the result of three years of work by a 150-strong team of material scientists and engineers attempting to make the world’s most valuable toy brand more sustainable. But how should we receive such news? Are companies that adopt positive environmental practices deserving of moral praise? Or are they merely doing what they should have always done?

LEGO’s announcement is only the latest in a line of promising environmental developments for the brand: its new, recycled bricks join the plant-based polyethylene bricks that first hit shelves in 2020. Together, LEGO plans to use these bricks to ensure that all of its elements are produced from sustainable materials by 2030. In addition, LEGO has announced that it aims to send zero waste to landfills by 2025. The company is on track to meet this goal, with 91% of its waste (including 100% of all plastic waste) recycled in 2020. The previous year, LEGO also balanced its energy use with renewables – the output from its investments in renewable energy exceeded the total energy used in LEGO factories, offices, and stores.

And the LEGO Group is by no means the only corporation debuting positive environmental policies. In January 2020, Starbucks CEO Kevin Johnson announced the company’s new sustainability commitment. Central to this commitment is an aspiration to become resource positive: storing more carbon than they emit, eliminating waste, and providing more clean freshwater than they use. In concrete terms, this has seen Starbucks set three preliminary targets for 2030:

  1. A 50% reduction in carbon emissions across their direct operations and supply chain.
  2. 50% of their water withdrawal for direct operations and coffee production will be conserved or replenished with a focus on communities and basins with high water risk.
  3. A 50% reduction in waste sent to landfill from stores and manufacturing, driven by a broader shift towards a circular economy.

A cynical eye may see these declarations as a simple exercise in public relations. And we’re right to be wary. Many corporations engage in ‘greenwashing’ – that is, spending a great deal of time and money on marketing themselves as environmentally friendly, while doing little to minimize their actual environmental impact. Chevron went to great lengths to proclaim their environmental conscientiousness in their 1980s “People Do” campaign – all while the company violated the Clean Air Act and Clean Water Act and spilled oil into wildlife refuges. More recently, Amazon has announced its plans to have 100,000 electric delivery vehicles on the road by 2030. But Amazon remains silent on how these vehicles will be charged. With more than 60% of the United States’ electricity generation still derived from fossil fuels, there’s every chance that the impact of these electric vehicles is overstated.

Nevertheless, there is a very real sense in which the commitments of large, multi-national corporations may play a pivotal role in addressing climate change. With a disappointing outcome at the latest COP conference, and many countries already failing to meet their own emissions reduction pledges, there is a strong incentive for “sub-national actors” to take up the slack. Ordinarily, we think of these actors being state governments and cities – that is, political communities. Large corporations, however, hold significant sway over consumer behavior and consumption, and may be just as important in avoiding catastrophic climate change.

But are the environmental policies of corporations like the LEGO Group and Starbucks deserving of moral praise? To answer this question, we have to make a distinction between when a particular action is merely morally good, and when it is morally obligatory. Suppose, for example, that I live on a busy road, and elect to go out to the curb and spend the remainder of my day helping elderly pedestrians cross an otherwise harrowing intersection. Clearly, it would be morally good of me to do this. It’s a kind, thoughtful gesture that minimizes the risk of harm to these pedestrians. But there’s certainly no moral obligation for me to spend my afternoon doing this. To say that there was such an obligation would be to say that it’s morally wrong of me to spend my afternoon doing something else (like writing this article). This, it seems, goes too far. Contrast this with a case in which I clearly do have an obligation to do something: say, for example, feeding my cat. In that case, I would be doing something wrong if I failed to act (thereby causing my poor feline companion to go hungry).

Note something interesting, however: Where a moral obligation exists, we seem less inclined to praise an individual for their actions. While I may deserve moral praise for helping elderly pedestrians cross the road, I don’t deserve similar praise for feeding my cat. I’m simply doing what I’m supposed to do.

This distinction between morally good and morally obligatory actions can be helpful in deciding how we should respond to corporations like LEGO and Starbucks. The question we need to ask ourselves is: do these companies already have a moral obligation to take positive environmental action? Our answer will depend on how we think we should assign responsibility for things like climate action. Here, some of the same principles I discussed in a previous article can be of use. We might, for example, think that responsibility should fall on those who have directly contributed to the climate crisis (the Polluter Pays Principle), or who have benefited from those same activities (the Beneficiary Pays Principle). These principles would certainly place a moral obligation on corporations like LEGO and Starbucks. But even this may not be required. Unlike states (who are mired in internal and external politics) and individuals (who may have insufficient resources at their disposal), corporations have an enormous amount of freedom and financial resources to engage in positive environmental action. This alone may be sufficient to place a special obligation on them to do the right thing (what is often referred to as the Ability to Pay Principle).

With more than 30,000 stores worldwide, and an almost 40% share of the U.S. coffee market, there is no denying that what a corporation like Starbucks does matters. Even small policies – like offering a discount for drinks served in reusable cups – can have a significant positive environmental impact. And while such action does serve a marketing purpose – lifting the value of their brand in the public eye – it may also play a vital role in our global efforts to avert catastrophic climate change. Whether or not such actions are deserving of moral praise is another question entirely, however. If there already exists a moral obligation on these companies to act in such ways, then praise is undeserved. Instead, companies like LEGO and Starbucks are merely doing what they always should have done.

Ethical Obligations to Climate Refugees

photograph of waves threatening coastal city

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


According to projections, by 2060 the lower third of Florida, home to 8 million residents, will be underwater. Within just a few decades many of Miami Beach’s landmarks will be lost. In response, some areas plan to fight rising sea levels with new infrastructure and new sea walls while other areas plan for a “managed retreat.” However, there are many more places around the world where there isn’t the money or capability to prevent homes from slipping into the water. People losing their homes to the sea means that they will need to go somewhere, and as time goes on we can expect to see a rising number of climate refugees. So, what are our ethical obligations to those being displaced?

Climate change is causing the melting of ice sheets and glaciers and the expansion of sea water. Current estimates show that sea levels are rising by 3.6 mm per year. A 2019 study projected that sea levels will rise by 69-111 cm by 2100. (In February, however, a paper in Ocean Science argued that previous projections have been conservative and that sea level rises will be higher.) What this means is that by 2100, we can expect significant threats to many areas of human habitation. In addition to Florida, which faces a number of climate challenges, Brazil, Egypt, Cameroon, China, India, and Indonesia will all face serious problems. For Bangladesh, rising sea levels represent a growing existential threat. Flooding owing to rising sea levels could result in the displacement of hundreds of millions of climate refugees. Indeed, while 2011 estimates had 187 million people potentially having to flee their homes, recent figures now project that as many as 630 million people (roughly 12% of Earth’s population) may be displaced.

But this is not simply a humanitarian crisis where we might feel obligated to lend a hand. The fact that the relationship between rising sea levels and climate change is generally well known changes the moral situation. Our intervention is demanded not (only) as a response to those desperately in need of help, but as a matter of justice given the harms we’re responsible for. If a nation or group of nations emits carbon, which in turn raises sea levels, and thus causes people to be displaced from their homes, surely there is an added moral dimension concerning what aid is owed to climate refugees. Given their particular contribution to the problem, what unique obligations might Western nations, for example, bear?

First, there are potential legal obligations. According to international law, people who are fleeing persecution in their country can seek to enter another. However, the current definition of “refugee” doesn’t apply to people who flee their homes because of climate displacement. Recently this controversy was addressed by the UN Human Rights Committee, which ruled in 2020 that climate migrants cannot be returned to countries where their lives might be threatened by climate change. Unfortunately, this ruling is not binding; the issue is controversial and remains disputed. There are some who believe that the original 1951 convention on refugees should be interpreted to include those who are fleeing climate disasters. Others, like Alexander Betts and Nina Birkeland, have argued against trying to redefine what it means to be a refugee because it would be impractical; renegotiating the convention would likely result in a worse deal for refugees.

Where does this leave us? According to the “conventional view” as described by Joseph Carens, states are free to exercise considerable discretionary control over the admission and exclusion of immigrants. As this is a power often considered an inherent part of what it means to be sovereign, it would seem that nations are not obligated to help climate refugees. Indeed, international libertarians argue that the only obligations that extend beyond a state’s border concern respecting other nations’ right to self-determination and refraining from harming them. (Though an important exception to this is refugees who are governed under international agreement.)

Putting aside the legal matter of potential obligations, there are those who argue that a state should provide aid and accept climate refugees if that nation has disproportionately benefited from the combustion of fossil fuels. Since many Western nations are largely responsible for the increased carbon emissions, the rising sea levels, and thus the environmental disasters that follow, it is argued that the developed world has a special responsibility not only to restrict emissions, but to protect and assist the global poor who are facing fallout from said disasters. On the other hand, there are critics who argue that historical responsibility as a justification for an obligation to assist migrants is problematic.

The polluter pays principle, for example, holds that those who pollute should bear the costs of managing the fallout. In other words, responsibility is tied to historical facts. It suggests that a nation like the United States should be obligated to aid climate refugees because of its role in causing the problem. However, there are difficulties in attributing blame in this way because of our inability to identify the specific harms done and to trace them back to specific causal factors. This complicates our ability to say that any particular nation might be obligated to accept certain refugees. There is also disagreement about how far back this kind of responsibility goes. Should a nation be held just as accountable for emissions dating back to a time when the effects of climate change were not well known? Theorists, like David Miller, have argued that emissions prior to the 1980s were not inherently harmful and so don’t count towards historical responsibility.

On the other hand, some philosophers argue that considerations of historical responsibility are beside the point; what we owe to climate refugees need not depend on establishing causality. For example, Jamie Draper argues that high-emitting states have a responsibility to climate refugees because even since the 1980s there has been a foreseeable connection between rising carbon emissions and harmful consequences of climate change. Because of this a nation like the United States can be said to be obligated to help regardless of the specifics concerning the causal relationship. These nations were well aware of the risks; they knew their emissions would generate harmful effects. Their failure to take the appropriate precautions render them a guilty party. For Draper, it isn’t a causal connection, but a failure to take due care that obligates nations to aid climate refugees.

It is important to recognize that this is a problem being felt today, not merely one we must plan to confront in the future. There are already millions of people facing the prospect of fleeing their homes. Addressing this problem means answering difficult questions: Does it matter who caused climate change? Should one’s ability to bear these costs be factored in? Do nations, or regions, or corporations, or individuals bear the blame? Is there such a thing as collective responsibility that we all share? Will our moral and legal frameworks catch up before it’s too late?

Why Trivial Contributions to the Climate Crisis Still Count

photograph of water pollution with skyscrapers on opposite shoreline

Countries resistant to meaningful climate action often point to the relatively small size of their contributions to global carbon emissions. It is this very point which conservative Australian broadcaster Alan Jones sought to convey with his infamous grain-of-rice demonstration. The argument against Australia taking climate action, it seems, goes something like this: Even if anthropogenic climate change is a concern, and even if Australia is adding to this problem, their contribution (about 1%) is trivial compared to the exceedingly large contributions of other nations (such as China and the U.S. with 28% and 15% of global emissions respectively). Given this, it is these emissions heavyweights that should bear most — if not all — of the responsibility for taking climate action. Call this the Trivial Contribution Argument.

But is this a good argument? For starters, let’s ignore the fact that — despite their relatively small total emissions — Australia has the third worst per capita emissions rate in the world. Let’s also ignore the fact that when emissions from coal exports are taken into account, Australia’s total contribution to global carbon emissions is closer to 3-4%. Assuming that Australia is responsible for only 1% of global carbon emissions, does this excuse them from taking meaningful climate action?

In order to answer this question, it’s necessary to dig a little deeper into how the Trivial Contribution Argument works. One underlying assumption seems to be that a trivial contribution, when remedied, will only ever provide a trivial solution — one that is unlikely to solve the problem in question. Suppose, to borrow a vivid illustration provided by one philosopher, I am currently pouring a jug of water into a flooding river. Suppose, further, that the river is about to breach its banks downstream and cause devastation to a nearby town. Am I under some kind of obligation to curb my behavior? My contribution to the flood is trivial, and — for this same reason — any remedy to my actions will only provide a trivial solution. Sure, I can refrain from pouring the jug into the river — but this won’t prevent the flood. Given this, there seems to be no compelling reason for me to modify my behavior; it makes no difference either way.

This, it seems, is the fundamental reasoning behind a country pointing to its trivial carbon emissions as a way of avoiding their obligation to engage in meaningful climate action. Unless larger polluters (like China and the U.S.) do more, there is little to be gained from the remedial actions of smaller emitters. Given that climate action always comes at a cost — both economically and otherwise — why would countries decide to bear this burden when it won’t solve the problem?

Such reasoning, however, is deeply flawed. Consider another example to show why this is the case. Suppose that official waste disposal is expensive in my neighborhood, and that — instead of paying for this service — my neighbors begin dumping their garbage on my front lawn. The damage to my garden (and property value) is predictably severe. I eventually catch one of my neighbors tossing a burger wrapper on to my property and confront him about his behavior. He shrugs his shoulders and says that he isn’t the culprit I need to worry about. He surveys the accumulated rubbish pile and estimates that he’s responsible for less than 1% of the waste. He identifies two of my neighbors as littering heavyweights, claiming that they, together, are responsible for more than 40% of the waste. He explains that curbing his own behavior won’t do much to help until I convince those neighbors to do something about their own behavior. With that, he shrugs his shoulders, flings a banana peel onto the heap, and departs.

In this context, the unreasonableness of my neighbor’s defense is plain to see. Yes, there are those who are more responsible for the problem. But he is still responsible for at least some of the problem, and thus responsible for at least some of the solution. While ending — or at least reducing the extent of — his littering will not remedy the issue entirely, this does not excuse his complete inaction.

In fact, the Trivial Contribution Argument isn’t merely flawed — it’s actually paradoxical. Suppose we accept that a 1% contribution is small enough to excuse a country like Australia from any obligations regarding climate action. What percentage, then, would require them to act? Those emitting 2% will point to those emitting 5%, and those emitting that amount will shift the blame on to those emitting even more. Inevitably, the buck will be passed upwards until only the largest emitter is held responsible. But herein lies the paradox: While China is the world’s largest carbon emitter, they are still responsible for ‘only’ 28% of total global emissions. Thus, any remedial action taken by China would be limited to solving little more than a quarter of the problem. Indeed, China could shirk their own responsibilities by saying “even if we do all we can, it won’t be enough, as the remaining countries (combined) are doing far more damage than we are.” In this way, the rationale behind the Trivial Contribution Argument would allow China to shift blame back on to the smaller emitters — leading us full circle, with no responsibility attributed.

The only way to avoid this is to deny the validity of the Trivial Contribution Argument; that is, to deny the claim that a trivial contribution to a problem should be treated like no contribution at all. This is why — when considering the demands of climate justice as they relate to climate action — philosophers tend to take a more pluralistic approach. While the extent to which an actor has contributed to a problem (often called the Polluter Pays Principle) is relevant, we also take into account other principles — such as the extent to which an actor has benefited from the problematic behavior (the Beneficiary Pays Principle) and the actor’s capacity to provide a solution (the Ability to Pay Principle). This more nuanced approach is vital if we wish to engage in real and effective climate action on a global level.

The Higher and Lower Pleasures of the French Culture Pass

image of stacked comic books

French president Emmanuel Macron recently introduced a “culture pass,” what amounts to €300 for each 18-year-old in France to spend on cultural activities – like going to the movies, seeing a play, or going to a museum – or for buying items that are of cultural or artistic value – such as books, art materials, membership in classes, etc. The French youth need only download an app, and then they have 2 years to spend the funds on whichever of the above they see fit. Some have praised the initiative for encouraging youths to experience more cultural activities after a long lockdown, as well as for stimulating the creative sectors of the economy; others, on the other hand, have taken a more cynical stance, denouncing it as nothing more than a vain attempt at wooing France’s younger voters.

There has, however, been a different kind of criticism, one that concerns what the culture pass users are spending their money on. A headline of a recent article in The New York Times, for instance, reads: “France Gave Teenagers $350 for Culture. They’re Buying Comic Books.” The article outlines how many are using their culture pass to buy manga specifically, with some in the French media even dubbing the culture pass the “manga pass” instead. While the Times article is, in fact, largely supportive of the initiative, it’s clear that there is some subtle judgment going on in the title.

Others have been less subtle. For instance, in a recent opinion piece at The Telegraph, the author disparagingly compares the culture pass to the Education Maintenance Allowance (EMA), funds that are available to youths in the UK for educational purposes:

“The EMA was a great scheme, but young people don’t always do what they’re supposed to with the resources that they’re given. This is a lesson now being learnt by the French government, with the news that Emmanuel Macron’s ‘culture pass’ is being used by its young beneficiaries to stockpile graphic novels instead of opera tickets… It’s no wonder that many French kids are spending their €300 on the instant gratification of an entertaining comic, not challenging themselves with an arthouse film or a three-hour play.”

Here, then, is the concern: giving youths money to spend on culture is really only worthwhile if they spend it on the right kind of culture. Art galleries and opera are cultural activities that will challenge you and open your mind to new artistic experiences; comic books, on the other hand, will not.

Is this mere snobbery, or is there something to this argument? There does exist philosophical precedent for making a distinction of this kind: John Stuart Mill, for instance, famously stated that “it is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.” According to Mill, this is the result of there being “higher” and “lower” pleasures: those of the former type exercise our more complex capacities – say, by challenging us to use our reason or engage with difficult subject matters – while those of the latter appeal to our more animal nature – e.g., those pleasures that come along with eating, sleeping, and our more carnal desires. Mill also argued that there was no way to balance the lower against the higher: in other words, even though you might enjoy a greasy fast-food meal, no number of such meals could ever outweigh the higher-quality pleasure of visiting an art museum.

It’s not clear how convincing Mill’s view is. After all, it seems to be dependent on the individual as to what one gets out of any particular experience. For instance, while one might think that going to see an opera is more worthwhile than, say, going to see the new Fast and the Furious movie, I might get more out of the experience of appreciating the excellent cinematography of the latter, especially if I’m bored to tears by the former. Something similar is no doubt the case when it comes to what one will get out of different reading materials: while one person might not find anything of value or interest about manga, others will no doubt get much more out of the experience of reading it. There doesn’t seem to be a good way, then, of clearly categorizing certain cultural or artistic experiences as objectively better than others.

Nevertheless, one might still think that there is at least a sense in which the 18-year-olds of France shouldn’t be spending all their culture pass money on manga. Here, then, is a different kind of argument: one could perhaps make best use of an initiative like the culture pass to experience a diversity of cultural activities. This is not to say that there are any specific cultural or artistic experiences that are any more valuable than any others – as we saw above, there is no specific reason to rank opera above manga. Rather, there is value to be had in the diversity of experiences itself.

There is perhaps something common both to the argument that there is value in a diversity of cultural and artistic experiences, and the one that says that the French youth are wasting their culture pass buying comic books: if all one is doing is buying more comic books, and this does not make one’s cultural experience any more diverse, then one should consider spending their money on something else instead. Where these arguments differ, however, is that if there is value in diversity, then perhaps some people should, in fact, be buying more comic books. For instance, if you’re the kind of 18-year-old who grew up listening to classical music, going to the opera, and reading Dostoevsky, then diversifying your cultural experience might mean that you should really go out and buy some manga; after all, you might learn something new.

Vainglories Are Like Ogres — Part III: The Layer Problem

photograph of buskers celebrating child's donation

In this series of posts, we’ve been talking about a particular challenge with the vice of vainglory. In this post, I’m going to talk about the fact that vainglory comes in layers: even when one attempts to combat vainglory, one immediately becomes vainglorious about that very attempt.

If you are not already familiar with the concept of vainglory, you should first look back at the first and second post in this series.

  • In the first post, I defined vainglory and explained how it differs from the vice of vanity. The vice of vainglory is, in essence, the disordered desire for the acclaim and recognition of others.
  • In the second post, I explained why vainglory is such a big deal. I argued that there are many dangers of vainglory, but perhaps the greatest is that our desire for the approval of others often tempts us away from doing good and even tempts us to do evil.

A Bit of Autobiography

To introduce this problem of vainglory, it will be useful to give a bit of autobiography. I was first introduced to the vice of vainglory in the Sermon on the Mount. In Chapter 6 of the book of Matthew (you don’t have to be Christian to appreciate this passage as a profound bit of moral psychology), Jesus says:

“‘Be careful not to practice your righteousness in front of others to be seen by them. . . . So when you give to the needy, do not announce it with trumpets, as the hypocrites do in the synagogues and on the streets, to be honored by others. Truly I tell you, they have received their reward in full. But when you give to the needy, do not let your left hand know what your right hand is doing, so that your giving may be in secret. . . .

‘When you fast, do not look somber as the hypocrites do, for they disfigure their faces to show others they are fasting. Truly I tell you, they have received their reward in full. But when you fast, put oil on your head and wash your face, so that it will not be obvious to others that you are fasting, but only to your Father, who is unseen; and your Father, who sees what is done in secret, will reward you.’”

Here, there is a warning against doing good things just for the reputational advantage, and a suggestion that to combat that temptation you do good works in secret.

I took this to heart, not in the sense that I always did my good deeds in secret, but in the sense that I often did. For example, when donating to a cause I almost always make sure the donation is anonymous. I do this because, each and every time I donate, I find I’m still hopeful that people will notice and be impressed by my generosity. To put this in more contemporary parlance, I try extremely hard to avoid virtue signaling, because I want to make sure I’m doing these things for the right reasons. Or, at least, that is what I’ve told myself. But here is the problem that I have noticed. If I really look at my own behavior, it becomes clear that at least part of my motivation for doing these things in secret is that I don’t want people to think I’m vainglorious or virtue signaling.

Most of my Facebook friends are either Christians who know the Sermon on the Mount, philosophers familiar with the vice of vainglory, or debaters familiar with the concept of virtue signaling. So, when I donate to a cause and don’t share it on Facebook, am I really doing that to help the poor, or am I doing so because I care about my reputation as not being vainglorious?

Sometimes I’ll donate to a cause and the donation page will say that if I post about it on Facebook, others are likely to donate as well. I would increase the help I give to the poor by bragging a little bit. When I refuse to do so, what is my real motive there? Is it my concern for the poor, or is it instead that I’m too embarrassed that others will think I’m trying to show off?

Rebecca DeYoung, in her book Glittering Vices, says that the best way to combat vainglory is by the practices of “silence and solitude.” That is, we are to do good things alone and then keep silent about them. This does seem to be the best way to train yourself out of vainglory, and yet the vice is so pernicious that I become vainglorious even about that practice itself.

The General Problem: Layers

I am not the first person to notice this problem about vainglory. The problem was put particularly well by John Cassian, the great fourth century mystic:

“Our elders admirably describe the nature of [vainglory] as like that of an onion, and of those bulbs which when stripped of one covering you find to be sheathed in another; and as often as you strip them, you find them still protected. . . . [T]his one when it is beaten rises again keener than ever for the struggle; and when we think that it is destroyed, it revives again, the stronger for its death. The other kinds of vices usually only attack those whom they have overcome in the conflict; but this one pursues its victors only the more keenly; and the more thoroughly it has been resisted, so much the more vigorously does it attack the man who is elated by his victory over it.”

Cassian gives us particular examples. I hate this passage because it is far too easy to recognize myself in it (note: the point is a good one even without employing the notion of an anthropomorphic devil):

“For where the devil cannot create vainglory in a man by means of his well-fitting and neat dress, he tries to introduce it by means of a dirty, cheap, and uncared-for style. If he cannot drag a man down by honour, he overthrows him by humility. If he cannot make him puffed up by the grace of knowledge and eloquence, he pulls him down by the weight of silence. If a man fasts openly, he is attacked by the pride of vanity. If he conceals it for the sake of despising the glory of it, he is assailed by the same sin of pride.”

For a long time, I thought I had done well in not caring about frivolities like clothing. I did not care about my reputation. Except, further reflection revealed that I did care a lot about what people thought about what I wear. I would have been embarrassed if people thought I was the sort of person who cared about something as frivolous as style. My disregard of style was a style all its own, a style I could be proud of without recognizing the vice.

And so we reach the central lesson of this series: vainglories, like ogres, are like onions. They come in layers, and so even once you have peeled away one form of vainglory, it leaves vainglory of another form underneath. To combat run-of-the-mill vainglory, I ended up adopting a meta-vainglory. And indeed, a part of me is worried that in writing this series I’m actually engaged in a sort of meta-meta-vainglory — it really is hard at times to tell.

Does Peeling Away Layers At Least Make Progress?

How far can we take this analogy with onions? If we take it far enough, we might find something at least somewhat comforting in it. Perhaps, even as you peel away layers of vainglory there is yet new vainglory underneath, but just as an onion gets smaller each time a layer is peeled away, so too maybe the vainglory gets smaller and less troublesome the more times we work through the process.

But if Cassian is right, vainglory actually gets more dangerous as it moves from the more earthly to the more spiritual. According to Cassian, as vainglory gets mixed in with virtue, it becomes harder and harder to recognize and combat the vainglory that remains: “but this being interwoven with our virtues and entangled in the battle, fighting as it were under cover of the darkness of night, deceives the more dangerously those who are off their guard and not on the lookout.”

To make this concrete, let’s consider my own case again.

Part of the danger of vainglory, usually, is that we prioritize our own reputation over the help we can give to those in need. We invest our time and energy, for example, in high-profile causes rather than the ones where we can really make the biggest difference. Does my own meta-vainglory avoid that problem? No. After all, I chose not to post on Facebook about the charity I donate to, an act which would indeed have helped raise awareness and funds, because I didn’t want to appear to be virtue signaling.

But not only does the same problem occur, Cassian also says that this vainglory deceives more dangerously, as if under the cover of darkness. Is he right that it was harder to notice? Yes, I think so. First, I was able to trick myself not only into thinking I was acting just to help the poor, but also into thinking I was specifically avoiding a concern for my own reputation. Thus, there were two layers of self-deception I had to work through before I could see the truth. Second, it was much harder for others to point out my own vainglory, because it was precisely that vice that I was in the process of hiding from them. I have seen people called out and corrected for virtue signaling before. It is pretty easy to do when someone is engaged in run-of-the-mill vainglory. But no one has ever called me out for my meta-vainglory because, of course, the whole point is that no one sees it for what it is. So, this meta-vainglory is not only still damaging, it is doubly hidden: both from myself and from others.

So What Do We Do?

Honestly, I have no idea. John Cassian thought that vainglory could not just be treated; the root cause had to be weeded out as well. For Cassian, you have to get to the bottom of why we care about our reputation in the first place. To excise that vicious tendency in its entirety, you cannot simply counteract, in piecemeal fashion, the various forms in which the vice arises. The problem is that Cassian is not clear on how that excising is to be accomplished — and indeed he seems to agree with Augustine and Aquinas that the vice could in principle only be overcome with divine assistance.

As best I can tell, the most practical secular solution involves two discrete steps. First, one should engage in regular practices of silence and solitude: one should make sure that, when there is no reason for one’s good works to be known, they are not known. Second, one should inculcate a greater concern for the poor and a greater love for the common good. The only way to root out vainglory, in the end, is to be so absorbed looking outward at what is good that one does not even attend to one’s own reputation (except where that reputation is important to the goods that one pursues). We return, perhaps, to the lesson of Augustine from Part I: overcoming vainglory is not really a matter of coming to care less about our reputation, but of coming to care about other things more.

Destroy the ‘Mona Lisa’ for an NFT?

pixelated image of Mona Lisa painting

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


Damien Hirst is a contemporary English artist whose work is world-renowned. Recently, I had the opportunity to view a collection of his work at the Galleria Borghese in an exhibition titled “Archaeology Now.” Hirst works almost exclusively in the genre of sculpture, but recently he has been pursuing a decidedly less physical kind of art — NFTs.

An NFT, which stands for “non-fungible token,” is a piece of digital property that is unique — nobody but the owner of the NFT has that exact electronic object in their possession, and it is marked as an original in a way that digitally distinguishes it from all copies made. Hirst is part of a recent project, titled The Currency, in which artists submit works of art which are then turned into NFTs. For example: a sculptor can create a physical sculpture and put it on display in a museum or an art gallery. Alternatively, that sculpture could be turned into an NFT, and the owner of that NFT would then be able to print the image of the piece wherever they like, send the image to anyone who wanted to see the piece, or possibly even 3D print an exact replica of the piece. Art in the form of NFTs allows artists to send their works all over the world. And some would say that “sending” digitized artwork is much better than loaning physical works of art — there is no limit on the number of places one may send the image to and no time spent waiting for it to get there. The idea behind The Currency is a sort of competition between what we might think of as the “received view” of art — that the physical originals have a unique value that cannot be replicated by digital copies — and an emerging view that denies a difference of value between the two (or, perhaps, sees more value in the NFTs).

For artists, art connoisseurs, or even regular art-appreciators, the idea of destroying a physical work of art might be painful to think about. So much time, money, and resources have been invested in attempting to preserve original physical copies of works like the Mona Lisa that these efforts have become intertwined with the pieces’ value. Recent examples of near destruction of great works of art are similarly met with horror, sadness, and a rush to try to preserve the pieces. Just think of the fire at the Notre Dame Cathedral, or the vandalization of Michelangelo’s Pieta. Is Hirst’s project — one in which original works of art may be destroyed if clients choose to keep the NFT instead — a tragedy? While the project is unquestionably controversial, is there anything inherently wrong in destroying (the physical copy of) a work of art, so long as the NFT remains?

One argument in favor of owning NFTs over physical copies of works of art is that access to these pieces could be expanded greatly. Rather than having to travel to a museum to see a certain piece, viewers could simply download a digital copy reproduced by an NFT. This could be done, in theory, in a sensorily immersive way, such that there would be little difference between the digital experience and the in-person experience. Those who lack resources to travel, or the time to visit museums, could have access to the great works of art that they might otherwise be deprived of. This democratization of art could, in turn, boost education equity and improve education outcomes across demographics. The experience of art could be liberated from its typical exclusivity.

But there is another strong argument for answering “no” to this question, which relies on intuitions many have about what it is that is valuable about art. Philosopher Alexander Pruss pursues this question in relation to one particular form of art: music. In a blog post titled “Musical Beauty and Virtual Music” Pruss asks “where does the musical beauty lie?” He goes on:

“One day […] I expect many people will have direct brain-computer interfaces. When they ‘listen to music’, no sounds will be emitted (other than the quiet hum of computer cooling fans, say). Yet I do not think this will significantly change anything of aesthetic significance. Thus, the production of musical sounds seems accidental to the enjoyment of music.”

Pruss here argues that the way the music is produced, the technology through which the sounds are emitted, does nothing to change my experience of the music and, therefore, does nothing to change how valuable the music is. The beauty — and value — of music therefore must lie in the experience of it. Similarly, people who agree with Hirst may find themselves drawn in this direction with regards to other forms of art, like sculpture, paintings, performance art, etc. Perhaps the value of these pieces lies in what we experience when we observe them, and not in the physical manifestations themselves. Destroying the Mona Lisa, therefore, may be perfectly fine so long as the experience of seeing the Mona Lisa can be preserved. Hirst seems to have a similar idea about the value of art, saying, “I just think anything that looks good and feels good, and makes you feel good, you know, it’s good art.”

This picture of value calls to mind the famous “experience machine” thought experiment proposed by the philosopher Robert Nozick. In the fictional scenario, you are given the option of plugging into a virtual reality machine, where you will (1) forget you’re in a virtual reality, and (2) live an incredibly happy and fulfilling life, all while your body is kept alive for as long as possible. Guessing that most people would refuse to enter the experience machine, Nozick uses this thought experiment to argue that the value of good things in life goes beyond mere experience — there is value in the goods actually occurring in reality, outside of our perception of them. For example, we might think that it is better to actually have relationships with real others than to merely believe we have such relationships, even if there is no difference between the experience of the real relationships and the experience of the simulation. Likewise, one might think that there is value in the original piece of physical artwork itself that goes above and beyond the viewers’ experience of it. The fact that Frida Kahlo herself put these precise, meticulous brush strokes on the canvas, for instance, may hold value that is not grounded in our experience of her work.

Further, we may wonder whether the experience of seeing original works of art in person really can be preserved in an NFT. Anyone who has been deeply affected by the experience of viewing their favorite originals face-to-face may be skeptical that NFTs can truly capture the full experience. One difficulty would be recreating the awareness of sheer size, something important to the experience of carved works like the Appennine Colossus or Landowski’s Christ the Redeemer in Rio de Janeiro, and to large-scale paintings like the Sistine Chapel. Such awareness requires something like a sensory comparison of your own size with the size of the work of art — something that would likely be difficult, or perhaps even impossible, for an NFT to facilitate.

A further complicating factor is that we tend to believe that pictures of art are not themselves art. For example, if you go to the Vatican Museum to see the Sistine Chapel, and then purchase a poster print of The Creation of Adam at the gift shop, you probably do not actually believe you have taken the painting home with you. Moreover, you also likely do not believe that your poster is equally as valuable as the original painting on the chapel ceiling. The gift shop certainly thinks there is a difference in value, as a poster of the painting costs only about as much as one admission ticket! Whence the difference in value? Does it have to do with scale? But we can easily imagine a to-scale replica of the Sistine Chapel. Is the difference in value related to the wide availability of replicas compared to the limited availability of originals? If so, NFTs would retain the single-original status of works of art, and thereby (presumably) keep their value high, giving the owners of the NFTs full rights over any reproduction of the work. Additionally, some creators have sought to solve the problem of the inherent value of the original by positing that creators and artists may decide for themselves which work is the original: the physical copy, or the NFT. Choosing to designate the NFT as the original, as one YouTuber suggests, may change our understanding such that the physical copy becomes merely a “tool” used to create the final product, the NFT.

And finally, we may question whether Hirst’s project — exchanging original physical works of art for NFTs — would work for other kinds of art beyond painting and sculpture. There are, for example, works of art specifically created by the artist to exist only for a moment. Instances of this kind of art include a recent piece by Banksy titled “Girl with Balloon,” which was secretly designed to self-destruct immediately after it sold at auction (the destruction was only partially successful), as well as performance art such as Chris Burden’s famous “Shoot,” in which a friend shot Burden in the arm with a gun.

Whether you’re an optimist or a pessimist about buying and selling art as NFTs, it is clear that the emerging technologies raising these philosophical questions will have a large impact on our view of art — what counts as art, who counts as an artist, and how we can best experience the work of those working with digital and physical mediums. May it make us more creative, and more open to appreciating the creative talent of others.

Vainglories Are Like Ogres — Part II: Why is Vainglory Such a Big Deal?

photograph of winner's podium sketched on blackboard with colored chalk

In this series of posts, I want to talk about a particular challenge with the vice of vainglory. This is a challenge I’ve been struggling with for years, and unfortunately it is not one that I have a good solution to. The challenge, in a nutshell, is that vainglory comes in layers.

However, it would be difficult to jump straight into that discussion, since vainglory is not a vice that people talk about a lot these days. So, I’m going to develop this discussion over the course of three posts.

  • In the previous post, I explained what vainglory is and how it differs from the vice of vanity.
  • In this second post, I explain why vainglory is such a big deal and worth combating in our own lives.
  • Finally, in the third post, I will talk about why the layered nature of vainglory has made it so difficult to combat in my own life.

Why It’s a Big Deal

There are two parts to showing why vainglory is a big deal. First, we need to show that vainglory is destructive. Second, we need to show that it is common. Ants are no big deal because, though they are common, they are not destructive. And black holes are no big deal because, while they are destructive, they are not common. But vainglory is both.

A Destructive Vice

There are lots of reasons that vainglory is a destructive vice. The first, and probably simplest reason, is that vainglory often tempts us away from the pursuit of primary goods. Often the best action we can perform is not the action that will lead to the biggest boost to our reputation. I like how Steven Pinker puts the point when talking about effective altruism:

“[W]e have a large set of motives for why we’re altruistic and some of them are ulterior — such as appearing beneficent and generous, or earning friends and cooperation partners. Some of them may result in conspicuous sacrifices that indicate that we are generous and trustworthy people to our peers but don’t necessarily do anyone any good.”

Plato made this point long ago. Good things are often accompanied by the appearance of the good. And over time, we can come to care more and more about the appearance and less and less about the reality. We move from wanting to help the poor to wanting to look like we are helping the poor.

Of course, this is true only to the extent that I care about my reputation for its own sake, and not merely as a means to helping the poor. If I really do care about being thought generous merely so that others will be generous as well, then no such conflict can arise. This is why Aquinas thinks that the dangers of vainglory occur when one desires a reputation for the wrong reasons.

Not only can vainglory tempt us away from the good, it can also be a powerful temptation towards evil. The classic philosophical example here is from Augustine’s Confessions. In Book II, he describes a point in his adolescence where he vandalized another’s pear tree, not to eat the fruit, nor to sell it, but simply to act maliciously. Augustine considers this such a terrible act, not because the damage was that great, but because it was done for malice’s sake. What could drive him to such an action? According to Augustine it was vainglory, a desire to be approved of by one’s friends: “As soon as the words are spoken ‘Let us go and do it’, one is ashamed not to be shameless.”

C.S. Lewis, that great popularizer of Augustine, says something similar. He argues that, of all desires, the desire to be approved of by those ‘in the inner ring’ is the “most skillful in making a man who is not yet a very bad man do very bad things.” (If you don’t believe this, just consider the results of the Milgram experiment where a majority of participants were willing to perform an action which they thought was delivering possibly fatal electric shocks just because they were told to by an authority figure.) Like Augustine, Lewis emphasizes that the temptation is not about what one will gain from being approved of. Instead, the temptation comes from the simple sweetness of being thought well of by one whose approval you desperately desire.

“And the prophecy I make is this. To nine out of ten of you the choice which could lead to scoundrelism will come, when it does come, in no very dramatic colours. Obviously bad men, obviously threatening or bribing, will almost certainly not appear. Over a drink, or a cup of coffee, disguised as triviality and sandwiched between two jokes, from the lips of a man, or woman, whom you have recently been getting to know rather better and whom you hope to know better still—just at the moment when you are most anxious not to appear crude, or naïf or a prig—the hint will come. It will be the hint of something which the public, the ignorant, romantic public, would never understand: something which even the outsiders in your own profession are apt to make a fuss about: but something, says your new friend, which ‘we’—and at the word “we” you try not to blush for mere pleasure—something ‘we always do.’

And you will be drawn in, if you are drawn in, not by desire for gain or ease, but simply because at that moment, when the cup was so near your lips, you cannot bear to be thrust back again into the cold outer world. It would be so terrible to see the other man’s face—that genial, confidential, delightfully sophisticated face—turn suddenly cold and contemptuous, to know that you had been tried for the Inner Ring and rejected. And then, if you are drawn in, next week it will be something a little further from the rules, and next year something further still, but all in the jolliest, friendliest spirit. It may end in a crash, a scandal, and penal servitude; it may end in millions, a peerage and giving the prizes at your old school. But you will be a scoundrel.”

Of course, this temptation can only occur to the extent that you desire approval, not for what is good, but simply as such. If I merely want to be approved for my honesty, then a desire for approval will not usually tempt me to join in a deception. This is one reason, then, why Aquinas thinks that the vice of vainglory involves a desire for mere approval, rather than approval for what is really good.

A second danger in vainglory is that the vainglorious lose control over their own happiness. You have much more control over how good you are than you do over what other people think of you. You can control what you do, but not what people think. Thus, the more your happiness depends on the approval of others, the fickler your happiness will be.

This point is made best, I think, by Dumbledore. In the fourth Harry Potter book, Hagrid is distressed that many dislike him simply because he is half-giant. To this worry Dumbledore replies: “Really, Hagrid, if you are holding out for universal popularity, I’m afraid you will be in this cabin for a very long time.” Ultimately, there is nothing Hagrid can do about the prejudice and poor judgment of others. However, if he grows less vainglorious, he will come to realize that the mere unqualified approval of others is not ultimately that important and should not stand as a precondition to his happiness.

Of course, not all approval is equally fickle. If someone is a good judge of character, they are more likely to think well of you to the extent you are good. This is one reason, then, why we should care more about the approval of those who are wise; their approval is less fickle and so provides a more secure foundation for happiness.

A third, but by no means final, danger of vainglory is that it can lead to dishonesty. If I come to care just about my reputation, then what matters is not being good but being thought good. Thus, I will be tempted to get others to think me good by any means I can, and not merely by becoming a better person. The vainglorious person is always faced with a temptation to make themselves look better than they really are: to take more credit for a group project than they really deserve, to make one’s motives look more noble than they really were, to exaggerate just a little bit the quality of what one did.

Why do people cheat in friendly board games? It is not like you get anything when you win, and you have not even accomplished anything if you win by cheating. The answer, of course, is that we like to be thought the winner by others. In other words, vainglory is behind many of those everyday dishonesties which populate ordinary life. And just as with the other dangers, this helps us understand why certain desires for recognition are particularly vicious. If you just care about your reputation, rather than caring that people recognize what is actually good about you, you will be more strongly inclined towards dishonesty.

The Ubiquity of Vainglory

I expect that many of you will already agree that vainglory is ubiquitous. Hopefully, you already recognize that you personally are vainglorious. (Of course, it’s possible that you lack this vice; however, I expect that most people who don’t believe they are vainglorious think that because they are self-deceived, and not because they have actually reached such lofty heights of virtue.) But it is still worth understanding just why we humans are so inclined to the vice of vainglory.

While there are lots of explanations we can give for the ubiquity of vainglory, let’s just consider the explanation given by evolution. Human psychology evolved to help us survive and reproduce as social creatures within human communities. And the thing is, our reputation was far more important to our reproductive success in early human communities than it is now.

There are two layers, then, to this evolutionary distortion. First, evolution selects traits important for reproductive fitness rather than goodness. Second, evolution optimized for a culture where reputation mattered in very different ways.

The Reproductive Fitness Distortion. Let’s take the point about reproductive fitness first. Often, the good way to be is not the way that most increases your chances of successful reproduction. The person who gives all their extra resources to the poor may be a morally better person, and do more for the common good, but they will not necessarily have lots of super-successful offspring.

We can make the same point with other types of goods. For instance, the good proper to belief is truth. But even if it is good to believe true things, that does not always mean that believing true things best serves your reproductive fitness. Dan Kahan makes this point in explaining why scientific literacy correlates with partisan conformity more than with truth.

“Nothing any ordinary member of the public personally believes about the existence, causes, or likely consequences of global warming will affect the risk that climate change poses to her, or to anyone or anything she cares about. Nothing she does as a consumer, as a voter, as a contributor to political campaigns and causes, or as a participant in public conversation will be of sufficient consequence on its own to have any impact. However, if she forms the wrong position on climate change relative to the one held by people with whom she has a close affinity—and on whose high regard and support she depends in myriad ways in her daily life—she could suffer extremely unpleasant consequences, from shunning to the loss of employment. Because the cost to her of making a mistake on the science is zero and the cost of being out of synch with her peers potentially catastrophic, it is indeed individually rational for her to attend to information on climate change in a manner geared to conforming her position to that of others in her cultural group.”

Obviously, we don’t consciously try to believe whatever our social group approves of. We think that what we believe really tracks the truth. However, there is a disconnect between what we aim at in thinking and what our cognitive apparatus has evolved to secure. This point is easy to see for selfless goods; we can recognize why believing the truth or serving the poor might trade off with reproductive fitness. But it is also true of things like happiness. Someone who cares what other people think is likely to be both more reproductively fit and also less happy.

The reason we are inclined to be excessively concerned with reputation is that reputation has a bigger influence on reproductive fitness than it does on happiness or goodness. Thus, evolution has inclined us to overinvest in our reputation at the expense of other goods.

The Evolutionary Environment Distortion. Not only did evolution optimize for the wrong thing, it also optimized for the wrong environment. Most of human evolution occurred when we lived in small communities of thirty to a hundred people. But we now live in massive, globalized societies.

If you live in a small community of thirty people, then a single person not liking you is an issue, five people not liking you is a huge problem, and fifteen people not liking you is a disaster. You did not get to select, from among forty thousand undergraduates, the five people most like you to be friends with; you had to be friends with the only five people available. In this environment, it made a lot more sense to make sure that each and every person you came across was impressed by you. Not only that, it was also a lot easier to make sure that everyone was impressed by you, because to be the best at something you only needed to be better than a small group of people.

Nowadays, however, it is both harder to stand out, and far less important. Despite that, all our evolved impulses still think it is incredibly important to do whatever it takes to make sure that every single person we come across knows just how great we are.

Hopefully, you now have a sense both for what vainglory is, and why it is so dangerous. In the final post, I’ll talk about one reason I’ve had so much trouble combating my own vainglory.

ROC and the Ethics of Guilt by Association

image of Russian Olympic Committee Flag 2021

Doping has been a persistent theme of conversation around sports these past few months. During the Olympics, athletes have gone so far as to allege that they were not able to compete in a clean competition, and much of this was directed at one team: “ROC”, which stands for the Russian Olympic Committee. Due to a state-sponsored system of doping, Russia is banned from competing, and Russian athletes who were not implicated in the doping system are instead allowed to represent ROC at the Olympics.

Ryan Murphy’s allegation that swimming is haunted by doping was barbed precisely because he lost to a Russian athlete, Evgeny Rylov. Fellow American swimmer Lilly King made similar allegations, with a direct jab at Russian athletes. (This isn’t exactly the first Olympics to see tensions flare between Russia and America.) But is it fair to be skeptical about athletes who are associated with countries – or, broadening away from this particular case, coaches – that engage in mass doping schemes? Further, is it fair to be skeptical about entire sports? Murphy later seemed to modify his comments, claiming that he wasn’t voicing skepticism about Rylov but was concerned that swimming, as a sport, wasn’t clean. Not that this is limited just to swimming — after all, many of us view cycling with great suspicion.

One problem is that these allegations rest on guilt by association. The evil deeds of others don’t make you guilty. For instance, to allege that Mumford and Sons are a far-right band because of (now-former) member Winston Marshall’s recent behavior is a logical error; the fact that Marshall sides with reactionary views doesn’t mean his fellow bandmates do. In our case, to insinuate that Rylov is guilty of doping because he’s a Russian athlete is to claim he is guilty because of his association with guilty athletes and a corrupt sporting system.

To emphasize why guilt by association is problematic, it’s useful to look at the contrast between shame and guilt. We can focus on two points: Firstly, something can shame you even though it isn’t wrongful. You can feel ashamed for having a long nose or not being very funny. But you aren’t guilty (you can’t feel guilty, and no one can impugn you over it) for having a long nose. Secondly, you can be shamed by your associations to other people. You can feel ashamed that your friend acted in such a way, or you can feel ashamed that your child made such a choice.

So, shame by association is perfectly appropriate: a Russian athlete might feel ashamed that their compatriots doped. But because guilt requires wrongdoing, you can’t be guilty simply because of what someone else has done. So, hinting that a ROC athlete is guilty because they are Russian is inappropriate: to be guilty you have to do something wrong, you aren’t guilty because of who you are affiliated with. And it is worth noting how these Russian athletes are made worse off by the fact they have to compete for ROC. The media often enough referred to “Russia” winning a medal at the games. Had they just been competing as (genuinely) neutral athletes, clean athletes would at least be able to hold Russian doping at arm’s length.

But maybe there is another way of looking at guilt by association that does justify these allegations of cheating: some associations are evidentiary. If you hang around Bada Bing!, the strip bar on The Sopranos, there’s a reasonable chance you’re involved in organized crime. To suggest this based on a mere association between you and Tony Soprano would be dodgy guilt by association. To suggest this based on the statistical evidence that, say, 68% of people who hang out there in fact are gangsters is not dodgy. Or to suggest that if you go there you are likely a gangster because people go there to discuss crime is not dodgy. These latter suggestions turn on something more than insinuation and gossip and find a credible grounding: they are evidence based on factual elements (for discussion, see Marshall Bierson’s “Stereotyping and Statistical Generalization”).

For such an allegation based on association to stick in the ROC case, we need to find grounds to suggest that being Russian is good evidence that ROC athletes have cheated. And one can start to make such a case: after all, if elite athletes in a country are engaging in state-sponsored doping, then other athletes will be under pressure to also dope in order to keep up with the other elite athletes. (Likewise, we might run the same argument for sports like cycling: to even be competitive, you are under pressure to dope, which is why it might be reasonable to be suspicious of the entire sport.)

This is a plausible starting point. But it faces three hurdles. Firstly, it is mere speculation and needs to be filled in with something evidentiary (say, if a bunch of ROC athletes confessed to doping, or if there were evidence that other athletes were under pressure to dope). Secondly, it’s at best probabilistic. It only helps to justify the claim that ROC athletes are to some degree more likely (than, say, a neutral athlete) to dope. Even if stereotypes or statistical claims sometimes enable us to make quick judgments (and this can sometimes be useful), the problem with the allegations from Murphy and King was that they were interpreted by any reasonable listener as an attack on a particular athlete: Evgeny Rylov. Thirdly, this line of argument starts with a handicap: the athletes who compete for ROC had to demonstrate that they were not involved in doping. Given this, there should be a presumption that they are competing fairly.

Of course, I am no Olympic swimmer, and a further factor is that these athletes surely have better insight into the behavior of some of their competitors than I do. Perhaps there is genuine evidence that Rylov doped, evidence that King and Murphy are party to but we have not yet seen. But, unless that is the case, they should be more cautious about making allegations. Guilt by association, unless that association is evidentiary, is no ground for a serious allegation.

Vainglories Are Like Ogres — Part I: Defining Vainglory

photograph of TV studio with actors displayed in camera viewfinder

In this series of posts, I want to talk about a particular challenge with the vice of vainglory. This is a challenge I’ve been struggling with for years, and unfortunately it is not one that I have a good solution to. The challenge, in a nutshell, is that vainglory comes in layers.

However, it would be difficult to jump straight into that discussion since vainglory is not a vice that people talk about a lot these days. So, I’m going to develop this discussion over the course of three posts.

  • In this first post I explain what vainglory is and identify how it differs from vanity.
  • Then in the second post, I will explain why vainglory is such a big deal and worth combating.
  • Finally, in the third post, I will talk about why the layered nature of vainglory has made it so difficult to combat in my own life.

What Is Vainglory

“Vainglory” is not a term that we hear very often. Indeed, if I were to describe myself as vainglorious, many would think I was describing myself as vain. But what we mean by vainglory, at least in its traditional sense, is quite different from what we now mean by the word vanity.

To say I am ‘vain’ is to say that I think very highly of myself. The vain person thinks that they are great and deserving of esteem. If I think myself particularly good looking — which, of course, I am — then, I can rightly be described as vain. Vainglory, however, means something subtly different. Vainglory is defined by Rebecca DeYoung as “the excessive and disordered desire for recognition and approval from others.” Thus, vainglory is not about what I think of myself, but my concern for what others think of me.

Let’s take a concrete example. Suppose I donate fifty dollars to charity. It would be vain for me to think that because I donated that money, I must be a wonderful and generous person. It would be vainglorious, however, to post about the donation on Facebook in order to get the reputational boost of being thought generous by others.

You can have one vice without the other. Thus, I might be posturing out of a feeling of insecurity; perhaps the reason I work so hard to make myself look generous is that I really think that, deep down, I’m a selfish person. Here I would be vainglorious, but not vain. On the other hand, I could also think that ‘since I’m such a good and generous person, it does not matter what other people think.’ Who cares what the hoi polloi thinks? I already have the approval of the only opinion I really respect: my own. Here I would be vain, but not vainglorious.

What is essential to vainglory, then, is a concern for one’s own reputation. However, not just any concern for one’s own reputation is a problem. It is not vainglorious to check my resume for spelling errors, even if the only reason I am doing so is because spelling errors will lead people to think worse of me as a job candidate. If you do not care at all what other people think of you, then you are not going to be able to thrive and flourish as a social creature.

Just as vanity involves an unjustifiably high opinion of one’s self, and just as cowardice involves an unreasonable fear, so vainglory involves a concern for one’s reputation that has gone wrong in some way. And according to DeYoung, there are two ways that a concern for reputation can go wrong and become vainglorious. First, the concern can be excessive. Second, the concern can be disordered.

Let’s use this contrast between excessive and disordered concern to suggest two different ways of thinking about vainglory — one way suggested by Augustine and one suggested by Aquinas.

Augustine on Vainglory as Excessive Concern

In Book XV, Section 22 of his book The City of God, Augustine defines virtue as the ‘ordo amoris’, or the ‘right ordering of loves.’ For Augustine, vice rarely consists in loving evil things, but rather in loving good things to the wrong degree:

“And thus beauty, which is indeed God’s handiwork, but only a temporal, carnal, and lower kind of good, is not fitly loved in preference to God, the eternal, spiritual, and unchangeable good. When the miser prefers his gold to justice, it is through no fault of the gold, but of the man; and so with every created thing. For though it be good, it may be loved with an evil as well as with a good love: it is loved rightly when it is loved ordinately; evilly, when inordinately. . . . So that it seems to me that it is a brief but true definition of virtue to say, it is the order of love.”

Greed does not just involve caring about wealth; it involves caring about wealth more than I care about the poor. Thus, it is not a problem to care about my reputation, but it is a problem if I am more concerned with my reputation than I am with others (or than I am with honesty, etc.).

This is a helpful way to think about vainglory, and it encourages us to ask if our concern for our own reputation is leading us to neglect other things which we should care about more. But it is not the only useful way to think about vainglory.

Aquinas on Vainglory as Disordered Concern

When Aquinas talks about vainglory, he does not emphasize the relative degree to which we care about our own reputation (or what he calls glory). In the Secunda Secundæ (a fancy way of saying ‘the second part of the second part’), Question 132, Article 1 of his Summa Theologiae, Aquinas identifies three ways that a concern for our reputation could go wrong:

“Now in one sense glory can be called vain on the part of the thing for which one seeks glory, when someone seeks glory for what does not exist or for what is not worthy of glory, as when someone seeks glory for what is frail and perishable.

In a second sense, glory can be called vain on the part of the one from whom someone seeks glory, e.g., a man whose judgment is not fixed.

In a third sense, glory can be called vain on the part of the one who is seeking the glory, viz., when he does not refer his desire for glory to an appropriate end, viz., to the honor of God or to the well-being of his neighbor (ad honorem Dei vel proximi salutem).” (From Alfred Freddoso’s wonderful translation)

For Aquinas, it is not a problem simply to want others to admire good things about you. But for this desire to be rightly ordered, it needs to satisfy certain conditions. First, you should want people to think well of you because of true things about you. It would be vainglorious to want people to admire your generosity if you are not actually generous. Second, you should want people to think well of you because of things about you that are really good. Having expensive and name-brand clothing is a frivolous good, and so you should not want people to admire you for having such clothing. Third, you should want to be approved by those who are actually good judges of such things. You want to be thought good by wise judges of character, not by just anybody irrespective of how discerning they are. Fourth, you should want to be thought good for the right sort of reason, not just because it feels good or because it will make you more popular than others.

Ultimately, both Aquinas’s and Augustine’s ways of thinking about vainglory are useful. I think both get at something really important, and both give us a useful model for thinking about vainglory in our ordinary life. In particular, the important thing that both accounts share is that they distinguish vainglory from an appropriate concern for one’s reputation. There is nothing wrong with wanting others to think well of you; the problem, like most problems, lies in the details.

Having identified what vainglory is, the question we will tackle in the next post is just why vainglory is such a serious vice.

Fast Fashion Isn’t the Fashion Industry’s Only Problem

photograph of shelves of shirts in shrinkwrap

Most young Americans have never lived in a world where the latest fashion trends were not available instantly at dizzyingly low prices. Fast fashion retailers like Shein and ASOS offer seemingly endless online catalogs of low-quality clothes, typically about as durable as tissue paper, and in the last few years they’ve broadened their audiences through social media sites like TikTok and Instagram. Influencers purchase a few hundred dollars’ worth of clothing, reflecting that week’s micro-trends, and spread these finds out on the floor to be filmed. Most items are worn only a few times before the influencer moves on to their next haul. To get a sense of the scope of this problem, we might look at pioneering online retailer ASOS, which aims to add roughly 5,000 items to its virtual catalog every week. As Terry Nguyen, a reporter on consumer trends for Vox, explains, “Garment production has quietly accelerated to breakneck speeds over the past three decades, easing young and old consumers into thinking of their clothes as disposable.”

As a culture, we’ve been trying to wrap our heads around fast fashion for nearly a decade now; for example, Sakshi Sharma and Victoria Jennings have probed the ethical dilemmas posed by fast fashion before here on the Post. Sharma explains how the industry allows wage stagnation and workplace abuse to flourish, and Jennings examines the negative impact fast fashion has on the environment. It is worth noting how extremely difficult it is to quantify this negative impact. The oft-repeated statistic that the garment industry pollutes more than any other, with the exception of oil, seems to come from a study on a single Chinese province, as Alden Wicker explains in his exposé on the tangled web of shoddy evidence and unverifiable data that impedes genuine research. Misinformation, he argues, poses obstacles for consumers and eco-activists alike.

But regardless of what the exact impact is, the fashion industry is ramping up, not slowing down. A 2019 report on the global fashion industry predicts an increase in textile production of 81% by 2030. It’s nearly impossible to make a case for fast fashion, but at the same time, it’s misguided to focus all of our ire on that specific sector of the industry when high fashion is just as guilty of unsustainability. Is fashion itself, as we currently understand it, inherently unsustainable?

Unfortunately, a designer label is hardly a guarantee of eco-friendliness. Urban Outfitters and Anthropologie, while not exactly qualifying as “high fashion,” are certainly on a different tier than Shein. However, these two brands, which are owned by the same company, still use unsustainable synthetic fabrics like polyester, and neither is especially transparent about where and how their garments are made or what steps they plan to take to reduce carbon emissions. Companies that definitely would be considered high fashion, like Versace, have made gestures towards sustainability, but as one sustainability-rating site noted, there is no evidence that Versace is on track to meet its goals.

Paradoxically, fashion is both art and commodity. We think of our clothing as expressive of our true essence and therefore unique, but the things we buy are selected for us by a cadre of market researchers and boardroom executives, and are ultimately iterations of ephemeral trends that flatten rather than enrich individual expression. Fashion, as essayist Kennedy Fraser noted in 1978, is at its core “materialistic, and holds that appearances are of greater significance than substance. Among the shared limitations are fickleness, a preoccupation with descrying the will of the majority in order to manipulate it or pander to it, and a concern with the accumulation or protection of power and profit. Although all fashion looks mobile and rebellious at times, its roots are surprisingly constant: to think or act for reasons of fashion in any given field is to support that field’s established centers of power.” Fraser rightly points out that fashion in general, whether high or low, requires a base of consumers, so its continuation can only ever perpetuate the aims of capitalism. Even the most daring trend can be watered down and shuffled onto a Target sales rack, fully incorporated into the mainstream culture whose boundaries it once challenged.

At the same time, how impossible it is to dismiss the idea of fashion, to stop our ears against the alluring language of pattern and color, of form and movement. Like any kind of image-making, fashion provides us with metaphors and symbols through which we understand ourselves and our position in the world. Situated at the intersection between private and public, between self and other, these polyvalent symbols allow us to simultaneously articulate, as well as create, our sense of self. The fashion industry capitalizes on these ingrained desires, which is partly why addressing the negative environmental impact of the garment industry is so difficult. Consumers should and must shun fast fashion brands, but that only tackles one small part of the problem. We need to completely rethink fashion, finding a way to embrace the good and discard the bad, if we want to lay the foundation for a more sustainable world.

Arguments about Doping Are Difficult

photograph from diving board of Olympic pool lane

American swimmer Ryan Murphy recently alleged that he was “swimming in a race that’s probably not clean,” having just lost to Russian athlete Evgeny Rylov. Murphy later claimed this was not an allegation, but it is hard not to hear his comment as a dig at Rylov, and at Russian athletes more generally, given Russia’s recent ban for operating a state-backed doping regime, where athletes were given a cocktail of performance-enhancing drugs and had their tainted drug tests replaced by clean ones. (Russia cannot compete at the games; athletes who were not implicated in the scheme can compete for “ROC” – the Russian Olympic Committee.)

We commonly condemn the practice of athletes doping – taking banned substances (often a drug, but sometimes their own recycled blood) to improve sporting performance. This might make them quicker, increase their reaction times, or help them recover from training or an injury.

Now, there are those who think we should just embrace doping. Some argue that it lets those who are naturally less talented catch up to those who are naturally more talented. Others think that the point of athletics is to go faster, higher, and stronger, so we should be able to use whatever means are available to do that.

But many of us find doping abhorrent, and we can at least get a hold on why doping might be wrong by thinking about the nature of sport. An enduring analysis of the nature of sport is Bernard Suits’s idea that sports involve reaching a goal while overcoming “unnecessary obstacles.” For instance, in golf you have to get the ball in the hole, but you must use a certain stick to propel the ball 300+ yards; you can’t put the ball in through easier means. When we add in that sports involve physical skill, we can start to see the problem with doping: if someone dopes, they lessen some of the obstacles they face. This strikes at the spirit of sport: dopers remove the obstacles they should be facing, and facing obstacles is part of the point of playing sports.

Still, this leaves lots of scope for debate: what restrictions are important obstacles in competing in a sport? Athletes are allowed to improve their physical skills, such as through training, so why does doping strike against the nature of sport in a way that eating 12-egg omelettes and training eight hours a day does not?

Those are questions for another day. My focus is on a neater question: how should we balance the need for athletes to live their lives with the need to test for, and prevent, doping? On the one hand, doping pervades our sports and we (fans, as well as athletes like Murphy) want competitors to be clean. Doping is not, of course, restricted to Russia. Lance Armstrong achieved seven consecutive Tour de France wins, all while doping, and he received help from the sport’s governing body to cover up his violations. The cover-ups can be extreme, too: former Armstrong team-mate Tyler Hamilton claimed that he had not been doping; rather, there was somebody else’s blood mixed in with his sample because he had absorbed a twin in the womb. He later admitted to massive doping. Why go to such lengths to get away with doping? Well, the incentives are huge. Success brings cash, or sponsorship opportunities. And then there’s the sheer glory of being the best in the world.

On the other hand, anti-doping measures involve severe impositions on athletes’ private lives. For one, even in-competition testing is onerous for athletes. Petr Cech missed some of the celebrations of Chelsea’s 2012 Champions League victory because he had to go for a drugs test. Athletes also have to face significant intrusions into their private lives so that they can undergo regular testing. There are different programs in place, but these can be very strenuous: some athletes have to let USADA (U.S. Anti-Doping Agency) know their overnight location for every day of the next calendar quarter and provide a 60-minute window where they will be available for testing (they can, of course, update this information, but it must be submitted quarterly and updates made along the way).

So there is a tension: athletes are human beings who deserve to live their lives, yet athletes also dope to win, and that threatens the integrity of sport. Recognizing this tension helps us see that, although the burdens of the testing system seem demanding, they make more sense once we appreciate the lengths some athletes go to in order to dope. It also sheds light on some controversial recent cases.

Last month, 100m hurdler Brianna McNeal was banned for five years. Nobody has accused her of doping; she merely failed to answer the door when drug testers came to her house. (McNeal had previously been banned for a year for failing to update her whereabouts and missing three tests.) This time, her ban came after missing a single test. She missed it because she had had an abortion and was recovering in bed when the drugs testers called. She submitted documentation from the clinic to support this, but she had altered it. Her ban was not for taking drugs, nor for missing a test (you have to miss three in a year), but for altering the documents. McNeal had to reveal to strangers something deeply personal. But once we add in the fact that some athletes go to such lengths to dope, we can understand the need for regular tests, and also the move to punish tampering with doping documents.

In a similar manner, middle-distance runner Shelby Houlihan was suspended for four years after testing positive for nandrolone (a common doping steroid). Houlihan claimed that she accidentally came into contact with the drug through a tainted pork burrito. Is it really fair that athletes can’t eat a tasty burrito? Well, there are plenty of things ordinary folk do all the time that athletes can’t: athletes have to stick to strict training regimes that often take over their whole lives. Further, it’s not clear that this was much of an intrusion into Houlihan’s ability to live a normal life – the scientists who researched the possibility of pork-based nandrolone contamination hold that the chances of it affecting a drug test are “slim.” Houlihan’s excuse, then, runs the risk of being as unbelievable as Tyler Hamilton’s.

The tension between an athlete’s personal life and anti-doping regulations also bears on perhaps the toughest recent case. Sha’Carri Richardson was banned for a month for testing positive for marijuana, ruling her out of the Olympics. It takes a heart of stone not to feel sorry for her, given that she took the drug while grieving the loss of her mother. Further, marijuana use is legal in many parts of America – so why should it be prohibited for athletes?

For one, plenty of other legal drugs are banned for athletes. But we also need to recognize that many of these competitions are international. A recent statement by WADA noted that some governments had requested it distinguish between in-competition and out-of-competition use of marijuana; cannabinoids are banned only when athletes are competing. So – where it is legal – athletes can enjoy marijuana, if they wish, much of the time. Perhaps WADA is right that this strikes a balance between respecting athletes’ “civil liberties” and respecting the fact that marijuana is illegal in some places.

Together, these cases tell us at least one thing: there is no easy conclusion here. If we want to stop doping – to give athletes like Ryan Murphy the assurance that they are swimming in a “clean pool” – we have to test athletes for drugs, and that will inevitably involve encroaching on their private lives. Perhaps these measures sometimes go too far, but if we want to ban doping, we have to be willing to bear some of these costs.