Utilitarian Justification for Survivor Cannibalism?

The release of the film Society of the Snow – based on the 1972 crash of Uruguayan Air Force Flight 571 into the Andes mountains – reminds us that the fantastic lifeboat-type scenarios philosophers like to discuss are not always mere thought experiments. Left with no other food, the survivors resorted to cannibalism to survive, eating the bodies of dead passengers. Cannibalism is almost universally condemned around the world. As anthropologist Beth A. Conklin explains, “Cannibalism is a difficult topic for an anthropologist to write about, for it pushes the limits of cultural relativism, challenging one to define what is or is not beyond the pale of acceptable human behavior.” Philosophers like to discuss such cases because different moral theories can give different answers, so let's consider the coherence of the utilitarian response to this case.

On October 13, 1972, the flight left Uruguay for Chile with 45 passengers and crew on board. The plane crashed into the Andes mountains, killing twelve immediately, with several more deaths in the days that followed. The survivors were exposed to frigid temperatures, avalanches, and starvation. Once what little food they had was gone, they survived by eating the bodies of passengers and crew who had died. Eventually, two of the survivors, Nando Parrado and Roberto Canessa, hiked for ten days into Chile to find help, and the remaining fourteen survivors were rescued. Although they were all extremely reluctant to eat their fellow passengers (some especially so), they became convinced that it was the only way to survive. They were not initially forthcoming about how they had survived, however, until word of what had happened reached the press.

The question to consider is whether the passengers did anything wrong by surviving in this way. Many passengers were Catholic and worried for their souls. Kantian deontology would likely condemn any such practice as a violation of the humanity principle – treating another person merely as a means to an end. Utilitarianism, on the other hand, seems far more willing to consider the finer points of such cases without issuing a blanket ban. According to Mill's utility principle, “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.” On which end of the spectrum do the survivors' actions fall?

The answer might seem straightforward. The survivors suffered distress at having to resort to cannibalism, but probably nothing like the pain of starvation. Also, by eating the bodies of the dead, the survivors were able to live, thus, presumably, creating greater happiness than the alternative, in which they would all perish. Another important point is that no one needed to sacrifice their life for the others; the people who were eaten were already dead and thus could feel no pain or pleasure. So, by sustaining their own lives, the survivors satisfied the greatest happiness principle and were morally justified in their actions. Perhaps as a comfort (though probably more a worry), we might think that even if there had been a lack of bodies and some of the survivors had needed to sacrifice their own lives for the others, this too would have been permitted, since it would have created the most pleasure at the least cost in pain.

However, this is too narrow a way to consider the case, for we are asked to weigh what will generate the greatest happiness for the greatest number of people. What about the families of those who were eaten, and everyone else on the planet who might be revolted at what took place? If we consider their utility and the potential pain they might experience upon hearing that their loved ones were eaten, would that be enough to tip the scales? We would also need to consider the families of the survivors and the pleasure they would experience, as well as the public at large, which was initially horrified. If the public's reaction to an incident like this were broadly negative, would the survivors have to sacrifice their lives by starving for the greater good of the public back home?

According to Mill, “the thoughts of the virtuous man need not on these occasions travel beyond the particular persons concerned,” for,

The occasions on which any person (except one in a thousand) has it in his power to do this on an extended scale, in other words, to be a public benefactor, are but exceptional…. in every other case, private utility, the interest or happiness of some few persons, is all he has to attend to.

In other words, Mill tells us that we do not need to consider broader public utility beyond the people directly involved in a situation unless there is concern that someone's rights are being violated.

Does the public have a right to be free of cannibals? Did the survivors violate the rights of the family in eating their loved ones? Typically, something like this would be a violation of the law, but is this equivalent to a violation of a right? No one was prosecuted for violating the law. Perhaps the right is a more informal expectation that, as a general rule, condoning cannibalism works against the collective good – just as lying simply for expediency’s sake cannot be justified since it works to undermine overall human trust.

But is there really no reasonable room to say that an exception should be made in such an unusual case? Certainly Mill is willing to make exceptions to the general prohibition against lying in cases where withholding a fact would save an individual from great evil and there are no other means. Likewise, the survivors could be saved from a great evil only by engaging in a practice that both they and most others morally reject. But does this mean that such an exception is justified in the manner Mill suggests? The question now becomes whether greater utility is preserved by carving out an exception to the prohibition on cannibalism in this particular case, or whether such an exception runs afoul of other significant utilities we wish to preserve. This is a complicated value judgment, not far, perhaps, from where we began. Mill remains adamant that,

if the principle of utility is good for anything, it must be good for weighing these conflicting utilities against one another, and marking out the region within which one or the other preponderates.

But that remains a contentious process. Is our inability to land on a definitive judgment a failure of the moral theory, or a testament to the thorniness of this particular case? Moral theories can be helpful in framing the issue and thinking critically about what we value, but they may not be as action-guiding as they sometimes appear to be on the surface, especially when hypotheticals come to life.

FTX, Effective Altruism, and Ends Justifying Means

Until a week ago, FTX (short for “Futures Exchange”) was one of the largest and most respected cryptocurrency exchanges in the world. Then, in spectacular fashion, it all collapsed.

The collapse didn’t just wipe out the billions of dollars that users had invested in the platform. It also wiped out the fortune and reputation of FTX’s billionaire CEO and philanthropist, Sam Bankman-Fried. And because of Bankman-Fried’s close ties with effective altruism, a particular kind of philanthropy championed prominently by Oxford moral philosopher William MacAskill, the shockwaves of FTX’s collapse have been far reaching.

Effective altruism is a movement with roots in moral philosophy. In 2011, when MacAskill was a graduate student in philosophy at Oxford, he co-founded the organization 80,000 Hours. The name is taken from an estimate of the number of working hours a person will have over the course of their career. Its goal is to advise people on how to make the biggest impact in their careers in addressing the world's most pressing problems. In practice, the advice is often to earn as much money as possible and then donate that money to causes that are effective at doing good. MacAskill himself describes the movement as follows:

The effective altruism movement consists of a growing global community of people who use reason and evidence to assess how to do as much good as possible, and who take action on this basis.

If you want to do as much good as possible, there are two main things to think about: (1) what distribution of resources makes the biggest difference, and (2) how do we get a lot of resources to distribute? On the second question, effective altruists have advised each other and outsiders to pursue careers that will generate significant financial resources. On the first question, effective altruists have advised others to give (and have, themselves, given) to causes such as mosquito nets for areas subject to malaria (because the cost of the nets is so low compared to their life-saving potential), and to efforts at minimizing global risk, such as pandemic preparedness, AI risk mitigation, and nuclear nonproliferation (because in these cases, the potential upside of lowering risk is so great).

Effective altruism is not an ethical theory per se, but it does have connections to the ethical theory known as utilitarianism. According to utilitarianism, the sole way to evaluate actions morally is by their consequences. For each action you could perform, you consider the total set of consequences for all people, present and future. You are then morally obligated to choose the action that produces the best overall outcome. What makes an outcome best? That is a question about which there is disagreement, but everyone agrees that pain and suffering are bad, and that happiness and pleasure are good. On this (overly) simple view, the action with the best outcome is just the one with the best balance sheet: total happiness minus total pain.
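
To make the balance-sheet idea concrete, here is a minimal Python sketch of the (overly) simple view just described. The actions and the happiness/pain numbers are invented purely for illustration, not drawn from any real utilitarian calculus.

```python
# A toy version of the simple utilitarian balance sheet described above:
# score each available action by total happiness minus total pain across
# everyone affected, then choose the action with the best overall outcome.

def utility(consequences):
    """Total happiness minus total pain over everyone affected."""
    return sum(happiness - pain for happiness, pain in consequences)

# Each action maps to (happiness, pain) pairs, one per person affected.
# These numbers are made up for illustration.
actions = {
    "keep the money":   [(5, 0)],                    # only you benefit
    "donate the money": [(0, 2), (8, 0), (8, 0)],    # small cost to you, big gains to others
}

best = max(actions, key=lambda name: utility(actions[name]))
print(best)  # -> 'donate the money' on these numbers (14 vs. 5)
```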

One key feature of utilitarianism is that it doesn't rule out any kind of action categorically. Lying, stealing, and even murder could, in certain situations, yield the overall best consequences. In such cases, utilitarianism says that's what you ought to do. The ends justify the means.

If you apply utilitarianism to questions about philanthropy or about what career you should pursue, you get something that looks a lot like effective altruism. That's not to say that effective altruism just is utilitarianism. Effective altruism isn't a single unified theory, so it isn't really in the same category as utilitarianism. But if effective altruists are those who “use reason and evidence to assess how to do as much good as possible, and who take action on this basis,” then it is going to share some features with utilitarianism – in particular, that (in some cases) the ends might justify the means. When an effective altruist advises someone to take a career in finance so that he might make a lot of money that can then be donated to worthwhile causes, this advice is grounded in the idea that working in finance is simply a means to doing good. And for that reason, such a career trajectory is morally justifiable. That doesn't mean that an effective altruist has to say that stealing is okay if it leads to the best consequences. But it does mean that the effective altruist is engaged in the same style of argument.

Now, in 2011, just as MacAskill was founding the effective altruism movement, he met Sam Bankman-Fried, then an undergraduate at MIT. According to a New York Times article describing this meeting, Bankman-Fried was interested in utilitarianism and talked with MacAskill about working to promote animal welfare. MacAskill suggested that Bankman-Fried instead pursue a high-earning career and then donate to causes that could do the most good. The suggestion was clearly taken to heart. Bankman-Fried pursued a career in finance, and in 2019 founded the cryptocurrency exchange FTX, which allowed users to buy and sell cryptocurrencies like Bitcoin or Ethereum much as one might buy stocks on the New York Stock Exchange. Up until a few weeks ago, his net worth was valued at around $10.5 billion, which he had promised to give away in the spirit of effective altruism. He had, in fact, already given generously and had set up the FTX Foundation, which promotes and supports various effective altruist projects. One prominent part of the foundation is the FTX Future Fund, which makes grants to nonprofits and individuals who are helping to make positive and long-lasting change in the most effective ways. MacAskill, until his recent resignation, was on the team that decided how to distribute FTX Future Fund grants.

That brings us to the surprising events of the past two weeks. During the week of November 7, a flood of FTX users started to withdraw their funds, creating what was essentially a bank run. The details of what caused this are still emerging, but it was precipitated by the revelation that a hedge fund also run by Bankman-Fried, Alameda Research, was closely (too closely?) entangled in FTX's finances. By November 9, FTX had suspended withdrawals. By November 11, FTX and its affiliates had declared bankruptcy.

During that same week, MacAskill and others resigned from their roles managing the FTX Future Fund. In a public letter they wrote:

We are now unable to perform our work or process grants, and we have fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund.

We don’t yet know for sure whether moral or legal norms were violated by Bankman-Fried and others at FTX, but MacAskill seems to have suspicions.

On Twitter, MacAskill responded to the possible bad behavior at FTX as follows:

For years, the EA [Effective Altruism] community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations. A clear-thinking EA should strongly oppose “ends justify the means” reasoning.

This particular response by MacAskill struck some as odd. After all, effective altruists, by MacAskill's own lights, “use reason and evidence to assess how to do as much good as possible, and who take action on this basis.” If reason and evidence suggest that violating common-sense moral constraints will lead to the most possible good, then why shouldn't an effective altruist take action on that basis? Seen in this light, what was wrong about Bankman-Fried's actions at FTX is not that he violated common-sense moral constraints (if it is true that he did), but that he did so ineptly, which left him without significant resources to donate.

MacAskill hasn’t explained his thinking on this particular case publicly, but there are things that an effective altruist could say in response. One prominent response is that often those who violate common-sense moral constraints “for the greater good” eventually end up getting found out and doing more damage to the cause they were championing. In this way, “ends justify the means” reasoning is typically counterproductive. So, the evidence suggests that violating common-sense moral constraints will not generate as much good as possible. Effective altruists, then, have a case for saying that common-sense moral constraints shouldn’t be violated: it just doesn’t work out.

Is that a compelling response? Perhaps. But notice that this line of thought locates the wrongness of violating common-sense moral constraints not in the violations themselves, but rather in the way that others will perceive such violations. Knowing that, it is not so hard to convince oneself that violating a constraint in a particular case is not really so bad, especially if it looks like one can keep things secret. This suggests that even if the response by the effective altruist above is philosophically compelling, it may be one that it is dangerous to give voice to. In the end, perhaps the best way to be an effective altruist is to not talk about effective altruism.

Why Speciesism Is Not a Prejudice

Despite some notable dissenters, it has become a near-article of faith in applied ethics that “speciesism” — giving greater moral consideration to one individual or group than to another based merely on their membership in a certain species — is a prejudice indistinguishable from racism, sexism, and other forms of bigotry. Daniel Burkett succinctly states the dominant view when he writes that the argument that the suffering of animals counts for less “simply because they are animals” is “the same (very bad) rationale that justifies” these discredited prejudices.

But the rationale for speciesism is different in key respects from that for racism or other forms of bigotry.

The typical justification for racism consists of two claims. First, it is claimed that some phenotypic trait — in this case, skin color — maps onto, or is at least a reliable indicator of, some other characteristic. Second, it is held that the latter characteristic determines, or is at least relevant to, the degree of moral consideration to which an individual is entitled. The first is an empirical claim, while the second is a moral claim. Both claims may be false, but both need not be false for racism to count as a prejudice. For example, in the nineteenth century there was widespread agreement among white scientists that African-Americans were impervious to pain — in effect, that they were less sentient than whites. Today, almost all moral philosophers agree that sentience is, if not the sole basis for moral consideration, then at least one of the main ones. Thus, those who used racist science to justify differential treatment of African-Americans were not mistaken in focusing on sentience as a characteristic relevant to moral consideration. Rather, their racism was a prejudice because it rested on a false and unjustified empirical belief: that African-Americans have “duller sensibilities” than whites.

This analysis of racism suggests that there are actually two kinds of justification for speciesism.

The first, mirroring the typical rationale for racism in its basic structure, is that species membership maps onto or is a reliable indicator of some other characteristic, and this characteristic is relevant to moral consideration. Call this justification “Empirical Speciesism.” The second is that species membership itself is relevant to moral consideration. Call this justification “Categorical Speciesism.” Either justification differs from the typical rationale for racism in key respects. First, the empirical claim in Empirical Speciesism need not be false or unjustified. For example, the Empirical Speciesist might claim that membership in the species Homo sapiens maps onto enhanced sentience. That may very well be true, and even if it is false we may be justified in believing it. Second, Categorical Speciesism does not rest on any empirical claim. Thus, neither Empirical Speciesism nor Categorical Speciesism makes speciesism a prejudice on a par with racism. Philosophers who use that analogy as a way to dismiss speciesism out of hand are simply mistaken.

But perhaps what philosophers have in mind when they compare speciesism to racism is racism justified in a manner analogous to Categorical Speciesism. Instead of partially relying on an empirical claim, this justification for racism simply asserts that skin color is the morally relevant characteristic. The anti-speciesist argument would then be that both justifications are erroneous for similar reasons: neither species membership nor skin color is a morally relevant characteristic.

What justifies our confident conclusion that skin color itself is not a morally relevant characteristic? It can only be that this claim does not cohere with our other settled moral judgments. For example, everyone, including racists, believes that very similar phenotypic traits — for example, eye color or hair color — are morally irrelevant. Skin color, a superficial phenotypic trait, differs markedly from other characteristics everyone agrees are morally relevant, such as sentience. In light of these judgments, it seems arbitrary to hold that skin color is morally relevant.

If a white racist's friends and family woke up one morning with brown skin, it is doubtful that the racist would consider this sufficient reason to treat them differently. This tends to show that the racist is either an Empirical Racist or that his beliefs are simply incoherent.

But unlike Categorical Racism, Categorical Speciesism coheres fairly well with our other settled moral judgments. There are no other characteristics that are suitably similar to species membership and that we generally hold to be morally irrelevant. Species membership is not a superficial phenotypic trait: it is part of an individual's biological essence. For most people, if their friends and family woke up one morning transformed into cockroaches — not cockroaches with human minds, just cockroaches — that would give them sufficient reason to treat them differently. Granted, we seem to have strong intuitions that membership in the species Homo sapiens is not necessary for moral consideration — even the strong moral consideration to which humans are thought to be entitled. Any given episode of Star Trek suggests as much. But it does not follow that membership in that species is not relevant to moral consideration: for example, it may still be sufficient for it. In other words, while the argument that insects are not entitled to consideration because they are not human may fail, the argument that humans are entitled to consideration because they are human may still succeed. In addition, species membership may justify differential treatment of two individuals alike in all respects except their species — for example, Vulcans and humans.

The upshot of my argument is not that speciesism is justified. Rather, it is that it cannot be easily dismissed as belonging to the same category as racism, sexism, and other forms of bigotry. When Peter Singer popularized this argument in Animal Liberation, he may have done a tremendous amount of good by calling attention to the morally relevant characteristics that animals and humans share. But as the sometimes slipshod reasoning in certain seminal Supreme Court civil rights opinions demonstrates, there is no guarantee that moral progress will be grounded in sound arguments.

Should You Outsource Important Life Decisions to Algorithms?

When you make an important decision, where do you turn for advice? If you're like most people, you probably talk to a friend, loved one, or trusted member of your community. Or maybe you want a broader range of feedback, so you pose the question to social media (or even the rambunctious horde of Reddit). Or maybe you don't turn outwards, but instead rely on your own reasoning and instincts. Really important decisions may require that you turn to more than one source, and maybe more than once.

But maybe you’ve been doing it wrong. This is the thesis of the book Don’t Trust Your Gut: Using Data to Get What You Really Want in Life by Seth Stephens-Davidowitz.

He summarizes the main themes in a recent article: the actual best way to make big decisions when it comes to your happiness is to appeal to the numbers.

Specifically, big data: the collected information about the behavior and self-reports of thousands of individuals just like you, analyzed to tell you who to marry, where to live, and how many utils of happiness different acts are meant to induce. As Stephens-Davidowitz states in the opening line of the book: “You can make better life decisions. Big Data can help you.”

Can it?

There are, no doubt, plenty of instances in which looking to the numbers for a better approximation of objectivity can help us make better practical decisions. The modern classic example that Stephens-Davidowitz appeals to is Moneyball, which documents how analytics shifted evaluations of baseball players from gut instinct to data. And maybe one could Moneyball one’s own life, in certain ways: if big data can give you a better chance of making the best kinds of personal decisions, then why not try?

If that all seems too easy, it might be because it is. For instance, Stephens-Davidowitz relies heavily on data from the Mappiness project, a study that pinged app users at random intervals to ask them what they were doing at that moment and how happy they felt doing it.

One activity that ranked fairly low on the list was reading a book, scoring just above sleeping but well below gambling. This is not, I take it, an argument that one ought to read less, sleep even less, and gamble much more – partly because there's more to life than momentary feelings of happiness, and partly because it just seems like terrible advice. It is hard to see exactly how one could base important decisions on this kind of data.
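
For what it's worth, the aggregation behind a ranking like this is easy to sketch. The following Python toy, using invented records rather than real Mappiness data, averages momentary happiness reports by activity, which is roughly the kind of summary such rankings rest on:

```python
# Experience-sampling aggregation, sketched: average self-reported momentary
# happiness by activity. The records below are invented for illustration.
from collections import defaultdict

# Each ping: (what the user was doing, how happy they felt, on a 0-100 scale).
pings = [
    ("gambling", 80), ("gambling", 74),
    ("reading", 58), ("reading", 62),
    ("sleeping", 55), ("sleeping", 57),
]

scores = defaultdict(list)
for activity, happiness in pings:
    scores[activity].append(happiness)

ranking = sorted(scores, key=lambda a: sum(scores[a]) / len(scores[a]), reverse=True)
print(ranking)  # -> ['gambling', 'reading', 'sleeping'] on this toy data
```

The trouble the article points to is not with arithmetic like this, but with treating such averages as advice.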

Perhaps, though, the problem lies in the imperfections of our current system of measuring happiness, or any of the numerous problems of algorithmic bias. Maybe if we had better data, or more of it, then we’d be able to generate a better advice-giving algorithm. The problem would then lie not in the concept of basing important decisions on data-backed algorithmic advice, but in its current execution. Again, from Stephens-Davidowitz:

These are the early days of the data revolution in personal decision-making. I am not claiming that we can completely outsource our lifestyle choices to algorithms, though we might get to that point in the future.

So let’s imagine a point in the future where these kinds of algorithms have improved to a point where they will not produce recommendations for all-night gambling. Even then, though, reliance on an impersonal algorithm for personal decisions faces familiar problems, ones that parallel some raised in the history of ethics.

Consider utilitarianism, a moral system that says one ought to act in ways that maximize the good, for whatever we think qualifies as good (for instance, one version holds that the sole or primary good is happiness, so one should act in ways that maximize happiness and/or minimize pain). The view comes in many forms but has remained a popular choice of moral system. One of its major benefits is that it provides a determinate and straightforward way (at least in principle) of determining which actions one morally ought to perform.

One prominent objection to utilitarianism, however, is that it is deeply impersonal: when it comes to determining which actions are morally required, individual people drop out of the picture, since what's important is just the overall increase in utility.

That such a theory warrants a kind of robotic slavishness to calculation produces other unintuitive results: when faced with moral problems, one is perhaps better served by a calculator than by actual regard for the humanity of those involved.

Philosopher Bernard Williams thus argued that these kinds of moral systems involve “one thought too many.” For example, if you were in a situation where you needed to decide which of two people to rescue – your spouse or a stranger – one would hope that your motivation for saving your spouse was simply that they were your spouse, not that they were your spouse and the utility calculations worked out in favor of that action. Moral systems like utilitarianism, says Williams, fail to capture what really motivates moral actions.

That’s an unnuanced portrayal of a complex debate, but we can generate parallel concerns for the view that we should outsource personal decision-making to algorithms.

Algorithms using aggregate happiness data don’t care about your choices in the way that, say, a friend, family member, or even your own gut instinct does.

But when making personal decisions we should, one might think, seek out advice from sources that are legitimately concerned about what we find important and meaningful.

To say that one should adhere to such algorithms also seems to run into a version of the “one thought too many” problem. Consider someone who is trying to make an important life decision, say about who they should be in a relationship with, how they should raise a child, what kind of career to pursue, etc. There are lots of different kinds of factors one could appeal to when making these decisions. But even if a personal-decision-making algorithm said your best choice was to, say, date the person who made you laugh and liked you for you, your partner would certainly hope that you had made your decision based on factors that didn’t have to do with algorithms.

This is not to say that one cannot look to data collected about other people’s decisions and habits to try to better inform one’s own. But even if these algorithms were much better than they are now, a basic problem would remain with outsourcing personal decisions to algorithms, one that stems from a disconnect between meaningful life decisions and impersonal aggregates of data.

COVID and Climate Change: Taking the Long-Term Seriously

Amid the ongoing COVID-19 pandemic, world leaders are assembling in Glasgow for COP26, the UN’s climate change conference. Both the pandemic and global warming are powerful reminders that the choices we make can have consequences that continue to unfurl over decades and centuries. But how much should we care about these hard-to-predict long-term consequences of our actions? According to some, so-called moral “longtermists,” we ought to care a great deal. Others, however, have called longtermism “the world’s most dangerous secular credo.”

COVID, climate change, and the long-term impact of our choices

The coronavirus now appears to be endemic. It is likely to continue to circulate across the globe indefinitely, causing more and more human suffering, economic damage, and disruption to our lives. The total sum of harm an endemic virus can cause is theoretically boundless. And yet, if China had better regulated its meat markets or its bio-labs (depending on your preferred origin theory), it would likely have prevented the outbreak entirely. This failure, in one place at one time, will have significant long-term costs.

The headline ambition of COP26 is for nations to commit to specific plans for achieving net zero (carbon and deforestation) by the middle of the century. Whether or not these talks are successful could have a profound long-term impact. Success could put humanity back onto a sustainable trajectory. We might avoid the worst effects of climate change: biodiversity collapse, flooding, extreme weather, drought, mass famine, mass refugee movements, possible population collapse, etc. Taking effective action on climate change now would provide a huge benefit to our grandchildren.

But the comparison between climate action and inaction does not stop there. As well as helping our grandchildren and great-grandchildren, the benefits of effective climate action now would likely continue to snowball deep into the next century. Instead of our great-grandchildren needing to devote their resources and efforts to mitigating and reversing the damage of climate change, the twenty-second century might instead be spent in pursuit of other goals — eliminating poverty, making progress on global justice, and deepening our understanding of the universe, for example. Progress on these goals would, presumably, generate their own positive consequences in turn. The good we can achieve with effective climate action now would continue to accumulate indefinitely.

Commitment to taking the long view

Both COVID and climate change make a strong intuitive case for moral “longtermism.” Longtermists think that how things go in the long-term future is just as valuable, morally speaking, as what happens in the near-term future. If you can either prevent one person from suffering today or two tomorrow, the longtermist says you morally ought to prevent the two from suffering tomorrow. But if you also had the option of preventing three people from suffering in a million years, they say you should do that instead. It doesn’t matter how far events are from us in time; morally, they’re just as significant.
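
The longtermist's temporal neutrality amounts to applying a discount rate of zero to future suffering. A small Python sketch, using the numbers from the example above (everything else is an invented assumption), shows how the choice flips depending on that one parameter:

```python
import math

DAYS_PER_YEAR = 365.25

def discounted_value(people, days_from_now, annual_rate=0.0):
    """People spared from suffering, discounted (continuously) by temporal distance."""
    years = days_from_now / DAYS_PER_YEAR
    return people * math.exp(-annual_rate * years)

options = {
    "one person today": (1, 0),
    "two people tomorrow": (2, 1),
    "three people in a million years": (3, 1_000_000 * DAYS_PER_YEAR),
}

for rate in (0.0, 0.01):  # temporal neutrality vs. a modest 1% annual discount
    best = max(options, key=lambda o: discounted_value(*options[o], annual_rate=rate))
    print(f"discount rate {rate:.0%}: prevent {best}")
# At a zero rate the million-year option wins (3 > 2 > 1), as the longtermist
# holds; at any positive rate, suffering that remote counts for almost nothing.
```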

The second part of the longtermist view is that we can influence the long-term future with our choices today. Which long-term future occurs, longtermists argue, depends on what humanity does in the next century. And the stakes are high. There are possible futures in which humanity overcomes the challenges we are faced with today: ones in which, over millennia, we populate the galaxy with trillions of wonderful, fulfilled lives. There are also possible futures in which humanity does not even survive this century. There is, in other words, a very valuable possibility — in moral philosopher Toby Ord's words, a “vast and glorious” version of the future — that's worth trying to make real.

A catastrophic future for humanity is not a particularly remote possibility. Ord, who studies existential risk, sees the next century as a particularly dangerous one for humanity. The risks that concern him are not just the cosmic ones (meteorite strikes, supernova explosions) or the familiar ones (nuclear war, runaway global warming, a civilization-collapsing pandemic); they also include unintended and unforeseen consequences of quickly evolving fields such as biotech and artificial intelligence. Adding these risks together, he writes, “I put the existential risk this century at around one in six.” Humanity has the same odds of survival as a Russian roulette player.

The cost of failing to prevent an existential catastrophe (and the payoff of success) is incredibly high. If we can reduce the probability of an existential risk occurring (even by a percentage point or two), longtermists claim that any cost-benefit analysis will show it’s worth taking the required action, even if it incurs fairly significant costs; the good future we might save is so incredibly valuable that it easily compensates for those costs.
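
In expected-value terms, the claim is just arithmetic. Here is a back-of-the-envelope Python sketch in which every number is an invented placeholder, not an estimate anyone has defended:

```python
# If the long-term future is valued highly enough, even a one-percentage-point
# cut in extinction risk swamps an enormous present-day cost. All numbers here
# are stipulations for illustration only.
future_value = 1e15      # value assigned to a "vast and glorious" future
risk_reduction = 0.01    # extinction probability reduced by one percentage point
cost_of_action = 1e9     # present-day cost of the mitigation effort

expected_benefit = risk_reduction * future_value  # 1e13
print(expected_benefit > cost_of_action)          # True, by a factor of 10,000
```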

But, for whatever reason, reducing the probability of improbable catastrophes does not rise to the top of many agendas. Ord notes that the Biological Weapons Convention, the body that polices bioweapons around the globe, has an annual budget of just $1.6m, less than the average turnover of a McDonald's restaurant. As Ord explains this strange quirk in our priorities, “Even when experts estimate a significant probability for an unprecedented event, we have great difficulty believing it until we see it.”

Even short of generating or mitigating existential risks, the choices we make have the potential to put the world on different trajectories of radically different value. Our actions today can begin virtuous or vicious cycles that continue to create ever-greater benefits or costs for decades, centuries, or even millennia. So besides thinking about how we might mitigate existential risks, longtermists also claim we need to give more thought to getting onto more positive trajectories. Examples of this kind of opportunity for “trajectory change” include developing the right principles for governing artificial intelligence or, as COP26 is seeking to achieve, enacting national climate policies that will make human civilization ecologically sustainable deep into the future.

Challenges to longtermism

Last week, Phil Torres described longtermism as “the world’s most dangerous secular credo.” A particular worry about longtermism is that it seems to justify just about any action, no matter how monstrous, in the name of protecting long-term value. Torres quotes the statistician Olle Häggström who gives the following illustration:

Imagine a situation where the head of the CIA explains to the U.S. president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken [the longtermist] Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders. 

Longtermism, the critics argue, entails that it's morally permissible, perhaps even morally obligatory, to kill millions of innocent people to prevent a low-probability catastrophic event. And this can't be right; the view must be false.

But does Häggström’s thought experiment really show that longtermism is false? The president launching such a strike would presumably raise the risk of triggering a humanity-destroying global nuclear war. Other countries might lose faith in the judgment of the president and may launch a preventative strike against the U.S. to try to kill this madman before he does to them what he did to Germany. If this probability of catastrophic global nuclear war would be raised by any more than one-in-a-million, then longtermism would advise against the president’s strike on Germany. This is to say that if the president were a longtermist, it’s at least highly debatable whether he would order such an attack.
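
The structure of this reply fits in a couple of lines. In the sketch below, both figures are assumptions chosen only to mirror the reasoning above:

```python
# The strike removes a one-in-a-million chance of doomsday, but if it raises
# the chance of a humanity-destroying nuclear war by any larger amount, the
# longtermist calculus comes out against it. Both figures are assumptions.
p_doomsday_averted = 1e-6   # chance the madman's weapon would have worked
p_war_added = 2e-6          # assumed added risk of global nuclear war

print("strike lowers existential risk:", p_doomsday_averted > p_war_added)  # False
```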

Of course, we can modify Häggström’s case to eliminate this complication. Imagine the chance of the madman succeeding in blowing up the world is much higher — one-in-two. In such a case, longtermism would likely speak in favor of the president’s nuclear strike to protect valuable possible futures (and the rest of humanity). But it’s also a lot less clear that such an act would be morally wrong compared with Häggström’s original case. It would be terrible, tragic, but perhaps it would not be wrong.

Maybe the real risk of longtermism is not that it gives us the wrong moral answers. Maybe the criticism is based on the fact that humans are flawed. Even if it were true that longtermism would rule out Häggström's nuclear attack on Germany, the theory still seems to place us in a much riskier world. Longtermism is an ideology that could theoretically justify terrible, genocidal acts whenever they seem to protect valuable long-term possible futures. And, ultimately, flawed human minds are more likely to perform unconscionable acts if they have an ideology like longtermism with which to attempt to justify their actions.

This last criticism does not show that moral longtermism is false, exactly. The criticism is simply that it’s dangerous for us humans to place such immense faith in our ability to anticipate possible futures and weigh competing risks. If the criticism succeeds, a longtermist would be forced to embrace the ironic position that longtermism is true but that we must prevent it from being embraced. Longtermists would have to push the view underground, hiding it from those in power who might make unwise and immoral decisions based on faulty longtermist justifications. Ironically, then, it might be that the best way to protect a “vast and glorious” possible future is to make sure we keep thinking short-term.

The Witcher and the Lesser of Two Evils

In Netflix’s The Witcher, we are treated to swords, sorcery, sex, and a slightly confusing plotline. More surprisingly, we also get to see an interesting take on an issue from moral philosophy: getting your hands dirty and doing the lesser of two evils.

The protagonist, Geralt, hunts monsters; but he is no mere sword-for-hire, and he will not kill innocent people. In one scene, the malicious Stregobor asks Geralt to kill Renfri, a woman Stregobor believes is cursed and has the power to destroy everybody. (Not to mention, she wants to kill Stregobor.) Stregobor implores Geralt, suggesting that killing her would be “the lesser evil.” Geralt's response is fascinating: “Evil is evil… lesser, greater, middling. It's all the same. If I have to choose between one evil and another, then I prefer not to choose at all.”

Geralt doesn’t want to get his hands dirty. The problem of dirty hands is often presented as a political problem. To take Michael Walzer’s example, should a political leader order the torture of a terrorist in order to find out the location of a series of bombs that will harm innocent citizens? The political leader has to do something bad—something one would much prefer not to do, something with a moral cost—in order to secure a better state of affairs. But these cases need not be so grand, we might make minor moral sacrifices or do things that are a little grubby in order to achieve worthy political goals. And we can find these cases outside of the political sphere: you might have to lie to a friend to save their feelings or ignore somebody’s needs in order to help somebody else who is in a worse position.

One might think that there is no moral cost to doing the lesser of two evils. If you do the best thing, can it really be evil? And shouldn’t we be content to bring about the lesser of two evils, given that it avoids a greater evil?

Bernard Williams thought that it can still be evil and that there can be reasons why we might want to avoid bringing about that evil. Take one of Williams's most famous examples: Jim, an explorer, stumbles into a scene where twenty people are condemned to be executed. Because Jim is an honored guest, the executioner offers to free all but one of the condemned if Jim wants the honor of killing that one; if Jim refuses, all twenty will be killed. The condemned beg Jim to kill one of them. For utilitarians (the specific targets of Williams's critique), it doesn't matter that Jim has to kill someone—what matters is that either twenty people will die, or one will die, and it is far better that only one dies. Williams's point was that it clearly does matter, especially to Jim, that to secure this optimal state of affairs Jim has to kill somebody.

What we do matters to us, and this is often very significant. In doing the lesser of two evils, perhaps we lose something, perhaps we harm someone, perhaps there is something “distressing or appalling”—such as in Jim’s case—or even just a little off about what we do, or perhaps it simply is not the sort of thing done by “honourable and scrupulous people.” The point is that even if it is the best option, the lesser of two evils can still be genuinely evil and we can be averse to doing it.

Ethical theory should leave some space for self-regard and the fact that actions can implicate us in ways that we may deeply wish to avoid. This might help to justify Geralt’s position: he would rather not choose, because if he chooses, he is forced to do evil and get his hands dirty. Still, in Jim’s case, Williams thinks that Jim should get his hands dirty; Williams’s point is that our involvement matters, it is not the stronger claim that we are always justified in keeping our hands clean.

But Geralt takes this to an extreme: he recognizes the lesser evil, but he’ll do all that he can to avoid doing it himself. Evil is evil, and he prefers not to choose at all. But this means that Geralt would allow a greater evil to take place, rather than commit a lesser or a middling evil himself. There is something noble about this, but there is also something distastefully self-regarding: in refusing to bring about evil in order to prevent greater evils, Geralt insulates himself from what happens in the world. He shows that, to some extent, he doesn’t care what happens to people, as long as he isn’t involved.

But Geralt’s position is not just self-regarding, it is unrealistic. Geralt doesn’t have the luxury of not choosing at all; the greater evil, if Stregobor is right, is not trying to kill Renfri. By not choosing, Geralt chooses the supposedly-greater evil. Williams was keen to emphasize this: sometimes whatever we do might be evil. If Jim turned down the chance to shoot one of the condemned, they would all die; if Walzer’s political leader refused to order the torture of the terrorist, innocent citizens would die. Even if there is something noble about Geralt’s desire to avoid getting his hands dirty, sometimes he simply might not have the luxury of choosing not to choose. And when he realizes that he must choose, he might be less committed to the idea that evil—lesser, greater, middling—is all the same.

Santa Clarita Diet and Moral Imperfectionism

Note: This piece contains spoilers.

At first glance, Santa Clarita Diet appears to be just another light-hearted, zombie-in-suburbia romp. But at its heart, the show rivals other forward-thinking series like Black Mirror and The Good Place in tackling extraordinary ethical scenarios. Santa Clarita Diet even sheds new light on familiar contexts with its “suburban white woman as zombie” conceit.

Drew Barrymore plays Sheila Hammond, a strait-laced, thin-lipped “realator” (her idiosyncratic pronunciation of “realtor” becomes a running gag in the show). We are given glimpses into pre-undead Sheila's personality: a woman who quietly fumes at her abusive boss, blows off fun-seeking neighbourhood moms, and is immune to the appeal of spontaneous sex offered by hopeful spouse Joel (Timothy Olyphant).

This version of Sheila exemplifies the constraint that is iconic of the competing demands of suburban white womanhood. She muses wistfully: “I’d like to be 20% bolder. No, more, 80%. No, that’s too much.” Spoiler alert: things change.

Through a mysterious (and gross) transformation, Sheila becomes undead. Sheila, Joel, and their daughter Abby come to a slow realization of Sheila’s new abilities with the help of Eric Bemis (nerd kid next door) and under the gaze of dispassionate drugstore clerk Ramona, treated as oracle and therapist in turn by the stressed Joel, Abby, and Eric.

Sheila transforms from a constrained personality – someone who was beholden to unspoken rules – to someone who throws herself into life with joy and abandon.

Feminist themes are among the ethical perspectives that pervade the show, particularly through symbolism. Sheila develops her first taste for human flesh, ironically, when a coworker (played by Nathan Fillion) is attempting to coerce her into sex. The show has some on-the-nose moments (Sheila later attacks a misogynist at the point of his declaring victimhood) but avoids heavy-handedness under Barrymore’s tart and gleeful execution. The theme of bodily integrity recurs for woman and man, living and undead. These issues are treated thoughtfully and yet with a light touch.

It is easy to interpret various moments in the story as stages in a cis-woman's life – the bodily fluids excreted by Sheila recall menstruation, treated in many cultures as a transformative moment. The raw power in her subsequent personality recalls duelling conceptions of post-menopausal women: the typically negative Western view, which frames the change in terms of its effects on emotions, libido, and impulse control, and the view typical of other cultures, which treats it as a time of increased freedom and power. The show also deals with ageing: Sheila's body is more prone to deterioration in its undead state, but at the same time she has never been more in touch with her physical energy and gusto. This mirrors the stages of life that women (and men too) can experience as a time of simultaneous inconvenience and liberation.

At first, Joel Hammond struggles with his wife's brute strength (at one point he pleads, “I want Abby to grow up thinking men can kill, too”). But as he himself indicates on the cusp of an unwanted barfight, Joel does not need to prove himself. He has carved out a much-valued life that does not depend on macho posturing. While his mixed emotions are sometimes played for laughs, they reveal his character's fundamental values of open-mindedness, self-awareness, and maturity, which leave us rooting for him and his relationships.

While Sheila initially embarks on an impulse-ridden spree, seeking adventure and smiting problematic people with equal gusto, with Joel's help even zombified Sheila quickly comes to realize that her actions have consequences. This leads us to the central ethical problem: what are the actions of a good person? The twist (and joke), of course, lies in the premise that said person can only survive by eating fresh humans. After breaking one of the most fundamental taboos, why bother with moral niceties? The progression of the episodes' breezy titles reflects the tug-of-war between existing moral imperatives and concessions to these radical circumstances: “We Can't Kill People” to “We Can Kill People”; “We Let People Die Every Day” and “Moral Gray Area.”

The fact is, as the show teases, we deal with issues of moral magnitude all the time. Abby, the Hammonds' normal teen daughter, is emboldened by her parents' biting off more than they can chew. After joining a school environmental club, she decides to blow up a nearby fracking site. Unintended consequences quickly follow, with the FBI investigating her best friend, who now risks decades in prison. But, in a keen moment of speaking truth to power, one of her classmates declares, under the baleful eye of a Skittles-wielding FBI agent, that the real crime is the destruction of the planet.

As moral philosophers like Peter Singer point out, many people across the planet are facing death over situations in which we are all implicated – global inequality and extreme environmental degradation. The Hammonds see their place in a world bigger than themselves. When Sheila and Joel discover the cause of her condition, they hasten at personal risk and effort to eliminate its spread. Abby is a chip off the old block for seeing her role in a world bigger than her private circle.

In this light, the Hammonds’ quandaries and escapades take on a different hue. Rather than eschew morality altogether in an extreme situation, they mostly take care to accomplish the most good and effect the least harm. As such, the Hammonds’ ethic is shaped by utilitarianism. They also regularly exemplify care ethics, by being motivated and informed by relationships with others.

I also use the term “moral imperfectionism” to describe the show's vision because it represents a coherent ethical position, one that can be contrasted with “moral perfectionism” or an “objective,” deontological account of the good. The show emphasizes epistemic uncertainty and the impossibility of perfect decisions in the face of enormous moral stakes, an ethical approach that is existential, humble, and optimistic. At no point do the Hammonds' impossible positions and patchy outcomes lead to the adoption of nihilism or, conversely, the assumption of rigid, unchanging rules. The Hammonds constantly evaluate what they owe to the world: they don't want to be “assholes.” They show their commitment to grow; they depend on input, support, and new information from others and dole out care of their own. They treasure the people in their lives, dead or undead, bipedal or eight-legged. For a show that deals in so much death, it has a lot to say about how to live one's life.

Ethical Concepts in the Age of the Anthropocene

We all know, more or less, that Planet Earth is in trouble: there is an overwhelming scientific consensus that an environmental catastrophe – systemic, complex, and increasingly irreversible – is already underway.

We are facing an unprecedented concatenation of changes to the Earth. Global warming from fossil fuel pollution is causing ice caps to melt and oceans to rise, threatening to inundate many coastal habitats within decades. Climate change is causing more frequent and more extreme weather events in the form of violent storms and severe droughts. Destruction of ecological systems is leading to the collapse of insect and bird populations, which are necessary for the pollination of plants, including human food crops. Oceans are filling up with plastic waste, and toxic synthetic substances can now be found in every part of the world. According to new research, a “biological annihilation” of wildlife in recent decades shows that the sixth mass extinction in Earth's history is underway, and that it is more severe than previously feared.

Some Normative Perspectives on Borders and Asylum Seekers

In recent weeks, President Trump's administration has shocked both Americans and the international community by separating families of migrants and asylum seekers at the southern border. Two-thirds of Americans objected to these practices, many noting the likeness of the forced separations and internment camps to those of the Third Reich. On June 20th, Trump reversed the policy with an executive order. The administration was subsequently ordered by federal judge Dana Sabraw to reunite, within 30 days, the families it had separated. However, reunification will be a complex process.

The detainment and separation of young children from their families struck a chord. Many people recognized egregious moral violations in this practice. Among the protests and national outcry, a movement to abolish ICE has gained traction with support from such public figures as Samantha Bee and recent Senate candidate Chelsea Manning. Manning’s platform went so far as to propose an open border policy. While her Senate bid failed, Manning’s proposal is not so radical as it may appear. Philosophers like Michael Sandel are leading public debates on open borders.

The most common justifications for selective borders come from political realism and arguments for state sovereignty. As political realism is concerned less with ethics than with political theory, I will focus on other, more recent, normative justifications — that is, justifications concerning how one should act — for national borders (and the national interests they presumably protect).

The first normative perspective is communitarianism. Communitarian philosophy arose as a critique of liberalism's abstract individualism. Communitarians emphasize the role of the particular societies in which individuals live. They endorse, with some plausibility, the Hegelian insight that communities accord individuals meaning, identity, relationships, and spheres of action. This philosophical perspective has political cousins in local patriotism, grassroots movements, and community organizing, and can also provide justifications for multicultural theory and policy. Collectivities, including nations, can shape and guarantee spheres of freedom for their inhabitants.

At the same time, there exist debased versions of communitarian discourse which communitarian philosophers would not endorse. The most notorious example was the Nazi “Volksgemeinschaft,” the ideal of a German national community unified by race. Part of Trump's base appears to have adopted a Nazi-lite version of Völkisch theory, articulated in alt-right spokesperson Richard Spencer's “white nationalism.” Trump's well-documented racialization and criminalization of “illegal aliens” resonated strongly with Spencer's white nationalist following.

Charles Mills argued that, in fact, much of modern Western political philosophy is built on a contract for the creation and maintenance of whiteness. He identified in writers such as Kant evidence of a “racial contract”, designed to shape notions of citizenship and statehood exclusively for white access and in defense of white interests, while constructing the domain of the non-white (slaves, First Nations, and other colonized groups). Here in particular we can note the shaky foundations of colonial states’ claims to border sovereignty.

A second explicitly ethical tradition that can be used in defense of selective borders is utilitarianism: promoting the greatest happiness for the greatest number. A utilitarian could argue that a privileged nation should welcome as many refugees or asylum seekers as can be sustained while maintaining the privileged state's quality of life. Alternatively, a utilitarian could argue that the prior suffering or risk of the refugee is so extreme that it is necessarily improved by taking up residence in the new country. In either case, there would generally be an upper limit on the amount of happiness that the utilitarian is able to distribute in a welfare state.

An illustration of the challenge of limited resources appears in the "Lifeboat Ethics" of ecologist Garrett Hardin, who provides a simplified but cogent model of resource-conscious moral reasoning. Picture a lifeboat with capacity for sixty persons. A nautical disaster has occurred, leaving fifty people on board, while one hundred more hapless souls struggle nearby to stay afloat on the ocean’s surface. Should (some of) these souls be taken aboard? If so, why, and under what conditions? How do we decide? Do the ones in the boat have a prior claim over the ones outside it? Hardin’s example illustrates a particular "hard choice" version of ethics for a world sliding into ecological catastrophe: not everyone can be saved. This approach was questioned in later work by Diane Brzozowski (2003), which showed through real case studies that, on some occasions at least, it is possible to save many or all concerned.

Thus far, these defenses might arguably be used to limit immigration to some extent. But are they sufficient to justify limiting havens for refugees and asylum seekers? What about when "Lifeboat Ethics" becomes more than a metaphor, and people’s lives or fundamental freedoms are at stake?

From the standpoint of some major ethical traditions, turning away people who face harm or risk in their home countries is never defensible. In the wake of World War II, the principle of non-refoulement (the international legal prohibition on repatriating refugees and asylum seekers) was established, though it has not been strictly observed by Western states. Non-refoulement derives from the UN Universal Declaration of Human Rights, which enjoins state parties to commit to the dignity of individual lives and fundamental freedoms. "Human rights" discourses are examples of pure deontological ethics, i.e., universalizable laws that cannot be chipped away by expedience. Another example of an entitlements theory (though one based more on the measurement of human welfare than on a legal notion of "right") is the capabilities approach. Initially formulated by Amartya Sen, the capabilities approach was adapted by Martha Nussbaum specifically in response to the limitations of Rawlsian liberal theory, which confines states’ responsibilities to their own borders.

Other ethical perspectives mandate taking responsibility for refugees and asylum seekers on humanitarian grounds. These include the cosmopolitan approach, named for the early Stoic conception of human beings as "citizens of the world" rather than of individual states. Arguably, thinkers like Peter Singer (who discounts geopolitical difference as an ethically relevant factor in deciding whom to help) represent contemporary cosmopolitan ethics. Cosmopolitanism dictates that our obligations to others in need are not restricted to narrow local circles of concern. Another ethical tradition likely to include needy others outside one’s borders is "care ethics." Carol Gilligan, the founder of care ethics, observed that women exhibit a distinctive, relational approach to responsibilities for others and for themselves, an alternative to both deontological and consequentialist ethics narrowly conceived.

One of the most parsimonious ethical theories rests on a single principle: the harm principle. While best known in John Stuart Mill’s articulation, on which one’s range of freedoms is limited by the need to ensure like freedoms for others, it also appears in thinkers such as Pufendorf. A more stringent version of the harm principle is the "negative duty" argument taken up by some ethical philosophers: we have an obligation not to harm others through our actions or our negligence. Thus, for example, Thomas Pogge argues that we have a negative duty not to cause global poverty through our participation in an unjust international economic order. Taken in this broad perspective, there are many circumstances in which Western countries are directly or indirectly responsible for the harms that refugees and asylum seekers face, as Matthew Gibney points out in his book The Ethics and Politics of Asylum (Cambridge 2004). If a country bears an outsized portion of responsibility for global warming by producing the most carbon emissions, it is responsible for the consequences borne by more geographically vulnerable states and for the resulting surge of refugees. Similarly, if one nation covertly installed a dictator or encouraged armed conflict, there is a direct causal relationship between its actions and the harm suffered by those who seek refuge from conflict or persecution. Gibney also argues that responsibility can be partial; Pogge’s case of participation in an unjust economic order that creates winners and victims is one instance.

But let us imagine a case in which the privileged destination state is assumed to be wholly uninvolved in the miseries faced by the refugee or asylum seeker. Is it responsible for the ensuing harm if it turns the harassed individual or family away at the border? By definition, refugees and asylum seekers have few options. The Trump administration’s avowed policy of family separation as deterrence explicitly intends harm: it aims to deter asylum seekers by making their arrival in the US as horrific as the situation they left behind. In this case, the administration is obviously introducing more harm into the lives of individuals and families seeking refuge, and it has justly provoked international outcry. But what are genuinely ethical ways to respond to refugees and asylum seekers, supposing the lifeboat to be limited? Is it time to abandon both the narrowly nationalist and the lifeboat perspectives and recognize that we are all in this together?

This question will only become more serious as Western states contend with the rapidly changing global climate and the effects of the international economic order, with all the political and ecological ramifications of a vastly unequal neoliberal globalism. It’s clear that we need forward-thinking, ethical leadership. Instead of turning the clock back to 1939, we need to confront the challenges of the 21st century critically and openly.

A Journalist Fakes His Own Death. Was His Decision Moral?


Russian journalist Arkady Babchenko was allegedly murdered in Ukraine by hired killers working for Vladimir Putin’s regime, and a picture of his body bathed in blood was publicized. Then, in an astonishing twist, Babchenko appeared at a news conference 24 hours later to announce that he was, in fact, alive and that it had all been a deception.


Why Give $75 Million to Philosophy?


When Bill Miller, a wealthy businessman, recently made a $75 million donation to the philosophy profession—specifically, to the Johns Hopkins philosophy department—philosophers rejoiced in unison, right? Not exactly. Some rejoiced, while others started a debate. Mike Huemer, a philosopher at the University of Colorado, kicked it off in a Facebook post, which was reposted at the What’s Wrong? blog.


Reckoning with the Legacy of Derek Parfit

Philosopher Derek Parfit died on January 1st. Let us hope he will go to heaven. Will he? Parfit, an agnostic, was not much concerned with the existence of heaven or hell. But he did famously argue that, even if such places exist, the person going there would not be the same person who previously died, and thus someone would be punished or rewarded for the deeds of another person. This is deeply unjust, as unfair as sending someone to prison for crimes committed by his identical twin brother.
