
Too Clever for Our Own Good?: Moral Bioenhancement and Existential Harm

image of woman's profile in silhouette with sun behind clouds superimposed on her mind

Knowing things is good. How do you change a tire? What’s the right combination of time and temperature to cook a turkey? Why do we call the mitochondria the powerhouse of the cell? The answers to these questions make our lives easier, enabling us to overcome challenges. But these examples are just the tip of the iceberg. Over time, we’ve not only grown to understand more things about ourselves and the universe around us, we’ve also continued to discover new questions in need of answers.

But with this increase in our collective understanding comes an increase in the risks we pose to ourselves, each other, and, in extreme cases, the Earth itself. This is because each scientific, medical, and technological breakthrough brings opportunities for both benefit and harm. The acquisition of knowledge is an inherently ethical enterprise characterized by what is known as the dual-use dilemma. As defined by Seumas Miller and Michael J. Selgelid:

The so-called “dual-use dilemma” arises in the context of research in the biological and other sciences as a consequence of the fact that one and the same piece of scientific research sometimes has the potential to be used for harm as well as for good.

For example, virology research is good as it means we have a greater understanding of how viruses evolve and spread through a population, enabling the development of societal and medical countermeasures like social distancing and vaccinations. However, if put into the wrong hands, such knowledge can also be used by terrorists and hostile political powers to create devastating viral weaponry or misinformation campaigns. Ultimately, every intellectual step forward brings both the potential for good and ill.

But this potential for risk and benefit has not grown steadily over the centuries; some advances prove more beneficial and some more devastating than others. For example, the creation of the plough revolutionized how we, as a species, farmed, but the negative implications of that technological advancement are, arguably, minimal, or at least indirect.

Today, however, highly destructive technologies seem increasingly common due to our collective intellectual capacity and interconnected world. As such, even small groups can threaten existential harm.

For example, through advancements in genetics, virology, synthetic biology, or multiple other scientific disciplines, a few persons can, in principle, develop an organism or technology with the power to catastrophically ravage the planet. Moreover, with each discovery opening the door for new avenues for inquiry, there is no reason to think that this availability of potentially dangerous knowledge will subside anytime soon.

This leaves us with a problem. Suppose we continue to develop our collective cognitive capacities, enabling the discovery of even more methods through which we can come to harm ourselves or others, either through deliberate action or accident.

In that case, do we also need to enhance our ability to reason ethically to keep pace with this possibility of harm?

Ingmar Persson and Julian Savulescu posed this question in their 2008 article, The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. In it, they argue that moral bioenhancement (MBE) – a biotechnological intervention aimed at enriching ethical decision-making and outcomes – should be developed and distributed to close the gap between humanity’s destructive capabilities and moral faculties. The idea is that our “natural” moral abilities are ill-equipped to deal with the complex and high-stakes world created by humanity’s mental prowess. They note, however, that those most in need of a greater level of ethical understanding are those least likely to take such an intervention willingly; a nefarious actor planning to use a nuclear weapon to start an apocalyptic war isn’t exactly going to be first in line for a morality pill. So, according to Persson and Savulescu, MBE shouldn’t be optional – everyone should have to take it. As they write:

If safe moral enhancements are ever developed, there are strong reasons to believe that their use should be obligatory, like education or fluoride in the water, since those who should take them are least likely to be inclined to use them. That is, safe, effective moral enhancement would be compulsory.

According to them, this is the only way to ensure we can effectively mitigate the risk of existential harm. If left up to individual choice, some persons would inevitably choose not to become morally enhanced. This refusal would, in turn, leave the potential for cataclysmic risk unaffected, and even a tiny chance of catastrophe is too great to be left unaddressed. Much like Russian roulette, even the slightest probability of disaster is reason enough to refuse to play at all. To eliminate the risk of ultimate destruction, every person would need MBE.

Of course, this raises both principled and practical objections.

John Harris expresses the concern that, for MBE to be effective, it would have to prevent us from acting unethically. If it didn’t, it wouldn’t be an effective countermeasure to the harms Persson and Savulescu envision. However, this would mean that the intervention directly prevents us from acting in certain ways and thus inhibits our free will. This possibility worries Harris because, without the ability to act unethically, the virtue of ethical action ceases to exist – you’re not doing right if you have no choice. Vojin Rakić takes this worry even further, exporting it from the individual to the societal level, arguing that MBE would deprive persons of their capacity for collective morality and, ultimately, of a vital aspect of our humanity.

But, as I have argued, perhaps MBE need not be compulsory to be effective, since we develop our behavioral attitudes from those around us.

If most people take MBE willingly, then there’s reason to believe that the unenhanced would act more morally as they would be surrounded by morally aspirational individuals and would be insulated from immorality’s temptation.

Additionally, there’s the political obstacle of simply getting every nation to agree to enact such a program. Given that we argue over seemingly unequivocal matters – like the need to tackle climate change – getting every world leader on board with such a program is practically impossible.

However, these objections don’t necessarily detract from Persson and Savulescu’s observation that our intellectual capacity has outpaced our moral capabilities. Instead, they highlight the difficulties in finding a suitable solution to the problem. Ultimately, if we all behaved more ethically, the world might not be in the precarious situation it is in right now. The rise of fascism, the threat of global warming, the increase in conflicts, and the general breakdown of the established liberal world order may go some way toward convincing skeptics that, while compulsory MBE may not be ideal, it’s preferable to the alternative of widespread, even global, destruction.

Fantastic Beasts and How to Categorize Them

photograph of Niffler statue for movie display

Fantastic Beasts and Where to Find Them is both a film franchise and a book. But the book doesn’t have a narrative; it is formatted like a textbook assigned in the Care of Magical Creatures course at Hogwarts. It’s ‘written’ by Newt Scamander and comes with scribbles from Harry and Ron commenting on its contents.

Before the creature entries begin there is a multipart introduction. One part, entitled “What is a Beast?,” seeks to articulate a distinction between creatures who are ‘beasts’ and those who are ‘beings.’ The text notes that a being is “a creature worthy of legal rights and a voice in the governance of the magical world.” But how do we distinguish between beasts and beings? This is one of the main questions central to the topic of moral status.

So, the intro asks two questions: who is worthy and how do we know? The first question seeks to determine who is in the moral community and thus deserving of rights and a voice. This is a question concerning whether an entity has the property of ‘moral standing’ or ‘moral considerability.’ The second question seeks to identify what properties an entity must have to be a member of the moral community. In other words, how does one ground a claim that a particular entity is morally considerable? We can call this a question about the grounds of moral considerability. It is the main question of the short introduction to Fantastic Beasts:

What are the properties that a creature has to have in order to be in the category ‘beast’ (outside the moral community) or ‘being’ (inside the moral community)?

Attempts to resolve a question of moral considerability confront a particular problem. Call it the Goldilocks Problem. Goldilocks wants porridge that is just right, neither too hot nor too cold. We want definitions of the moral community to be just right and avoid leaving out entities that should be in (under-inclusion) and avoid including entities that should be left out (over-inclusion). When it comes to porridge it is hard to imagine one bowl being both too hot and too cold at the same time. But in the case of definitions of the grounds of moral considerability, this happens often. We can see this in the attempts to define ‘being’ in the text of Fantastic Beasts.

Fantastic Beasts looks at three definitions of the grounds of being a ‘being.’ According to the text, “Burdock Muldoon, Chief of the Wizard Council in the fourteenth century, decreed that any member of the magical community that walked on two legs would henceforth be granted the status of ‘being,’ all others to remain ‘beasts.’” This resulted in a clear case of over-inclusion. Diricawls, Augureys, Pixies, and other creatures were included in the moral community of beings, but should not have been. The text states that “the mere possession of two legs was no guarantee that a magical creature could or would take an interest in the affairs of wizard government.”

What really mattered was not the physical characteristic of being bipedal but the psychological characteristic of having interests. By focusing on the wrong property this definition accidentally included entities that did not belong.

This, of course, recalls the humorous story that Plato once defined a human as a featherless biped, only to have Diogenes show up the next day with a plucked chicken declaring, “Behold! A man.”

At the same time, however, this definition is under-inclusive. Centaurs are entities that could take an interest in the affairs of wizards, but they have four legs and thus are left out. Merpeople also could take an interest in the affairs of wizards, but have no legs and thus are left out. Clearly, this definition will not do.

And it is not surprising that the definition fails. Using a physical characteristic to determine whether an entity will have the right psychological characteristics is not likely to work.

So what is a wizard to do but try to find a property more closely linked to the relevant psychological characteristic. Interests — for example, wants and needs — are often expressed linguistically: “I want chocolate chip cookies”; “I need more vegetables.” This apparently led Madame Elfrida Clagg to define a being as “those who could speak with the human tongue.” But, again, we have an example where the definition is over- and under-inclusive. Trolls could be taught to say, but not understand, a few human sentences and were included in the community but should have been excluded. Once again, the merpeople, who could only speak Mermish, a non-human language, were left out when they should have been included.

In our own world, the focus on language and other activities as proxies for cognitive traits has likewise been used in discussions of the moral status of animals. Attempts to exclude animals from the moral community did, in fact, use speech-use and tool-use as reasons to exclude them. Descartes famously claimed in part V of the Discourse on Method that animals did not use language but were mere automatons. But apes can use sign language, and crows, elephants, otters, and other animals can use tools. So, for many who want to include only humans in the category of ‘being,’ these activity-based definitions turn out to be over-inclusive. And given the incapacity of newborn humans to use language or tools, they would also leave out some humans and so be under-inclusive. Once again, using a non-psychological property (an activity) to identify a psychological property is unsurprisingly problematic.

Apparently, the wizarding world got the memo regarding the problem of these definitions by the 19th century. In 1811, Minister of Magic Grogan Stump defined a being as “any creature that has sufficient intelligence to understand the laws of the magical community and to bear part of the responsibility in shaping those laws.” The philosophical term for this set of capabilities is autonomy, at least in the way Immanuel Kant defined the term.

One way to express Kant’s view is that the morally considerable beings, the beings that could be called ‘persons,’ are those that have the capacity to rationally identify their interests and the will to execute plans to see those interests realized.

Persons are also capable of seeing that others have this capacity and thus rationally adopt rules that limit what we can do to other persons. These are the moral rules that guide our interactions that ground our rights, legal and moral, as well as give us a voice in self- and communal-governance. In other words, the term ‘being’ in Fantastic Beasts is just the text’s term for ‘moral person.’ Furthermore, the relevant psychological characteristic of persons is autonomy as defined by Kant.

There is something questionable about this Kantian view of being-hood or person-hood. On this view, persons need sophisticated cognitive abilities to count as persons. Any entity that lacks the cognitive abilities needed for moral judgment is a non-person and thus wholly outside the moral community. In other words, non-persons are things, have only instrumental value, and can be equated with tools: you can own them and dispose of them without morally harming them. But this definition also excludes human infants and humans with diminished cognitive abilities, whom we do not think of as outside the moral community.

Surely these implications for humans are unacceptable. They would probably be unacceptable to the fictional Newt Scamander as well as to people who fight for animal rights. But the Kantian view is binary: you are a person/being or a beast/thing. Those who find such a stark choice unappealing can and do recognize another category between persons and things. This would be something that has interests, but not an interest in having a voice in governance. These entities are often vulnerable to damaging impacts from the behavior of persons and have an interest in not suffering those impacts, even if they cannot directly communicate them.

So, we need a new set of terms to describe the new possible categories of moral considerability. Instead of just the categories being/person and beast/thing, we can discuss the categories of moral agent, moral patient, and thing.

A moral agent is an entity that meets the Kantian definition of a person. It is an entity who is in the moral community and also shapes it. A thing is something that does not have interests and thus is outside the moral community. But a moral patient is an entity that has interests – specifically, interests against harm and for beneficence – that should be morally protected. Thus, moral patients are members of the moral community, just not governing members. So, Centaurs, Merpeople, and Muggles can all be considered moral agents and thus can, if they so desire, contribute to the governance of the magical community. But even if they don’t want to participate in governance, the magical community should still recognize them as moral patients, as beings who can be impacted by governance and whose interests should therefore be included in discussions of it. The giants, trolls, werewolves in werewolf form, and pixies should at least fall into this category of patient as well. In the human world, infants, young children, and those with cognitive impairment would also fall into this category.

To sum up, then, the text of Fantastic Beasts presents a view similar to Kant’s of the grounds of moral status, but it can be improved upon by recognizing the category of moral patients. Furthermore, Fantastic Beasts clearly supports psychological accounts of the grounds of moral status over physical accounts. In other words, what matters to many questions of identity and morality are psychological properties and not physical properties or behavioral capacities. This is consistent with a theme of the Harry Potter novels where the main villains focus on the physical characteristic of whether an entity has the right blood-status to be part of the wizarding community. In other words, only a villain would solely focus on physical characteristics as a source of moral value.

The Desire for Moral Impotence

photograph of hands tied behind man's back

Richard Gibson and Nicholas Kreuder recently wrote about humans’ morally troublesome desire for control. The prospect of control is, Gibson notes, “intrinsically appealing” to humans, an “incredibly common desire,” concurs Kreuder. Both writers also agree we should be wary of this desire for control. Gibson argues that this desire negatively influences our relationship with nature, while Kreuder argues that it “may leave our interactions with others feeling impoverished and hollow.” I largely agree, but I think there is another equally universal and deep-seated desire that also deserves some consideration — the desire to lack control.

An oft-repeated saying in philosophical ethics is “‘ought’ implies ‘can’.” In other words, if you can’t do something, then there’s no question of whether you ought to do it. Our moral responsibilities only extend as far as our abilities.

Because of this important link between what we ought to do and what we can do, being reminded that something is under our control often also serves as a reminder that it is our responsibility.

The discovery that one has control is often not as joyous and anxiety-relieving an experience as you might expect, given the universal human desire for control Gibson and Kreuder describe. In fact, anger, resentment, and bitterness are all common reactions to being reminded that we are in control of something. We often don’t want control. We yearn for it to have nothing to do with us — to be someone else’s problem.

Many of our responsibilities are, of course, distinctly moral ones. The world is an imperfect place, and we all have the capacity to make it better to some degree. In fact, many of us have the power to make it significantly better. In other words, most of us actually have a morally significant level of control over how the future unfolds.

Let’s take an example. It costs significantly more than most people think to save a life by donating to the most effective charities — about $2,300. But that’s still only about half as much as the average American spends at restaurants each year.

Ask yourself honestly; could you make a few lifestyle changes and afford the $2,300 needed to save a life? If so, how often? Once in your lifetime? Once a decade? Once a year? More?

How does this make you feel? Are you excited to learn or be reminded of your morally significant amount of control over the world? To discover that you (probably) have the radical power to give another human, a person just like you, the gift of life? Speaking for myself, far from feeling elated, I feel guilty and ashamed. My conscience would be clearer if highly cost-effective charities like this simply did not exist — if they did not grant me this ability to meaningfully reshape the world (at least for that one person and their family). Because having that ability means I have that moral responsibility. In my ordinary life, I act in bad faith. I think and act as though I don’t have the power to save lives with moderate charitable donations. For self-serving reasons, I think and act as though I lack control over the world that I actually possess.

In his discussion of Nathan Fielder’s The Rehearsal, Kreuder points out the attractiveness of having more control over our interactions with others. Imagine having more ability to decide how people will respond, being sure that you’re not going to say the “wrong thing.” He suggests this kind of control would provide relief for those “wrecked with anxiety and marred with feelings of powerlessness.” This is certainly a desire I can recognize.

But I can also see the inverse: the desire, in many cases, to have less control in our interactions with others.

Imagine your younger sibling is going off the rails – drinking too much and partying too hard. Their grades are suffering. Your sibling doesn’t listen to your parents, but they look up to you; you know they will listen to you in the end. So you, and only you, can intervene and get them back on track. You can sit them down and have the difficult conversation that neither of you wants to have. In other words, you have a great degree of control over your sibling.

How would you feel about having this kind of interpersonal control? Far from relieving your anxiety, you might feel deeply burdened by it, and the significant responsibility that it entails. It would be understandable to wish that you weren’t in such a potent position, and that someone else was instead. You might even be tempted to deny to yourself that you have such control over your sibling to avoid having to deal with the moral burden.

Rather than the risks that accompany greater interpersonal control, Gibson is concerned primarily with the negative effect that our desire for (often illusory) control has on our relationship with nature. It influences how we approach debates about “designer babies, personalized medicine, cloning, synthetic biology” and his focus, “gene drives.”

Gibson contends that humans actually have much less control than we like to think. In a cosmic sense, I think he is right. But, at least as a collective, humanity is surely in firm control over much of nature, perhaps even too much. Unfortunately, we control the global climate via our CO2 emissions. We control global fish stocks via modern fishing practices. And now, as Gibson explains, we also control which species we want to continue living and which we want to drive to extinction via the emerging technology of gene drives.

With respect to nature, at least the biosphere of Earth, humanity surely has much more control than most of us would think is desirable.

Our catastrophic relationship to nature seems to me less a symptom of our desire to control nature, and more a symptom of our being in a blissful state of denial about just how much control we have.

To be clear, I think Gibson is right to warn against an excessively domineering attitude toward nature, and Kreuder is right to warn against having too much control over our interactions with others. But we should also be on guard against the equally human tendency to find narratives that absolve us of our burdensome responsibilities. If Gibson is right that, fundamentally, “we’re subject to, rather than the wielders of power,” if we can’t really exercise control over the world, then there’s no reason to ask ourselves the tough question — what should we do? Avoiding this question may feel good, but it would be morally disastrous.

Underdefined Terms in the Abortion Debate

photograph of pro-life protest signs in front of Supreme Court

With the Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization, debates about the regulation of abortion will likely become even more contentious in the coming days – a sentence which would have seemed absurd even just a few years ago. State legislatures may now attempt to ban abortion outright, which would likely lead to highly publicized, intense debate on the issue.

My purpose in this and the subsequent article is not to advance a view on abortion or the regulatory limits the state ought to place on it. Instead, my goal is to consider some concepts central to the abortion debate and demonstrate that they are used sloppily in public debates about abortion.

The hope is that, by illuminating these concepts, we may perhaps gain some clarity into why it feels like this debate has made little to no progress in the nearly 50 years since Roe v. Wade.

Indeed, polling suggests that for nearly thirty years a slight majority of Americans have consistently felt abortion ought to be legal in most cases, while those who think abortion ought usually to be illegal have made up somewhere in the high thirties to low forties, as a percentage of those polled.

Before beginning, it may be worthwhile to put my biases on the table. I think that a right to access abortion is implied by a general right to bodily autonomy. However, I am not a legal scholar and cannot discuss in depth what legal rights substantive due process guarantees, or whether we even ought to adopt that standard.

Nonetheless, I find the reasoning in the majority’s opinion deeply troubling. The opinion, authored by Justice Samuel Alito, argues that abortion rights are not explicitly guaranteed by the Constitution. Further, the majority states that a right to abortion is not “deeply rooted in this Nation’s history and tradition,” as the procedure was illegal in most states until the “late 20th century.” However, plenty of other things seem obviously inappropriate for the state to regulate yet fail to pass these criteria; a right to marriage is not named in the Constitution, and interracial marriage was illegal in most states until the mid-20th century, with 16 hold-out states until Loving v. Virginia, a case decided just six years before Roe. It is worth noting, though, that the majority opinion attempts to differentiate abortion from other purported rights grounded in the right to privacy, like interracial marriage, same-sex marriage, and access to contraception. (I will discuss more on this point later.)

Throughout my discussion, I will refer to those who oppose the legalization of abortion as “anti-abortion” advocates, and to those who support it as advocates of “reproductive choice.” This is to avoid the way in which identifiers like “pro-life” may be normatively loaded.

I begin by focusing directly on some concepts which are the heart of debates about the morality of abortion. My next column focuses more directly on overlooked concepts involved with debates about its legality.

Humans & Persons

Generally speaking, the position of anti-abortion advocates gets its support from a very simple argument:

1. Killing is wrong.
2. Abortion kills a zygote, embryo, or fetus.
Therefore, abortion is wrong.

Line 1, while very intuitive, quickly runs into issues. Namely, not all killings are obviously wrong. For instance, most people see no moral problem with killing an animal to eat it, or exterminating unwanted pests in the home.

So, one might specify line 1 by changing it to 1’) Killing a human is wrong. However, even this may still be questioned. 1’ puts significant pressure on the second premise. A zygote is human in the sense that it has human DNA. But as a bundle of cells, it is no more human than, say, a tumor. Embryos and fetuses begin to resemble humans in their physical shape. So perhaps they might meet the criteria for being human, while a zygote does not.

But being a living being that looks human or has human DNA does not seem to grant “moral status” – having standing such that your interests are given equal moral importance to the interests of others.

Being human seems neither necessary nor sufficient for full moral status. Science fiction and fantasy are littered with non-humans who seem to deserve the same moral status as humans; some fans felt aggrieved that Chewbacca did not get a medal at the end of the first Star Wars film.

Further, imagine a hospital in a triage situation. Suppose that a new patient arrives who needs immediate care. Call her patient K. If patient K receives the care she needs, she will make a complete recovery. If not, she will quickly die. However, the only way to free up the resources necessary to treat patient K is to cease treating a different patient. Call her patient M. Patient M has suffered total frontal brain death. Although patient M’s body still performs vital functions like maintaining breath and heartbeat, there is no chance that she will ever regain consciousness or perform higher cognitive functions again.

Should the doctors halt patient M’s treatment to save patient K’s life? Most would think yes.

This is not to say that patient M deserves no consideration. Rather, when patient M’s interests conflict with patient K’s interests, it seems that patient K’s ought to win out. This means that patient K has a higher moral status than patient M.

This example in particular suggests that having full moral status requires more than being a living human being. Namely, it requires having certain psychological capacities – things like the ability to form a self-concept, to think rationally and to develop a plan for your life. This is why beings like Chewbacca deserve the same consideration that we do, despite the fact that they lack human DNA.

While zygotes, embryos, and fetuses are human in the sense that they are members of the human species, they seem to lack these psychological capacities. Thus, they lack what is required for what some call personhood in the moral sense, or moral personhood. Thus, even if their interests deserve some moral consideration, they may not deserve as much consideration as those of a clear moral person – in particular, a pregnant person.

However, it should be noted that some may ground full moral status in things other than possession of certain psychological capacities. For instance, some religiously motivated anti-abortion advocates may claim that a human has full moral status from the moment of conception onward due to the possession of something like a soul. Of course, unless their interlocutors share the belief in a soul, this move may simply bring the discussion to a halt.

“Potential Lives”

So, some anti-abortion advocates might change their rhetoric. They could instead claim that zygotes, embryos, and fetuses have elevated status because of their potential, rather than their current capacities. In criticizing the dissenting opinion, Justice Alito notes that the dissenters do not acknowledge “the States’ interests in protecting fetal life.” Elaborating further on this, he briefly states that comparing the decision in Dobbs to decisions in Griswold, Eisenstadt, Lawrence, and Obergefell relies on a false analogy – abortion “destroys a potential life” but contraception, sodomy, and same-sex marriage do not, claim the majority. (Although it is worth noting that Justice Clarence Thomas, in his concurrence, writes that the court should reconsider its due process precedents, specifically naming Griswold, Lawrence, and Obergefell.)

But this idea of a potential life is quite vague. Eggs in a bird’s nest are potential lives. Unless they are the eggs of an endangered species, the state does not seem to have an interest in regulating our conduct towards them. So surely the majority means potential persons – the idea being that because personhood is of moral significance, having the potential to become a person must also be morally significant.

However, “potential person” is still too vague. Taken on its face, everything from a fertilized egg onward is a potential person. But it need not stop here. All it takes to begin a process that will eventually result in a person is to have a single sperm cell and a single egg cell. So, these two cells may be potential persons. This would lead to some absurd consequences – if the state has an interest in protecting potential persons, then they would have an interest in banning products like contraceptives or procedures like vasectomies. But surely, at least for now, this is not what the Court suggests.

As a result, we may wish to introduce a more fine-grained distinction about potentiality. Stephen Buckle distinguishes between potentiality in two senses – the potential to produce and the potential to become.

A single sperm and egg cell are potential persons in the sense that they may produce a person; they’re the necessary “raw ingredients” to start a process that eventually can result in a person. However, they themselves do not become a person. Instead, they produce other entities that become a person.

The power to become instead deals with what Buckle calls “distinct individuals which preserve their identity over time.” Thus, even a fertilized egg would not have the power to become a person. According to Buckle, preserving identity over time requires having some kind of unifying structure which makes one a distinct individual.

But the exact nature of this identity-preserving unifying structure is an open question. Some might argue that identity is a matter of being the same living organism. Others claim that it is a matter of spatio-temporal continuity: that if we could plot your location over the course of time, the result would be an unbroken four-dimensional "space-time-worm" where each "slice" is your location at a moment in time. I've noted elsewhere that psychological factors play a role in our identity.

Thus, there are multiple candidate answers to the question of what has the power to become a person. If our identity consists in being a human organism, then the power to become a person emerges at the point when the embryo becomes a fetus – after the cells have differentiated, taken on functions, and the overall structure of the organism is set into place. If we are instead space-time-worms, then our identity stabilizes as early as fertilization. But if our identity is a matter of psychology, then we may not develop the power to become persons until much later, perhaps not even until after birth when our experiences start shaping our future attitudes and behaviors.

As a result, the sense in which we call something a “potential life” has great bearing on what policies we should favor. If we mean those things which have the power to produce a person, then what behaviors the state has an interest in regulating would be very far reaching – any time that we are not reproducing as much as possible, we are destroying or at least preventing a potential life. But if we instead mean that a potential life exists only when something has the power to become a person, then we are stuck having to answer the question of what preserves our identity over time before we can answer any questions about whether a fetus has the potential to become a person.

Rethinking “Rethinking the Moral Status Debate”

photograph of boy and girl watching elephant at zoo

To have moral status is to be morally important for your own sake. Dogs, for instance, matter morally in a way that rocks don’t. Dogs have moral status. To have full moral status is to matter morally for your own sake as much as people do. Most people think dogs have moral status, but not full moral status. They think dogs matter morally, but not as much as people. Some philosophers agree with this, while others think this belief is a mere prejudice (“speciesism”), and that all conscious beings have full moral status. I have written about this here before.

Much of the abortion debate centers on the question of whether and when fetuses possess moral status, and/or full moral status. Is, say, an eight-week embryo morally important for its own sake? Is an eight-week embryo as morally important for its own sake as an eight-year-old child? These questions may not automatically settle the abortion debate, but they are clearly important to it.

The relevance of moral status to abortion spurred an interesting recent piece in which Ben Rossi suggests that many of us are thinking about moral status incorrectly. Most of us think moral status is an intrinsic property, but Rossi thinks it is an extrinsic property. An intrinsic property depends on how you are in yourself, while an extrinsic property depends on how other things are. That I have a lap is an intrinsic property. That my cat, Apollo, is currently sitting in my lap is an extrinsic property: if he moved, I would lose the property “having a cat sitting in my lap,” even if nothing about me in and of myself changed. Or, in science, mass is an intrinsic property, while weight is an extrinsic property. If you put me on the moon, I would weigh a lot less, even if I was the same in and of myself.

Most of us think moral status depends on intrinsic properties. We think it depends, say, on whether you are capable of consciousness, or of reasoning, or whether you are a human, etc. Rossi thinks it depends on an extrinsic property: “the moral costs and benefits — principally understood in terms of aggregate welfare, autonomy, and equality — of adopting as a general rule that such-and-such a class of beings possesses strong moral subject-hood” determine whether you have full moral status. I take it we are supposed to accord full moral status to a being if doing so would have the best consequences.

We can see that this makes moral status extrinsic by noting that it means I could gain or lose moral status without changing anything about how I am in and of myself. If hostile aliens threatened to destroy the world unless we ceased treating Prindle Post writers as having full moral status, then I suppose it would be best not to treat Prindle Post writers as having full moral status. In that way, I could lose my moral status without changing how I am in and of myself. Or, to take things back to abortion, suppose we developed cheap artificial wombs, and could safely transplant embryos from pregnant people into these wombs. I suppose that on Rossi's view, since treating embryos as having full moral status would then no longer impose as great a burden on pregnant people, the invention of this device would strengthen the case for embryos having full moral status, without changing anything about embryos themselves.

Rossi acknowledges that his account makes moral status extrinsic, and that this is odd: it does not seem that the aliens issuing this threat could cause me to become less morally important. But since he has already discussed this, I will focus on a different worry for his view. I worry that his view involves a troublesome circularity. In order to know how good or bad an outcome is, we need to know the moral status of the individuals involved. But if the moral status of the individuals involved determines how good or bad the outcome is, then how good or bad the outcome is cannot then determine the moral status of the individuals involved. That's a circle.

Consider, for instance, Rossi's concern for equality. If people were allowed to kill eight-year-olds, our society would not be one that respects equality. On the other hand, if people are allowed to kill hydrangeas, our society might still be one that respects equality. Why is that? Eight-year-olds have full moral status, but we are not giving them the protections given to others who have full moral status. In this way, we fail to respect their equal moral standing. On the other hand, hydrangeas do not have full moral status. Perhaps they have no moral status at all. Accordingly, it is not an offense against equality to deny them the protections given to individuals with full moral status.

In our current society, people (at least in most states) can kill eight-week-old embryos. Is this an offense against equality? To know this, we may need to know whether eight-week-old embryos possess full moral status. If they do, perhaps we are failing to respect their equal moral status by failing to give them the protections we have. If not, then not. But if we need to know their moral status in order to know whether we are satisfying the demands of equality, and if satisfying the demands of equality is one of the things that determines how good or bad it is to treat them as having full moral status, then how good or bad it is to treat them as having full moral status cannot be what determines what moral status they actually have. That would be a circle.

What If You Aren’t Sure What’s Moral?

photograph of a fork in the path

Today, I woke up in a soft bed in a heated apartment. I got up and made full use of the miracle of indoor plumbing before moving on to breakfast. Pouring myself a bowl of vitamin-enriched cereal and milk (previously delivered to my doorstep) I had to admit it: modern life is good.

Opening up my laptop, my gratitude for modernity diminished as quickly as my browser tabs multiplied. Our phones and laptops are not just tools. They are portals to another world — a relentless world of news, opinion, and entertainment. We're living through the age of information overload. On average, we now consume 174 newspapers' worth of information each day. "I've processed more information in the last 48 hours than a medieval peasant would in a lifetime," reads a well-liked tweet.

And yet, amid this tsunami of information, we seem to have less certainty than ever. Controversy and discord reign. There is little agreement about basic facts, let alone about what is to be done. Is it time to lift COVID-19 restrictions yet? Is American democracy at risk of failure? Are plastics killing us? Should we allow genetically modified foods? Will climate change be simply bad or disastrous? I have my opinions, and I’m sure you have yours, but do any of us know the answers to any of these questions with certainty?

As well as uncertainty about the facts, we continually find ourselves facing moral uncertainty. Moral theories and views divide both public and philosophical opinions. They defy consensus. Is euthanasia morally permissible? Is abortion? Eating meat? Amid our unprecedented access to a wide range of moral arguments and views, all competing for our allegiance, we are left to come to our own moral conclusions. If we are being brutally honest with ourselves, we probably aren’t absolutely certain about all of our moral views.

In these conditions, moral uncertainty is the norm. But, as the Samuel Beckett line goes, “You must go on.” Even if you don’t know for sure what the right moral view is, reality refuses to stop the clock to let you figure it out. You have to act one way or another, despite your moral uncertainty. Being uncertain doesn’t take you off the hook of moral responsibility. Neither does refusal to act. As climate change illustrates, refraining from taking decisions can be just as disastrous as making the wrong decisions.

So, how can you go on under these conditions of moral uncertainty? Let’s take a concrete example. What if you think eating meat is morally permissible, but you’re not totally sure? If you’re willing to admit there’s some chance you could be wrong about the morality of vegetarianism, what should you do? Keep eating meat? Or give it up?

The philosopher William MacAskill argues that if you are morally uncertain about vegetarianism, you should give up eating meat. In fact, even if you think there’s only a 10% chance that vegetarianism is the right moral view, you should still give up meat.

MacAskill thinks there’s an asymmetry in the moral risks you’re running. “If you eat veggie and eating meat is permissible, well, you’ve only lost out on a bit of pleasure,” says MacAskill, “But if you eat meat and eating meat is impermissible, you’ve done something very wrong.” Maybe you should give up a bit of pleasure to avoid the risk of doing something really morally terrible, even if the probability that you would be doing something really morally terrible is relatively low. “The morally safe option,” claims MacAskill, “is to eat vegetarian.”
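MacAskill's asymmetry is, at bottom, an expected-value comparison across rival moral views. Here is a minimal sketch of how that comparison might be computed; the 10% credence comes from the article, but the numerical moral "costs" are illustrative assumptions of my own, not figures MacAskill gives:

```python
# A rough sketch of expected-moral-value reasoning under moral uncertainty.
# The credence (0.10) is from the article; the cost figures are made up.

def expected_moral_value(outcomes):
    """Sum of credence-weighted moral values across rival moral views."""
    return sum(credence * value for credence, value in outcomes)

p_veg_right = 0.10  # credence that vegetarianism is the correct moral view

# Stipulated values: losing "a bit of pleasure" is a small cost (-1);
# doing something "very wrong" is a large cost (-100).
eat_meat = expected_moral_value([
    (p_veg_right, -100),    # vegetarianism true: eating meat is very wrong
    (1 - p_veg_right, 0),   # vegetarianism false: no harm done
])
eat_veggie = expected_moral_value([
    (p_veg_right, 0),       # vegetarianism true: nothing wrong
    (1 - p_veg_right, -1),  # vegetarianism false: a bit of pleasure lost
])

print("eat meat:", eat_meat)      # large expected moral cost
print("eat veggie:", eat_veggie)  # small expected moral cost
```

On these stipulated numbers, the vegetarian option carries far less expected moral cost even though vegetarianism gets only 10% credence. The general lesson is that a low-probability view can dominate the calculation whenever the wrong it alleges is severe enough.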

We can apply MacAskill’s approach to other problems where we face moral uncertainty. Peter Singer famously argued that failing to donate money to help alleviate suffering in the developing world is just as morally wrong as letting a child drown in front of you. Most of us seem to think that Singer’s moral claims are too strong; we don’t think we are morally obligated to donate to charities, even if we think it is morally good – beyond what we are obligated to do – to donate. However, it seems at least possible that Singer is right. If he is right, then not giving any money would be very wrong, as wrong as letting a child drown. But if Singer is wrong, then all I’d lose by donating is a bit of money. Given the moral risk, the appropriate choice seems to be to donate some money to charity.

These two cases might make MacAskill's approach look appealing. But it can also get strange. Imagine you really want to have a child. You are near-certain that having a child is morally permissible. In fact, you think having a child, bringing a happy person into the world, would be a serious moral good. You also think there's a tiny (less than one percent) chance that anti-natalism is true. According to the version of anti-natalism you're considering, by having a child you're doing something morally terrible — bringing into existence a chain of human suffering that will continue for millennia. If the wrong that this version of anti-natalism alleges is grave enough, then it would be less morally risky for you to simply not have a child. But should you really not have a child in such a case? Even though you believe with near-certainty that doing so would be a morally good thing? That seems like a strange conclusion.

The ethicists Johan Gustafsson and Olle Torpman give an alternative framework for thinking about how we should act under moral uncertainty. When we think of good, moral people, we generally think they are conscientious; they are typically true to what they believe is right. To put it another way, we think that a moral, conscientious person won’t do what they sincerely believe to be wrong. In the child example, your sincere, near-certain belief is that it is permissible, perhaps even a good thing, to have a child. MacAskill’s approach to dealing with moral uncertainty seems to say you ought not to have a child. But how can a moral theory that you don’t believe in matter more than the one you do believe in? For these reasons, Gustafsson and Torpman propose a much simpler approach: act in accordance with the moral view that you are most confident in. In this case, that would mean you should have the child that you want.

This simpler approach to dealing with moral uncertainty might seem straightforward and convincing. But I invite the reader to go back and apply Gustafsson and Torpman’s approach to the two cases discussed earlier, of charity and vegetarianism. Arguably, their approach gives less convincing advice in these cases.

How we should act given moral uncertainty is an important question for the discordant moment in which we are living. Whether we have the correct answer to this question remains far from clear.

Can Santa Know Who’s Naughty or Nice?

photograph of child with present watching silhouette of Santa's sleigh

Reliable sources indicate that every year, Santa makes a list of boys and girls he deems “naughty” or “nice.” The stakes could not be higher: on the basis of this classification, a child receives either wonderful gifts or coal in his or her stocking. Thus, it would appear that a serious inquiry into Santa’s methods is in order. In short: how does Santa know who’s naughty or nice?

There are actually two parts to this inquiry. The first concerns Santa’s definition of “naughty” and “nice.” I’ll leave this interesting question to the serious normative ethicists. The issue I’m interested in is this: even if you’ve been naughty, I assume that Santa will not judge you deserving of coal unless you are responsible for your naughtiness. Naughtiness and responsibility for naughtiness are distinct. After all, some people behave naughtily but are blameless for doing so: for example, those who are blamelessly ignorant of what they are doing. So, the question I want to focus on is how Santa knows who is responsible for being naughty, thus deserving coal in their stockings.

Most philosophers agree that responsibility for wrongdoing has two components: a control component and a mental component. Plausibly, you are not responsible for what is not under your control. Likewise, you are not responsible if you don’t know what you’re doing. So, responsibility requires at least some sort of awareness of what one does and some sort of control over what one does. (There is much more to be said about both components, but for our purposes this will suffice).

However, as the philosopher Michael Zimmerman has observed, if you are only responsible for what is under your control, then it would appear that those who attempt naughtiness are no more and no less responsible (read: blameworthy) than those who are naughty. (Most of what follows is taken from Zimmerman’s book.) Consider Sue and Sara. Sue throws a baseball at a neighbor’s window, shattering it. Sara throws a baseball at a neighbor’s window, but a strong gust of wind blows the baseball off course and it lands harmlessly in the neighbor’s yard. Is Sue more to blame than Sara? If we are responsible only for what is under our control, the answer appears to be “no.” After all, Sara would have shattered the window with the baseball had it not been for something entirely outside of her control: namely, the direction and strength of the wind. Arguably, if a person would have done something blameworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as blameworthy as if she had done that thing. A pithier way of making the same point is to say that Sara was lucky that she didn’t hit the window, but since her not hitting the window was a matter of luck, Sara’s blameworthiness is the same as Sue’s.

If Santa accepts this reasoning, he will put those who attempted naughtiness on the naughty list with those who succeeded. Perhaps this expansion of the list is tolerable to the old man. The problem is that this same line of reasoning threatens to scramble the whole system. Consider first that what we do is determined not only by what we decide to do, but also by the opportunities and character that we happen to have. Consider Susanna, whose parents have refused to buy her a baseball; and Shirley, who would never dream of throwing a baseball at a window because her parents have effectively taught her never to destroy other people’s property. It may be true that Susanna would have shattered her neighbor’s window had it not been for something outside of her control: whether or not her parents had bought her a baseball. And it may be true that Shirley would have shattered her neighbor’s window had her parents raised her differently, which is also something outside of her control. Once again, if a person would have done something blameworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as blameworthy as if she had done that thing. Thus, the same reasoning that consigned those who attempted naughtiness to the naughty list also seems to consign those who lack opportunity for naughtiness and those whose characters preclude naughtiness to the naughty list.

Even worse, the same line of reasoning implies that everyone is blameless and, indeed, praiseworthy. Just as it may be true that Sara, Susanna, and Shirley would have shattered the window had they been free to do so were it not for some factor outside of their control, so it is also true that the three girls would have performed praiseworthy acts that they did not actually perform were it not for lack of opportunity or their own bad characters, all of which are beyond their control. If a person would have done something praiseworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as praiseworthy as if she had done that thing.

Now we can see why, if the argument is sound, Santa’s whole system is in trouble. The claims we have been making about the three girls — that they would have done such-and-such, were they free to do so — are sometimes called “counterfactuals of freedom.” According to the argument just developed, for any child there are an indefinite number of “counterfactuals of freedom” that make that child blameless, praiseworthy, and blameworthy. Santa will never know all of these counterfactuals, so he is never in a position to weigh them up to make a final judgment on the child’s naughtiness or niceness.

So, what is poor Santa to do? Zimmerman thinks that he should put everyone on the nice list, since it is far worse to punish the innocent than to not punish the guilty. As the ultimate niceness or naughtiness of a child is unknowable to Santa, he should assume that all children are nice to avoid punishing some nice children.

But there is an alternative response. For utilitarians, blame and praise (as well as punishment and reward) are tools that we ought to use to bring about good effects – such as deterrence or moral growth – rather than ways of registering a person’s desert. From this perspective, it would make no sense to blame or punish someone for something someone would have done in some counterfactual circumstance of which we have no knowledge. Such punishment would be arbitrary. Even if we somehow could know the truth of some counterfactual of freedom, people can’t choose to avoid being the kind of person who would commit naughty acts under some counterfactual circumstance, so there is no deterrence value in punishing them for being that kind of person. By contrast, it does make sense from this perspective to punish someone for something they have actually done — not because that person is more deserving, but because in response to such punishment people (including the punishee) can choose to avoid committing naughty acts in the future.

So, if Zimmerman’s argument is sound, then Santa has at least two choices: put everyone on the nice list, or turn utilitarian. I recommend the latter.

Faramir’s Insight

photograph of Lord of the Rings book atop Middle-Earth map

When I last reread J.R.R. Tolkien’s Lord of the Rings trilogy, one line in particular stood out to me. It was something that Faramir says when talking with Frodo about whether to abandon Gollum. I’ll quote the line in context:

‘Frodo, I think you do very unwisely in this,’ said Faramir. ‘I do not think you should go with this creature. It is wicked.’

‘No, not altogether wicked,’ said Frodo.

‘Not wholly, perhaps,’ said Faramir; ‘but malice eats it like a canker, and the evil is growing. He will lead you to no good. If you will part with him, I will give him safe-conduct and guidance to any point on the borders of Gondor that he may name.’

‘He would not take it,’ said Frodo. ‘He would follow after me as he long has done. And I have promised many times to take him under my protection and to go where he led. You would not ask me to break faith with him?’

‘No,’ said Faramir. ‘But my heart would. For it seems less evil to counsel another man to break troth than to do so oneself, especially if one sees a friend bound unwitting to his own harm.

The last line rings true to my experience. It is one thing for me to take on a sacrifice to do what is right. It is another thing entirely to advise someone else to make that sacrifice.

It is easy to be frugal about stuff I buy for myself. Harder to be frugal about stuff I buy for my wife. Why? Because it is harder to ask someone else to sacrifice to do the right thing. I would never spend much money on jewelry for myself. And yet I felt tempted to spend too much money on an engagement ring for my wife. (Fortunately, my wife is as frugal as I am, and so independently insisted on a cheap $50 engagement/wedding ring.)

I want to talk about this phenomenon of expecting less of others than we expect of ourselves. I want to talk about it, because it first appears as almost noble. But the more I’ve thought about it, the more convinced I’ve become that it is actually a vice. It involves a neglected failure of character, a failure to take seriously the moral personality of others.

To see this, let's consider possible explanations of the phenomenon: two that I'm sure are wrong, and three that might be right.

Wrong Hypothesis 1: Different Obligations

One possible explanation is that I really think my moral obligations are stricter than the obligations of others. Faramir thinks it would really be wrong for him to break his promise to Gollum, but that it would not be wrong for Frodo to do the same.

While I'm sure this sometimes happens, it does not explain the phenomenon I'm trying to understand. The difference I'm responding to is not one of 'weaker' vs. 'stronger' obligations but rather one of me vs. other. I will hold my close friends to higher standards than strangers, even if I think the strangers are better people. Likewise, I am willing to hold myself to a higher standard than my close friends, even though I know many of my close friends are more virtuous than me.

Wrong Hypothesis 2: Selflessness

So could it be selflessness? Could it be that I just care more about the sacrifices of others than I do about my own sacrifices?

Perhaps this occurs for some people, but it is not the explanation in my case. For one thing, I’m not nearly that selfless. I spend far more time working on my own job applications than helping friends with their applications. Why? Because I selfishly care about my own future more than the future of my friends.

For another thing, the closer I am to someone the more, not the less, willing I am to ask them to sacrifice. If a stranger asks me for advice, I'm very unlikely to tell them to make the radical sacrifice. But the closer the friend, the more comfortable I am asking them to sacrifice. As I will argue below, this is because the closer a friend, the more deeply I appreciate their moral personhood.

Possible Hypothesis 1: Moral Uncertainty

One possible explanation is that the hesitancy follows from our moral uncertainty combined with a fear of responsibility.

Suppose someone held a gun to my head and told me to renounce Christianity. I'd like to think I'd have the courage to die rather than commit apostasy – that is, the abandonment of my religious faith. (After all, I believe that to commit apostasy is one of the worst things a human can do.) But the thing is, I'm not sure it's really that bad. After all, I was being coerced. Everyone would be able to tell I don't really mean it. So is it really committing apostasy? Or is it instead just playacting to pacify the criminal?

Those replies ring hollow when I imagine facing the situation myself. But they ring less hollow when I imagine a friend facing a similar situation. If a friend asked me what to do, I’d be much more tempted to tell them to “just say what the person wants to hear, you don’t have to mean it.” If a friend felt guilty about committing apostasy in such a situation, I’d be tempted to tell them not to feel bad, they didn’t do anything wrong.

The uncertainties loom larger when I consider a friend making the sacrifice. But not only then. They also loom larger when I consider making the choice myself but sacrificing another. These same worries would ring less hollow if someone held a gun to another person’s head and told me to commit apostasy. The significance of my uncertainty is magnified when others are at risk, because if someone else dies as a result of my choice, it’s not just sad but a moral tragedy. It’s not similarly a moral tragedy if I die as a consequence of my own choice. It’s sad if I die while trying to free climb a mountain, but it’s not a moral tragedy. It’s a moral tragedy if I kill someone else after taking a risk. Ultimately, I don’t want to give advice to others, because if I’m wrong I’ll feel responsible for that choice.

If this explanation is right, then the phenomenon is vicious. It denies the moral personality of others. I’m failing to recognize that the choice is still up to the other person, even when I advise them what I think is right.

This is easiest to see if we think of ourselves as the one getting advice. If I ask for the advice of someone else, I recognize it is still up to me whether to listen. It is myself, not them, who is ultimately responsible for the choice. As such, I don’t want the other person managing me with their advice in order to assuage their own conscience. Rather, I want them to be honest so that I can fully take responsibility for my own choice.

But if I'd want others to be honest with me so that I can make my own choice, then, if I fully appreciated the moral personality of others, I'd not be tempted to be dishonest with them in order to shield myself from responsibility.

Possible Hypothesis 2: Shared Values

My wife's theory is that the phenomenon is explained by the role of shared values in moral advice. The more someone shares your values, the more appropriate it is for them to act on your advice. You share your own values perfectly, you share a close friend's values extensively, but you only partially share the values of a stranger. So, if moral testimony involves giving advice in light of shared values, then the more uncertain you are that values are shared, the less certain you are whether to recommend sacrifice.

Of all the plausible explanations of the phenomenon, this one looks closest to being noble. But even here I think there is something vicious about the phenomenon. If you believe the person is under the same moral obligation, but that they don’t share your values, to that extent you must think their values are mistaken. But if they are mistaken, and if you take seriously their moral personality, that gives you reason to explain as best you can the value you see in the sacrifice. It does not, if you take the moral personality of another seriously, give you reason to shrug your shoulders and consign them to vice. That would be to treat them as a moral lost cause.

Possible Hypothesis 3: Inside/Outside Tension

A final explanation of the phenomenon is that it is explained by a tension between the deliberative and external perspectives on action.

When you deliberate about your acts, you look at them from the inside. Because human death is a terrible thing, you recognize it would be terrible to kill one human to save three. You can tell, from the inside, that it would be wrong to kill someone even to save more lives.

But we can also take the external perspective. We can see, not a choice to kill, but rather a choice between two outcomes. It is not a choice to kill, but rather a choice between one killing and three deaths. And from that external perspective, we recognize that one killing is not worse than three deaths.

The force of moral constraints only emerges inside the deliberative perspective. They don’t appear when you look merely at the outside comparison of results. And this may explain our phenomenon: From the deliberative perspective, Faramir can see that it is an evil thing to break a promise. But from the outside perspective, a broken promise is far less evil than Frodo’s death and betrayal.

Once again the phenomenon would reveal an inability to appreciate the moral personality of others. It stems from a failure to see the other as a deliberating person; instead, it looks at them merely from the outside, as a sort of object. The phenomenon emerges when you look at a person, rather than when you look vicariously with a person at the choice they face.

I’m not sure which explanation is right. But I’m inclined to think all three plausible explanations reveal just how far I have to go before I really appreciate the full personhood of others.

COVID and Climate Change: Taking the Long-Term Seriously

photograph of ripple on lake expanding

Amid the ongoing COVID-19 pandemic, world leaders are assembling in Glasgow for COP26, the UN’s climate change conference. Both the pandemic and global warming are powerful reminders that the choices we make can have consequences that continue to unfurl over decades and centuries. But how much should we care about these hard-to-predict long-term consequences of our actions? According to some, so-called moral “longtermists,” we ought to care a great deal. Others, however, have called longtermism “the world’s most dangerous secular credo.”

COVID, climate change, and the long-term impact of our choices

The coronavirus now appears to be endemic. It is likely to continue to circulate across the globe indefinitely, causing more and more human suffering, economic damage, and disruption to our lives. The total sum of harm an endemic virus can cause is theoretically boundless. And yet, if China had better regulated its meat markets or its bio-labs (depending on your preferred origin theory), it would have likely prevented the outbreak entirely. This failure, in one place at one time, will have significant long-term costs.

The headline ambition of COP26 is for nations to commit to specific plans for achieving net zero (carbon and deforestation) by the middle of the century. Whether or not these talks are successful could have a profound long-term impact. Success could put humanity back onto a sustainable trajectory. We might avoid the worst effects of climate change: biodiversity collapse, flooding, extreme weather, drought, mass famine, mass refugee movements, possible population collapse, etc. Taking effective action on climate change now would provide a huge benefit to our grandchildren.

But the comparison between climate action and inaction does not stop there. Besides helping our grandchildren and great-grandchildren, the benefits of effective climate action now would likely continue to snowball deep into the next century. Instead of our great-grandchildren needing to devote their resources and efforts to mitigating and reversing the damage of climate change, the twenty-second century might instead be spent in pursuit of other goals — eliminating poverty, making progress on global justice, and deepening our understanding of the universe, for example. Progress on these goals would, presumably, generate positive consequences of its own in turn. The good we can achieve with effective climate action now would continue to accumulate indefinitely.

Commitment to taking the long-view

Both COVID and climate change make a strong intuitive case for moral “longtermism.” Longtermists think that how things go in the long-term future is just as valuable, morally speaking, as what happens in the near-term future. If you can either prevent one person from suffering today or two tomorrow, the longtermist says you morally ought to prevent the two from suffering tomorrow. But if you also had the option of preventing three people from suffering in a million years, they say you should do that instead. It doesn’t matter how far events are from us in time; morally, they’re just as significant.

The second part of the longtermist view is that we can influence the long-term future with our choices today. They argue that which long-term future occurs depends on what humanity does in the next century. And the stakes are high. There are possible futures in which humanity overcomes the challenges we are faced with today: ones in which, over millennia, we populate the galaxy with trillions of wonderful, fulfilled lives. There are also possible futures in which humanity does not even survive this century. There is, in other words, a very valuable possibility — in moral philosopher Toby Ord’s words, a “vast and glorious” version of the future — that’s worth trying to make real.

A catastrophic future for humanity is not a particularly remote possibility. Ord, who studies existential risk, sees the next century as a particularly dangerous one for humanity. The risks that concern him are not just the cosmic ones (meteorites, supernova explosions) or the familiar ones (nuclear war, runaway global warming, a civilization-collapsing pandemic); they also include unintended and unforeseen consequences of quickly evolving fields such as biotech and artificial intelligence. Adding these risks together, he writes, “I put the existential risk this century at around one in six.” Humanity has the same odds of survival as a Russian roulette player.

The cost of failing to prevent an existential catastrophe (and the payoff of success) is incredibly high. If we can reduce the probability of an existential risk occurring (even by a percentage point or two), longtermists claim that any cost-benefit analysis will show it’s worth taking the required action, even if it incurs fairly significant costs; the good future we might save is so incredibly valuable that it easily compensates for those costs.

But, for whatever reason, reducing the probability of improbable catastrophes does not rise to the top of many agendas. Ord notes that the Biological Weapons Convention, the body that polices bioweapons around the globe, has an annual budget of just $1.6m, less than the average turnover of a McDonald’s restaurant. As Ord explains this strange quirk in our priorities, “Even when experts estimate a significant probability for an unprecedented event, we have great difficulty believing it until we see it.”

Even short of generating or mitigating existential risks, the choices we make have the potential to put the world on different trajectories of radically different value. Our actions today can begin virtuous or vicious cycles that continue to create ever-greater benefits or costs for decades, centuries, or even millennia. So besides thinking about how we might mitigate existential risks, longtermists also claim we need to give more thought to getting onto more positive trajectories. Examples of this kind of opportunity for “trajectory change” include developing the right principles for governing artificial intelligence or, as COP26 is seeking to achieve, enacting national climate policies that will make human civilization ecologically sustainable deep into the future.

Challenges to longtermism

Last week, Phil Torres described longtermism as “the world’s most dangerous secular credo.” A particular worry about longtermism is that it seems to justify just about any action, no matter how monstrous, in the name of protecting long-term value. Torres quotes the statistician Olle Häggström who gives the following illustration:

Imagine a situation where the head of the CIA explains to the U.S. president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken [the longtermist] Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders. 

Longtermism entails that it’s morally permissible, perhaps even morally obligatory, to kill millions of innocent people to prevent a low-probability catastrophic event. But this can’t be right, say the critics; the view must be false.

But does Häggström’s thought experiment really show that longtermism is false? The president launching such a strike would presumably raise the risk of triggering a humanity-destroying global nuclear war. Other countries might lose faith in the judgment of the president and may launch a preventative strike against the U.S. to try to kill the president before he does to them what he did to Germany. If this probability of catastrophic global nuclear war would be raised by any more than one-in-a-million, then longtermism would advise against the president’s strike on Germany. This is to say that if the president were a longtermist, it’s at least highly debatable whether he would order such an attack.

Of course, we can modify Häggström’s case to eliminate this complication. Imagine the chance of the madman succeeding in blowing up the world is much higher — one-in-two. In such a case, longtermism would likely speak in favor of the president’s nuclear strike to protect valuable possible futures (and the rest of humanity). But it’s also a lot less clear that such an act would be morally wrong compared with Häggström’s original case. It would be terrible, tragic, but perhaps it would not be wrong.

Maybe the real risk of longtermism is not that it gives us the wrong moral answers. Maybe the criticism is based on the fact that humans are flawed. Even if it were true that longtermism would rule out Häggström’s nuclear attack on Germany, the view still seems to place us in a much riskier world. Longtermism is an ideology that could theoretically justify terrible, genocidal acts whenever they seem to protect valuable long-term possible futures. And, ultimately, flawed human minds are more likely to perform unconscionable acts if they have an ideology like longtermism with which to attempt to justify their actions.

This last criticism does not show that moral longtermism is false, exactly. The criticism is simply that it’s dangerous for us humans to place such immense faith in our ability to anticipate possible futures and weigh competing risks. If the criticism succeeds, a longtermist would be forced to embrace the ironic position that longtermism is true but that we must prevent it from being embraced. Longtermists would have to push the view underground, hiding it from those in power who might make unwise and immoral decisions based on faulty longtermist justifications. Ironically, then, it might be that the best way to protect a “vast and glorious” possible future is to make sure we keep thinking short-term.

Considered Position: On Voluntary Non-Vaccination – Real World Complications

photograph of child among masked crowd

This piece concludes a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated. To see the earlier segments, start here (Part II).

In Part I of this series I tried to explain the moral intuition that I have less reason to engage in precautionary social-distancing behavior once those who are unvaccinated are unvaccinated by choice rather than because they could not access the vaccine. This intuition, I argued, is explained by the fact that justice mainly requires us to give people reasonable options for safety and does not require us to make others safe.

In Part II, I looked at what it takes for an option to be reasonable, and we realized that this is often sensitive to the reasons that someone does not want to get a vaccine. If someone’s objection to the vaccine is reasonable, then justice may still require me to adopt precautionary behavior.

In both posts, I idealized the questions at hand. I ignored real life complications, and just tried to identify the abstract principles involved. For that reason, my cases were often unrealistic, involving such absurdities as the unvaccinated engaging in moral blackmail or people taking pills to make themselves allergic to the vaccine.

In this final post, I want to turn to the real world and look at some of the complications which make this a difficult ethical question.

Complication 1: We judge the reasonableness of others in biased ways

We saw in Part II that if the other person has reasonable grounds for refusing the vaccine, then justice still requires me to protect them by social distancing (even if their grounds are mistaken). One challenge, however, is that we tend to be biased in our own assessments of what are reasonable grounds.

Consider, for example, the following two suggested grounds of vaccine hesitancy:

Skepticism 1: Distrust of a Racist Healthcare System

Some Black people in the U.S. are reluctant to get a vaccine due to distrust of the American medical system. While this is sometimes attributed to historical injustices, like the Tuskegee study, it is more plausibly explained by current disparities in health care treatment. (It also, as a whole, might just be overblown; but we will put that aside for now.) The thought might go as follows:

“As Ben Almassi has argued in the context of organ donation, there are good grounds, given persistent racial health inequities, for Black people in the U.S. to distrust that the medical system has their best interest at heart. But if one has good reason to distrust the health system, then one also has good reasons to distrust the recommendations of the health system. This is especially true because we know that drugs and vaccines can sometimes affect different racial groups differently, and we also know that Black people tend to be massively underrepresented in vaccine trials (even when those rates are reported).”

Skepticism 2. Distrust of the Liberal Narrative 

Some conservatives are reluctant to get the vaccine due to distrust in the way that mainstream media portrays medical information. They might say things like:

“Putting aside worries that the COVID threat was overhyped to sink former President Trump’s reelection chances, we have seen a systematic tendency for the media to provide unreliable coverage on how to respond to the vaccine in order to fit its preferred political narrative. First, we see the same caseloads suggest different reactions depending on who is in charge. The week after President Biden was sworn in, long before any change in policy could have altered risk, blue states began opening up and schools began seriously trying to return students to in-person education. That was true, even though the absolute case numbers were still worse than they were in the summer when everyone insisted things needed to be shut down under President Trump.

Then, of course, ‘the scientists’ consistently lauded Democratic governors like Andrew Cuomo and panned Republican governors like Ron DeSantis. And yet, we have consistently found that the media narrative was backwards. Florida, despite an extremely old population, came out of the pandemic pretty well, with a much stronger economy, and with many more kids staying in school. This is not just cherry-picking. Republican states, on average, had fewer deaths and caused significantly less damage to the economy. Then ‘the scientists’ told us to take our kids out of school, but didn’t object to massive unmasked Black Lives Matter protests. ‘The scientists’ told us not to wear masks, until they needed a social symbol for purposes of public shaming.”

I bring up these two explanations of skepticism, not because I find them equally plausible, but because I don’t find them equally plausible. Intuitively, I find the first reasonable and the second ludicrous. But here is the issue: it’s hard to identify any objective reason the first is more reasonable. Sure, it fits better with my own preferred political narrative; but I think there are decisive objections to both lines of reasoning, and I don’t think the errors in one are in any sense more egregious than the errors in the other.

The danger, then, is that I am more sympathetic to members of what I see as my political in-group. But that differential in sympathy means I’m especially likely to inconsistently apply a standard as squishy as ‘reasonable.’

I don’t have a good solution here, and so just advise extreme caution when you label political allies reasonable or when you label political opponents unreasonable.

Complication 2: Immunization isn’t up to some people

While there are few, if any, groups in the U.S. for whom it would be dangerous to get the vaccine, there is some evidence that immunocompromised patients, while they can safely get the vaccine, do not always produce the required antibodies.

Similarly, there is a group of people in the U.S. who cannot choose to vaccinate: children. This is true in the limited sense that the vaccines are not currently approved for use in children below the age of 16. But it is also true in the sense that, even once the FDA approves the vaccine for children, children cannot choose to be vaccinated without a parent’s permission. Unvaccinated children, then, might not be unvaccinated by any choice of their own.

These are important complications, but I’m not sure that on their own they would show you must socially distance until we reach herd immunity (on the hypothesis that there are a large percentage of vaccine skeptical holdouts). Children are far less susceptible to COVID-19, and only a very small portion of the population are severely immunocompromised. Given these facts, the threat posed to children and the immunocompromised is far smaller than the risk posed by pre-pandemic activities when most people did not have access to the vaccine. Certainly, you should engage in some precautionary measures, especially if you know you are likely to be around someone who is immunocompromised. But it is unclear that those are any different than the ordinary obligations one has during flu season.

Complication 3: Deception and consent

One further complication is that deception tends to undermine voluntariness. For example, if I lie to you about the results of a surgery, then your consent to that surgery does not actually constitute voluntary consent. Similar issues arise about sexual consent.

Or suppose you told your friend that you would pick them up from the airport. But then I, just for the fun of it, lie to you and tell you that your friend’s flight was delayed, that they were not able to reach you, and that they don’t need a ride after all. If you don’t pick your friend up from the airport, then breaking the promise was involuntary. It was involuntary because I am the one who bears responsibility for your failure.

Now, if it is true that deception can undermine voluntariness, then one worry we might have is that there may be a good number of people who refuse the vaccine because they were lied to, and if so, it is those who lied who bear the actual responsibility for the non-vaccination.

One reason this is an important point to notice is that a lot of people are especially likely to think that those with unreasonable reasons for refusing the vaccine accept those reasons because they are being lied to by their media ecosystem. Thus, many on the left think the vaccine hesitancy on the right is ludicrous, but those same people on the left are also likely to think that Fox News, OAN, or Newsmax are systematically deceiving their viewers. Similarly, many on the right think that concerns of racism are blown way out of proportion, but those same people on the right are also likely to think that mainstream media organizations — like CNN or The New York Times — are providing systematically distorted information on those very issues.

Indeed, it is not just cases of outright lying that might trigger a shift in responsibility. Not only do I preclude the voluntariness of your action by lying to you, I do the same thing if I tell you something false when I should have known better. If I tell you something false but am really honestly trying to tell you the best I know, then your actions are still voluntary. You made the choice to trust me in good faith. But if I am not acting in good faith myself, then I am the one at fault when you err.

Conclusion

So once vaccines are widely available (such that the unvaccinated are mostly unvaccinated by choice) but before we reach herd immunity (due to widespread vaccine hesitancy), can you return to pre-pandemic behavior?

As we’ve seen, this is a difficult question. However, it seems likely that the right answer is generally yes. For the most part, because it is reasonable to expect people to get the vaccine, it is reasonable to return to behaviors that would be safe were others to be vaccinated. This is true, even without factoring in the fact that the vaccinated are very unlikely to spread COVID. And so, it does seem like justice allows life to return to normal.

However, we have also learned an important moral lesson about what it takes to justly live together in a society. For justice to allow us to return to pre-pandemic activities, it is not enough that the vaccine be widely available. It also matters that those who refuse the vaccine do so voluntarily. And as it turns out, there are complicated ways in which we can undermine the voluntariness of others’ actions. When we are not fastidious about what we tell others, we risk undermining their capacity to make voluntary choices. If I thoughtlessly spread misinformation or repeat something as fact that I am only pretty sure of, then I undermine one of the fundamental building blocks of our system of justice.

My own testimonial irresponsibility undermines the voluntariness of those who believe me in good faith. And systems of justice largely depend on the mutual voluntariness of everyone’s choices. This is one reason why lying and bullshitting are such profound moral wrongs. It is not just that others end up with some wrong beliefs (we all have a bunch of misguided convictions), but that other people are rendered, in a sense, passive in relation to your choices. By breaking down even a small portion of the reciprocal norms of justice, you don’t just lie to another but partially bring the whole system of justice down on top of you.

Considered Position: On Voluntary Non-Vaccination – Types of Reasons

photograph of masked and unmasked people in a crowd

This piece is part of a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated. To see the earlier segments, start here.

Hopefully pretty soon, my state, Florida, will reach the point that anyone who wants to be vaccinated can be vaccinated. In Part I of this series, I argued that once we reach that point, I have fewer reasons of justice to engage in aggressive social distancing. After all, everyone has the option to get the vaccine and so protect themselves from whatever risks I impose by my everyday activities.

In that argument, however, I ignored an important variable in our assessment of justice: why are people not getting vaccinated? This is important because different reasons give rise to different duties on my end.

To see this, let’s start with a particularly extreme contrast of cases:

Coordinated Moral Blackmail: Suppose that herd immunity requires that 80% of the population get vaccinated. But while 60% of the population are excited to return to normal and immediately get vaccinated, the other 40% coordinate together and refuse to get vaccinated until their demands are met. The 40% realize that the other 60% don’t want to put anyone’s health at risk, and so they refuse to receive the free vaccine unless the other 60% agree to collectively pay each person in the 40% camp three hundred dollars. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated as a tool of moral blackmail.

Widespread Deadly Allergy: Suppose that herd immunity requires that 80% of the population get vaccinated. Let us also suppose that one of the weird things about this vaccine is that it produces a severe, possibly fatal, allergic reaction in people with peanut allergies. But while 60% of the population have no allergies and so get the vaccine, the other 40% all have severe peanut allergies and so forgo vaccination until a safer vaccine is approved. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated because the vaccination is not worth the risk of anaphylaxis.

I presume that you share my intuitions about these two cases. In the first case, the 60% have no reasons of justice to forgo returning to pre-pandemic life. But in the second case, the 60% still have reasons to social distance. Indeed, I would think that the 60% in the second case have duties pretty similar to the duties we would have if the vaccine were only available to 60% of the population.

Normally, if someone voluntarily chooses to forgo the vaccination, that changes my duties of justice. However, it only makes a difference to those duties if we can reasonably expect the person to get the vaccine.

It is unreasonable to expect someone to take a very risky vaccine, just as it would be unreasonable to expect a family to bankrupt themselves to buy the vaccine. But such pragmatic concerns are not the only relevant ones. For instance, if it is unethical to get the vaccine, say because it was produced by slave labor, then again, if the majority of the population refuse the slave-made vaccine, justice requires me to continue to maintain pandemic precautions.

If it would be a mistake to get the vaccine, then I am still bound by the same norms of justice to engage in precautionary behaviors. If people truly believe that they should not get the vaccine, then I can’t reasonably expect them to vaccinate.

But what if instead people falsely believe there is a problem with vaccines? Well, in that case things become more complicated. There are two kinds of error that could explain why one thinks a right action is wrong, both of which are described by Elizabeth Anscombe. First, one might be wrong about the ‘facts on the ground,’ the circumstantial descriptions of what is going on — for instance, one might think the vaccine is less safe than it is, or one might falsely think it is produced by slaves. Second, one might be wrong about the moral implications of those ‘facts on the ground.’ For instance, one might know the risk is tiny, but have the false principle that one cannot be expected to undergo even the smallest possible risk for the public good. Following a convention that I use in my own work, let’s call the first an error about circumstantial facts and the second an error of classificatory facts.

Error 1: Mistakes about Circumstance

The COVID vaccines are not, on the whole, very risky. The consensus judgment of the medical community is that the vaccines, like most vaccines, are safe and effective. But even if the vaccines are safe, some people might believe them to be unsafe. And this raises a question of whether such honest beliefs change my own reasons of justice.

Widespread Error of Belief: Suppose that herd immunity requires that 80% of the population get vaccinated. But while 60% of the population get the vaccine, truly believing that vaccines are safe and effective, 40% of the population mistakenly believe that the vaccine is dangerous. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated as a result of mistaken beliefs.

In this case, does justice require me to forgo pre-pandemic activities? To answer this question, we need to know more information. In particular, we need to know why people have the false belief.

Suppose that preliminary scientific studies, incorrectly, suggest that the vaccine is unsafe for pregnant women. This information is widely announced and the majority of scientists and doctors accept the result. As it turns out, however, the conclusion is the result of a subtle mathematical error that no one has yet noticed.

If the best scientific evidence suggests that pregnant women should not get the vaccine, then it is clearly unreasonable for you to expect pregnant women to get the vaccine. It does not matter that the studies are wrong, because you cannot expect the average person to realize that the studies are wrong. If this is right, then at least some of the time false beliefs about matters of circumstance (such as the safety of the vaccine) really do make it unreasonable for me to expect you to be vaccinated.

But not all mistaken factual beliefs work that way. Now imagine someone who is profoundly lazy and just assumes vaccines are unsafe. Because they are lazy and do not care about public health, they never bother to do any research to check whether their assumption is right.

We can accept that the person really thinks that the vaccines are unsafe. And we can further accept that the person cannot, by a voluntary act of will, get herself to believe the vaccines are safe (for instance, you cannot, at this very moment, choose to believe there is an elephant behind you even if I offered you five hundred dollars to form the belief).

So suppose our imagined interlocutor says: “I’m not getting a vaccine because I really believe they are unsafe; ultimately I don’t choose what I believe, so you can’t blame me for not getting vaccinated.” Is this right? Does the fact that we cannot choose our own beliefs mean we cannot be blamed for our false beliefs?

Elizabeth Anscombe, in her article “On Being in Good Faith,” argues that this line of thought is mistaken. While a good faith belief can make one’s behavior reasonable, a good faith belief, in the sense that excuses behavior, requires more than your really believing the thing you say you believe:

“Consider a rash and uncharitable judgement which led one to slander someone gravely. One does not – usually at least—say to oneself ‘Now I will judge rashly’ or ‘I am determined, rash as it is, to judge that so-and-so’- one simply judges rashly. What does ‘in good faith’ mean? If it means that one has not got one’s tongue in one’s cheek, is not consciously lying or hypocritical, then good faith is not enough to exonerate. If it is enough to exonerate, then the slander uttered under the influence of a rash and passionate conviction is not a statement made in good faith. . . . Thus good faith or sincerity are either not much good as a defence, or not so easily assured as we might think at first. . . . ‘He ought to have realised…’, ‘He ought to have thought of…’, ‘He had no business to suppose that…’, ‘He ought to have found out…’, are phrases that bear witness to the fact that a man’s beliefs do not suffice to justify him so long as he acts accordingly.”

What Anscombe is arguing is that one can be responsible for false beliefs if we could have expected you to know the truth. If the reason you have a false belief is because the entire scientific community is mistaken, we can’t expect you to know any better. But if the reason you have a false belief is because you are too lazy to do research, then of course we can blame you for your error.

If I accidentally kill a child because I didn’t know they had a deadly carrot allergy, then I’m usually not responsible for that death. However, if the only reason I don’t know about the allergy is because I skipped our camp’s health and safety meeting, then I am at least somewhat responsible. I can’t object that ‘I didn’t know,’ because I should have known.

The same principle applies to vaccines. A false belief might make your non-vaccination reasonable, but it only does so if we could not have expected you to know better.

Error 2: Mistakes about Classification

What about cases of moral ignorance? That is, someone knows all the ‘facts on the ground,’ but she mistakenly thinks that those facts mean she should not take the vaccine.

Some philosophers think moral ignorance is unique, that while circumstantial ignorance can excuse, moral ignorance never can. Other philosophers disagree. (I discussed that debate at length in a previous Prindle Post.)

In this context, however, it seems that moral ignorance is probably not unique. That is because we want justice to allow for at least some moral pluralism. If there are no reasonable, though false, moral principles, then the project of democratic pluralism is in serious danger. If we want to live together in society with at least some level of deep moral disagreement, then we should acknowledge the reasonability of people acting on at least some moral principles we think are wrong.

Now, in our society we don’t have widespread reasonable moral ignorance preventing us from reaching herd immunity. But there are at least some real-life cases where it is reasonable to wrongly refuse a vaccine on moral grounds. A good example is those who don’t want to take a vaccine that was created via use of aborted fetal tissue. This seems to me to be too morally fastidious, but the reasoning is not unreasonable, and I do not think we can expect people never to make that error.

Conclusion

The reason that people refuse the vaccine matters. If they are right to refuse it, then they are just as responsible to take precautionary actions as they were before the vaccine became available. If they are wrong to refuse, then the question becomes whether or not it is reasonable to expect them to take the right action. If their incorrect refusal is explained by a reasonable form of circumstantial or moral ignorance, then justice continues to make the same demands.

Continue to Part III – “Real World Complications”

A Problem with Emotions

abstract acrylic painting of divided canvas

There is a certain challenge to the adequacy of our emotional reactions — especially those reactions, like grief and joy, which feel ‘called for’ at certain times. Suppose a family has a child who falls grievously ill. After many sleepless nights, the child stabilizes and eventually recovers. There are appropriate emotional responses to this sequence; the parents will, and should, feel relieved and joyed at the child’s recovery. Now suppose another family has a child who similarly falls grievously ill. Except this child does not recover and eventually dies. Again, there are appropriate emotional responses. The parents will, and should, feel grieved and heartbroken at the child’s death.

So far, there is no challenge. But now suppose that instead of two different families, it was one family with two children — one recovers, one dies. Here, what are the parents supposed to feel? There are a couple of options.

Perhaps they should feel a sort of moderated grief. After all, something wonderful has happened (a child has recovered) and something terrible has happened (a child has died). Do they partially cancel out (but maybe weighted in the direction of grief since ‘bad is stronger than good’)? The problem with this answer is that the grief is a response to the tragedy of the child’s death. And that child’s death is no less a tragedy just because the other child survived. Moderation would be appropriate if something happened to moderate the tragedy of the child’s death — such as the child being spared death and instead placed within an enchanted sleep — but it does not seem like the appropriate response to some other good thing occurring.

Perhaps then, you just need to feel either emotion. Both grief and joy are appropriate — so long as you feel one, then you are feeling well. But this won’t do either. There is something wrong with the parent who feels nothing for the recovery of their child, just as there is something wrong with the parent who feels nothing for the child’s death.

In fact, the only response that seems appropriate to the situation is to feel both grief and joy. You ought to be grieved at the one child’s death and joyed at the other child’s recovery.

But here is the issue. It doesn’t seem possible to fully feel both at once. Feelings, unlike some other mental states, compete with each other. When I feel happy about one thing, it pushes sadness about other things to the periphery. This is unlike, say, beliefs. The parents can fully believe that one child recovered while, at the same moment, fully believing that the other child died. This is because beliefs do not require active attention. Moments ago, you believed all sorts of things about your former elementary school, but I expect until you read this sentence you were not actively attending to any of those beliefs.

Emotions, however, do require attention. If I can become fully absorbed in my work, then for a time my grief will retreat. (Of course, one of the frustrating things about grief is the way that it maintains a ‘grip’ on your attention — forcing your thoughts to circle back and return again, and again, to the tragedy.)

So, to fully feel the grief at the one child’s death, and to fully feel the joy at the other child’s recovery, would require me to keep my full attention on both at the same time. But we can’t do that; attention is a limited resource. It can only be fully engaged in one direction.

The best we can do, then, is a sort of ping-ponging back and forth between grief and joy. Feeling complete grief when attending to the death, feeling thankful and relieved when attending to the recovery. But at no point, it seems, can our emotions be completely responsive to what is called for.

Berislav Marušić, in his essay “Do Reasons Expire?”, considers a related puzzle:

“Grief is, plausibly, a response to reasons; the reason for my grief was my mother’s death; her death does not change over time; but it is not wrong for me to grieve less over time. Yet how could the diminution of grief not be wrong, if my reason for grief stays the same?”

The reason the problem is similar is that there is a disconnect between the response demanded by the event (the tragedy of someone’s death) and the psychological realities of our capacity to have emotions. You just can’t indefinitely grieve, and in turn you don’t indefinitely grieve. But doesn’t it seem as if there is a sense in which you ought to?

There is a conflict, then, between the psychological realities that constrain our emotions and the appropriateness conditions surrounding what emotions we ‘ought’ to feel.

This is an important conflict to think about. One reason is that it helps us recognize exactly why we need to be so skeptical of grounding our moral decisions simply on emotions like anger or grief. Since we can only feel some emotions to an extent, our emotional responses, at a given time, are usually not responsive to the full range of relevant considerations. You can feel outrage about an injustice, or hope at political progress that has been made, but you can’t feel both at the same time to the appropriate extent. Given that psychological reality, basing policy recommendations on emotions of rage or optimistic hope is likely to be morally dangerous.

This does not mean that emotions should play no role in our moral decision-making. Emotions are important. Instead, what this means is that we need to be extremely cautious when acting on our emotional reactions. We should always bear in mind that emotions are likely to not be reflective of the full range of complexities in any given case.

Scarce Goods and Rationalization

photograph of crowded waiting room

A friend of mine recently posted on Facebook asking for insight into “the ethics of (1) getting vaccinated as quickly as possible for the common good and (2) not using privilege to be vaccinated ahead of vulnerable people.”

Many responded with arguments along the lines of, “by getting a vaccine you are contributing to herd immunity, so it is a good thing to do.” Others linked to this New York Times ethics column in which Dr. Appiah argues that the advantage of easy management means that people should get vaccines when they can get them (and not worry too much about whether others might need them more), and further that by getting the vaccine “you are contributing not just to your own well-being but to the health of the community.”

Another friend recently mentioned in a group chat how she was able to get a vaccine that, technically, she did not yet legally qualify for (since Florida is only officially vaccinating K-12 educators, and not college instructors). I demurred, saying it’s important as healthy youngish people to wait our turn, and a third friend argued that even if you are not the ideal person to get the vaccine, you should still get it if you can since more vaccines are better than fewer and you can help protect others by getting vaccinated.

Assessing the Arguments

The Herd Immunity Argument — The thing that unites all these replies is the thought that by getting the vaccine you are helping to protect others. But in these cases, that is probably wrong. I want to be clear. I am not denying that more people being vaccinated contributes to herd immunity. What I am denying is that my friends getting a vaccine contributes to more people being vaccinated.

Right now the vaccines are a scarce good. If I do not get a vaccine, someone else will get that particular injection. As such, in getting a vaccine I have not actually done anything to increase the percentage of the population that is vaccinated; I have simply made sure that I, rather than someone else, am part of that vaccinated percentage.

The Waste Rejoinder — Some commenters on Facebook mentioned that some vaccines go to waste. But for the most part the vaccine distribution process has sorted itself out. While a good number of vaccines were being wasted in January, we are now in mid-March and the number wasted is utterly tiny in comparison to the number used. The odds that, if you do not get a vaccine, it will end up in the trash are extraordinarily small.

So sure, if you happen to be in a situation where the alternative to not getting a vaccine is throwing it away, then get the vaccine. But unless you know that to be the alternative, you should not think that in getting the vaccine you are heroically contributing to solving the problem.

Speed of Distribution — While no one in the threads mentioned this argument, there is something that could be said for skipping the line. Even if someone else would have gotten that same vaccine, it’s possible it would have taken longer for the vaccine to get in someone’s arm. Now, it’s true that at this point the states are not sitting on nearly as large a vaccine stockpile as they were originally. But it is still the case that some vaccines, while they are not being wasted, are taking longer than ideal to end up in someone’s arm. Indeed, this seems to be happening where I am in Tallahassee.

But the problem is, this was not the situation either of my friends was in. Sure, this situation might be more common than the wasted vaccine situation. But it will still be rare (and indeed, markets are such that this waste usually does not last very long; soon after that article about Tallahassee was published, demand at the site increased).

The Lesson

Now, I don’t want to argue that it is wrong to get the vaccine if you have the chance to do so. Probably sometimes it’s right and sometimes it’s wrong. As is often the case, it all depends on the details.

Instead, I want to suggest that we need to be careful not to convince ourselves that our selfish acts serve an altruistic motive. I think it’s probably ok to be somewhat selfish. It’s reasonable to care more about saving your own life than the life of a stranger (even Aquinas agreed as much). But I think when you are prioritizing your own good over the good of others, it’s important to recognize that that is what you are doing.

So if I get the vaccine perhaps that is ok. But I should recognize that if I get the vaccine someone else will not. I should also recognize that since I am young and healthy, that other person probably would have gotten more value from the protection than I did. The question, as far as altruism goes, is how do I compare to the average person getting a vaccine these days? Am I younger than the average person who would get the vaccine instead of me? Then probably it is better that the other person gets it. Am I healthier than the average person who would get the vaccine instead of me? Then probably it is better that the other person gets it.

The thing is, we have strong biases in favor of rationalizing our own selfish acts. Thus, we often look for reasons to think doing the thing we want is also good in general. This is a very dangerous tendency. People often accept really bad arguments, if those really bad arguments help them think well of their own selfish activity. This should scare us, and make us all a little more self-critical about our moral reasoning anytime we come up with plausible reasons for thinking the thing we want to do is also the best thing for the world as a whole. Remember, we all have a tendency to think that way, even when the act is merely selfish.

Intuitions and the Duty to Aid

photograph of a cluster of traffic lights sending mixed signals

Many philosophers have considered whether folks who are better off have a moral obligation to help those who are desperately poor through no fault of their own. This issue is especially salient at the moment due to the economic fallout from the coronavirus pandemic and ensuing lockdowns across the globe. The result is that the global poor are hardest hit, and the trend of eradicating poverty over the past few decades is reversing. We are thus left to wonder what, if anything, we owe the global poor.

Here enters the work of perhaps the most famous contemporary philosopher, Peter Singer, who argues that many folks in rich countries — like the USA, Japan, and Germany — have a moral obligation to donate a large amount of their income to the global poor because they can afford to without falling into poverty themselves. He motivates this position by an appeal to a simple thought experiment:

“On your way to work, you pass a small pond. … [You] are surprised to see a child splashing about in the pond […] it is a very young child, just a toddler, who is flailing about, unable to stay upright or walk out of the pond. […] The child is unable to keep his head above the water for more than a few seconds at a time. If you don’t wade in and pull him out, he seems likely to drown. Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago, and get your suit wet and muddy.”

Singer thinks we have a moral obligation to save the child based on the strong intuition that it just seems like the right thing to do — it wouldn’t cost us much to save the child, but it would benefit the child significantly. We can formulate Singer’s argument like this:

  1. Suffering and death from lack of food, shelter, and medical care is bad.
  2. If it is in your power to prevent something bad from happening, without sacrificing anything nearly as important, it is wrong not to do so.
  3. By donating to aid agencies, you can prevent suffering and death from lack of food, shelter and medical care, without sacrificing anything nearly as important.
  4. Therefore, if you do not donate to aid agencies, you are doing something wrong.

The first step of the argument seems obvious: the pain that comes from failing to have your basic needs met is obviously bad. We wouldn’t go to such lengths to try to prevent these things in our own lives if we thought otherwise. And we will grant, for the sake of argument, Singer’s third step: donating to trustworthy and competent aid agencies allows us to mitigate the harm that comes from people being unable to meet their basic needs.

Our focus here is on the second step of the argument. Do we actually have a moral obligation to the poor, or does it merely seem that way? Should we trust our intuition to save the child? Based on recent experimental evidence from psychology and economics, we should be skeptical of that intuition. Allow me to make the case.

We have solid experimental evidence from psychology and economics that people care how they look to others. As a species, humans are highly cooperative and social, and depend on help from others to survive — so much so that the ancients believed banishment from society worse than death, as it entailed not only death, but prolonged suffering as well. But relying on cooperation from others makes us susceptible to free-riders: individuals who enjoy the fruits of everyone else’s labor, while contributing less than their fair share. We thus use reputations to distinguish the trustworthy from the untrustworthy; we don’t want to cooperate with someone who might defect, especially in situations with high stakes — e.g., it matters who we choose to have children with. And since we cannot read the minds of others, we must rely on a high-fidelity signal of trustworthiness.

Here we need an example of a high-fidelity signal. The classic example is peacocks: their colorful feathery display is a costly signal to prospective mates that even with amplified risk of predation, he can still thrive — a signal that indicates fitness and is hard to fake. Or consider the ability to lift two-hundred pounds over one’s head as a reliable signal of strength: one cannot lift that much weight without possessing enough strength. Colorful feathers on a peacock would be a fatal liability if the bird weren’t healthy enough; someone simply wouldn’t be able to lift that much weight if they lacked sufficient strength. If we are to rightly trust others, we need a signal of trustworthiness that would be hard to fake by those who are untrustworthy.

One way to signal trustworthiness and communicate that one is a suitable partner for collaboration is through uncalculating cooperation: helping someone without waiting to consider whether the benefits of helping exceed the costs of doing so. By not calculating the advantages and disadvantages of pitching in, we signal to others that we can be trusted as a potential cooperator: we aren’t participating simply because we’ve determined that it’s in our interests. When we cooperate without doing the cost-benefit analysis, we signal we’re committed to the joint venture not merely because it would benefit us. As the authors of a recent study explain:

“To provide empirical support for this account, we experimentally test the hypothesis that people avoid calculating the costs of cooperation because of reputational concerns. Across two experiments, we demonstrate that when people’s decision-making processes are observable to others, they behave in a less calculating way. This observation suggests that they use uncalculating cooperation to gain reputational benefits.”

We often cooperate with others without calculating the cost. We grant friends’ requests without making inquiries about the time and trouble the request might take; we come to the aid of strangers in distress; we adhere to strict moral and religious precepts that are other-regarding, whatever the costs and benefits. These types of behaviors make sense once we frame them in terms of reputation: we cooperate without calculating because of how it makes us look to others. So it looks like we might have the intuition to save the drowning child because it makes us look good to others — consider the social pressure one would feel refusing to help a drowning child in the presence of onlookers. Just the thought of refusing to help seems unconscionable.

The strong intuition to save the drowning child looks like a product of our evolutionary history as a social, cooperative species and the need to look good to others for the sake of our survival. This should make us doubt that our intuitions in the drowning child case track the moral facts; it looks like we have these intuitions for evolutionary reasons rather than anything having to do with any moral obligations — we may only have such intuitions because they aid in our survival and reproduction, not because there is actually a moral obligation to save the child.

Someone may, of course, reply that we could have a strong intuition to save the drowning child both because it is morally required, as well as for reputational and evolutionary reasons. The trouble here though is that we simply cannot rule out that we have the intuition only because it helps us survive and reproduce. By example: it could be that the lottery ticket you hold in your hand is a winner or a loser; you simply do not know, even though it is highly likely the ticket is a loser given the odds. However, since you cannot rule out that the ticket is a winner — this is a distinct possibility — you don’t know the ticket is a loser. By similar logic: we cannot rule out that our drowning child intuitions are only an evolutionary by-product, so we should doubt we know that we have a moral obligation to save the child — and, of course, by extension, that we have a moral obligation to donate to the poor.

Philosophical Insights from Great Literature

photograph of tall stack of children's books

I expect that some of you are feeling a little worn down; it has been a tough year. And so I want to try something a little lighter than usual: I want to talk about some of the philosophical lessons we can learn from great literature.

Of course, by great literature I mean great children’s books.

I should perhaps mention that the inspiration for this post is a passage in chapter four of G.K. Chesterton’s book Orthodoxy. There Chesterton notes, in passing, that many great ethical principles can be extracted from children’s fairy tales.

“But I deal here with what ethic and philosophy come from being fed on fairy tales. If I were describing them in detail I could note many noble and healthy principles that arise from them. There is the chivalrous lesson of “Jack the Giant Killer”; that giants should be killed because they are gigantic. It is a manly mutiny against pride as such. For the rebel is older than all the kingdoms, and the Jacobin has more tradition than the Jacobite. There is the lesson of “Cinderella,” which is the same as that of the Magnificat— EXALTAVIT HUMILES. There is the great lesson of “Beauty and the Beast”; that a thing must be loved BEFORE it is loveable. There is the terrible allegory of the “Sleeping Beauty,” which tells how the human creature was blessed with all birthday gifts, yet cursed with death; and how death also may perhaps be softened to a sleep.”

So let us extend this analysis: what are some philosophical lessons we can learn from great literature?

To make things more difficult, let’s also limit ourselves to non-obvious lessons. Obviously, we can learn from The Little Engine That Could about the value of optimism and hard work. And obviously we can learn from Where the Wild Things Are lessons about emotional management and community. But those books are, at least in part, written to teach us those lessons. What I’m looking for are hidden lessons, insights deeper than the author’s own awareness.

For example, I don’t know if Crockett Johnson read much of Immanuel Kant’s Critique of Pure Reason when writing Harold and the Purple Crayon. But whether he had Kant in mind or not, I know of no story that so well captures the Kantian idea that the world we experience, up to and including space and time, is a construction we create out of the order our minds impose on the world.

Nor are the insights of that book limited to metaphysics. Consider the profound psychological insight revealed when Harold, after drawing up an apple tree bursting with fruit, frightens himself away from the good things he created by enlisting a frightening dragon to guard the tree. How often, I wonder, do we fence ourselves off from goods simply because we overreact to the fear that others may take something we think our own?

Or perhaps some of you are looking for a more political lesson to draw from the book. And so we reach the point of the book where Harold was looking for his own window. And since “he couldn’t think where it might be. He decided to ask a policeman.” But of course “the policeman pointed the way Harold was going anyway,” and so Harold learnt, even from a young age, the way that police power is employed to maintain the status quo, even an unjust one.

But let us move on from Harold and the Purple Crayon. 

There is the profound lesson of If You Give a Mouse a Cookie; that your reward for doing a good deed is not usually the relaxation of conscience, but rather the deepening of charity and so a dawning appreciation of the higher and harder acts of love still required of you. And then there are the prosaic lessons of Goodnight Moon: first, on the value of attending to the commonplace goods of the everyday, and second, on the value of taking goodbyes and goodnights seriously, since you never know for certain that you will both wake again in the morning.

There are some obvious Aristotelian themes on temperance in The Very Hungry Caterpillar. (What with how unhealthful attempts to indulge appetite, rather than leading to relief, merely result in the appetites growing all the more persistent; and how he is only sated when he eats his proper food of a single leaf). But there are also deeply hidden philosophical nuggets. For instance, do you remember how the pages of that book grew with each passing day? You could see what foods were coming up, but not yet see their quantity nor read what the effects of indulgence would be. What a profound commentary on what happens to us when we try to pursue future goods. We see, at least somewhat, the good we are after. But we are insensitive to quantity, and rarely notice the unfortunate unintended consequences of the pursuit of the apparent good.

Of course, sometimes the philosophical lessons of children’s books are more abstract and less practical. Thus, in Chapter III of A.A. Milne’s Winnie-the-Pooh (entitled “In Which Pooh and Piglet Go Hunting and Nearly Catch a Woozle”) Pooh starts tracking what he thinks is a Woozle, only to find as he tracks that a second Woozle seems to have joined the first. Piglet joins the hunt, and before long yet another type of animal seems to have joined the pack they are tracking. Eventually Pooh realizes that he has “been Foolish and Deluded,” because of course it was his own tracks that he was chasing.

This is, of course, almost identical to the story that John Perry tells in his famous and rightly acclaimed “The Problem of the Essential Indexical”:

“I once followed a trail of sugar on a supermarket floor, pushing my cart down the aisle on one side of a tall counter and back the aisle on the other, seeking the shopper with the torn sack to tell him he was making a mess. With each trip around the counter, the trail became thicker. But I seemed unable to catch up. Finally it dawned on me. I was the shopper I was trying to catch.”

Perry might extract more philosophical insights about the essential indexical than our foolish and deluded Pooh does. But I’m still glad to know that our bear of very little brain came up with the raw material for the philosophical insight 53 years before Perry did.

I hope you’ve found these examples entertaining. There are dozens, maybe hundreds, more hidden philosophical lessons to be drawn from children’s books, but I’d rather hear the examples others come up with. Send me an email with your own favorite examples of hidden insights from great literature; I’ll eventually make a follow-up post either here or at least on my personal blog.

But What Are We Doing Here?

Before I end this post, I just want to head off a certain potential skepticism. Some of you may doubt that it is the business of the ethicist to be extracting lessons from children’s books. Philosophers are supposed to be making great new ethical discoveries, seeing ethical truths others have missed! Why bother with moral insights so humdrum as to have trickled down to the tales we tell kids?

Well, perhaps there is sometimes a role for ethicists to find new ethical truths, identify unnoticed principles, or apply principles in original ways. But I don’t think that is what ethicists should usually be doing. Consider, for instance, this footnote in Immanuel Kant’s Critique of Practical Reason:

“A reviewer who wanted to say something censuring this work hit the mark better than he himself may have intended when he said that no new principle of morality is set forth in it but only a new formula. But who could even want to introduce a new principle of all morality and, as it were, first invent it? Just as if, before him, the world had been ignorant of what duty is or in thoroughgoing error about it.”

Kant’s point, and I think it is a good one, is that ethics is not like natural science at least in this one respect: ethical truth is something to which humanity has always had access just in virtue of being human. We do not need to discover ethics the way we discover that force equals mass times acceleration. Rather, we need to recognize and remember those ethical truths which, in a sense, we already knew. Thus, it is the job of the ethicist not to invent new principles, nor even discover unknown truths, but to give us new and clearer formulations of those principles which we somehow had already.

Incentivizing the Vaccine-Hesitant

photograph of covid vaccination ampoules

Since the beginning of the COVID-19 pandemic, vaccine hesitancy has remained a constant concern. Given expectations that a vaccine would be found, experts always anticipated the problem of convincing those who distrust vaccines to actually get inoculated. A great many articles coming from the major news outlets have aimed at addressing the problem, discussing vaccine hesitancy and, in particular, trying to determine the most promising strategy for changing minds. In The Atlantic, Olga Khazan surveys some of the methods that have been proposed by experts. Attempts to straightforwardly correct misinformation seem to have proven ineffective, as they can cause a backfire effect where individuals cling to their pre-existing beliefs even more strongly. Others instead suggest that a dialectical approach might be more successful. In The Guardian, Will Hanmer-Lloyd argues that we should refrain from blaming or name-calling vaccine-hesitant individuals or “post on social media about how ‘idiotic’ people who don’t take the vaccine are” because “it won’t help.” Similar to this “non-judgmental” approach that Hanmer-Lloyd recommends, Erica Weintraub Austin, Professor and Director of the Edward R. Murrow Center for Media & Health Promotion Research at Washington State University, and Porismita Borah, Associate Professor at Washington State University, in The Conversation propose talking with vaccine-hesitant people and avoiding “scare-tactics.” Among the things that can help is providing “clear, consistent, relevant reasons” in favor of getting vaccinated while at the same time discussing what constitutes a trustworthy source of information in the first place.

In spite of all these good suggestions, to this day, Pew Research reports that only 60% of Americans would probably or definitely get a vaccine against COVID-19. Though confidence has been on the rise since September, this still leaves a concerning 40% unlikely to pursue vaccination. It is perhaps in light of these facts that a recent proposal is beginning to gain traction: incentivizing people by offering prizes. Ben Welsh of the LA Times reports that the rewards proposed include “Canary home security cameras, Google Nest entertainment systems, Aventon fixed-gear bicycles and gift cards for Airbnb and Lyft.”

But is it right to give out prizes to lure the initially unwilling to seek vaccination?

The answer depends on the moral system to which you subscribe. You might think that given the seriousness of the current circumstances it is especially crucial to get as many folks vaccinated as possible, and that the means of accomplishing this task are of secondary importance. This would be a consequentialist view, according to which the moral worth of an action depends on the outcomes it produces. One might feel the force of this line of argument even more when considering that the consequences of vaccine hesitancy carry dangers not only for the individuals refusing to get vaccinated but for the rest of us as well. Just recently, a Wisconsin pharmacist deliberately ruined 57 vials of vaccine, enough to vaccinate up to 500 people, because he believed they were unsafe. So considering how significant the impact of vaccine distrust can be, it is understandable that one might employ even unusual methods – such as prizes – to convince those who remain reluctant to join the queue.

On the other hand, if you do not feel the force of this outcome-based argument, you might think that there is something to be said for the idea that changing people’s behavior does not necessarily change people’s beliefs. In this sense, offering a prize might not do much to alleviate the distrust they feel towards vaccination or the government. Consider another example. Suppose you do not believe that exercising is good. Yet your best friend, who does believe in the benefits of exercise, convinces you to go running with her because the view from the hill where she runs is stunning. You may eventually elect to go running, but you will not do it because you are now a believer in exercising. You will go running just so that you can admire the view from the hill, without having changed your beliefs about exercise.

What is the problem with not changing people’s beliefs? You might be tempted to think that there is no problem, if you believe that the end result is all that matters. But even in that case, it is beliefs that drive our actions, and so as long as individuals still believe that vaccines are not to be trusted, giving out prizes will only be a marginal and temporary solution that fails to address the deeper, underlying issue. The worry is that someone who opts to get vaccinated upon receiving a gift card is not deciding to get vaccinated for the right kind of reason. This argument picks out a distinction famously known in philosophy between the right and the wrong kinds of reasons. The philosophical debate is complex, but, in general, when it comes to believing something, only epistemic, evidence-based reasons count as the right kind of reasons. Should one, instead, come to act on the basis of considerations that have more to do with, say, wishes or desires, those would represent the wrong kind of reasons.

So what is the solution here? Well, there is no solution, as is often the case when it comes to philosophical positions that are fundamentally at odds with one another. But here is the good news: looking at the ways in which real-life events connect with philosophical issues can help us figure out what we think. Examining issues in this way can prove useful in isolating the features that may help us understand our own particular commitments and convictions. Thinking through these tensions for ourselves is what allows us to decide whether we think the proposal to encourage vaccination efforts by offering prizes is a legitimate one.

Time to Let Up or Double Down?

photograph of woman with face mask sitting in large, empty street dining area

The rollout of COVID-19 vaccines represents a significant step in combating the pandemic, one that will likely alter people’s behavior in response to this global health crisis in significant ways. With a vaccine on the horizon, risk assessment can change in two very different directions:

On the one hand, it can alter the risk we associate with individual behaviors. For a risky behavior, the prospect of safety can reduce the perception of associated risk: think of jumping out of an airplane, which seems less risky because there is a parachute. With a vaccine in circulation, taking one’s chances with exposure can seem a more reasonable thing to do. Vaccination will (hopefully) mean fewer people contracting the virus, lowering the societal impact overall. Risk then gets assessed in short-term frames: if one weighs the risk of exposure over 4 months rather than 12, one could think that they might as well lighten restrictions.

On the other hand, the prospect of a vaccine can alter the way we assess risk in a long-term context. When fighting a disease with a radical course of treatment, having an indeterminate time frame versus a given length of time to “push through” makes a great deal of difference. When the end point is unclear, it makes sense to consider harsh conditions unrealistic or unreasonable. In less dire cases, say a highly demanding and stressful workload, the expected length of time makes a significant difference in deliberation. Altering the long-term structure of your life around such demands can seem less than feasible, and compromises in meeting those demands can make a great deal of sense. It can make less sense, on the other hand, if the heightened demands are only for a short period of time and come with an important payoff.

With a vaccine in sight, much rests on how the adjustments to daily life given the risk of exposure are reassessed. One reason many give for not complying with state restrictions is that the virus is just something we “have to learn to live with,” or that it is a new way of life. Treating the vaccine as a parachute, as a dialing down of the harm associated with individual actions that put others at risk of contracting the virus, increases danger until the vaccine can come into effect. Letting up on the adjustments to behavior continues to do all the harms that have been associated with the spread of the virus: the deaths, the long-term effects of contracting the virus, the impact on our healthcare system, the systemic impact on the most marginalized populations, the destruction of our economy due to essential workers becoming ill, etc. These effects will not stop simply because of the prospect of a vaccine. The goals remain the same as they have been since February.

With the prospect of improving the fight against the pandemic, the reasonable choice could actually be to double down, because we lose one reason to avoid the restrictions. The counterargument that long-term restrictions will harm the economy, undermine the value of daily life, etc., has been weakened considerably, as we are now facing a short-term sacrifice for a long-term reward. But until inoculation reaches critical mass, we can’t point to our parachute to justify a refusal to exert effort in pursuit of our shared end goal.

Why I Am Not like You: The Ethics of Exceptions

photograph of long line of people queuing to enter store

Consider two different arguments: first, that it was okay for me to travel in early December; second, that I should be given early access to a COVID vaccine.

My Travel: I understand that traveling was irresponsible in general, and that it was important that people not do so. Had COVID been happening in any other year I would not have traveled at all during the holiday season. But since it was this year, I had good reasons to carve out an exception for myself. First, it was really important for my girlfriend to meet my parents in person before we could get engaged; most people did not have such major life plans put on hold by the inability to travel over the holidays. Second, my grandfather is not doing well, and so the consequences of delaying a visit could not be known. Third, this was the first time in six years my parents were back in the states for Christmas. Fourth, my girlfriend and I could take steps to minimize the risk: we drove instead of flying, we could travel in between the Thanksgiving and Christmas rushes, we both got tested before the trip, and I was able to aggressively quarantine the week before traveling.

My Vaccine: While I should not get the vaccine before the elderly, I should get it before it is open to the general public. First, I am teaching an in-person class in the spring, and doing so, at least in part, because the state government of Florida is pushing to increase the percentage of college classes taught in-person in the spring. I offered to teach in person to help out, but it seems like the least that the state government could do, after I agreed to be around (I expect) irresponsible undergraduates, is help make sure I have access to a vaccine. Second, I have been extremely aggressive in my social distancing. This means I should get the vaccine early since a) I have already taken on more inconvenience than most to help protect the public good and b) I’m more responsible than most, so I’ll be a larger drain on the economy if I remain unvaccinated. Third, I’m hoping to get married fairly soon, and that is an important life event that should qualify me for some priority.

— — —

I think the first argument is pretty good and the second one pretty bad. I really should not get priority vaccine access, but I think it was OK for me to travel in early December. But what I want to discuss in this post are some of the challenges in identifying when you should be an exception to a general rule.

Each argument tries to make out that I am, in some sense, special. And if you are going to exempt yourself from a rule you think others should generally follow, then you need to provide a compelling explanation for what makes your case unique. This follows from a deep moral principle about the moral equality of persons (one of the principles Immanuel Kant was getting at in his first formulation of the categorical imperative).

Suppose I don’t want to wait in line at the coffee shop. Can I jump the line? No. If ‘not wanting to wait’ were an adequate reason for anyone to cut in line, then everyone would cut in line (since basically no one wants to wait). But if everyone cut in line, then there would no longer be any line at all. My impatient cutting in line relies on the patient waiting of everyone else. But here we bring in our deep moral principle: I am not special, which is to say that if I should get to do something, other people should as well. So if ‘not wanting to wait’ is a good reason for me, it must be a good reason for everyone. Since we have already seen it cannot be a good reason for everyone, we can conclude it is not a good reason for me.

So, if I want to cut in line, then I had better have a special reason to do it — a reason that will not apply to everyone else as well. Suppose I arrive at the hospital with a child suffering anaphylactic shock. I see there is a long line of people waiting to get their severed thumbs reattached (I’ll leave it to you, the reader, to explain the sudden epidemic of thumb severings).

Here it is permissible for me to cut in front of people waiting to get their thumbs reattached. It is permissible because my reason for cutting will not generalize. If we changed the case so the line was all other parents with children suffering anaphylaxis, then it would not be permissible to cut (since we would otherwise return to our original problem).

Okay, so to carve out an exception there must be something unique about me. Well, there are things that are fairly unique about me; does that mean I should get to jump the vaccine line? No. It was not just that anaphylaxis was different from a severed thumb; it also needed to be more important. A broken leg, just because it is a different injury, would not make it okay to cut in line.

And here we come to a problem. While there are some things unique to me that suggest I should take precedence, basically everyone has some reason why they should be an exception. Sure, I’m hoping to get married, but others, who are about to have their first child, will need to spend some time in a hospital and could really use the in-person support of grandparents. Sure, I’m teaching in person, but others are taking more than one class in person. Syndrome was right: if everyone is special, no one is — at least in the sense that if everyone can identify reasons why they should be able to skip to the front of the line, then no one gets to skip.

And indeed, even if I decided I really was more special than others, it is still probably a bad idea to let me jump in line. That is because we, as a general rule, do not want society making thousands of fine-grained decisions comparing every possible special exception. It opens up far too many possibilities for bias and corruption, and besides that, it becomes democratically problematic because it is impossible to adequately articulate the thousands of priority decisions to the citizenry.

Alright, so I should not get to cut the vaccine line.

But what about my choice to visit my parents in early December? I think most people should stay home, but I also really thought I had a better reason to travel than others. Is that enough to justify my exception?

Not quite; there are two complications I need to consider.

First, I need to factor in my biases. Lots of biases may play a role, but let’s just look at an availability bias. I know the details of my life quite well; I do not know the details of yours. Thus even if my case looks more exceptional to me, that might not be because it is, but just because my own specialness is easier to see.

Second, even if I factor in all those biases and still think I’m exceptional, there is a problem with taking that as sufficient to make an exception. That is because I’m not only making a first-order decision, I’m also making a second-order decision. I’m not only deciding that my case is exceptional, I’m also regarding myself as a competent judge to decide on my own exception. This creates a problem because I expect most people are biased, and so if most people decide for themselves whether they should be an exception, far too many will make the wrong choice.

One way to see this problem is to note that others will disagree with me about what is an important reason for an exception. Let’s explain this with an analogy. Something like 90% of teachers believe they are above average. Now, this might be because teachers are biased (I expect that is likely), but there is another explanation. Perhaps Anne and Barnie are above average lecturers and Chloe and Darius are above average mentors. Anne and Barnie think lecturing is the most important part of teaching (thus why they spent time getting good at lecturing) and Chloe and Darius think mentoring is the most important part of being a good teacher (thus why they invest so much in mentoring students). Here, even if each of them accurately judges how good they are at various teaching techniques, we will still get everyone thinking they are an above average teacher.

Similarly, if everyone decides for themselves whether they should be an exception, we could well end up with many people thinking they are one of only a few who deserve an exception. Not because they are wrong about any of the details, but simply because different people have different priorities. So even if 100% of people think only the 5% of people with the most pressing reasons to travel should travel, you could still easily get 30% or 40% of people honestly deciding they fall in that 5%.

Of course, I think my priorities are right. I think I am better at thinking these things through than the average person. But is that enough to let me treat myself as an exception? Probably not, since I also think that others think their priorities are right, and I expect that others think that they are better than average at thinking these issues through. So the question I am forced to ask is not just, am I better at making decisions, but rather, should anyone who thinks they are better at making decisions be allowed to decide for themselves? If my answer to that latter question is no, then it might still be wrong to carve out the exception.

So was I wrong to travel in early December? It is hard to say. On the one hand, I really do think I had a good reason to do so. But on the other hand, I do not think most people should get to carve out their own exceptions just because they think the exception is warranted (of course, maybe it is not actually hard to say but I just do not want to admit I made the wrong choice).

Jane Austen and Moral Instability

portrait engraving of Jane Austen

“Instability” is not a word many would associate with Jane Austen. Film and television adaptations have cemented her reputation within pop culture; we picture rolling hills, country balls, and restrained drama played out in charming domestic interiors. She seems uninterested in the Napoleonic wars, which were playing out just across the channel, or any of the weighty political matters that concerned the more “serious” writers of her day. She does seem interested in social unity, usually represented by a wedding, which punctuates the end of each narrative. Just deserts are always doled out by the narrator, and we always know which characters to root for. For these reasons, her name has become a byword for moral stability, and her version of the English countryside has come to represent a time when society wasn’t subject to rupture and confusion, as it is today.

If the wide array of contemporary Austen-themed conduct books indicates anything, she’s still seen as a touchstone for moral behavior. Her words have been used to demystify cooking, sex, and everything in between. This flourishing industry casts her as a sweet and world-savvy aunt, and further suggests that her novels can be pulverized into idiomatic quotes without context to serve a unified (if somewhat patchwork) Austenian ethic of the everyday.

And yet, beneath this seemingly tranquil surface lies a battleground for radical and conservative academics. Looking more closely at her works, it’s easy to see why; what at first appears a unified moral vision is anything but.

Attributing a single moral philosophy to Austen is notoriously difficult. There are overarching moral messages that connect her novels, but what may be the subject of mockery in one text is celebrated within another, or even within the same text. The unstable positioning of the Gothic in Austen’s first published novel, Northanger Abbey, is just one example. The novel’s heroine, Catherine Morland, is a voracious reader of pulpy romances, which leads her to commit a series of social blunders. She suspects that her love interest’s father murdered his own wife, in a plot lifted directly from the sensational literature of her day. But even though Catherine’s suspicions are proven false, the widowed gentleman proves to be cruel in other ways, which indicates that there is a glimmer of insight in even the most ridiculous Gothic fiction.

Even Austen’s engagement with class is hardly as black-and-white as it may appear. Often cited as the most fundamentally conservative element of her fiction, social and economic distinctions are generally portrayed as the natural state of society, even beneficial to those at the bottom. Members of the landed elite like Mr. Darcy of Pride and Prejudice and Mr. Knightley from Emma especially embody this paternalism. And yet Austen’s final published novel, Persuasion, celebrates the meritocratic royal navy, and denigrates the landed elite as undeserving of their wealth and privilege.

Academics from both sides of the political spectrum have claimed her as one of their own, a conflict which came to a head with queer theorist Eve Kosofsky Sedgwick’s groundbreaking article on Austen, “Jane Austen and the Masturbating Girl,” in which she explores the cultural history of masturbation through Austen’s Sense & Sensibility. The mere title (the actual paper had yet to be published) prompted conservative critic Roger Kimball to write Tenured Radicals, a pearl-clutching polemic on the moral bankruptcy of leftists in the academy, who dared link a bulwark of old-fashioned English morality like Austen with such a depraved topic. Sedgwick’s article, and Austen by association, clearly came to represent something much larger within intellectual discourse. Both Sedgwick and Kimball had completely different views of this body of work, which again speaks to Austen’s versatility as a writer and as a moral touchstone.

Like all great literature, her work opens the way for a myriad of interpretations. She was a novelist, not a philosopher, and was therefore not obliged to lay out her understanding of the world in treatise-form. As Thomas Keymer mentions in his book Jane Austen: Writing, Society, Politics, Austen recoiled from moralizing novels of her contemporaries, like those of Hannah More, for their Evangelical zeal and purely didactic approach to fiction. She herself wrote to her sister Cassandra, “I do not write for such dull Elves As have not a great deal of Ingenuity themselves.” She is not calling for moral and imaginative complacency, but for wide-ranging sympathy and understanding.

Helena Kelly’s 2016 book Jane Austen, the Secret Radical, is described by Google Books as “A brilliant, illuminating reassessment of the life and work of Jane Austen that makes clear how Austen has been misread for the past two centuries and that shows us how she intended her books to be read, revealing, as well, how subversive and daring — how truly radical — a writer she was.” The impulse to claim her as a “secret radical” is perhaps as misguided as Kimball’s attempt to claim her for conservatives, compelling as Kelly’s interpretation may be. We can never completely reconstruct how Austen understood the world through her novels and surviving letters, but we can understand her as a three-dimensional person who may have had radical thoughts while still being a product of her time. When we move past our preconceived notion of her as a fixed moral touchstone, we can engage with her work in exciting new ways, which ultimately sharpens our understanding of how to be a person in an increasingly complicated world.

Disagreements in Ethical Reasoning: Opinion and Inquiry

photograph of graffiti image on building with two arms pointing in opposite directions

With the school year about to begin, there are going to be plenty of students entering colleges and universities who have never taken an ethics course before. When I teach introductory philosophy courses, the common response I get when I ask students about ethical issues is “it’s all a matter of opinion.” This is part of a general attitude that when it comes to ethics, no judgment is better than any other. This habit of thinking can be so hard to break that even after an entire semester of talking about moral problems and debating the merits of different moral theories, students will still report that it is all just a matter of opinion. Why is this a problem? The habit of thinking that ethics is just a matter of opinion ultimately serves as a roadblock to ethical thinking and moral inquiry.

Moral relativism can be a complicated topic in philosophy, but for our purposes we can define it as the view that moral judgments are not true or false in the same way as factual judgments. Instead, morality is dependent on groups or cultures, each with their own incompatible ways of understanding the world. J. David Velleman has argued that based on data collected from various communities, different communities understand moral actions differently. Jesse Prinz argues that emotional sentiment plays a strong role in moral judgments; an action is wrong if it stirs a negative sentiment. Moral relativism is also often connected to tolerance; if there are no universal moral principles, the moral principles of one culture are not objectively superior to others so we should be tolerant of other cultural practices.

Relativism would seem to offer support for the idea that ethics is all a matter of opinion. Being tolerant of other moral worldviews is generally considered a good thing. Often moral issues can strike different emotional chords with people, and it can seem disrespectful to tell people that they are wrong. If ethics is about how we feel about moral problems, then it seems hard to claim that it can rise above mere opinion. However, the view that ethics is all just a matter of opinion and relativism are not necessarily the same. If one believes that morality is dependent on culture, that would not warrant the claim that morality is all a matter of opinion, especially if we are only talking about a single person. Littering is considered a cultural faux pas in North America, so an individual could not claim that littering is morally okay merely because it is their personal opinion that it is.

Indeed, while the justification for the view that ethics is just a matter of opinion and the moral relativist view can overlap, the position that ethics is a mere matter of opinion (especially personal opinion) is especially problematic. For starters, one can be tolerant of other cultures and their moral views without having to believe that ethics is merely a matter of opinion. For instance, a moral pluralist may claim that there are objectively correct and incorrect ways to react to moral problems and that moral answers can vary depending on local concerns. Second, while ethics does contain an emotional component, we are not therefore obligated to accept that ethics is merely emotional. Just because you or many others feel something about a moral issue does not mean that that feeling justifies any possible response.

The biggest problem, however, with the view that ethics is merely a matter of opinion is that more often it becomes an excuse to not think too deeply about moral problems. Consider this example: You have a strong desire to help others and are trying to determine what charities you wish to donate to and how much. You could investigate how effective each charity is, who may need it the most, and how much money you wish to give relative to other financial needs and desires you may have. But instead, you decide to take your cash and shred it.

Certainly, we can debate what might be the right thing to do in this situation, but it would require a fairly idiosyncratic person to decide that shredding money was the moral thing to do. We may not all agree on what the right thing to do is, but we can establish a fairly broad consensus on what the wrong thing to do is. Someone who is genuinely interested in helping others and is genuinely conflicted about how to do it is not justified in shredding their money. Objectively, this is because it doesn’t solve their own moral problem. In other words, mere opinion is insufficient to justify any possible answer.

Now let’s say that in the same situation I decide that the most moral thing to do is to give money to an animal charity. You may disagree and opt instead for a charity that alleviates hunger. Should we conclude that our disagreement is a mere matter of opinion? Two moral people can come to different conclusions, each trying to secure different goods and avoid certain problems. Each can also recognize the moral reasoning of the other as legitimate without having to conclude that the other was morally wrong for doing what they did. This is not merely because the two have a difference of opinion. It is because each appreciates the moral reasoning of the other; they are capable of recognizing the legitimacy of other courses of action. However, they may not recognize the morality of a mere opinion that hasn’t been thought through. Both could agree that shredding your money is morally wrong, and both could recognize the importance of moral reasoning as a means of revising and refining a proposed course of action.

American philosopher Charles S. Peirce believed in the importance of inquiry for settling disagreements and disputes of opinion, not only between each other but with ourselves. If we could only inquire long enough, he argued, we could test our ideas in practice. Because of this, he claimed that part of the bedrock of reasoning is that we not take steps to block the path of inquiry. The instinct to look at any moral problem and claim that it is all a matter of opinion does exactly this. The immediate response that the answer to any moral problem is a matter of opinion cuts off inquiry before it begins. If we accept that there is no better answer, we will not seek one. It is an excuse not to look for a better answer, not to rely on our reasoning, not to discuss our proposed solutions with others, and not to seek consensus by refining our ideas.

The notion that the answer to any moral problem is a matter of opinion, and that is all there is to say about it, is intellectual laziness. If you are a new student taking your first ethics class, I urge you to look beyond such an attitude and to inquire further. We may end up concluding that our answers are mere opinions, but we have no justification for starting with that answer. Instead, we may find that we have missed several better responses that can only come from a willingness to inquire further.

Ethical Questions about Poverty Tourism

If you choose to visit one of the world’s big cities, a sightseeing option that may be available to you is what is frequently referred to as “poverty tourism.” If you look hard enough, you’ll be able to find tour buses that will drive you through the poorest parts of the city—places that you wouldn’t see if you hitched a ride on the standard hop-on-hop-off tourist bus. Poverty tourism is common in places that have been hit hard by natural disaster. Tourists tend to be curious about the extent of the devastation.

Feel This

Much has been written about the appalling, depressing, and infuriating case concerning Brock Turner and his unnamed victim. I won’t rehearse the case, nor the dialectic it has sparked between those sympathetic to the victim and those outraged that sympathy can ever be extended to the perpetrators of crimes, especially when such perpetrators are members of a hyper-privileged class such as that to which Turner belongs.
