
Travel on Trial: A Defense of Tourism from Moral Imagination

photograph of crowded market street in Barcelona

An article making the case against recreational travel recently garnered attention, prompting many to jump to the defense of tourism. Regardless of one's ultimate conclusion regarding the value (or lack thereof) of travel, the article's author, Agnes Callard, makes some compelling points. The fundamental argument of the piece is that travel does not actually provide the benefits we tend to ascribe to it. A commonly accepted narrative is that travel allows us to grow in personal insight, connect with others, and have our presuppositions challenged. However, if we are fully honest with ourselves, travel rarely seems to have these effects. This is perhaps most clearly displayed when we analyze the behavior of others. While it might be easy to trick ourselves into thinking travel engenders personal transformation, if we evaluate its impact on our friends and family, we must confess that travel's transformative effects are few and far between. Our friends and family appear to stay mostly the same, regardless of their most recent global trek.

So in the face of this counter-evidence, why do we still insist on touting the many virtues of travel? Callard concludes by claiming that travel helps shield us from reflecting on our own mortality. Structuring our lives around our next adventure allows us to feel like the future is full of wide-open possibilities, when in actuality, such adventures are powerless to change our eventual fate.

While there are various places one could object to Callard's chain of reasoning, I hope to offer at least a partial vindication of the value of travel by appealing to its contributions to our moral imagination. I agree with Callard that the link between personal transformation and travel can be overstated, but I contend that travel (at least in principle) has the capacity to reshape our moral lives in helpful ways. Moral imagination consists in our ability to effectively identify the full array of options when it comes to ethical thinking and decision-making. A well-cultivated moral imagination is what allows us to transcend mere moral convention and to strive for better ways of living. It allows us to act intentionally in ways that we deem morally ideal, regardless of whether we see others around us acting in a similar manner.

The concept of moral imagination is multifaceted, so I'll unpack two aspects of it in order to illustrate my claim about the potential of travel. The first is framing, which deals with the frame of reference through which one engages the world. Depending on one's frame, one might be more or less likely to ignore morally salient features of situations. For instance, someone who grows up on a dairy farm might be shut off to the possibility of ethical veganism, not due to having a substantive objection to the position, but merely due to a lack of exposure to the idea. The cultural factors of the individual's environment that make the viewpoint seem intuitively implausible function to exclude it from their frame.

Of course, the fact that a position falls outside one's frame does not exclude it from potentially being the morally correct one. Particularly when one has limited exposure to other ways of life, travel can help expand one's frame to be inclusive of a wider array of moral possibilities and categories. Regardless of whether the dairy farmer still opts to reject ethical veganism after considering the position, it seems morally preferable that they reject it for considered reasons as opposed to failing to ever consider it.

Another dimension of moral imagination is having well-developed moral vision. One who has moral vision lacks significant ethical blind spots, and such an agent can readily identify the morally relevant features of situations. The opposite of moral vision is moral myopia, or moral blindness. Consider a young business owner who is seeking to grow her company. She might very well decide to outsource the material production of her products to a nation with less regulation around such activities. However, she fails to do her due diligence, opting not to look too deeply into the working conditions of the employees producing her company's products. The business owner's frame allows for the consideration of the workers' conditions; it's just that she also has financial incentives to avoid seriously grappling with the issue in front of her, which potentially prevents her from coming up with creative solutions.

If widening one's moral frame and strengthening one's moral vision are constitutive parts of fostering moral imagination, what positive role does travel supposedly play? Let's start with unpacking the link between travel and the expansion of one's moral frame. One's moral frame is established by the set of experiences, assumptions, and beliefs one brings to the table of moral deliberation. If we consider the dairy farmer example, it is plausible to think that the experience of other cultures, which revere animal life differently and consume an alternative kind of diet, would encourage him to consider other ethical outlooks. Insofar as stretching one's moral frame is directly linked to the expansion of moral possibilities, travel is an ideal tool, since it allows for the consideration of diverse perspectives. Of course, Callard is correct in her assertion that the mere opportunity for such consideration does not mean that people will actually take advantage of it. It is simply to say that travel affords us increased potential to grow our moral imagination via exposure to diverse perspectives.

Similarly, we have good reason to think there's a positive relationship between travel and the prevention of moral blind spots. There are certain daily realities faced by those in different cultures, socio-economic brackets, and religious systems which we fail to consider in our moral deliberations. This fact might be particularly relevant when considering topics like the ethics of charitable giving and the way we engage with the global economy. Failure to appropriately appreciate the ways in which our actions (or inactions) impact those in our global community can certainly lead to moral blind spots in these areas, due to certain cultural and financial incentives to continue on in relative ignorance. Thus, travel has a unique ability to shake up the status quo of our moral life, providing us a rich avenue for the cultivation of our moral imagination.

Why Is the World in Moral Decline?

photograph of dark alley with sunlit street in background

It isn’t. But apparently it feels that way to most people. According to investigators Adam Mastroianni and Daniel Gilbert, people everywhere (or at least in sixty different nations) have consistently believed for seventy years that the world is declining morally – that people are getting ethically worse as they get older, and that every succeeding generation is morally worse than the one before it.

Except that if you ask people other questions like, "Were you treated with respect all day yesterday?" or "Would you say that most of the time people try to be helpful, or that they are mostly just looking out for themselves?" or "During the past 12 months, how often have you carried a stranger's belongings, like groceries, a suitcase, or shopping bag?" you get very different answers. Mastroianni and Gilbert say, in other words, "that when people are asked to assess the current morality of their contemporaries, their assessments do not change over time." So, if you ask people if society is declining morally and people are worse than they used to be, a startling 84% say, "Yes." But if you ask them about how they've personally been treated lately, the answers people gave in 1949 pretty much match what people say in 2023. Hence the title of Mastroianni and Gilbert's paper, "The Illusion of Moral Decline."

I have encountered a lot of anecdotal evidence that people believe that morality is declining, and Mastroianni and Gilbert cite more. In fact, I have heard plenty of people claim everything is declining, all the time. What is new to me is that people’s contemporaneous reports about their actual experiences with other people belie this narrative of decline. How can we explain this?

I would argue that human beings are pervasively influenced by various archetypal narratives. Here are two really powerful ones. The narrative of progress – that, as Dr. Martin Luther King said, “the arc of the moral universe is long but it bends toward justice” – has had tremendous influence. But the narrative of moral decline goes all the way back to the Garden of Eden and that apple. (Lest you think it’s only a Christian phenomenon, the Ancient Greeks also thought everything and everybody was getting worse.) There is, however, no narrative of stasis. People usually think that things are mostly getting better or mostly getting worse.

But even if that's true, it doesn't explain why the majority gravitate toward the latter rather than the former – why morality should be thought to be in decline, specifically, rather than progressing. The explanation that Mastroianni and Gilbert give for this illusion is based on how two psychological phenomena come together: the "biased exposure effect" and the "biased memory effect." The biased exposure effect means that people pay more attention to negative information about other people. The media, especially the new media, may amplify this effect. "If it bleeds, it leads" was supposed to have been the motto of the most influential newspaper publisher in American history. But negativity, even violence, still seems to generate more interest on screens. The biased memory effect, by contrast, goes the other way. People recall positive events more, forget or misremember negative events, and, to the extent they do recall negative events, those events tend to have lost their emotional impact. You may have experienced meeting someone with whom you once had a conflicted relationship, only to discover that you now share a warm nostalgia for the good times and have forgotten about the bad ones.

So, when people look around at what is currently going on, the negative aspects of things, the bad things that happen, have more salience for them. However, when they think of the past, it seems much better to them now than it did at the time. But when you ask people specific questions about what they have recently experienced, those experiences aren't all that different from what people experienced in the past.

I don't want to argue with any of this – I am not an empirical psychologist. But I want to bring one thing to the forefront that seems to be lurking in the background. There's always a problem when doing empirical work on normative or moral issues. Most famously, in studying moral development Lawrence Kohlberg asserted that people who follow Kant's moral theory are more morally well-developed than consequentialists. But that only makes sense if you have already won the argument about Kant's theory being the best available. Try telling a utilitarian that they only believe that view because they are not as morally well-developed as Kantians. Given our very different perspectives, how should we go about quantifying moral decline?

Mastroianni and Gilbert give a pretty plausible account of the core of morality when they ask people if other people have shown them respect or have been helpful. But suppose I have the view that any marriage except between one man and one woman is morally wrong or that doctors who help people medically transition from one gender to the other are butchers. I might think then that while people are still relatively nice, they are morally bankrupt in ways no one would have even thought of in the past. Or suppose I think abortion is the key to women's autonomy and that without robust protections around it, women will not be counted as fully human. I might think that Dobbs v. Jackson shows the world in precipitous moral decline. Whether or not you think the world is in moral decline depends on how we define moral decline. Again, it's perfectly possible to think that in one-on-one interactions people are generally fine, but that, overall, things are worse because people are morally corrupt in other ways.

Is there a way to generalize this effect? What if the older people get, the more morally conservative their outlook tends to become? What if successive generations of younger people tend to be more tolerant of a wider range of behaviors? Then one person's moral decline is another person's moral progress. Or so we might speculate.

Another Way: Moral Reasoning and Selective Focus

photograph of a forked path in pine forest

Imagine, with me, a scenario.

You and I are medical students working in an intensive care unit. Over the course of the last few weeks, we’ve been caring for Mrs. Rhodes: a kind woman who, despite her advanced years, has retained her sharp humor and optimism through an incredibly severe bout of pneumonia. In the last few days, however, her condition has rapidly deteriorated, and the decision was made to place Mrs. Rhodes on a form of heart-lung bypass known as ECMO: though her chances are slim, we hope that, by decreasing the burden on her heart and lungs, we will buy her time for the infection to pass and her lungs to heal.

The problem, though, is that Mrs. Rhodes is not the only patient with this pneumonia: since Mrs. Rhodes’ admission to our unit, the causal infection has spread throughout the nursing home where she was being cared for. Over the past few weeks, therefore, our service has become overrun with ill seniors: many who just require supportive care, and some, like Mrs. Rhodes, who require invasive life support. And as of this morning, Mrs. Rhodes has continued to deteriorate, and all five of our ECMO machines are now occupied.

Later that night, as we hope that none of our patients will deteriorate further, we receive a call from a resident obstetrician. A young pregnant woman has just been seen in the emergency room. She's eclamptic, and is showing early signs of cardiopulmonary failure: without some form of life support, both she and her fetus are at risk. The obstetrician asks if any ECMO machines are available.

We, then, are faced with a dilemma: do we allow this young woman and her fetus to die, or do we reallocate one of our ECMO machines, and allow one of our elderly patients to die in the process? We look down our patient panel, and see Mrs. Rhodes’ vital signs and laboratory results: it’s clear that her chances of recovery are dim to pitch dark. But is that grounds to sentence her to almost certain death? The ethical quicksand begins to surround us. Does it matter that we can save the lives of the mother and her fetus if we reallocate the machine? Does it matter that Mrs. Rhodes is older? Or is it ageist for us to even think so? Is Mrs. Rhodes entitled to the machine, given that she’s currently attached to it? Does it matter if Mrs. Rhodes is a leader in our community? If the mother is an alcoholic? If Mrs. Rhodes smokes?

Who gets a chance to live, and who doesn’t?

.  .  .

Ethical dilemmas are frequently framed by limitations; in fact, it is often the limitations themselves which give a dilemma moral weight. We are finite beings with finite options, capabilities, and resources, and this simple fact undergirds many of the dilemmas which we might encounter. In the case of Mrs. Rhodes, for example, our dilemma arises from a limited number of lifesaving ECMO machines: and appreciating this limitation, we turn our moral attention to the question of how to fairly and equitably allocate our limited resources.

Real-life ethical dilemmas, however, do not exist in a vacuum: they are inexorably dependent on the context in which they occur, on the socioeconomic, political, and philosophical systems which surround us.

These systems are causally connected to what happens, and often determine who it happens to; and dilemmas, in all of their alluring simplicity, can lead us to believe that these limitations are simply facts, and not the result of the unnamed systems which brought the dilemma about.

We may debate, for example, the merits of reallocating the ECMO machine or not; we may spill a significant amount of ink over what the moral thing to do is in such a case. Such a focus on the dilemma at hand, however, obscures the role of the systems which brought the dilemma about: we forget that the choice to buy five ECMO machines, and not six, was a choice. People gathered in a (likely exquisitely furnished) conference room, and decided that the procurement of five ECMO machines provided the greatest benefit at the least cost. When we debate and decide if Mrs. Rhodes or the young mother should get the ECMO machine, then, we are truly debating who must die to maintain the financial status quo — a reality which the dilemma, as presented above, wholly obscures. With this perspective, we see that a valid ethical analysis of the case of Mrs. Rhodes is that the time for moral action was in that conference room, and that a hospital which cannot care for a nursing home of sick seniors is negligently underprepared.

One could very well argue that when we get to the point where we are deciding who dies a preventable death for the maintenance of an unjust status quo, we can no longer claim that what we’re doing is truly moral in any meaningful sense. But when we only ask the question posed by a dilemma — in this case, who should get the ECMO machine — we fail to ask these broader questions, and we fail to see the full scope of the problem before us.

The same goes for similar dilemmas which we might encounter. Rare diseases go unresearched and untreated due to economic calculus, but what if non-economic values could guide how research priorities are set? We might ask questions about how to best allocate our individual resources to various charities, but what if individual philanthropy itself, in its modern form, frequently propagates injustice? We can ask questions about the dilemmas which are posed to us, but we must also ask questions of the systems which impose the limitations that inform these dilemmas.

Some dilemmas arise from limitations that are inherent to our nature: we cannot be in two places at once, and we cannot live forever. But, with disturbing frequency, the limitations which inform real-life ethical dilemmas are artificial: they are the contingent result of the decisions which we, as individuals and as societies, make, and though their role might be obscured, these decisions are relevant to how we choose to ethically resolve the dilemmas we face. Understanding ethical decision-making requires attention to these choices and the systems and beliefs which arise from them — attention to the things which, in the dilemmas we face, may lead us another way.

ChatGPT and the Challenge of Critical (Un)Thinking

photograph of statue of thinking man

For the past few weeks there has been growing interest in ChatGPT, the new artificial intelligence language model that was "programmed to communicate with people and provide helpful responses." I was one of the curious who had to try it and figure out why everyone was talking about it.

Artificial intelligence is not a new thing; as an idea it is decades old, first introduced in 1950 by Alan Turing, the British mathematician who is generally considered to be the father of computer science. Later on, in 1956, John McCarthy coined the term "artificial intelligence" at a conference, giving birth to a new field of study. Today, it is everywhere; we use it even without knowing it, and advancements in the area create entirely new fields of inquiry, bringing along new ethical dilemmas that range from the discussion of what (if any) moral rights to attribute to A.I., to designing new digital rights that encompass different milieus and that have political and legal consequences – see, for instance, the European Union's attempts since 2021 to create a legal framework regarding the rights and regulations of AI for its use on the continent.

ChatGPT is something unique – at least for now. While a recent development, it seems almost too familiar – as if it had always been there, just waiting to be invented. It is a Google search on steroids, with much more complexity in its answers and a "human" touch. Once you read the answers to your questions, what calls your attention is not only how fast the answer is provided, but also how detailed it seems to be. It mimics our ways of thinking and communicating with others quite well. See, for instance, what happened when staff members at Vanderbilt University used it to write an email responding to the shooting at Michigan State – a well-written 297-word missive which might otherwise have been well received. However, the fact that at the bottom of the email was a line that read as follows: "Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023," outraged the community. The Associate Dean of the institution soon apologized, saying that the use of the AI-written email contradicted the values of the institution. This is one (of no doubt many) examples of how the use of this technology may disrupt our social and cultural grids. This new tool brings new challenges, not only for education – how students and professors incorporate this technology into their practices – but also for ethics.

Contemporary models of education still rely heavily on regular evaluation – a common mission across educational institutions is to foster critical thinking and contribute to the development of active and responsible citizens. Why is critical thinking so valued? Because being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and, in doing so, build our moral identities – who we are and what we value. To judge something is not as easy as it may initially seem, for it forces each of us to confront our prejudices, compare them to reality – the set of facts common to all of us, what the world is made up of – and take a stand. This process also moves us from an inner monologue with ourselves to a dialogue with others.

What happens when students rely more and more on ChatGPT to do their homework, to write their essays and to construct their papers? What happens when professors use it to write their papers or books or when deans of universities, like the example mentioned above, use it to write their correspondence? One could say that ChatGPT does not change, in essence, the practices already in place today, given the internet and all the search engines. But insofar as ChatGPT is superior in mimicking the human voice, might its greatest danger lie in fostering laziness? And shouldn’t we consider this laziness a moral vice?

In the Vanderbilt case, what shocked the community was the lack of empathy. After all, delegating this task to AI could be interpreted as "pretending to care" while fooling the audience. To many it seems a careless shortcut taken for time's sake. Surely it shows poor judgment; it just feels wrong. It seems to betray a lack of commitment to the purpose of education – the dedication to examine and think critically. In this particular context, technological innovation appears as nothing more than a privileged means to erode the very thing it was supposed to contribute to, namely, thoughtful reflection.

While technologies tend to make our life much more comfortable and easier, it's worth remembering that technologies are a means to something. As Heidegger pointed out in an emblematic text entitled "The Question Concerning Technology" (1954), we tend to let ourselves be charmed and hypnotized by technology's power while forgetting the vital question of purpose – not the purpose of technology but the purpose of our lives as humans. And while ChatGPT may be great for providing context and references on virtually any topic of research, we cannot forget that the experience of conscious thinking is what makes us uniquely human. Despite all appearances of coherent and well-ordered prose, ChatGPT is only mirroring what we, humans, think. It still does not have, nor can it mimic, our emotions and our ability to respond in a singular manner to specific situations.

If we generalize and naturalize the use of these kinds of technologies, incorporating them into our daily lives, aren't we choosing non-thinking, trading reflection for an instantaneous response that serves a strictly utilitarian purpose? Heidegger says that "technology is a mode of revealing," insofar as what we choose (or do not choose) reveals the ways in which we are framing our world. And if we choose not to think – believing that something else can "mirror" our possible thought – aren't we abdicating our moral autonomy, suspending the human task of reflecting, comparing, and judging, and instead embracing a "dogmatic" product of a technological medium?

Gas Stoves: A Kitchen Culture Clash

photograph of gas burner being lit

Progressive and conservative media flared up last month over an issue tucked in the side of your kitchen: gas stoves. This surprise episode in America's culture wars aired after a Biden administration official, a commissioner of the Consumer Product Safety Commission, suggested that the agency was considering restricting or even banning gas stoves in the wake of a new study published in the International Journal of Environmental Research and Public Health, a study alleging that gas stoves threaten public health and damage the environment. This kitchen equipment drama featured conservative media lambasting the administration for its latest show of "paternalism" and "green extremism"; progressive media rushing to deny that the administration was moving to ban gas stoves while also defending the soundness of bans that have already been passed in states like New York; and the Biden administration itself denying that it wants to ban them entirely while supporting states that did.

In the midst of all this political hubbub, many are left to wonder: why does it matter whether gas stoves are banned?

The pro-ban crowd shares a few reasons for its case. First, gas stoves pose a risk to public health. The study that prompted the ban debate alleges that gas stoves emit enough detrimental fumes that children who inhale them risk developing asthma. Second, gas stoves damage the environment. The fumes from gas stoves contain enough greenhouse gasses to contribute to climate change. Thus, to slow the rate of climate change, one small but meaningful change we can make to our lives is to switch out our gas stoves for electric ones. Even if the change might pale in comparison to other solutions — like moving away from fossil fuels in our electricity supply — we should do it anyway because, as some say, we must treat climate change as a World War II-esque threat and mobilize all of our available resources to fight it. Thus, to protect public health and the environment, the pro-ban team says we should ban gas stoves.

The anti-ban crowd shares a few reasons for its case. First, they allege that the study used to justify public health and environmental concerns lacks scientific merit and is only being touted for aligning with the partisan motives of the Biden administration. They say that the study's findings are misdirected; if true, this would not only undermine the case for banning gas stoves but also erode trust in the Biden administration: surely it is wrong to distort science to further one's political agenda, an especially nefarious type of virtue signaling.

Second, they allege that even if there were some slight detrimental effects of gas stoves concerning public health and the environment, the cost of keeping gas stoves is surely lower than the cost — to consumers’ wallets and freedom — of replacing all gas stoves with electric stoves. Thus, it would be imprudent to ban stoves; this side may recognize that climate change is real, but they also recognize that unchecked, militaristic zeal to “fight climate change” might create graver problems than it solves. Such a climate crusade might keep the Earth’s average temperature less than 1.5 degrees Celsius higher than pre-industrial levels, but it could spark inflation, arrest economic growth, and thus also cause chronic unemployment.

Third, along similar lines, the anti-ban crowd alleges that there is a glaring inconsistency in the principles behind support for banning gas stoves: if equipment that risks harm to public health and the environment should be banned, then shouldn’t we ban cars, trains, ships, and planes? Thus, according to the anti-ban crowd, we should reject the mentioned principle that underlies the logic of the pro-ban crowd, for, if we followed it to its logical conclusions, we would have to commit ourselves to policies that we cannot undertake.

Fourth, perhaps more obviously, the anti-ban crowd fears that banning gas stoves would violate the principle of consumer autonomy through excessive government oversight in the kitchen. In short, the anti-ban crowd objects to gas stove bans on the grounds that they are motivated by the wrong things, are imprudent, and derive from an untenable principle.

Thus, there are two main camps on the gas stove issue, and neither seems willing to budge. Yet, the responsible citizen should resist the temptation to turn to tribalism and deny that the other side has good points. Although an anti-ban zealot might claim that the pro-ban crowd represents the side of green hysteria and government nannying, no one can disagree that public health and environmental care are important. Likewise, although a pro-ban zealot might claim that the anti-ban crowd is motivated by feigned outrage, fanned by the specious reasoning and spicy rhetoric of conservative media, no one can deny that honesty in science, prudence in policy, and soundness in principle are noble aims toward which we should all strive.

Ultimately, each side takes its respective stance with admirable intentions, and the responsible citizen should authentically engage with each side, listening to its reasoning and judging the issue for themself. In doing so, they should ask themself: what is the key ingredient to a healthy, environmentally clean kitchen — individual responsibility or government intervention?

Private Reasons in the Public Square

photograph of crowd at town hall

The recent Dobbs decision induced a tidal wave of emotions and heated discourse from both sides of the political aisle as well as from all corners of the American cultural landscape. Some rejoiced, seeing it as a significant move towards establishing a society that upholds the sanctity of human life, while others mourned the loss of a basic liberty. The Dobbs ruling overturned the historic Roe v. Wade verdict, and it has the practical consequence of relegating decisions about the legality of abortion to individual states. Abortion access is no longer a constitutionally protected right, and thus where and when abortion is legal will be determined by the democratic process.

The legal battle at the state level over abortion rights will continue over the coming months and years, giving voters the chance to share their views. Many of these citizens take their most deeply held moral, religious, and philosophical commitments to have obvious implications for how they vote.

But should all of these types of reasons affect how one votes? If other citizens reject your religion or moral framework, should you still choose political policies based on it?

Political philosophers offer a range of responses to these questions. For simplicity’s sake, we can boil down the responses to two major camps. The first camp answers “no,” arguing that only reasons which are shared or shareable amongst all reasonable citizens can serve as the basis for one’s vote. This seems to rule out religious reasons, experience-based reasons, and reasons that are based on controversial moral and philosophical principles, as reasonable people can reject these. So what kinds of reasons are shareable amongst all reasonable citizens? Candidates for inclusion are general liberal ideals, such as a commitment to human equality, individual liberty, and freedom of conscience. Of course, what these general ideals imply for any specific policy measure (as well as how these reasons should be weighed against each other when they conflict) is unclear. Citizens can disagree about how to employ these shared reasons, but at least they are appealing to reasons that are accepted by their fellow reasonable citizens instead of forcing their privately held convictions on others.

The other camp of political philosophers answers “yes,” arguing that so long as one’s reasons are intelligible or understandable to others, they can be used in the public square. This approach lets in many more reasons than the shareable reasons standard. Even if one strongly opposes Catholicism, for example, it is nevertheless understandable why their Catholic neighbor would be motivated to vote according to church teaching against abortion rights. Given the neighbor’s faith commitments, it is intelligible why they vote pro-life. Similarly, even if one accepts the controversial claim that personhood begins at conception, it is easy enough to understand why other reasonable people reject this belief, given there is no consensus in the scientific or philosophical communities. This intelligibility standard will also allow for many citizens to appeal to personal experiences, as it is clear how such experiences might reasonably shape one’s political preferences, even if these experiences are not shared by all reasonable citizens.

Of course, one might notice a potential pitfall with the intelligibility standard. What if a citizen wishes to support a certain policy on the basis of deeply immoral grounds, such as racist or sexist reasons? Can the intelligibility standard keep out such reasons from public discourse?

Defenders of the intelligibility standard might respond that it is not intelligible how a reasonable person could hold such beliefs, blocking these reasons from the public square. Of course, there may also be disagreement over where exactly to draw this line of reasonableness. Advocates of the intelligibility standard hope that there is enough consensus to distinguish between reasonable belief systems (e.g., those of the major world religions and cultures) and unreasonable ones (e.g., those of racist sects and oppressive cults). Naturally, proponents of the shareable reasons standard tend to be dubious that such an intuitive line in the sand exists, doubling down on placing tight restrictions on the types of reasons that are acceptable in the public square.

What is the relevance of this shared vs. intelligible reasons distinction when it comes to the average citizen? Regardless of where one falls in the debate, it is clearly beneficial to reflect on our political beliefs. Appreciating the reasons of other thoughtful citizens can prompt us to take the following beneficial steps:

1. Recognize that your privately held belief system is not shared by every reasonable, well-intentioned citizen. Our political culture is constituted by a wide array of differing opinions about abortion and many other issues, and people often have good reasons for holding the viewpoints they do. Recognition of this empirical fact is a crucial starting point for improving our political climate and having constructive democratic debate.

2. Reflect on why your friends, neighbors, and co-workers might disagree with you on political issues. Morality and politics are complicated matters, and this is reflected by surveys which indicate the depth of disagreement amongst professional experts in these fields. Given this complexity, individuals should be open to potentially revising their previously held beliefs in light of new evidence.

3. Engage with those who do not share your belief system. Inter-group contact has been shown to decrease harmful political polarization. In the wake of the Dobbs decision, this looks like a willingness to engage with those on both the pro-choice and pro-life sides of the aisle.

Regardless of where they fall in the shared reasons versus intelligible reasons debate, citizens have a responsibility to recognize that their political opponents can be reasonable as well. Embracing this idea will lead to more productive democratic discourse surrounding difficult political issues like those bound up in the Dobbs ruling.

Why Bother with Political Arguments?

photograph of protestors marching

Moral arguments can seem pointless in our hyper-polarized, post-truth society. People rarely seem to change their minds on hot-button political issues, like abortion, gun control, or climate change. Yet Congress recently passed a bill protecting same-sex marriage, with the support of many Republicans. That bipartisan effort would have been impossible ten years ago.

Does social change like this result from moral arguments, though? Moral reasoning gets a bad rap, but it truly fuels moral progress, so long as it comes from a place of mutual trust and respect.

Ethics professors (like us) might be prone to valorize moral reasoning. We study moral arguments for a living, yet don’t appear to be more ethical than other people. We’re just skilled at poking holes in opposing moral views or coming up with reasons to support our own.

If anything, arguing about politics only seems to make each side dig in their heels (or worse, become more extreme). Cognitive science reveals that, when it comes to ethics and politics, we regularly use reasoning to rationalize the values we already hold. Climate activists often assume that skeptics just don't understand the science. But research increasingly suggests that the more skeptics know about the science, the less they think climate change is a serious threat.

Yet some political partisans do change their minds in light of arguments. For years, Jerry Taylor, a former fellow of the Cato Institute, churned out talking points for climate skeptics. Respected peers eventually convinced him of flaws in his sources and holes in his arguments. Eventually Taylor’s skepticism eroded, and he left Cato to become a climate activist.

It's not just conservatives who change their minds in response to reasons. The science writer Bethany Brookshire once produced a viral tweet about the apparent sexism hitting her inbox. Of the scientists who corresponded with her, men seemed much less likely to start their emails with her proper title, "Dr. Brookshire." However, going back through her emails revealed that it was women who were slightly less likely to use the more formal and respectful title. So Brookshire publicly corrected her mistake.

Even if some people are persuaded by rational argument, aren't these cases incredibly rare? These stories are anecdotes, but they make vivid a tendency present in us all that's usually just blocked by other factors. As Julia Galef puts it, although we commonly adopt a "soldier mindset," hellbent on defending core beliefs at all costs, it isn't inevitable. We are able to adopt a "scout mindset" aimed at an accurate map of the complex terrain.

Recent experiments suggest ordinary people’s attitudes and behavior can respond to arguments about contemporary moral issues. One intriguing study found that students in a college ethics class who studied a famous moral argument for vegetarianism purchased less meat from the dining hall, compared to another group of students who were randomly assigned to study an argument in favor of donating more to charity. Another series of experiments provided participants an opportunity to donate part of a bonus to a charitable organization. The researchers found that reading moral arguments could increase charitable giving, provided the arguments are engaging. These and other studies provide evidence that moral reasoning can change real moral behavior, not just self-reported attitudes.

The trick is to recognize the many forms of reasoning. Moral arguments can be presented as a boring set of premises that logically lead to a conclusion, or lobbed at opponents within a culture of contempt. But these aren’t the only, or the most effective, forms of moral reasoning.

We humans are not cold robots designed to dispassionately reason alone. Moral reasoning evolved among social primates to resolve problems of interdependent living. Competing ideas arise in dialogue with others, and you're not going to buy an argument from someone you despise or distrust, and certainly not from someone who treats you with contempt. Mutual trust and respect are required for arguments to be taken seriously in the first place.

Should we conclude that emotions, not reasons, drive social change? It’s both. Mutual trust and respect enable reasoning to do its work of changing moral attitudes.

Consider one way support for same-sex marriage has increased dramatically in the past few decades. Many people — including Republicans like Rob Portman and Dick Cheney — have discovered that a beloved friend or family member is gay. Existing empathy and respect for a loved one removes barriers to understanding the oppression of gay people and to seeing true love between same-sex partners. People have reasoned that if their loved one doesn’t deserve discrimination and stigma, then other gay people don’t either. Consistency reasoning of this sort is ubiquitous in moral life.

Moral arguments from the opposing side are certainly hard to swallow, for they often conflict with our values and challenge our identities. But when we deride reasoning in politics, we’re no better than a physician who concludes that a drug is ineffective because patients refuse to take it. As Taylor emphasizes, once he heard arguments from people he trusted and respected, he opened up, and over time his skepticism began to weaken because he appreciated the arguments.

Even when moral arguments are planted in fertile ground, they are merely seeds that have been sown. And we're not talking about jalapeño seeds, which produce peppers in a few months. Think avocados, which can take a dozen years to bear fruit. During that time, the ground must remain fertile as well. Combative arguments brimming with contempt can poison the well and yield crop failure.

Moral reasoning so conceived is truly a driving force for social change. Without it, progress is impossible. The key is patience, persistence, and mutual respect. Under the right conditions, moral arguments can move mountains — slowly but surely.

Too Clever for Our Own Good?: Moral Bioenhancement and Existential Harm

image of woman's profile in silhouette with sun behind clouds superimposed on her mind

Knowing things is good. How do you change a tire? What’s the right combination of time and temperature to cook a turkey? Why do we call the mitochondria the powerhouse of the cell? The answers to these questions make our lives easier, enabling us to overcome challenges. But these examples are just the tip of the iceberg. Over time, we’ve not only grown to understand more things about ourselves and the universe around us, we’ve also continued to discover new questions in need of answers.

But with this increase in our collective understanding comes an increase in the risks we pose to ourselves, each other, and, in extreme cases, the Earth itself. This is because each scientific, medical, and technological breakthrough brings opportunities for both benefit and harm. The acquisition of knowledge is an inherently ethical enterprise characterized by what is known as the dual-use dilemma. As defined by Seumas Miller and Michael J. Selgelid:

The so-called “dual-use dilemma” arises in the context of research in the biological and other sciences as a consequence of the fact that one and the same piece of scientific research sometimes has the potential to be used for harm as well as for good.

For example, virology research is good as it means we have a greater understanding of how viruses evolve and spread through a population, enabling the development of societal and medical countermeasures like social distancing and vaccinations. However, if put into the wrong hands, such knowledge can also be used by terrorists and hostile political powers to create devastating viral weaponry or misinformation campaigns. Ultimately, every intellectual step forward brings both the potential for good and ill.

But this potential for risk and benefit has not grown steadily over the centuries; some advances prove more beneficial and some more devastating than others. For example, the creation of the plough revolutionized how we, as a species, farmed, but the negative implications of such a technological advancement are, arguably, minimal or at least indirect.

Today, however, highly destructive technologies seem increasingly common due to our collective intellectual capacity and interconnected world. As such, even small groups can threaten existential harm.

For example, through advancements in genetics, virology, synthetic biology, or multiple other scientific disciplines, a few persons can, in principle, develop an organism or technology with the power to catastrophically ravage the planet. Moreover, with each discovery opening the door for new avenues for inquiry, there is no reason to think that this availability of potentially dangerous knowledge will subside anytime soon.

This leaves us with a problem. Suppose we continue to develop our collective cognitive capacities, enabling the discovery of even more methods through which we can come to harm ourselves or others, either through deliberate action or accident.

In that case, do we also need to enhance our ability to reason ethically to keep pace with this possibility of harm?

Ingmar Persson and Julian Savulescu posed this question in their 2008 article, The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. In it, they argue that moral bioenhancement (MBE) – a biotechnological intervention aimed at enriching ethical decision-making and outcomes – should be developed and distributed to close the gap between humanity's destructive capabilities and moral faculties. The idea is that our "natural" moral abilities are ill-equipped to deal with the complex and high-stakes world created by humanity's mental prowess. They note, however, that those most needing a greater level of ethical understanding are those least likely to take such an intervention willingly; a nefarious actor planning to use a nuclear weapon to start an apocalyptic war isn't exactly going to be first in line for a morality pill. So, according to Persson and Savulescu, MBE shouldn't be optional – everyone should have to take it. As they write:

If safe moral enhancements are ever developed, there are strong reasons to believe that their use should be obligatory, like education or fluoride in the water, since those who should take them are least likely to be inclined to use them. That is, safe, effective moral enhancement would be compulsory.

According to them, this is the only way to ensure we can effectively mitigate the risk of existential harm. If left up to individual choice, some persons would inevitably choose not to become morally enhanced. This refusal would, in turn, leave the potential for cataclysmic risk unaffected, as even a tiny chance is too great to be left unaddressed. Much like playing Russian roulette, even the slightest probability is substantial enough to necessitate the rejection of the possibility altogether. To ensure we eliminate the risk of ultimate destruction, every person would need MBE.

Of course, this raises both principled and practical objections.

John Harris expresses the concern that, for MBE to be effective, it would have to prevent us from acting unethically. If it didn't, it would not be an effective countermeasure to the harms Persson and Savulescu envision. However, this would mean that the intervention directly prevents us from acting in a certain way and thus inhibits our free will. This possibility worries Harris as, without the ability to be unethical, the virtue of ethical actions ceases to exist – you're not doing right if you have no choice. Vojin Rakić takes this worry even further, exporting it from the individual to the societal, arguing that MBE would deprive persons of their ability for collective morality and, ultimately, of a vital aspect of our humanity.

But, as I have argued, perhaps MBE need not be compulsory to be effective, as we develop our behavioral attitudes from those around us.

If most people take MBE willingly, then there’s reason to believe that the unenhanced would act more morally as they would be surrounded by morally aspirational individuals and would be insulated from immorality’s temptation.

Additionally, there's the political obstacle of simply getting every nation to agree to enact such a program. Given that we argue over seemingly unequivocal matters – like the need to tackle climate change – getting every world leader on board for such a program is practically impossible.

However, these objections don’t necessarily detract from Persson and Savulescu’s observation that our intellectual capacity has outpaced our moral capabilities. Instead, they highlight the difficulties in finding a suitable solution to the problem. Ultimately, if we all behaved more ethically, the world may not be in the precarious situation it is right now. The rise of fascism, the threat of global warming, the increase in conflicts, and the general breaking down of the established liberal world order may go some way in convincing skeptics that, while compulsory MBE may not be ideal, it’s preferable to the alternative of widespread, even global, destruction.

Fantastic Beasts and How to Categorize Them

photograph of Niffler statue for movie display


Fantastic Beasts and Where to Find Them is both a film franchise and a book. But the book doesn’t have a narrative; it is formatted like a textbook assigned in the Care of Magical Creatures course at Hogwarts. It’s ‘written’ by Newt Scamander and comes with scribbles from Harry and Ron commenting on its contents.

Before the creature entries begin there is a multipart introduction. One part, entitled "What is a Beast?", seeks to articulate a distinction between creatures who are 'beasts' and those who are 'beings.' The text notes that a being is "a creature worthy of legal rights and a voice in the governance of the magical world." But how do we distinguish between beasts and beings? This is one of the main questions central to the topic of moral status.

So, the intro asks two questions: who is worthy and how do we know? The first question seeks to determine who is in the moral community and thus deserving of rights and a voice. This is a question concerning whether an entity has the property of ‘moral standing’ or ‘moral considerability.’ The second question seeks to identify what properties an entity must have to be a member of the moral community. In other words, how does one ground a claim that a particular entity is morally considerable? We can call this a question about the grounds of moral considerability. It is the main question of the short introduction to Fantastic Beasts:

What are the properties that a creature has to have in order to be in the category 'beast' (outside the moral community) or 'being' (inside the moral community)?

Attempts to resolve a question of moral considerability confront a particular problem. Call it the Goldilocks Problem. Goldilocks wants porridge that is just right, neither too hot nor too cold. We want definitions of the moral community to be just right and avoid leaving out entities that should be in (under-inclusion) and avoid including entities that should be left out (over-inclusion). When it comes to porridge it is hard to imagine one bowl being both too hot and too cold at the same time. But in the case of definitions of the grounds of moral considerability, this happens often. We can see this in the attempts to define ‘being’ in the text of Fantastic Beasts.

Fantastic Beasts looks at three definitions of the grounds of being a 'being.' According to the text, "Burdock Muldoon, Chief of the Wizard Council in the fourteenth century, decreed that any member of the magical community that walked on two legs would henceforth be granted the status of 'being,' all others to remain 'beasts.'" This resulted in a clear case of over-inclusion. Diricawls, Augureys, Pixies and other creatures were included in the moral community of beings, but should not have been. The text states that "the mere possession of two legs was no guarantee that a magical creature could or would take an interest in the affairs of wizard government."

What really mattered was not the physical characteristic of being bipedal but the psychological characteristic of having interests. By focusing on the wrong property this definition accidentally included entities that did not belong.

This of course recalls the humorous story that Plato once defined a human as a featherless biped, only to have Diogenes show up the next day with a plucked chicken, declaring "Behold! A man."

At the same time, however, this definition is under-inclusive. Centaurs are entities that could take an interest in the affairs of wizards, but they have four legs and thus are left out. Merpeople also could take an interest in the affairs of wizards, but have no legs and thus are left out. Clearly, this definition will not do.

And it is not surprising that the definition fails. Using a physical characteristic to determine whether an entity will have the right psychological characteristics is not likely to work.

So what is a wizard to do but try to find a property more closely linked to the relevant psychological characteristic? Interests — for example, wants and needs — are often expressed linguistically: "I want chocolate chip cookies"; "I need more vegetables." This apparently led Madame Elfrida Clagg to define a being as "those who could speak with the human tongue." But, again, we have an example where the definition is over- and under-inclusive. Trolls could be taught to say, but not understand, a few human sentences, and so were included in the community when they should have been excluded. Once again, the merpeople, who could only speak Mermish, a non-human language, were left out when they should have been included.

In our own world, the focus on language and other activities as proxies for cognitive traits has been used to discuss the moral status of animals. Attempts to exclude animals from the moral community did, in fact, use speech-use and tool-use as reasons to exclude animals. Descartes famously claimed in Part V of the Discourse on the Method that animals did not use language but were mere automatons. But apes can use sign language, and crows, elephants, otters, and other animals can use tools. So, for many who want to include only humans in the category of 'being,' these activity-based definitions turn out to be over-inclusive. But again, given the incapacity of newborn humans to use language or tools, they would also leave out some humans and be under-inclusive. So, using a non-psychological property (an activity) to identify a psychological property is unsurprisingly problematic.

Apparently, the wizarding world got the memo regarding the problem of these definitions by the 19th century. In 1811, Minister of Magic Grogan Stump defined a being as “any creature that has sufficient intelligence to understand the laws of the magical community and to bear part of the responsibility in shaping those laws.” The philosophical term for this set of capabilities is autonomy, at least in the way Immanuel Kant defined the term.

One way to express Kant's view is that the morally considerable beings, the beings that could be called 'persons,' are those that have the capacity to rationally identify their interests and then have the will to execute plans to see those interests realized.

Persons are also capable of seeing that others have this capacity and thus rationally adopt rules that limit what we can do to other persons. These are the moral rules that guide our interactions, ground our rights, legal and moral, and give us a voice in self- and communal governance. In other words, the term 'being' in Fantastic Beasts is just the text's term for 'moral person.' Furthermore, the relevant psychological characteristic of persons is autonomy as defined by Kant.

There is something questionable about this Kantian view of being-hood or person-hood. On this view, persons need sophisticated cognitive abilities to count as persons. Any entity that lacks the cognitive abilities needed for moral judgment is a non-person and thus wholly outside the moral community. In other words, non-persons are things, have only instrumental value, and can be equated with tools: you can own them and dispose of them without morally harming them. But this definition also excludes human infants and humans with diminished cognitive abilities, and we do not think of them as outside the moral community.

Surely these implications for humans are unacceptable. They would probably be unacceptable to the fictional Newt Scamander as well as to people who fight for animal rights. But the Kantian view is binary: you are a person/being or a beast/thing. Those who find such a stark choice unappealing can and do recognize another category between persons and things: something that has interests, but not an interest in having a voice in governance. These entities are often vulnerable to the damaging impacts of the behavior of persons and have an interest in not suffering those impacts, even if they cannot directly communicate them.

So, we need a new set of terms to describe the new possible categories of moral considerability. Instead of just the categories being/person and beast/thing, we can discuss the categories of moral agent, moral patient, and thing.

A moral agent is an entity that meets the Kantian definition of a person: an entity who is in the moral community and also shapes it. A thing is something that does not have interests and thus is outside the moral community. A moral patient, by contrast, is an entity that has interests, specifically interests against harm and for beneficence, that should be morally protected. Moral patients are members of the moral community, just not governing members. So Centaurs, Merpeople, and Muggles can all be considered moral agents and thus can, if they so desire, contribute to the governance of the magical community. But even if they don't want to participate in governance, the magical community should still recognize them as moral patients: beings who can be affected by governance and whose interests should therefore be included in discussions of it. The giants, trolls, werewolves in werewolf form, and pixies should at least fall into the category of patient as well. In the human world, infants, young children, and those with cognitive impairments would also fall into this category.

To sum up, then, the text of Fantastic Beasts presents a view of the grounds of moral status similar to Kant's, but one that can be improved upon by recognizing the category of moral patients. Furthermore, Fantastic Beasts clearly supports psychological accounts of the grounds of moral status over physical ones: what matters to many questions of identity and morality are psychological properties, not physical properties or behavioral capacities. This is consistent with a theme of the Harry Potter novels, where the main villains focus on the physical characteristic of whether an entity has the right blood-status to be part of the wizarding community. In other words, only a villain would focus solely on physical characteristics as a source of moral value.

The Desire for Moral Impotence

photograph of hands tied behind man's back

Richard Gibson and Nicholas Kreuder recently wrote about humans' morally troublesome desire for control. The prospect of control is, Gibson notes, "intrinsically appealing" to humans; it is an "incredibly common desire," Kreuder concurs. Both writers agree we should be wary of this desire for control. Gibson argues that it negatively influences our relationship with nature, while Kreuder argues that it "may leave our interactions with others feeling impoverished and hollow." I largely agree, but I think there is another equally universal and deep-seated desire that also deserves some consideration: the desire to lack control.

An oft-repeated saying in philosophical ethics is “‘ought’ implies ‘can’.” In other words, if you can’t do something, then there’s no question of whether you ought to do it. Our moral responsibilities only extend as far as our abilities.

Because of this important link between what we ought to do and what we can do, being reminded that something is under our control often also serves as a reminder that it is also our responsibility.

The discovery that one has control is often not as joyous and anxiety-relieving an experience as you might expect, given the universal human desire for control that Gibson and Kreuder describe. In fact, anger, resentment, and bitterness are all common reactions to being reminded that we are in control of something. We often don't want control. We yearn for it to have nothing to do with us, to be someone else's problem.

Many of our responsibilities are, of course, distinctly moral ones. The world is an imperfect place, and we all have the capacity to make it better to some degree. In fact, many of us have the power to make it significantly better. In other words, most of us actually have a morally significant level of control over how the future unfolds.

Let’s take an example. It costs significantly more than most people think to save a life by donating to the most effective charities — about $2,300. But that’s still only about half as much as the average American spends at restaurants each year.

Ask yourself honestly: could you make a few lifestyle changes and afford the $2,300 needed to save a life? If so, how often? Once in your lifetime? Once a decade? Once a year? More?

How does this make you feel? Are you excited to learn or be reminded of your morally significant amount of control over the world? To discover that you (probably) have the radical power to give another human, a person just like you, the gift of life? Speaking for myself, far from feeling elated, I feel guilty and ashamed. My conscience would be clearer if highly cost-effective charities like this simply did not exist — if they did not grant me this ability to meaningfully reshape the world (at least for that one person and their family). Because having that ability means I have that moral responsibility. In my ordinary life, I act in bad faith. I think and act as though I don’t have the power to save lives with moderate charitable donations. For self-serving reasons, I think and act as though I lack control over the world that I actually possess.

In his discussion of Nathan Fielder’s The Rehearsal, Kreuder points out the attractiveness of having more control over our interactions with others. Imagine having more ability to decide how people will respond, being sure that you’re not going to say the “wrong thing.” He suggests this kind of control would provide relief for those “wrecked with anxiety and marred with feelings of powerlessness.” This is certainly a desire I can recognize.

But I can also see the inverse: the desire, in many cases, to have less control in our interactions with others.

Imagine your younger sibling is going off the rails – drinking too much and partying too hard. Their grades are suffering. Your sibling doesn’t listen to your parents but they look up to you; you know they will listen in the end. So you know that you, and only you, can intervene and make them get back on track. You can sit them down and have the difficult conversation that neither of you want to have. In other words, you have a great degree of control over your sibling.

How would you feel about having this kind of interpersonal control? Far from relieving your anxiety, you might feel deeply burdened by it, and the significant responsibility that it entails. It would be understandable to wish that you weren’t in such a potent position, and that someone else was instead. You might even be tempted to deny to yourself that you have such control over your sibling to avoid having to deal with the moral burden.

Rather than focusing on the risks that accompany greater interpersonal control, Gibson is concerned primarily with the negative effect that our desire for (often illusory) control has on our relationship with nature. It influences how we approach debates about "designer babies, personalized medicine, cloning, synthetic biology" and his focus, "gene drives."

Gibson contends that humans actually have much less control than we like to think. In a cosmic sense, I think he is right. But, at least as a collective, humanity is surely in firm control over much of nature, perhaps even too much. Unfortunately, we control the global climate via our CO2 emissions. We control global fish stocks via modern fishing practices. And now, as Gibson explains, we also control which species we want to continue living and which we want to drive to extinction via the emerging technology of gene drives.

With respect to nature, at least the biosphere of Earth, humanity surely has much more control than most of us would think is desirable.

Our catastrophic relationship to nature seems to me less a symptom of our desire to control nature, and more a symptom of our being in a blissful state of denial about just how much control we have.

To be clear, I think Gibson is right to warn against an excessively domineering attitude toward nature, and Kreuder is right to warn against having too much control over our interactions with others. But we should also be on guard against the equally human tendency to find narratives that absolve us of our burdensome responsibilities. If Gibson is right that, fundamentally, "we're subject to, rather than the wielders of, power," if we can't really exercise control over the world, then there's no reason to ask ourselves the tough question: what should we do? Avoiding this question may feel good, but it would be morally disastrous.

Underdefined Terms in the Abortion Debate

photograph of pro-life protest signs in front of Supreme Court

With the Supreme Court's ruling in Dobbs v. Jackson Women's Health Organization, debates about the regulation of abortion will likely become even more contentious in the coming days – a sentence which would have seemed absurd even just a few years ago. State legislatures may now attempt to ban abortion outright, which would likely lead to highly publicized, intense debate on the issue.

My purpose in this and the subsequent article is not to advance a view on abortion or the regulatory limits the state ought to place on it. Instead, my goal is to consider some concepts central to the abortion debate and demonstrate that they are used sloppily in public debates about abortion.

The hope is that, by illuminating these concepts, we may perhaps gain some clarity into why it feels like this debate has made little to no progress in the nearly 50 years since Roe v. Wade.

Indeed, polling suggests that for nearly thirty years a slight majority of Americans have consistently felt abortion ought to be legal in most cases, while those who think abortion ought to be illegal in most cases have made up somewhere in the high thirties to low forties percent of those polled.

Before beginning, it may be worthwhile to put my biases on the table. I think that a right to access abortion is implied by a general right to bodily autonomy. However, I am not a legal scholar and cannot discuss in depth what legal rights substantive due process guarantees, or whether we even ought to adopt that standard.

Nonetheless, I find the reasoning in the majority's opinion deeply troubling. The opinion, authored by Justice Samuel Alito, argues that abortion rights are not explicitly guaranteed by the Constitution. Further, the majority state that a right to abortion is not "deeply rooted in this Nation's history and tradition," as the procedure was illegal in most states until the "late 20th century." However, plenty of other things seem obviously inappropriate for the state to regulate yet fail to pass these criteria; a right to marriage is not named in the Constitution, and interracial marriage was illegal in most states until the mid-20th century, with 16 hold-out states until Loving v. Virginia, a case decided just six years before Roe. It is worth noting, though, that the majority opinion attempts to differentiate abortion from other purported rights grounded in the right to privacy, like interracial marriage, same-sex marriage, and access to contraception. (I will say more on this point later.)

Throughout my discussion, I will refer to those who oppose the legalization of abortion as "anti-abortion" advocates, and to those who support the legalization of abortion as advocates of "reproductive choice." This is to avoid the ways in which identifiers like "pro-life" may be normatively loaded.

I begin by focusing directly on some concepts which are the heart of debates about the morality of abortion. My next column focuses more directly on overlooked concepts involved with debates about its legality.

Humans & Persons

Generally speaking, the position of anti-abortion advocates gets its support from a very simple argument:

1. Killing is wrong.
2. Abortion kills a zygote, embryo, or fetus.
Therefore, abortion is wrong.

Line 1, while very intuitive, quickly runs into issues. Namely, not all killings are obviously wrong. For instance, most people see no moral problem with killing an animal to eat it, or exterminating unwanted pests in the home.

So, one might specify line 1 by changing it to 1’) Killing a human is wrong. However, even this may still be questioned. 1’ puts significant pressure on the second premise. A zygote is human in the sense that it has human DNA. But as a bundle of cells, it is no more human than, say, a tumor. Embryos and fetuses begin to resemble humans in their physical shape. So perhaps they might meet the criteria for being human, while a zygote does not.

But being a living being that looks human or has human DNA does not seem to grant “moral status” – having standing such that your interests are given equal moral importance to the interests of others.

Being human seems neither necessary nor sufficient for moral status. Science fiction and fantasy are littered with non-humans who seem to deserve the same moral status as humans; some fans felt aggrieved that Chewbacca did not get a medal at the end of the first Star Wars film.

Further, imagine a hospital in a triage situation. Suppose that a new patient arrives who needs immediate care. Call her patient K. If patient K receives the care she needs, she will make a complete recovery. If not, she will quickly die. However, the only way to free up the resources necessary to treat patient K is to cease treating a different patient. Call her patient M. Patient M has suffered total frontal brain death. Although patient M’s body still performs vital functions like maintaining breath and heartbeat, there is no chance that she will ever regain consciousness or perform higher cognitive functions again.

Should the doctors halt patient M’s treatment to save patient K’s life? Most would think yes.

This is not to say that patient M deserves no consideration. But rather, when patient M’s interests conflict with patient K’s interest, it seems like patient K’s ought to win out. This means that patient K has a higher moral status than patient M.

This example in particular suggests that having full moral status requires more than being a living human being. Namely, it requires having certain psychological capacities – things like the ability to form a self-concept, to think rationally and to develop a plan for your life. This is why beings like Chewbacca deserve the same consideration that we do, despite the fact that they lack human DNA.

While zygotes, embryos, and fetuses are human in the sense that they are members of the human species, they seem to lack these psychological capacities. They thus lack what is required for what some call personhood in the moral sense, or moral personhood. So even if their interests deserve some moral consideration, they may not deserve as much consideration as those of a clear moral person – in particular, a pregnant person.

However, it should be noted that some may ground full moral status in things other than possession of certain psychological capacities. For instance, some religiously motivated anti-abortion advocates may claim that a human has full moral status from the moment of conception onward due to the possession of something like a soul. Of course, unless their interlocutors share the belief in a soul, this move may simply bring the discussion to a halt.

“Potential Lives”

So, some anti-abortion advocates might change their rhetoric. They could instead claim that zygotes, embryos, and fetuses have elevated status because of their potential, rather than their current capacities. In criticizing the dissenting opinion, Justice Alito notes that the dissenters do not acknowledge “the States’ interests in protecting fetal life.” Elaborating further on this, he briefly states that comparing the decision in Dobbs to decisions in Griswold, Eisenstadt, Lawrence, and Obergefell relies on a false analogy – abortion “destroys a potential life” but contraception, sodomy, and same-sex marriage do not, claim the majority. (Although it is worth noting that Justice Clarence Thomas, in his concurrence, writes that the court should reconsider its due process precedents, specifically naming Griswold, Lawrence, and Obergefell.)

But this idea of a potential life is quite vague. Eggs in a bird’s nest are potential lives. Unless these are eggs of an endangered species, the state does not seem to have an interest in regulating our conduct towards them. So surely the majority means potential persons – the idea here being that because personhood is of moral significance, then having the potential to be a person must also be morally significant.

However, "potential person" is still too vague. Taken on its face, everything from a fertilized egg onward is a potential person. But it need not stop here. All it takes to begin a process that will eventually result in a person is a single sperm cell and a single egg cell. So these two cells may be potential persons. This leads to some absurd consequences – if the state has an interest in protecting potential persons, then it would have an interest in banning products like contraceptives or procedures like vasectomies. But surely, at least for now, this is not what the Court suggests.

As a result, we may wish to introduce a more fine-grained distinction about potentiality. Stephen Buckle distinguishes between potentiality in two senses – the potential to produce and the potential to become.

A single sperm and egg cell are potential persons in the sense that they may produce a person; they’re the necessary “raw ingredients” to start a process that eventually can result in a person. However, they themselves do not become a person. Instead, they produce other entities that become a person.

The power to become instead deals with what Buckle calls “distinct individuals which preserve their identity over time.” Thus, even a fertilized egg would not have the power to become a person. According to Buckle, preserving identity over time requires having some kind of unifying structure which makes one a distinct individual.

But the exact nature of this identity preserving unifying structure is an open question. Some might argue that identity is a matter of being the same living organism. Others claim that it is a matter of spatio-temporal continuity; that if we could plot your location over the course of time, the result would be an unbroken 4th dimensional “space-time-worm” where each “slice” is your location at a moment in time. I’ve noted elsewhere that psychological factors play a role in our identity.

Thus, there are multiple candidate answers to the question of what has the power to become a person. If our identity consists in being a human organism, then the power to become a person emerges at the point when the embryo becomes a fetus – after the cells have differentiated, taken on functions, and the overall structure of the organism is set into place. If we are instead space-time-worms, then our identity stabilizes as early as fertilization. But if our identity is a matter of psychology, then we may not develop the power to become persons until much later, perhaps not even until after birth when our experiences start shaping our future attitudes and behaviors.

As a result, the sense in which we call something a “potential life” has great bearing on what policies we should favor. If we mean those things which have the power to produce a person, then what behaviors the state has an interest in regulating would be very far reaching – any time that we are not reproducing as much as possible, we are destroying or at least preventing a potential life. But if we instead mean that a potential life exists only when something has the power to become a person, then we are stuck having to answer the question of what preserves our identity over time before we can answer any questions about whether a fetus has the potential to become a person.

Rethinking “Rethinking the Moral Status Debate”

photograph of boy and girl watching elephant at zoo

To have moral status is to be morally important for your own sake. Dogs, for instance, matter morally in a way that rocks don’t. Dogs have moral status. To have full moral status is to matter morally for your own sake as much as people do. Most people think dogs have moral status, but not full moral status. They think dogs matter morally, but not as much as people. Some philosophers agree with this, while others think this belief is a mere prejudice (“speciesism”), and that all conscious beings have full moral status. I have written about this here before.

Much of the abortion debate centers on the question of whether and when fetuses possess moral status, and/or full moral status. Is, say, an eight-week embryo morally important for its own sake? Is an eight-week embryo as morally important for its own sake as an eight-year-old child? These questions may not automatically settle the abortion debate, but they are clearly important to it.

The relevance of moral status to abortion spurred an interesting recent piece in which Ben Rossi suggests that many of us are thinking about moral status incorrectly. Most of us think moral status is an intrinsic property, but Rossi thinks it is an extrinsic property. An intrinsic property depends on how you are in yourself, while an extrinsic property depends on how other things are. That I have a lap is an intrinsic property. That my cat, Apollo, is currently sitting in my lap is an extrinsic property: if he moved, I would lose the property “having a cat sitting in my lap,” even if nothing about me in and of myself changed. Or, in science, mass is an intrinsic property, while weight is an extrinsic property. If you put me on the moon, I would weigh a lot less, even if I was the same in and of myself.

Most of us think moral status depends on intrinsic properties. We think it depends, say, on whether you are capable of consciousness, or of reasoning, or whether you are a human, etc. Rossi thinks it depends on an extrinsic property: “the moral costs and benefits — principally understood in terms of aggregate welfare, autonomy, and equality — of adopting as a general rule that such-and-such a class of beings possesses strong moral subject-hood” determine whether you have full moral status. I take it we are supposed to accord full moral status to a being if doing so would have the best consequences.

We can see that this makes moral status extrinsic by noting that it means I could gain or lose moral status without changing anything about how I am in and of myself. If hostile aliens threatened to destroy the world unless we ceased treating Prindle Post writers as having full moral status, then I suppose it would be best not to treat Prindle Post writers as having full moral status. In that way, I could lose my moral status without changing how I am in and of myself. Or, to take things back to abortion, suppose we developed cheap artificial wombs and could safely transplant embryos from pregnant people into them. On Rossi's view, since treating embryos as having full moral status would then no longer impose as great a burden on pregnant people, the invention of this device would strengthen the case for embryos having full moral status, without changing anything about embryos themselves.

Rossi acknowledges that his account makes moral status extrinsic, and that this is odd: it does not seem that the aliens issuing this threat could make me less morally important. But since he has already discussed this, I will focus on a different worry for his view: I worry that it involves a troublesome circularity. In order to know how good or bad an outcome is, we need to know the moral status of the individuals involved. But if the moral status of the individuals involved determines how good or bad the outcome is, then how good or bad the outcome is cannot in turn determine the moral status of the individuals involved. That's a circle.

Consider, for instance, Rossi's concern for equality. If people were allowed to kill eight-year-olds, our society would not be one that respects equality. On the other hand, if people are allowed to kill hydrangeas, our society might still be one that respects equality. Why is that? Eight-year-olds have full moral status, and if we allowed them to be killed we would not be giving them the protections given to others who have full moral status. In this way, we would fail to respect their equal moral standing. On the other hand, hydrangeas do not have full moral status. Perhaps they have no moral status at all. Accordingly, it is not an offense against equality to deny them the protections given to individuals with full moral status.

In our current society, people (at least in most states) can kill eight-week-old embryos. Is this an offense against equality? To know this, we may need to know whether eight-week-old embryos possess full moral status. If they do, perhaps we are failing to respect their equal moral status by failing to give them the protections we have. If not, then not. But if we need to know their moral status in order to know whether we are satisfying the demands of equality, and if satisfying the demands of equality is one of the things that determines how good or bad it is to treat them as having full moral status, then how good or bad it is to treat them as having full moral status cannot be what determines what moral status they actually have. That would be a circle.

What If You Aren’t Sure What’s Moral?

photograph of a fork in the path

Today, I woke up in a soft bed in a heated apartment. I got up and made full use of the miracle of indoor plumbing before moving on to breakfast. Pouring myself a bowl of vitamin-enriched cereal and milk (previously delivered to my doorstep) I had to admit it: modern life is good.

When I opened up my laptop, my gratitude for modernity diminished as quickly as my browser tabs multiplied. Our phones and laptops are not just tools. They are portals to another world, a relentless world of news, opinion, and entertainment. We're living through the age of information overload. On average, we now consume 174 newspapers' worth of information each day. "I've processed more information in the last 48 hours than a medieval peasant would in a lifetime," reads a well-liked tweet.

And yet, amid this tsunami of information, we seem to have less certainty than ever. Controversy and discord reign. There is little agreement about basic facts, let alone about what is to be done. Is it time to lift COVID-19 restrictions yet? Is American democracy at risk of failure? Are plastics killing us? Should we allow genetically modified foods? Will climate change be simply bad or disastrous? I have my opinions, and I’m sure you have yours, but do any of us know the answers to any of these questions with certainty?

As well as uncertainty about the facts, we continually find ourselves facing moral uncertainty. Moral theories and views divide both public and philosophical opinion; they defy consensus. Is euthanasia morally permissible? Is abortion? Eating meat? Amid our unprecedented access to a wide range of moral arguments and views, all competing for our allegiance, we are left to come to our own moral conclusions. If we are being brutally honest with ourselves, we probably aren't absolutely certain about all of our moral views.

In these conditions, moral uncertainty is the norm. But, as the Samuel Beckett line goes, “You must go on.” Even if you don’t know for sure what the right moral view is, reality refuses to stop the clock to let you figure it out. You have to act one way or another, despite your moral uncertainty. Being uncertain doesn’t take you off the hook of moral responsibility. Neither does refusal to act. As climate change illustrates, refraining from taking decisions can be just as disastrous as making the wrong decisions.

So, how can you go on under these conditions of moral uncertainty? Let’s take a concrete example. What if you think eating meat is morally permissible, but you’re not totally sure? If you’re willing to admit there’s some chance you could be wrong about the morality of vegetarianism, what should you do? Keep eating meat? Or give it up?

The philosopher William MacAskill argues that if you are morally uncertain about vegetarianism, you should give up eating meat. In fact, even if you think there’s only a 10% chance that vegetarianism is the right moral view, you should still give up meat.

MacAskill thinks there’s an asymmetry in the moral risks you’re running. “If you eat veggie and eating meat is permissible, well, you’ve only lost out on a bit of pleasure,” says MacAskill, “But if you eat meat and eating meat is impermissible, you’ve done something very wrong.” Maybe you should give up a bit of pleasure to avoid the risk of doing something really morally terrible, even if the probability that you would be doing something really morally terrible is relatively low. “The morally safe option,” claims MacAskill, “is to eat vegetarian.”
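
MacAskill's reasoning is, at bottom, an expected value calculation made across moral views rather than across empirical possibilities. The sketch below is my own toy illustration, not MacAskill's model: the credences and the numerical "moral values" are invented for this example, and it brackets the hard problem of how to compare value across different moral theories.

```python
# A toy expected-moral-value comparison under moral uncertainty.
# Credences and "moral value" numbers are illustrative assumptions only.

credences = {
    "eating_meat_is_permissible": 0.9,  # you're fairly sure meat is fine
    "vegetarianism_is_required": 0.1,   # but admit a 10% chance it isn't
}

# Hypothetical value of each action under each moral view:
# a small gain in pleasure if meat is permissible, a grave wrong if it isn't.
values = {
    "eat_meat":   {"eating_meat_is_permissible": 1, "vegetarianism_is_required": -100},
    "eat_veggie": {"eating_meat_is_permissible": 0, "vegetarianism_is_required": 0},
}

def expected_moral_value(action: str) -> float:
    return sum(credences[view] * values[action][view] for view in credences)

print(expected_moral_value("eat_meat"))    # 0.9*1 + 0.1*(-100) = -9.1
print(expected_moral_value("eat_veggie"))  # 0.0
```

The point is structural rather than numerical: a small probability of a grave wrong can outweigh a high probability of a modest gain.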

We can apply MacAskill’s approach to other problems where we face moral uncertainty. Peter Singer famously argued that failing to donate money to help alleviate suffering in the developing world is just as morally wrong as letting a child drown in front of you. Most of us seem to think that Singer’s moral claims are too strong; we don’t think we are morally obligated to donate to charities, even if we think it is morally good – beyond what we are obligated to do – to donate. However, it seems at least possible that Singer is right. If he is right, then not giving any money would be very wrong, as wrong as letting a child drown. But if Singer is wrong, then all I’d lose by donating is a bit of money. Given the moral risk, the appropriate choice seems to be to donate some money to charity.

These two cases might make MacAskill’s approach look appealing. But it can also get strange. Imagine you really want to have a child. You are near-certain that having a child is morally permissible. In fact, you think having a child, bringing a happy person into the world, would be a serious moral good. You also think there’s a tiny (less than one percent) chance that anti-natalism is true. According to the version of anti-natalism you’re considering, by having a child you’re doing something morally terrible — bringing into existence a chain of human suffering that will continue for millennia. If anti-natalism says that having a child is morally wrong enough, then it would be less morally risky for you to simply not have a child. But should you really not have a child in such a case? Even though you believe with near-certainty that doing so would be a morally good thing? That seems like a strange conclusion.

The ethicists Johan Gustafsson and Olle Torpman give an alternative framework for thinking about how we should act under moral uncertainty. When we think of good, moral people, we generally think they are conscientious; they are typically true to what they believe is right. To put it another way, we think that a moral, conscientious person won’t do what they sincerely believe to be wrong. In the child example, your sincere, near-certain belief is that it is permissible, perhaps even a good thing, to have a child. MacAskill’s approach to dealing with moral uncertainty seems to say you ought not to have a child. But how can a moral theory that you don’t believe in matter more than the one you do believe in? For these reasons, Gustafsson and Torpman propose a much simpler approach: act in accordance with the moral view that you are most confident in. In this case, that would mean you should have the child that you want.

This simpler approach to dealing with moral uncertainty might seem straightforward and convincing. But I invite the reader to go back and apply Gustafsson and Torpman’s approach to the two cases discussed earlier, of charity and vegetarianism. Arguably, their approach gives less convincing advice in these cases.

How we should act given moral uncertainty is an important question for the discordant moment in which we are living. Whether we have the correct answer to this question remains far from clear.

Can Santa Know Who’s Naughty or Nice?

photograph of child with present watching silhoette of Santa's sleigh

Reliable sources indicate that every year, Santa makes a list of boys and girls he deems “naughty” or “nice.” The stakes could not be higher: on the basis of this classification, a child receives either wonderful gifts or coal in his or her stocking. Thus, it would appear that a serious inquiry into Santa’s methods is in order. In short: how does Santa know who’s naughty or nice?

There are actually two parts to this inquiry. The first concerns Santa’s definition of “naughty” and “nice.” I’ll leave this interesting question to the serious normative ethicists. The issue I’m interested in is this: even if you’ve been naughty, I assume that Santa will not judge you deserving of coal unless you are responsible for your naughtiness. Naughtiness and responsibility for naughtiness are distinct. After all, some people behave naughtily but are blameless for doing so: for example, those who are blamelessly ignorant of what they are doing. So, the question I want to focus on is how Santa knows who is responsible for being naughty, thus deserving coal in their stockings.

Most philosophers agree that responsibility for wrongdoing has two components: a control component and a mental component. Plausibly, you are not responsible for what is not under your control. Likewise, you are not responsible if you don’t know what you’re doing. So, responsibility requires at least some sort of awareness of what one does and some sort of control over what one does. (There is much more to be said about both components, but for our purposes this will suffice).

However, as the philosopher Michael Zimmerman has observed, if you are only responsible for what is under your control, then it would appear that those who attempt naughtiness are no more and no less responsible (read: blameworthy) than those who are naughty. (Most of what follows is taken from Zimmerman’s book.) Consider Sue and Sara. Sue throws a baseball at a neighbor’s window, shattering it. Sara throws a baseball at a neighbor’s window, but a strong gust of wind blows the baseball off course and it lands harmlessly in the neighbor’s yard. Is Sue more to blame than Sara? If we are responsible only for what is under our control, the answer appears to be “no.” After all, Sara would have shattered the window with the baseball had it not been for something entirely outside of her control: namely, the direction and strength of the wind. Arguably, if a person would have done something blameworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as blameworthy as if she had done that thing. A pithier way of making the same point is to say that Sara was lucky that she didn’t hit the window, but since her not hitting the window was a matter of luck, Sara’s blameworthiness is the same as Sue’s.

If Santa accepts this reasoning, he will put those who attempted naughtiness on the naughty list with those who succeeded. Perhaps this expansion of the list is tolerable to the old man. The problem is that this same line of reasoning threatens to scramble the whole system. Consider first that what we do is determined not only by what we decide to do, but also by the opportunities and character that we happen to have. Consider Susanna, whose parents have refused to buy her a baseball; and Shirley, who would never dream of throwing a baseball at a window because her parents have effectively taught her never to destroy other people’s property. It may be true that Susanna would have shattered her neighbor’s window had it not been for something outside of her control: whether or not her parents had bought her a baseball. And it may be true that Shirley would have shattered her neighbor’s window had her parents raised her differently, which is also something outside of her control. Once again, if a person would have done something blameworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as blameworthy as if she had done that thing. Thus, the same reasoning that consigned those who attempted naughtiness to the naughty list also seems to consign those who lack opportunity for naughtiness and those whose characters preclude naughtiness to the naughty list.

Even worse, the same line of reasoning implies that everyone is blameless and, indeed, praiseworthy. Just as it may be true that Sara, Susanna, and Shirley would have shattered the window had they been free to do so were it not for some factor outside of their control, so it is also true that the three girls would have performed praiseworthy acts that they did not actually perform were it not for lack of opportunity or their own bad characters, all of which are beyond their control. If a person would have done something praiseworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as praiseworthy as if she had done that thing.

Now we can see why, if the argument is sound, Santa’s whole system is in trouble. The claims we have been making about the three girls — that they would have done such-and-such, were they free to do so — are sometimes called “counterfactuals of freedom.” According to the argument just developed, for any child there are an indefinite number of “counterfactuals of freedom” that make that child blameless, praiseworthy, and blameworthy. Santa will never know all of these counterfactuals, so he is never in a position to weigh them up to make a final judgment on the child’s naughtiness or niceness.

So, what is poor Santa to do? Zimmerman thinks that he should put everyone on the nice list, since it is far worse to punish the innocent than to not punish the guilty. As the ultimate niceness or naughtiness of a child is unknowable to Santa, he should assume that all children are nice to avoid punishing some nice children.

But there is an alternative response. For utilitarians, blame and praise (as well as punishment and reward) are tools that we ought to use to bring about good effects – such as deterrence or moral growth – rather than ways of registering a person’s desert. From this perspective, it would make no sense to blame or punish someone for something someone would have done in some counterfactual circumstance of which we have no knowledge. Such punishment would be arbitrary. Even if we somehow could know the truth of some counterfactual of freedom, people can’t choose to avoid being the kind of person who would commit naughty acts under some counterfactual circumstance, so there is no deterrence value in punishing them for being that kind of person. By contrast, it does make sense from this perspective to punish someone for something they have actually done — not because that person is more deserving, but because in response to such punishment people (including the punishee) can choose to avoid committing naughty acts in the future.

So, if Zimmerman’s argument is sound, then Santa has at least two choices: put everyone on the nice list, or turn utilitarian. I recommend the latter.

Faramir’s Insight

photograph of Lord of the Rings book atop Middle-Earth map

When I last reread J.R.R. Tolkien's Lord of the Rings trilogy, one line in particular stood out to me. It is something Faramir says when talking with Frodo about whether Frodo should abandon Gollum. I'll quote the line in context:

‘Frodo, I think you do very unwisely in this,’ said Faramir. ‘I do not think you should go with this creature. It is wicked.’

‘No, not altogether wicked,’ said Frodo.

‘Not wholly, perhaps,’ said Faramir; ‘but malice eats it like a canker, and the evil is growing. He will lead you to no good. If you will part with him, I will give him safe-conduct and guidance to any point on the borders of Gondor that he may name.’

‘He would not take it,’ said Frodo. ‘He would follow after me as he long has done. And I have promised many times to take him under my protection and to go where he led. You would not ask me to break faith with him?’

‘No,’ said Faramir. ‘But my heart would. For it seems less evil to counsel another man to break troth than to do so oneself, especially if one sees a friend bound unwitting to his own harm.’

The last line rings true to my experience. It is one thing for me to take on a sacrifice to do what is right. It is another thing entirely to advise someone else to make that sacrifice.

It is easy to be frugal about stuff I buy for myself. Harder to be frugal about stuff I buy for my wife. Why? Because it is harder to ask someone else to sacrifice to do the right thing. I would never spend much money on jewelry for myself. And yet I felt tempted to spend too much money on an engagement ring for my wife. (Fortunately, my wife is as frugal as I am, and so independently insisted on a cheap $50 engagement/wedding ring.)

I want to talk about this phenomenon of expecting less of others than we expect of ourselves. I want to talk about it, because it first appears as almost noble. But the more I’ve thought about it, the more convinced I’ve become that it is actually a vice. It involves a neglected failure of character, a failure to take seriously the moral personality of others.

To see this, let's consider possible explanations of the phenomenon: two that I'm sure are wrong, and three that might be right.

Wrong Hypothesis 1: Different Obligations

One possible explanation is that I really think my moral obligations are stricter than the obligations of others. Faramir thinks it would really be wrong for him to break his promise to Gollum, but that it would not be wrong for Frodo to do the same.

While I'm sure this sometimes happens, it does not explain the phenomenon I'm trying to understand. The difference I'm responding to is not one of 'weaker' vs. 'stronger' but rather me vs. other. I will hold my close friends to higher standards than strangers, even if I think the strangers are better people. Likewise I am willing to hold myself to a higher standard than my close friends, even though I know many of my close friends are more virtuous than me.

Wrong Hypothesis 2: Selflessness

So could it be selflessness? Could it be that I just care more about the sacrifices of others than I do about my own sacrifices?

Perhaps this occurs for some people, but it is not the explanation in my case. For one thing, I’m not nearly that selfless. I spend far more time working on my own job applications than helping friends with their applications. Why? Because I selfishly care about my own future more than the future of my friends.

For another thing, the closer I am to someone, the more willing I am to ask them to sacrifice, not less. If a stranger asks me for advice, I'm very unlikely to tell them to make the radical sacrifice. But the closer the friend, the more comfortable I am asking them to sacrifice. As I will argue below, this is because the closer a friend is, the more deeply I appreciate their moral personhood.

Possible Hypothesis 1: Moral Uncertainty

One possible explanation is that the hesitancy follows from our moral uncertainty combined with a fear of responsibility.

Suppose someone held a gun to my head and told me to renounce Christianity. I'd like to think I'd have the courage to die rather than commit apostasy – that is, renounce my faith. (After all, I believe that committing apostasy is one of the worst things a human can do.) But the thing is, I'm not sure it's really that bad. After all, I would be acting under coercion. Everyone would be able to tell I didn't really mean it. So is it really committing apostasy? Or is it instead just playacting to pacify the criminal?

Those replies ring hollow when I imagine facing the situation myself. But they ring less hollow when I imagine a friend facing a similar situation. If a friend asked me what to do, I’d be much more tempted to tell them to “just say what the person wants to hear, you don’t have to mean it.” If a friend felt guilty about committing apostasy in such a situation, I’d be tempted to tell them not to feel bad, they didn’t do anything wrong.

The uncertainties loom larger when I consider a friend making the sacrifice. But not only then. They also loom larger when I consider making the choice myself but sacrificing another. These same worries would ring less hollow if someone held a gun to another person’s head and told me to commit apostasy. The significance of my uncertainty is magnified when others are at risk, because if someone else dies as a result of my choice, it’s not just sad but a moral tragedy. It’s not similarly a moral tragedy if I die as a consequence of my own choice. It’s sad if I die while trying to free climb a mountain, but it’s not a moral tragedy. It’s a moral tragedy if I kill someone else after taking a risk. Ultimately, I don’t want to give advice to others, because if I’m wrong I’ll feel responsible for that choice.

If this explanation is right, then the phenomenon is vicious. It denies the moral personality of others. I’m failing to recognize that the choice is still up to the other person, even when I advise them what I think is right.

This is easiest to see if we think of ourselves as the one getting advice. If I ask for the advice of someone else, I recognize it is still up to me whether to listen. It is myself, not them, who is ultimately responsible for the choice. As such, I don’t want the other person managing me with their advice in order to assuage their own conscience. Rather, I want them to be honest so that I can fully take responsibility for my own choice.

But if I'd want others to be honest with me so that I can make my own choice, then, if I fully appreciated the moral personality of others, I wouldn't be tempted to be dishonest with them in order to shield myself from responsibility.

Possible Hypothesis 2: Shared Values

My wife's theory is that the phenomenon is explained by the role of shared values in moral advice. The more someone shares your values, the more appropriate it is for them to act on your advice. You share your own values perfectly, you share a close friend's values extensively, but you only partially share the values of a stranger. So, if moral testimony involves giving advice in light of shared values, then the more uncertain you are that values are shared, the less certain you are whether to recommend sacrifice.

Of all the plausible explanations of the phenomenon, this one looks closest to being noble. But even here I think there is something vicious about the phenomenon. If you believe the person is under the same moral obligation, but that they don’t share your values, to that extent you must think their values are mistaken. But if they are mistaken, and if you take seriously their moral personality, that gives you reason to explain as best you can the value you see in the sacrifice. It does not, if you take the moral personality of another seriously, give you reason to shrug your shoulders and consign them to vice. That would be to treat them as a moral lost cause.

Possible Hypothesis 3: Inside/Outside Tension

A final possibility is that the phenomenon is explained by a tension between the deliberative and external perspectives on action.

When you deliberate about your acts, you look at them from the inside. Because human death is a terrible thing, you recognize it would be terrible to kill one human to save three. You can tell, from the inside, that it would be wrong to kill someone even to save more lives.

But we can also take the external perspective. We can see, not a choice to kill, but rather a choice between two outcomes. It is not a choice to kill, but rather a choice between one killing and three deaths. And from that external perspective, we recognize that one killing is not worse than three deaths.

The force of moral constraints only emerges inside the deliberative perspective. They don’t appear when you look merely at the outside comparison of results. And this may explain our phenomenon: From the deliberative perspective, Faramir can see that it is an evil thing to break a promise. But from the outside perspective, a broken promise is far less evil than Frodo’s death and betrayal.

Once again the phenomenon would reveal an inability to appreciate the moral personality of others. It stems from a failure to see the other as a deliberating person; it instead looks at them merely from the outside, as a sort of object. The phenomenon emerges when you look at a person, rather than when you look vicariously with a person at the choice they face.

I’m not sure which explanation is right. But I’m inclined to think all three plausible explanations reveal just how far I have to go before I really appreciate the full personhood of others.

COVID and Climate Change: Taking the Long-Term Seriously

photograph of ripple on lake expanding

Amid the ongoing COVID-19 pandemic, world leaders are assembling in Glasgow for COP26, the UN’s climate change conference. Both the pandemic and global warming are powerful reminders that the choices we make can have consequences that continue to unfurl over decades and centuries. But how much should we care about these hard-to-predict long-term consequences of our actions? According to some, so-called moral “longtermists,” we ought to care a great deal. Others, however, have called longtermism “the world’s most dangerous secular credo.”

COVID, climate change, and the long-term impact of our choices

The coronavirus now appears to be endemic. It is likely to continue to circulate across the globe indefinitely, causing more and more human suffering, economic damage, and disruption to our lives. The total sum of harm an endemic virus can cause is theoretically boundless. And yet, if China had better regulated its meat markets or its bio-labs (depending on your preferred origin theory), it would have likely prevented the outbreak entirely. This failure, in one place at one time, will have significant long-term costs.

The headline ambition of COP26 is for nations to commit to specific plans for achieving net zero (carbon and deforestation) by the middle of the century. Whether or not these talks are successful could have a profound long-term impact. Success could put humanity back onto a sustainable trajectory. We might avoid the worst effects of climate change: biodiversity collapse, flooding, extreme weather, drought, mass famine, mass refugee movements, possible population collapse, etc. Taking effective action on climate change now would provide a huge benefit to our grandchildren.

But the comparison between climate action and inaction does not stop there. As well as helping our grandchildren and great-grandchildren, the benefits of effective climate action now would likely continue to snowball deep into the next century. Instead of our great-grandchildren needing to spend their resources and efforts mitigating and reversing the damage of climate change, the twenty-second century might instead be spent in pursuit of other goals: eliminating poverty, making progress on global justice, and deepening our understanding of the universe, for example. Progress on these goals would, presumably, generate positive consequences of its own in turn. The good we can achieve with effective climate action now would continue to accumulate indefinitely.

Commitment to taking the long view

Both COVID and climate change make a strong intuitive case for moral “longtermism.” Longtermists think that how things go in the long-term future is just as valuable, morally speaking, as what happens in the near-term future. If you can either prevent one person from suffering today or two tomorrow, the longtermist says you morally ought to prevent the two from suffering tomorrow. But if you also had the option of preventing three people from suffering in a million years, they say you should do that instead. It doesn’t matter how far events are from us in time; morally, they’re just as significant.

The second part of the longtermist view is that we can influence the long-term future with our choices today. Longtermists argue that which long-term future comes about depends on what humanity does in the next century. And the stakes are high. There are possible futures in which humanity overcomes the challenges we face today: ones in which, over millennia, we populate the galaxy with trillions of wonderful, fulfilled lives. There are also possible futures in which humanity does not even survive this century. There is, in other words, a very valuable possibility — in moral philosopher Toby Ord's words, a "vast and glorious" version of the future — that's worth trying to make real.

A catastrophic future for humanity is not a particularly remote possibility. Ord, who studies existential risk, sees the next century as a particularly dangerous one for humanity. The risks that concern him are not just the cosmic ones (meteorites, supernova explosions) or the familiar ones (nuclear war, runaway global warming, a civilization-collapsing pandemic); they also include unintended and unforeseen consequences of quickly evolving fields such as biotech and artificial intelligence. Adding these risks together, he writes, “I put the existential risk this century at around one in six.” Humanity has the same odds of survival as a Russian roulette player.

The cost of failing to prevent an existential catastrophe (and the payoff of success) is incredibly high. If we can reduce the probability of an existential risk occurring (even by a percentage point or two), longtermists claim that any cost-benefit analysis will show it’s worth taking the required action, even if it incurs fairly significant costs; the good future we might save is so incredibly valuable that it easily compensates for those costs.

But, for whatever reason, reducing the probability of improbable catastrophes does not rise to the top of many agendas. Ord notes that the Biological Weapons Convention, the body that polices bioweapons around the globe, has an annual budget of just $1.6m, less than the average turnover of a McDonald's restaurant. As Ord explains this strange quirk in our priorities: "Even when experts estimate a significant probability for an unprecedented event, we have great difficulty believing it until we see it."

Even short of generating or mitigating existential risks, the choices we make have the potential to put the world on different trajectories of radically different value. Our actions today can begin virtuous or vicious cycles that continue to create ever-greater benefits or costs for decades, centuries, or even millennia. So besides thinking about how we might mitigate existential risks, longtermists also claim we need to give more thought to getting onto more positive trajectories. Examples of this kind of opportunity for “trajectory change” include developing the right principles for governing artificial intelligence or, as COP26 is seeking to achieve, enacting national climate policies that will make human civilization ecologically sustainable deep into the future.

Challenges to longtermism

Last week, Phil Torres described longtermism as “the world’s most dangerous secular credo.” A particular worry about longtermism is that it seems to justify just about any action, no matter how monstrous, in the name of protecting long-term value. Torres quotes the statistician Olle Häggström who gives the following illustration:

Imagine a situation where the head of the CIA explains to the U.S. president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken [the longtermist] Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders. 

Longtermism entails that it’s morally permissible, perhaps even morally obligatory, to kill millions of innocent people to prevent a low-probability catastrophic event. But this can’t be right, say the critics; the view must be false.

But does Häggström’s thought experiment really show that longtermism is false? The president launching such a strike would presumably raise the risk of triggering a humanity-destroying global nuclear war. Other countries might lose faith in the president’s judgment and launch a preventive strike against the U.S. to stop him before he does to them what he did to Germany. If the strike raised the probability of catastrophic global nuclear war by any more than one-in-a-million, then longtermism would advise against it. This is to say that if the president were a longtermist, it’s at least highly debatable whether he would order such an attack.
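
To spell out the arithmetic behind this reply (a rough sketch of my own; the quantity q is an illustrative label that appears in neither Häggström nor Torres):

\[ \Delta(\text{extinction risk}) \approx q - 10^{-6} \]

Here 10^-6 is the one-in-a-million risk removed by eliminating the madman, and q is the added probability that the strike itself triggers a humanity-ending nuclear exchange. Whenever q exceeds one-in-a-million, the strike increases overall existential risk, and the longtermist calculation condemns it.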

Of course, we can modify Häggström’s case to eliminate this complication. Imagine the chance of the madman succeeding in blowing up the world is much higher — one-in-two. In such a case, longtermism would likely speak in favor of the president’s nuclear strike to protect valuable possible futures (and the rest of humanity). But it’s also a lot less clear that such an act would be morally wrong compared with Häggström’s original case. It would be terrible, tragic, but perhaps it would not be wrong.

Maybe the real risk of longtermism is not that it gives us the wrong moral answers. Maybe the criticism is based on the fact that humans are flawed. Even if it were true that longtermism would rule out Häggström’s nuclear attack on Germany, embracing the view still seems to place us in a much riskier world. Longtermism is an ideology that could theoretically justify terrible, genocidal acts whenever they seem to protect valuable long-term possible futures. And, ultimately, flawed human minds are more likely to perform unconscionable acts if they have an ideology like longtermism with which to justify them.

This last criticism does not show that moral longtermism is false, exactly. The criticism is simply that it’s dangerous for us humans to place such immense faith in our ability to anticipate possible futures and weigh competing risks. If the criticism succeeds, a longtermist would be forced to embrace the ironic position that longtermism is true but that we must prevent it from being widely adopted. Longtermists would have to push the view underground, hiding it from those in power who might make unwise and immoral decisions based on faulty longtermist justifications. It might turn out, then, that the best way to protect a “vast and glorious” possible future is to make sure we keep thinking short-term.

Considered Position: On Voluntary Non-Vaccination – Real World Complications

photograph of child among masked crowd

This piece concludes a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated. To see the earlier segments, start here (Part II).

In Part I of this series I tried to explain the moral intuition that I have less reason to engage in precautionary social-distancing behavior once those who are unvaccinated are unvaccinated by choice rather than because they could not access the vaccine. This intuition, I argued, is explained by the fact that justice mainly requires us to give people reasonable options for safety and does not require us to make others safe.

In Part II, I looked at what it takes for an option to be reasonable, and we realized that this is often sensitive to the reasons that someone does not want to get a vaccine. If someone’s objection to the vaccine is reasonable, then justice may still require me to adopt precautionary behavior.

In both posts, I idealized the questions at hand. I ignored real-life complications and just tried to identify the abstract principles involved. For that reason, my cases were often unrealistic, involving such absurdities as the unvaccinated engaging in moral blackmail or people taking pills to make themselves allergic to the vaccine.

In this final post, I want to turn to the real world and look at some of the complications which make this a difficult ethical question.

Complication 1: We judge the reasonableness of others in biased ways

We saw in Part II that if the other person has reasonable grounds for refusing the vaccine, then justice still requires me to protect them by social distancing (even if their grounds are mistaken). One challenge, however, is that we tend to be biased in our own assessments of what are reasonable grounds.

Consider, for example, the following two suggested grounds of vaccine hesitancy:

Skepticism 1: Distrust of a Racist Healthcare System

Some Black people in the U.S. are reluctant to get a vaccine due to distrust of the American medical system. While this is sometimes attributed to historical injustices, like the Tuskegee study, it is more plausibly explained by current disparities in health care treatment. (It also, as a whole, might just be overblown; but we will put that aside for now.) The thought might go as follows:

“As Ben Almassi has argued in the context of organ donation, there are good grounds, given persistent racial health inequities, for Black people in the U.S. to distrust that the medical system has their best interest at heart. But if one has good reason to distrust the health system, then one also has good reasons to distrust the recommendations of the health system. This is especially true because we know that drugs and vaccines can sometimes affect different racial groups differently, and we also know that Black people tend to be massively underrepresented in vaccine trials (even when those rates are reported).”

Skepticism 2: Distrust of the Liberal Narrative

Some conservatives are reluctant to get the vaccine due to distrust in the way that mainstream media portrays medical information. They might say things like:

“Putting aside worries that the COVID threat was overhyped to sink former President Trump’s reelection chances; we have seen a systematic tendency for the media to provide unreliable coverage on how to respond to the vaccine in order to fit its preferred political narrative. First, we see the same caseloads suggest different reactions depending on who is in charge. The week after President Biden was sworn in, long before any change in policy could have altered risk, blue states began opening up and schools began seriously trying to return students to in-person education. That was true, even though the absolute case numbers were still worse than they were in the summer when everyone insisted things needed to be shut down under President Trump.

Then, of course, ‘the Scientists’ consistently lauded Democratic governors like Andrew Cuomo and panned Republican governors like Ron DeSantis. And yet, we have consistently found that the media narrative was backwards. Florida, despite an extremely old population, came out of the pandemic pretty well, with a much stronger economy, and with many more kids staying in school. This is not just cherry-picking. Republican states, on average, had fewer deaths and caused significantly less damage to the economy. Then ‘the scientists’ told us to take our kids out of school, but didn’t object to massive unmasked Black Lives Matter protests. ‘The scientists’ told us not to wear masks, until they needed a social symbol for purposes of public shaming.”

I bring up these two explanations of skepticism, not because I find them equally plausible, but because I don’t find them equally plausible. Intuitively, I find the first reasonable and the second ludicrous. But here is the issue: it’s hard to identify any objective reason the first is more reasonable. Sure, it fits better with my own preferred political narrative; but I think there are decisive objections to both lines of reasoning, and I don’t think the errors in one are in any sense more egregious than the errors in the other.

The danger, then, is that I am more sympathetic to members of what I see as my political in-group. But that differential in sympathy means I’m especially likely to inconsistently apply a standard as squishy as ‘reasonable.’

I don’t have a good solution here, and so just advise extreme caution when you label political allies reasonable or when you label political opponents unreasonable.

Complication 2: Immunization isn’t up to some people

While there are few, if any, groups in the U.S. for whom it would be dangerous to get the vaccine, there is some evidence that immunocompromised patients, though they can safely get the vaccine, do not always produce the required antibodies.

Similarly, there is a group of people in the U.S. who cannot choose to vaccinate: children. This is true in the limited sense that the vaccines are not currently approved for use in children below the age of 16. But it is also true in the sense that, even once the FDA approves the vaccine for children, children cannot choose to be vaccinated without a parent’s permission. Unvaccinated children, then, might not be unvaccinated by any choice of their own.

These are important complications, but I’m not sure that on their own they would show you must socially distance until we reach herd immunity (on the hypothesis that there are a large percentage of vaccine skeptical holdouts). Children are far less susceptible to COVID-19, and only a very small portion of the population are severely immunocompromised. Given these facts, the threat posed to children and the immunocompromised is far smaller than the risk posed by pre-pandemic activities when most people did not have access to the vaccine. Certainly, you should engage in some precautionary measures, especially if you know you are likely to be around someone who is immunocompromised. But it is unclear that those are any different than the ordinary obligations one has during flu season.

Complication 3: Deception and consent

One further complication is that deception tends to undermine voluntariness. For example, if I lie to you about the likely results of a surgery, then your consent to that surgery does not actually constitute voluntary consent. Similar issues arise about sexual consent.

Or suppose you told your friend that you would pick them up from the airport. But then I, just for the fun of it, lied to you, telling you that your friend’s flight was delayed, that they were not able to reach you, and that they don’t need a ride after all. If you don’t pick your friend up from the airport, then breaking the promise was involuntary. It was involuntary because I am the one who bears responsibility for your failure.

Now, if it is true that deception can undermine voluntariness, then one worry we might have is that there may be a good number of people who refuse the vaccine because they were lied to, and if so, it is those who lied who bear the actual responsibility for the non-vaccination.

This point is important to notice because a lot of people are especially likely to think that those with unreasonable reasons for refusing the vaccine accept those reasons because they are being lied to by their media ecosystem. Thus, many on the left think the vaccine hesitancy on the right is ludicrous, but those same people on the left are also likely to think that Fox News, OAN, or Newsmax are systematically deceiving their viewers. Similarly, many on the right think that concerns of racism are blown way out of proportion, but those same people on the right are also likely to think that mainstream media organizations — like CNN or The New York Times — are providing systematically distorted information on those very issues.

Indeed, it is not just cases of outright lying that might trigger a shift in responsibility. Not only do I preclude the voluntariness of your action by lying to you, I do the same thing if I tell you something false when I should have known better. If I tell you something false but am really honestly trying to tell you the best I know, then your actions are still voluntary. You made the choice to trust me in good faith. But if I am not acting in good faith myself, then I am the one at fault when you err.

Conclusion

So once vaccines are widely available (such that the unvaccinated are mostly unvaccinated by choice) but before we reach herd immunity (due to widespread vaccine hesitancy), can you return to pre-pandemic behavior?

As we’ve seen, this is a difficult question. However, it seems likely that the right answer is generally yes. For the most part, because it is reasonable to expect people to get the vaccine, it is reasonable to return to behaviors that would be safe were others to be vaccinated. This is true, even without factoring in the fact that the vaccinated are very unlikely to spread COVID. And so, it does seem like justice allows life to return to normal.

However, we have also learned an important moral lesson about what it takes to justly live together in a society. For justice to allow us to return to pre-pandemic activities, it is not enough for the vaccine to be widely available; it also matters that other people’s refusal of the vaccine is genuinely voluntary. And as it turns out, there are complicated ways in which we can undermine the voluntariness of others’ actions. When we are not fastidious about what we tell others, we risk undermining their own capacity to make voluntary choices. If I thoughtlessly spread misinformation or repeat something as fact that I am only pretty sure of, then I undermine one of the fundamental building blocks of our system of justice.

My own testimonial irresponsibility undermines the voluntariness of those who believe me in good faith. And systems of justice largely depend on the mutual voluntariness of everyone’s choices. This is one reason why lying and bullshitting are such profound moral wrongs. It is not just that others end up with some wrong beliefs (we all have a bunch of misguided convictions), but that other people are rendered, in a sense, passive in relation to your choices. By breaking down even a small portion of the reciprocal norms of justice, you don’t just lie to another but partially bring the whole system of justice down on top of you.

Considered Position: On Voluntary Non-Vaccination – Types of Reasons

photograph of masked and unmasked people in a crowd

This piece is part of a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated. To see the earlier segments, start here.

Hopefully pretty soon, my state, Florida, will reach the point that anyone who wants to be vaccinated can be vaccinated. In Part I of this series, I argued that once we reach that point, I have fewer reasons of justice to engage in aggressive social distancing. After all, everyone has the option to get the vaccine and so protect themselves from whatever risks I impose by my everyday activities.

In that argument, however, I ignored an important variable in our assessment of justice: why are people not getting vaccinated? This is important because different reasons give rise to different duties on my end.

To see this, let’s start with a particularly extreme contrast of cases:

Coordinated Moral Blackmail: Suppose that herd immunity requires that 80% of the population get vaccinated. But while 60% of the population are excited to return to normal and immediately get vaccinated, the other 40% coordinate together and refuse to get vaccinated until their demands are met. The 40% realize that the other 60% don’t want to put anyone’s health at risk, and so they refuse to receive the free vaccine unless the other 60% agree to collectively pay each person in the 40% camp three hundred dollars. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated as a tool of moral blackmail.

Widespread Deadly Allergy: Suppose that herd immunity requires that 80% of the population get vaccinated. Let us also suppose that one of the weird things about this vaccine is that it produces a severe, possibly fatal, allergic reaction in people with peanut allergies. But while 60% of the population have no allergies and so get the vaccine, the other 40% all have severe peanut allergies and so forgo vaccination until a safer vaccine is approved. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated because the vaccination is not worth the risk of anaphylaxis.

I presume that you share my intuitions about these two cases. In the first case, the 60% have no reasons of justice to forgo returning to pre-pandemic life. But in the second case, the 60% still have reasons to social distance. Indeed, I would think that the 60% in the second case have duties pretty similar to the duties we would have if the vaccine were only available to 60% of the population.

Normally, if someone voluntarily chooses to forgo the vaccination, that changes my duty of justice. However, it only makes a difference to duties of justice if we can reasonably expect the person to get the vaccine.

It is unreasonable to expect someone to take a very risky vaccine, just as it would be unreasonable to expect a family to bankrupt themselves to buy the vaccine. But such pragmatic concerns are not the only relevant ones. For instance, if it is unethical to get the vaccine, say because it was produced by slave labor, then, again, if the majority of the population refuse the slave-made vaccine, justice requires me to continue to maintain pandemic precautions.

If it would be a mistake to get the vaccine, then I am still bound by the same norms of justice to engage in precautionary behaviors. If people truly believe that they should not get the vaccine, then I can’t reasonably expect them to vaccinate.

But what if instead people falsely believe there is a problem with vaccines? Well, in that case things become more complicated. There are two kinds of error that could explain why one thinks a right action is wrong, both of which are described by Elizabeth Anscombe. First, one might be wrong about the ‘facts on the ground,’ the circumstantial descriptions of what is going on — for instance, one might think the vaccine is less safe than it is, or one might falsely think it is produced by slaves. Second, one might be wrong about the moral implications of those ‘facts on the ground.’ For instance, one might know the risk is tiny, but have the false principle that one cannot be expected to undergo even the smallest possible risk for the public good. Following a convention that I use in my own work, let’s call the first an error about circumstantial facts and the second an error of classificatory facts.

Error 1: Mistakes about Circumstance

The COVID vaccines are not, on the whole, very risky. The consensus judgment of the medical community is that the vaccines, like most vaccines, are safe and effective. But even if the vaccines are safe, some people might believe them to be unsafe. And this raises a question of whether such honest beliefs change my own reasons of justice.

Widespread Error of Belief: Suppose that herd immunity requires that 80% of the population get vaccinated. But while 60% of the population get the vaccine, truly believing that vaccines are safe and effective, 40% of the population mistakenly believe that the vaccine is dangerous. By engaging in ordinary, non-social distancing activities, the 60% will put some people at risk. However, the only people who will be put at risk are those who voluntarily refuse to get vaccinated as a result of mistaken beliefs.

In this case, does justice require me to forgo pre-pandemic activities? To answer this question, we need more information. In particular, we need to know why people have the false belief.

Suppose that preliminary scientific studies incorrectly suggest that the vaccine is unsafe for pregnant women. This information is widely announced and the majority of scientists and doctors accept the result. As it turns out, however, the conclusion is the result of a subtle mathematical error that no one has yet noticed.

If the best scientific evidence suggests that pregnant women should not get the vaccine, then it is clearly unreasonable for you to expect pregnant women to get the vaccine. It does not matter that the studies are wrong, because you cannot expect the average person to realize that the studies are wrong. If this is right, then at least some of the time false beliefs about matters of circumstance (such as the safety of the vaccine) really do make it unreasonable for me to expect you to be vaccinated.

But not all mistaken factual beliefs work that way. Now imagine someone who is profoundly lazy and just assumes vaccines are unsafe. Because they are lazy and do not care about public health, they never bother to do any research to check whether their assumption is right.

We can accept that the person really thinks that the vaccines are unsafe. And we can further accept that the person cannot, by a voluntary act of will, get herself to believe the vaccines are safe (for instance, you cannot, at this very moment, choose to believe there is an elephant behind you even if I offered you five hundred dollars to form the belief).

So suppose our imagined interlocutor says: “I’m not getting a vaccine because I really believe they are unsafe; ultimately I don’t choose what I believe, so you can’t blame me for not getting vaccinated.” Is this right? Does the fact that we cannot choose our own beliefs mean we cannot be blamed for our false beliefs?

Elizabeth Anscombe, in her article “On Being in Good Faith,” argues this line of thought is mistaken. While a good faith belief can make one’s behavior reasonable, a good faith belief in the sense that excuses behavior requires more than your really believing the thing you say you believe:

“Consider a rash and uncharitable judgement which led one to slander someone gravely. One does not – usually at least—say to oneself ‘Now I will judge rashly’ or ‘I am determined, rash as it is, to judge that so-and-so’- one simply judges rashly. What does ‘in good faith’ mean? If it means that one has not got one’s tongue in one’s cheek, is not consciously lying or hypocritical, then good faith is not enough to exonerate. If it is enough to exonerate, then the slander uttered under the influence of a rash and passionate conviction is not a statement made in good faith. . . . Thus good faith or sincerity are either not much good as a defence, or not so easily assured as we might think at first. . . . ‘He ought to have realised…’, ‘He ought to have thought of…’, ‘He had no business to suppose that…’, ‘He ought to have found out…’, are phrases that bear witness to the fact that a man’s beliefs do not suffice to justify him so long as he acts accordingly.”

What Anscombe is arguing is that you can be responsible for false beliefs if we could have expected you to know the truth. If the reason you have a false belief is that the entire scientific community is mistaken, we can’t expect you to know any better. But if the reason you have a false belief is that you are too lazy to do research, then of course we can blame you for your error.

If I accidentally kill a child because I didn’t know they had a deadly carrot allergy, then I’m usually not responsible for that death. However, if the only reason I don’t know about the allergy is because I skipped our camp’s health and safety meeting, then I am at least somewhat responsible. I can’t object that ‘I didn’t know,’ because I should have known.

The same principle applies to vaccines. A false belief might make your non-vaccination reasonable, but it only does so if we could not have expected you to know better.

Error 2: Mistakes about Classification

What about cases of moral ignorance? That is, someone knows all the ‘facts on the ground,’ but she mistakenly thinks that those facts mean she should not take the vaccine.

Some philosophers think moral ignorance is unique, that while circumstantial ignorance can excuse, moral ignorance never can. Other philosophers disagree. (I discussed that debate at length in a previous Prindle Post.)

In this context, however, it seems that moral ignorance is probably not unique. That is because we want justice to allow for at least some moral pluralism. If there are no reasonable, though false, moral principles, then the project of democratic pluralism is in serious danger. If we want to live together in society with at least some level of deep moral disagreement, then we should acknowledge the reasonability of people acting on at least some moral principles we think are wrong.

Now, in our society we don’t have widespread reasonable moral ignorance preventing us from reaching herd immunity. But there are at least some real-life cases where it is reasonable to wrongly refuse a vaccine on moral grounds. A good example is those who don’t want to take a vaccine that was created via use of aborted fetal tissue. This seems to me to be too morally fastidious, but the reasoning is not unreasonable, and I do not think we can expect people to never make that error.

Conclusion

The reason that people refuse the vaccine matters. If they are right to refuse it, then one is just as responsible for taking precautionary actions as one was before the vaccine became available. If they are wrong to refuse, then the question becomes whether or not it is reasonable to expect them to take the right action. If their incorrect refusal is explained by a reasonable form of circumstantial or moral ignorance, then justice continues to make the same demands.

Continue to Part III – “Real World Complications”

A Problem with Emotions

abstract acrylic painting of divided canvas

There is a certain challenge to the adequacy of our emotional reactions — especially those reactions, like grief and joy, which feel ‘called for’ at certain times. Suppose a family has a child who falls grievously ill. After many sleepless nights, the child stabilizes and eventually recovers. There are appropriate emotional responses to this sequence; the parents will, and should, feel relieved and joyed at the child’s recovery. Now suppose another family has a child who similarly falls grievously ill. Except this child does not recover and eventually dies. Again, there are appropriate emotional responses. The parents will, and should, feel grieved and heartbroken at the child’s death.

So far, there is no challenge. But now suppose that instead of two different families, it was one family with two children — one recovers, one dies. Here, what are the parents supposed to feel? There are a couple of options.

Perhaps they should feel a sort of moderated grief. After all, something wonderful has happened (a child has recovered) and something terrible has happened (a child has died). Do they partially cancel out (but maybe weighted in the direction of grief since ‘bad is stronger than good’)? The problem with this answer is that the grief is a response to the tragedy of the child’s death. And that child’s death is no less a tragedy just because the other child survived. Moderation would be appropriate if something happened to moderate the tragedy of the child’s death — such as the child being spared death and instead placed within an enchanted sleep — but it does not seem like the appropriate response to some other good thing occurring.

Perhaps then, you just need to feel either emotion. Both grief and joy are appropriate — so long as you feel one, then you are feeling well. But this won’t do either. There is something wrong with the parent who feels nothing for the recovery of their child, just as there is something wrong with the parent who feels nothing for the child’s death.

In fact, the only response that seems appropriate to the situation is to feel both grief and joy. You ought to be grieved at the one child’s death and joyed at the other child’s recovery.

But here is the issue. It doesn’t seem possible to fully feel both at once. Feelings, unlike some other mental states, compete with each other. When I feel happy about one thing, it pushes sadness about other things to the periphery. This is unlike, say, beliefs. The parents can fully believe that one child recovered while, at the same moment, fully believing that the other child died. This is because beliefs do not require active attention. Moments ago, you believed all sorts of things about your former elementary school, but I expect until you read this sentence you were not actively attending to any of those beliefs.

Emotions, however, do require attention. If I can become fully absorbed in my work, then for a time my grief will retreat. (Of course, one of the frustrating things about grief is the way that it maintains a ‘grip’ on your attention — forcing your thoughts to circle back and return again, and again, to the tragedy.)

So, to fully feel the grief at the one child’s death, and to fully feel the joy at the other child’s recovery, would require me to keep my full attention on both at the same time. But we can’t do that; attention is a limited resource. It can only be fully engaged in one direction.

The best we can do, then, is a sort of ping-ponging back and forth between grief and joy. Feeling complete grief when attending to the death, feeling thankful and relieved when attending to the recovery. But at no point, it seems, can my emotions be completely responsive to what is called for.

Berislav Marušić, in his essay “Do Reasons Expire”, considers a related puzzle:

“Grief is, plausibly, a response to reasons; the reason for my grief was my mother’s death; her death does not change over time; but it is not wrong for me to grieve less over time. Yet how could the diminution of grief not be wrong, if my reason for grief stays the same?”

The reason the problem is similar is that there is a disconnect between the response demanded by the event (the tragedy of someone’s death) and the psychological realities of our capacity to have emotions. You just can’t indefinitely grieve, and in turn you don’t indefinitely grieve. But doesn’t it seem as if there is a sense in which you ought to?

There is a conflict, then, between the psychological realities that constrain our emotions and the appropriateness conditions surrounding what emotions we ‘ought’ to feel.

This is an important conflict to think about. One reason it’s important to be aware of this conflict is that it helps us recognize exactly why we need to be so skeptical of grounding our moral decisions simply on emotions like anger or grief. Since we can only fully feel some of the relevant emotions at any one time, our emotional responses at a given moment are usually not responsive to the full range of relevant considerations. You can feel outrage about an injustice, or hopeful at political progress that has been made, but you can’t feel both at the same time to the appropriate extent. Given that psychological reality, basing policy recommendations on emotions of rage or optimistic hope is likely to be morally dangerous.

This does not mean that emotions should play no role in our moral decision-making. Emotions are important. Instead, what this means is that we need to be extremely cautious when acting on our emotional reactions. We should always bear in mind that our emotions are unlikely to reflect the full range of complexities in any given case.

Scarce Goods and Rationalization

photograph of crowded waiting room

A friend of mine recently posted on Facebook asking for insight into “the ethics of (1) getting vaccinated as quickly as possible for the common good and (2) not using privilege to be vaccinated ahead of vulnerable people.”

Many responded with arguments along the lines of, “by getting a vaccine you are contributing to herd immunity, so it is a good thing to do.” Others linked to this New York Times ethics column in which Dr. Appiah argues that the advantage of easy management means that people should get vaccines when they can get them (and not worry too much about whether others might need them more), and further that by getting the vaccine “you are contributing not just to your own well-being but to the health of the community.”

Another friend recently mentioned in a group chat how she was able to get a vaccine that, technically, she did not yet legally qualify for (since Florida is only officially vaccinating K-12 educators, and not college instructors). I demurred, saying it’s important for healthy youngish people like us to wait our turn, and a third friend argued that even if you are not the ideal person to get the vaccine, you should still get it if you can, since more vaccinations are better than fewer and you can help protect others by getting vaccinated.

Assessing the Arguments

The Herd Immunity Argument — The thing that unites all these replies is the thought that by getting the vaccine you are helping to protect others. But in these cases, that is probably wrong. I want to be clear. I am not denying that more people being vaccinated contributes to herd immunity. What I am denying is that my friends getting a vaccine contributes to more people being vaccinated.

Right now the vaccines are a scarce good. If I do not get a vaccine, someone else will get that particular injection. As such, in getting a vaccine I have not actually done anything to increase the percentage of the population that is vaccinated; I have simply made sure that I, rather than someone else, am part of that vaccinated percentage.

The Waste Rejoinder — Some commenters on Facebook mentioned that some vaccines go to waste. But for the most part the vaccine distribution process has sorted itself out. While a good number of vaccines were being wasted in January, we are now in mid-March and the number wasted is utterly tiny in comparison to the number used. The odds that, if you do not get a vaccine, that particular dose will end up in the trash are extraordinarily small.

So sure, if you happen to be in a situation where the alternative to not getting a vaccine is throwing it away, then get the vaccine. But unless you know that to be the alternative, you should not think that in getting the vaccine you are heroically contributing to solving the problem.

Speed of Distribution — While no one in the threads mentioned this argument, there is something that could be said for skipping the line. Even if someone else would have gotten that same vaccine, it’s possible it would have taken longer for the vaccine to get in someone’s arm. Now, it’s true that at this point the states are not sitting on nearly as large a vaccine stockpile as they were originally. But it is still the case that some vaccines, while they are not being wasted, are taking longer than ideal to end up in someone’s arm. Indeed, this seems to be happening where I am in Tallahassee.

But the problem is, this was not the situation either of my friends was in. Sure, this situation might be more common than the wasted vaccine situation. But it will still be rare (and indeed, markets being what they are, such slack usually does not last very long; soon after that article about Tallahassee was published, demand at the site increased).

The Lesson

Now, I don’t want to argue that it is wrong to get the vaccine if you have the chance to do so. Probably sometimes it’s right and sometimes it’s wrong. As is often the case, it all depends on the details.

Instead, I want to suggest that we need to be careful to not convince ourselves that our selfish acts serve an altruistic motive. I think it’s probably ok to be somewhat selfish. It’s reasonable to care more about saving your own life than the life of a stranger (even Aquinas agreed as much). But I think when you are prioritizing your own good over the good of others, it’s important to recognize that that is what you are doing.

So if I get the vaccine perhaps that is ok. But I should recognize that if I get the vaccine someone else will not. I should also recognize that since I am young and healthy, that other person probably would have gotten more value from the protection than I would. The question, as far as altruism goes, is how I compare to the average person getting a vaccine these days. Am I younger than the average person who would get the vaccine instead of me? Then probably it is better that the other person gets it. Am I healthier than the average person who would get the vaccine instead of me? Then probably it is better that the other person gets it.

The thing is, we have strong biases in favor of rationalizing our own selfish acts. Thus, we often look for reasons to think doing the thing we want is also good in general. This is a very dangerous tendency. People often accept really bad arguments, if those really bad arguments help them think well of their own selfish activity. This should scare us, and make us all a little more self-critical about our moral reasoning anytime we come up with plausible reasons for thinking the thing we want to do is also the best thing for the world as a whole. Remember, we all have a tendency to think that way, even when the act is merely selfish.

Intuitions and the Duty to Aid

photograph of a cluster of traffic lights sending mixed signals

Many philosophers have considered whether folks who are better off have a moral obligation to help those who are desperately poor through no fault of their own. This issue is especially salient at the moment due to the economic fallout from the coronavirus pandemic and ensuing lockdowns across the globe. The result is that the global poor are hardest hit, and the decades-long trend toward eradicating poverty is reversing. We are thus left to wonder what, if anything, we owe the global poor.

Here enters the work of perhaps the most famous contemporary philosopher, Peter Singer, who argues that many folks in rich countries — like the USA, Japan, and Germany — have a moral obligation to donate a large amount of their income to the global poor because they can afford to without falling into poverty themselves. He motivates this position by an appeal to a simple thought experiment:

“On your way to work, you pass a small pond. … [You] are surprised to see a child splashing about in the pond […] it is a very young child, just a toddler, who is flailing about, unable to stay upright or walk out of the pond. […] The child is unable to keep his head above the water for more than a few seconds at a time. If you don’t wade in and pull him out, he seems likely to drown. Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago, and get your suit wet and muddy.”

Singer thinks we have a moral obligation to save the child based on the strong intuition that it just seems like the right thing to do — it wouldn’t cost us much to save the child, but it would benefit the child significantly. We can formulate Singer’s argument like this:

  1. Suffering and death from lack of food, shelter, and medical care is bad.
  2. If it is in your power to prevent something bad from happening, without sacrificing anything nearly as important, it is wrong not to do so.
  3. By donating to aid agencies, you can prevent suffering and death from lack of food, shelter and medical care, without sacrificing anything nearly as important.
  4. Therefore, if you do not donate to aid agencies, you are doing something wrong.

The first step of the argument seems obvious: the pain that comes from failing to have your basic needs met is obviously bad. We wouldn’t go to such lengths to try to prevent these things in our own lives if we thought otherwise. And we will grant, for the sake of argument, Singer’s third step: donating to trustworthy and competent aid agencies allows us to mitigate the harm that comes from people being unable to meet their basic needs.

Our focus here is on the second step of the argument. Do we actually have a moral obligation to the poor, or does it merely seem like that? Should we trust our intuition to save the child? Based on recent experimental evidence from psychology and economics, we should be skeptical of our intuition to save the child. Allow me to make the case.

We have solid experimental evidence from psychology and economics that people care how they look to others. As a species, humans are highly cooperative and social, and depend on help from others to survive — so much so that the ancients believed banishment from society worse than death, as it entailed not only death, but prolonged suffering as well. But relying on cooperation from others makes us susceptible to free-riders: individuals who enjoy the fruits of everyone else’s labor, while contributing less than their fair share. We thus use reputations to distinguish the trustworthy from the untrustworthy; we don’t want to cooperate with someone who might defect, especially in situations with high stakes — e.g., it matters who we choose to have children with. And since we cannot read the minds of others, we must rely on a high-fidelity signal of trustworthiness.

Here we need an example of a high-fidelity signal. The classic example is peacocks: their colorful feathery display is a costly signal to prospective mates that, even with an amplified risk of predation, the male can still thrive — a signal that indicates fitness and is hard to fake. Or consider the ability to lift two hundred pounds over one’s head as a reliable signal of strength: one cannot lift that much weight without possessing enough strength. Colorful feathers on a peacock would be a fatal liability if the bird weren’t healthy enough; someone simply wouldn’t be able to lift that much weight if they lacked sufficient strength. If we are to rightly trust others, we need a signal of trustworthiness that would be hard to fake by those who are untrustworthy.

One way to signal trustworthiness and communicate that one is a suitable partner for collaboration is through uncalculating cooperation: helping someone without waiting to consider whether the benefits of helping exceed the costs of doing so. By not calculating the advantages and disadvantages of pitching in, we signal to others that we can be trusted as a potential cooperator: we aren’t participating simply because we’ve determined that it’s in our interests. When we cooperate without doing the cost-benefit analysis, we signal we’re committed to the joint venture not merely because it would benefit us. As the authors of a recent study explain:

“To provide empirical support for this account, we experimentally test the hypothesis that people avoid calculating the costs of cooperation because of reputational concerns. Across two experiments, we demonstrate that when people’s decision-making processes are observable to others, they behave in a less calculating way. This observation suggests that they use uncalculating cooperation to gain reputational benefits.”

We often cooperate with others without calculating the cost. We grant friends’ requests without making inquiries about the time and trouble the request might take; we come to the aid of strangers in distress; we adhere to strict moral and religious precepts that are other-regarding, whatever the costs and benefits. These types of behaviors make sense once we frame them in terms of reputation: we cooperate without calculating because of how it makes us look to others. So it looks like we might have the intuition to save the drowning child because it makes us look good to others — consider the social pressure one would feel refusing to help a drowning child in the presence of onlookers. Just the thought of refusing to help seems unconscionable.

The strong intuition to save the drowning child looks like a product of our evolutionary history as a social, cooperative species and the need to look good to others for the sake of our survival. This should make us doubt that our intuitions in the drowning child case track the moral facts; it looks like we have these intuitions for evolutionary reasons rather than anything having to do with any moral obligations — we may only have such intuitions because they aid in our survival and reproduction, not because there is actually a moral obligation to save the child.

Someone may, of course, reply that we could have a strong intuition to save the drowning child both because it is morally required and for reputational and evolutionary reasons. The trouble here, though, is that we simply cannot rule out that we have the intuition only because it helps us survive and reproduce. By analogy: it could be that the lottery ticket you hold in your hand is a winner or a loser; you simply do not know, even though it is highly likely the ticket is a loser given the odds. However, since you cannot rule out that the ticket is a winner — this is a distinct possibility — you don’t know the ticket is a loser. By similar logic: we cannot rule out that our drowning child intuitions are only an evolutionary by-product, so we should doubt we know that we have a moral obligation to save the child — and, of course, by extension, that we have a moral obligation to donate to the poor.

Philosophical Insights from Great Literature

photograph of tall stack of children's books

I expect that some of you are feeling a little worn down; it has been a tough year. And so I want to try something a little lighter than usual: I want to talk about some of the philosophical lessons we can learn from great literature.

Of course, by great literature I mean great children’s books.

I should perhaps mention that the inspiration for this post is a passage in chapter four of G.K. Chesterton’s book Orthodoxy. There Chesterton notes, in passing, that many great ethical principles can be extracted from children’s fairy tales.

“But I deal here with what ethic and philosophy come from being fed on fairy tales. If I were describing them in detail I could note many noble and healthy principles that arise from them. There is the chivalrous lesson of “Jack the Giant Killer”; that giants should be killed because they are gigantic. It is a manly mutiny against pride as such. For the rebel is older than all the kingdoms, and the Jacobin has more tradition than the Jacobite. There is the lesson of “Cinderella,” which is the same as that of the Magnificat— EXALTAVIT HUMILES. There is the great lesson of “Beauty and the Beast”; that a thing must be loved BEFORE it is loveable. There is the terrible allegory of the “Sleeping Beauty,” which tells how the human creature was blessed with all birthday gifts, yet cursed with death; and how death also may perhaps be softened to a sleep.”

So let us extend this analysis: what are some philosophical lessons we can learn from great literature?

To make things more difficult, let’s also limit ourselves to non-obvious lessons. Obviously, we can learn from The Little Engine That Could about the value of optimism and hard work. And obviously we can learn from Where the Wild Things Are lessons about emotional management and community. But those books are, at least in part, written to teach us those lessons. What I’m looking for are hidden lessons, insights deeper than the author’s own awareness.

For example, I don’t know if Crockett Johnson read much of Immanuel Kant’s Critique of Pure Reason when writing Harold and the Purple Crayon. But whether he had Kant in mind or not, I know of no story that so well captures the Kantian idea that the world we experience, even up to and including space and time, is a construction we create out of the order our minds impose on the world.

Nor are the insights of that book limited to metaphysics. Consider the profound psychological insight revealed when Harold, after drawing up an apple tree bursting with fruit, frightens himself away from the good things he created by enlisting a frightening dragon to guard the tree. How often, I wonder, do we fence ourselves off from goods simply because we overreact to the fear that others may take something we think our own?

Or perhaps some of you are looking for a more political lesson to draw from the book. And so we reach the point of the book where Harold is looking for his own window: “he couldn’t think where it might be. He decided to ask a policeman.” But of course “the policeman pointed the way Harold was going anyway,” and so Harold learnt, even from a young age, the way that police power is employed to maintain the status quo, even an unjust one.

But let us move on from Harold and the Purple Crayon. 

There is the profound lesson of If You Give a Mouse a Cookie; that your reward for doing a good deed is not usually the relaxation of conscience, but rather the deepening of charity and so a dawning appreciation of the higher and harder acts of love still required of you. And then there are the prosaic lessons of Goodnight Moon: first, on the value of attending to the commonplace goods of the everyday; and second, on the value of taking goodbyes and goodnights seriously, since you never know for certain that you will both wake again in the morning.

There are some obvious Aristotelian themes on temperance in The Very Hungry Caterpillar. (What with how unhealthful attempts to indulge appetite, rather than leading to relief, merely result in the appetites growing all the more persistent; and how the caterpillar is only sated when he eats his proper food of a single leaf.) But there are also deeply hidden philosophical nuggets. For instance, do you remember how the pages of that book grew with each passing day? You could see what foods were coming up, but not yet see their quantity nor read what the effects of indulgence would be. What a profound commentary on what happens to us when we try to pursue future goods. We see, at least somewhat, the good we are after. But we are insensitive to quantity, and rarely notice the unfortunate unintended consequences of the pursuit of the apparent good.

Of course, sometimes the philosophical lessons of children’s books are more abstract and less practical. Thus, in Chapter III of A.A. Milne’s Winnie-the-Pooh (entitled “In Which Pooh and Piglet Go Hunting and Nearly Catch a Woozle”) Pooh starts tracking what he thinks is a Woozle, only to find as he tracks that a second Woozle seems to have joined the first. Piglet joins the hunt, and before long yet another type of animal seems to have joined the pack they are tracking. Eventually Pooh realizes that he has “been Foolish and Deluded,” because of course it was his own tracks that he was chasing.

This is, of course, almost identical to the story that John Perry tells in his famous and rightly acclaimed “The Problem of the Essential Indexical”:

“I once followed a trail of sugar on a supermarket floor, pushing my cart down the aisle on one side of a tall counter and back the aisle on the other, seeking the shopper with the torn sack to tell him he was making a mess. With each trip around the counter, the trail became thicker. But I seemed unable to catch up. Finally it dawned on me. I was the shopper I was trying to catch.”

Perry might extract more philosophical insights about the essential indexical than our foolish and deluded Pooh does. But I’m still glad to know that our bear of very little brain came up with the raw material for the philosophical insight 53 years before Perry did.

I hope you’ve found these examples entertaining. There are dozens, maybe hundreds, more hidden philosophical lessons to be drawn from children’s books, but I’d rather hear the examples others come up with. Send me an email with your own favorite examples of hidden insights from great literature; I’ll eventually make a follow up post either here or at least on my personal blog.

But What Are We Doing Here?

Before I end this post, I just want to head off a certain potential skepticism. Some of you may doubt that it is the business of the ethicist to be extracting lessons from children’s books. Philosophers are trying to make great new ethical discoveries, seeing ethical truths others have missed! Why bother with moral insights so humdrum that they have trickled down to the tales we tell kids?

Well, perhaps there is sometimes a role for ethicists to find new ethical truths, identify unnoticed principles, or apply principles in original ways. But I don’t think that is what ethicists should usually be doing. Consider, for instance, this footnote in Immanuel Kant’s Critique of Practical Reason:

“A reviewer who wanted to say something censuring this work hit the mark better than he himself may have intended when he said that no new principle of morality is set forth in it but only a new formula. But who could even want to introduce a new principle of all morality and, as it were, first invent it? Just as if, before him, the world had been ignorant of what duty is or in thoroughgoing error about it.”

Kant’s point, and I think it is a good one, is that ethics is not like natural science, at least in this one respect: ethical truth is something to which humanity has always had access just in virtue of being human. We do not need to discover ethics the way we discover that force equals mass times acceleration. Rather, we need to recognize and remember those ethical truths which, in a sense, we already knew. Thus, it is the job of the ethicist not to invent new principles, nor even to discover unknown truths, but to give us new and clearer formulations of those principles which we somehow had already.