
Ethics Laid Bare

Part of the reason why HBO’s new series The Last of Us is so impactful — and the reason why the video game it is based on won dozens of awards — is that it shows how quickly it can all fall apart.

Early in the first episode, we find ourselves in the middle of an unfolding catastrophe. A fungus has evolved a way to invade the human nervous system, causing those it infects to become manically violent; and our protagonist, Joel, along with his brother Tommy, has just narrowly saved his young daughter Sarah from a neighbor who was among the first to be infected. With a shocked Sarah still processing the violence she has just witnessed — her father, armed with a wrench, killing their infected neighbor to save her — the group scrambles into a truck and frantically tries to flee.

With Tommy at the wheel and police cars screaming by, Sarah pleads for information that neither Joel nor Tommy can provide. The radio and cell towers are out, and the military, believing that the outbreak began in the city, has blocked off the main highway. But for all of the uncertainty, our protagonists do know that something is horrendously wrong: having seen their infected neighbor, they speed down country roads, past homes engulfed in flames, trying to reach a highway.

As they drive, our perspective shifts to that of Sarah, who sees a family come into view: a man, woman, and small child. With their van halfway into the ditch beside the road, the man, hands raised, runs into the street pleading for help. Tommy begins to downshift, and as we hear the brakes screech, Joel protests; Tommy mentions the child in the woman’s arms, but Joel reminds him that they have a child to consider too, and without allowing space for further discussion, tells Tommy to keep driving. Sarah tries to interject, saying that the family could ride in the back of the truck, but Tommy acquiesces to Joel’s request, and Sarah watches the still-pleading family through the rear windshield.

As we hear the pleading fade, Joel says that someone else will come along. The camera turns to Sarah, whose eyes well with tears.

* * *

It’s easy, in the study of ethics, to abstract oneself from the context of lived decision-making: for the vast majority of moral decisions, we do not make our choice with a robust understanding of the utility at stake, or of whether our will could simultaneously become universal law. When philosophers write papers on ethics, we have a luxury which many do not: the time to think deeply and clearly.

This is part of why, among many other reasons, dystopian stories like The Last of Us can be so emotionally compelling. The world is crashing down around our protagonists, and over the next 15 minutes of runtime, the devastation and violence which they witness are suffocating. There isn’t time to breathe or think — only to act. Shortly after encountering the family on the road, our protagonists try to navigate a town center, where panicked people have flooded out of burning buildings and into the street among mobs of the infected. Yet, even as the apocalypse arrives in one of the most chaotic and frenzied forms imaginable, we still see desperate gasps of moral virtue: through the windshield of the truck, we glimpse people carrying the injured and tending to the wounded.

This is why, even in all of the madness of the show’s first episode, the encounter on the road can be so deeply challenging to our conscience. This is ethics laid bare: when there is no one to call and no time to think, what do you do? Is your inclination like Tommy’s — to stop the truck at the risk of your loved ones? Do you think like Joel, choosing to protect those loved ones while hoping that someone else will take the risk? Or perhaps you’re like Sarah, desperately trying to find a third path and break free from the chokehold of the choice. Maybe you relate to those who stopped to help the wounded; maybe you relate to those who ran. Maybe you relate to both.

I do not believe that there is a clear answer to the question of whether our protagonists should have helped the family. I can see the deontologist’s argument that failing to render aid to those in need violates some form of moral responsibility, but I can also see that the deontologist has room for Joel’s view, holding that responsibilities to one’s family supersede. The utilitarian would likely point out that, had they taken the family into the nearby town, the family’s chances of surviving the chaos would have been dim to pitch dark. Or, maybe things would have turned out differently.

The encounter on the road is not challenging because of the various approaches which could be used to assess its moral dimensions; rather, it’s challenging because it strips us of the information and time required to think clearly about what’s at stake — and reveals what is underneath. We have no sense of the ethical lives of these characters prior to this moment, but their reactions to it show their dispositions, and lay their values and virtues bare. The encounter on the road speaks to a fundamental insight: that, at times, ethics is less what you think and more who you are, and how that is reflected in what you do.

It is not likely, I admit, that we will find ourselves in the middle of a zombie apocalypse. But the lesson of this encounter is nonetheless valuable, especially when extrapolated to our everyday reality. Sometimes the challenges we face are extraordinary: no matter one’s role — whether healthcare provider, freight yard worker, beach-goer, or bystander — we may all find ourselves in the position one day to make a choice resembling Joel’s, and have our own virtues laid bare. But even in the quotidian, in the ways in which you treat those around you, in those whom you choose to acknowledge, in your actions and your omissions, in what you choose to consume, in the choice to stand up when others won’t — in each of these choices is a reflection of who you are.

* * *

The choice presents itself: as you become conscious of the dilemma, precious seconds pass. The need may be great, but so is the risk — and the decision is yours. With no time to think, what remains is you.

What do you do?

COVID-19 and the Ethics of Belief


The current COVID-19 pandemic will likely have long-term effects that are difficult to predict. This has certainly been the case with past pandemics. For example, the Black Death may have left a lasting mark on the human genome. Because of variations in human genetics, some people have genes which provide an immunological advantage against certain kinds of diseases. During the Black Death, those who had such genes were more likely to live, and those without them were more likely to die. For instance, a study of Roma people, whose ancestors migrated to Europe from India one thousand years ago, revealed that those who migrated to Europe possessed genetic differences from their Indian ancestors relevant to the immune system’s response to Yersinia pestis, the bacterium that causes the Black Death. It’s possible that COVID-19 could lead to similar kinds of long-term effects. Are there moral conclusions that we can draw from this?

By itself, not really. Despite this being an example of natural selection at work, the fact that certain people are more likely than others to survive certain selection pressures does not indicate any kind of moral superiority. However, one moral lesson that we could take away is a willingness to make sure that our beliefs are well adapted to our environment. A gene, for example, is neither good nor bad in itself but becomes good or bad through the biochemical interactions of the organism with its environment. Genes that promote survival demonstrate their value by being put (or being capable of being put) to the test of environmental conditions. In the time of COVID-19, one moral lesson the public at large should learn is to avoid wishful thinking and to demonstrate the fitness of our beliefs by putting them to empirical testing. The beliefs that are empirically successful are the beliefs that should carry on and be adopted.

For example, despite the complaints and resistance to social distancing, the idea has begun to demonstrate its value by being put to the test. This week the U.S. revised its model of projected deaths down from a minimum of 100,000 to 60,000, with the change credited to social distancing. In Canada, similar signs suggest that social distancing is “flattening the curve,” reducing the number of infections and thus the strain on the healthcare system. On the other hand, stress, fear, and panic may lead us to accept ideas that are encouraging but not tested.

This is why it isn’t a good idea to look to “easy” solutions like hydroxychloroquine as a treatment for COVID-19. As Dr. Fauci has noted, there is no empirical evidence that the drug is effective at treating it. While there are reports of some success, these are merely anecdotal. He notes, “There have been cases that show there may be an effect and there are others to show there’s no effect.” Any benefits the drug may possess could be offset by a number of unknown factors. Variations among the population may exist and so need to be controlled for in a clinical study. Just as certain genes may only be beneficial under certain environmental conditions, the same may be true of beliefs. An idea may seem positive or beneficial, but only under certain conditions. Ideas and beliefs need to be tested under different conditions to see whether they hold up. While studies are being conducted on hydroxychloroquine, they are not finished.

Relying on wishful thinking instead can be dangerous. The president has claimed that he downplayed the virus at first because he wanted to be “America’s cheerleader,” but being optimistic or hopeful without seriously considering what one is up against, or while ignoring the warning signs, is a recipe for failure. Optimism that an outbreak wouldn’t occur delayed government action on social distancing measures in Italy and in the U.S., and as a result thousands may die who might not have, had the matter been treated more seriously sooner.

As a corollary to the last point, we need to get better at relying on experts. But we need to be clear about who has expertise and why. These are people who possess years of experience studying, researching, and investigating ideas in their field to determine which ones hold up to scrutiny and which ones fail. They may not always agree, but this is often owing to disagreements over the assumptions that go into a model, or because different models are not measuring exactly the same thing. This kind of disagreement is okay, however, because anyone is, in principle, capable of examining those assumptions and holding them up to critical scrutiny.

But why do the projections keep changing? Haven’t they been wrong? How can we rely on them? The answer is that the projections change as we gather more data. But this is far preferable to believing the same thing regardless of changing findings. It may not be as comforting as a single, specific, unchanging answer, but these are still the only ideas that have been informed by empirical testing. Even if an expert is proven wrong, the field can learn from those mistakes and improve its conclusions.

But it is also important to recognize that non-medical experts cannot give expert medical advice. Even having a Ph.D. in economics does not qualify Peter Navarro to give advice relating to medicine, biochemistry, virology, epidemiology, or public health policy. Only years of experience in a field allow one to weigh the information necessary for solving its technical problems and putting forward solutions best suited to survive the empirical test.

Perhaps we have already seen evidence of a broad shift in thinking. There are estimates that a vaccine could be six months to a year away. Polling has shown a decrease in the number of people who would question the safety of vaccines. So perhaps relative success in ending the pandemic will inspire new trust in expert opinion. Or maybe people are just scared and will later rationalize it.

We will need to adopt the habit of putting our beliefs to the empirical test sooner rather than later; the moral consequences of doing so are very serious right now. If and when a vaccine comes along for COVID-19, the anti-vaccination debate may intensify. And, once the COVID-19 situation settles, climate change remains an ongoing issue that could cause future pandemics. Trusting empirically-tested theories and expert testimony more, and relying less on hearsay, rumor, and fake news, could be one of the most important moral decisions we make moving forward.

Disagreements in Ethical Reasoning: Opinion and Inquiry


With the school year about to begin, plenty of students entering colleges and universities will never have taken an ethics course before. When I teach introductory philosophy courses, the common response I get when I ask students about ethical issues is “it’s all a matter of opinion.” This is part of a general attitude that when it comes to ethics, no judgment is better than any other. This habit of thinking can be so hard to break that even after an entire semester of talking about moral problems and debating the merits of different moral theories, students will still report that it is all just a matter of opinion. Why is this a problem? The habit of thinking that ethics is just a matter of opinion ultimately serves as a roadblock to ethical thinking and moral inquiry.

Moral relativism can be a complicated topic in philosophy, but for our purposes we can define it as the view that moral judgments are not true or false in the same way as factual judgments. Instead, morality is dependent on groups or cultures, each with their own incompatible ways of understanding the world. J. David Velleman has argued that based on data collected from various communities, different communities understand moral actions differently. Jesse Prinz argues that emotional sentiment plays a strong role in moral judgments; an action is wrong if it stirs a negative sentiment. Moral relativism is also often connected to tolerance; if there are no universal moral principles, the moral principles of one culture are not objectively superior to others so we should be tolerant of other cultural practices.

Relativism would seem to offer support for the idea that ethics is all a matter of opinion. Being tolerant of other moral worldviews is generally considered a good thing. Moral issues often strike different emotional chords with people, and it can seem disrespectful to tell people that they are wrong. If ethics is about how we feel about moral problems, then it seems hard to claim that it can rise above mere opinion. However, the view that ethics is all just a matter of opinion and relativism are not necessarily the same. If one believes that morality is dependent on culture, that would not warrant the claim that morality is all a matter of opinion, especially where a single person is concerned. Littering is considered a cultural faux pas in North America, so an individual could not claim that littering is morally okay merely because it is their personal opinion that it is.

Indeed, while the justifications for the view that ethics is just a matter of opinion and for moral relativism can overlap, the position that ethics is a mere matter of opinion (especially personal opinion) is especially problematic. For starters, one can be tolerant of other cultures and their moral views without believing that ethics is mere opinion. For instance, a moral pluralist may claim that there are objectively correct and incorrect ways to respond to moral problems, and that moral answers can vary depending on local concerns. Second, while ethics does contain an emotional component, we are not therefore obligated to accept that ethics is merely emotional. Just because you or many others feel something about a moral issue does not mean that the feeling justifies any possible response.

The biggest problem, however, with the view that ethics is merely a matter of opinion is that it often becomes an excuse not to think too deeply about moral problems. Consider this example: you have a strong desire to help others and are trying to determine which charities you wish to donate to, and how much. You could investigate how effective each charity is, who may need the money most, and how much you wish to give relative to your other financial needs and desires. But instead, you decide to take your cash and shred it.

Certainly, we can debate what might be the right thing to do in this situation, but it would take a fairly idiosyncratic person to decide that shredding money was the moral thing to do. We may not all agree on what the right thing to do is, but we can establish a fairly broad consensus on what the wrong thing to do is. Someone who is genuinely interested in helping others and is genuinely conflicted about how to do it is not justified in shredding their money. Objectively, this is because shredding it doesn’t solve their own moral problem. In other words, mere opinion is insufficient to justify any possible answer.

Now let’s say that in the same situation I decide that the most moral thing to do is to give money to an animal charity. You may disagree and opt instead for a charity that alleviates hunger. Should we conclude that our disagreement is a mere matter of opinion? Two moral people can come to different conclusions, each trying to secure different goods and avoid different problems. Each can also recognize the moral reasoning of the other as legitimate without having to conclude that the other was morally wrong for doing what they did. This is not merely because the two have a difference of opinion; it is because each appreciates the moral reasoning of the other and can recognize the legitimacy of other courses of action. However, they need not recognize the morality of a mere opinion that hasn’t been thought through. Both could agree that shredding your money is a morally wrong action, and both could recognize the importance of moral reasoning as a means of revising and refining a proposed course of action.

American philosopher Charles S. Peirce believed in the importance of inquiry for settling disagreements and disputes of opinion, not only between each other but within ourselves. If we could only inquire long enough, he argued, we could test our ideas in practice. Because of this, he claimed that part of the bedrock of reasoning is that we do not take steps to block the path of inquiry. The instinct to look at any moral problem and claim that it is all a matter of opinion does exactly this. The immediate response that the answer to any moral problem is a matter of opinion cuts off inquiry before it begins. If we accept that there is no better answer, we will not seek one. It is an excuse not to look for a better answer, not to rely on our reasoning, not to discuss our proposed solutions with others, and not to seek consensus by refining our ideas.

The notion that the answer to any moral problem is a matter of opinion, and that this is all there is to say about it, is intellectual laziness. If you are a new student taking your first ethics class, I urge you to look beyond such an attitude and to inquire further. We may end up concluding that our answers are only matters of opinion, but we have no justification for starting there. Instead, we may find that we have missed several better responses that can only come from a willingness to inquire further.