
What If You Aren’t Sure What’s Moral?

photograph of a fork in the path

Today, I woke up in a soft bed in a heated apartment. I got up and made full use of the miracle of indoor plumbing before moving on to breakfast. Pouring myself a bowl of vitamin-enriched cereal and milk (previously delivered to my doorstep) I had to admit it: modern life is good.

As I opened my laptop, my gratitude for modernity diminished as quickly as my browser tabs multiplied. Our phones and laptops are not just tools. They are portals to another world — a relentless world of news, opinion, and entertainment. We’re living through the age of information overload. On average, we now consume 174 newspapers’ worth of information each day. “I’ve processed more information in the last 48 hours than a medieval peasant would in a lifetime,” reads a well-liked tweet.

And yet, amid this tsunami of information, we seem to have less certainty than ever. Controversy and discord reign. There is little agreement about basic facts, let alone about what is to be done. Is it time to lift COVID-19 restrictions yet? Is American democracy at risk of failure? Are plastics killing us? Should we allow genetically modified foods? Will climate change be simply bad or disastrous? I have my opinions, and I’m sure you have yours, but do any of us know the answers to any of these questions with certainty?

As well as uncertainty about the facts, we continually find ourselves facing moral uncertainty. Moral theories and views divide both public and philosophical opinions. They defy consensus. Is euthanasia morally permissible? Is abortion? Eating meat? Amid our unprecedented access to a wide range of moral arguments and views, all competing for our allegiance, we are left to come to our own moral conclusions. If we are being brutally honest with ourselves, we probably aren’t absolutely certain about all of our moral views.

In these conditions, moral uncertainty is the norm. But, as the Samuel Beckett line goes, “You must go on.” Even if you don’t know for sure what the right moral view is, reality refuses to stop the clock to let you figure it out. You have to act one way or another, despite your moral uncertainty. Being uncertain doesn’t take you off the hook of moral responsibility. Neither does refusal to act. As climate change illustrates, refraining from making decisions can be just as disastrous as making the wrong ones.

So, how can you go on under these conditions of moral uncertainty? Let’s take a concrete example. What if you think eating meat is morally permissible, but you’re not totally sure? If you’re willing to admit there’s some chance you could be wrong about the morality of vegetarianism, what should you do? Keep eating meat? Or give it up?

The philosopher William MacAskill argues that if you are morally uncertain about vegetarianism, you should give up eating meat. In fact, even if you think there’s only a 10% chance that vegetarianism is the right moral view, you should still give up meat.

MacAskill thinks there’s an asymmetry in the moral risks you’re running. “If you eat veggie and eating meat is permissible, well, you’ve only lost out on a bit of pleasure,” says MacAskill, “But if you eat meat and eating meat is impermissible, you’ve done something very wrong.” Maybe you should give up a bit of pleasure to avoid the risk of doing something really morally terrible, even if the probability that you would be doing something really morally terrible is relatively low. “The morally safe option,” claims MacAskill, “is to eat vegetarian.”
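MacAskill’s asymmetry can be sketched as a simple expected-value comparison across moral theories. The credences and “moral value” numbers below are invented for illustration; MacAskill gives no such figures:

```python
# Illustrative sketch of MacAskill-style reasoning under moral uncertainty.
# All credences and moral payoffs here are invented for illustration only.

def expected_moral_value(credence_veg_is_right, values):
    """Weigh each action's moral value under each theory by our credence in that theory."""
    p = credence_veg_is_right
    return {
        action: p * values[action]["vegetarianism_true"]
                + (1 - p) * values[action]["vegetarianism_false"]
        for action in values
    }

# Hypothetical payoffs: eating meat is very wrong if vegetarianism is the
# correct view; abstaining costs only a bit of pleasure either way.
values = {
    "eat_meat":   {"vegetarianism_true": -100, "vegetarianism_false": 0},
    "eat_veggie": {"vegetarianism_true": 0,    "vegetarianism_false": -1},
}

# Even with only 10% credence in vegetarianism, abstaining comes out ahead.
emv = expected_moral_value(0.10, values)
print(emv)  # eat_veggie carries far less expected moral risk than eat_meat
```

The asymmetry in stakes, not the probability, is doing the work: a small chance of a grave wrong outweighs a near-certain loss of a minor good.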

We can apply MacAskill’s approach to other problems where we face moral uncertainty. Peter Singer famously argued that failing to donate money to help alleviate suffering in the developing world is just as morally wrong as letting a child drown in front of you. Most of us seem to think that Singer’s moral claims are too strong; we don’t think we are morally obligated to donate to charities, even if we think it is morally good – beyond what we are obligated to do – to donate. However, it seems at least possible that Singer is right. If he is right, then not giving any money would be very wrong, as wrong as letting a child drown. But if Singer is wrong, then all I’d lose by donating is a bit of money. Given the moral risk, the appropriate choice seems to be to donate some money to charity.

These two cases might make MacAskill’s approach look appealing. But it can also get strange. Imagine you really want to have a child. You are near-certain that having a child is morally permissible. In fact, you think having a child, bringing a happy person into the world, would be a serious moral good. You also think there’s a tiny (less than one percent) chance that anti-natalism is true. According to the version of anti-natalism you’re considering, by having a child you’re doing something morally terrible — bringing into existence a chain of human suffering that will continue for millennia. If anti-natalism says that having a child is morally wrong enough, then it would be less morally risky for you to simply not have a child. But should you really not have a child in such a case? Even though you believe with near-certainty that having a child would be a morally good thing? That seems like a strange conclusion.

The ethicists Johan Gustafsson and Olle Torpman give an alternative framework for thinking about how we should act under moral uncertainty. When we think of good, moral people, we generally think they are conscientious; they are typically true to what they believe is right. To put it another way, we think that a moral, conscientious person won’t do what they sincerely believe to be wrong. In the child example, your sincere, near-certain belief is that it is permissible, perhaps even a good thing, to have a child. MacAskill’s approach to dealing with moral uncertainty seems to say you ought not to have a child. But how can a moral theory that you don’t believe in matter more than the one you do believe in? For these reasons, Gustafsson and Torpman propose a much simpler approach: act in accordance with the moral view that you are most confident in. In this case, that would mean you should have the child that you want.
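In contrast to expected-value weighing, Gustafsson and Torpman’s rule ignores the stakes entirely and follows credence alone. A minimal sketch, with invented credence figures:

```python
# Sketch of the "act on your most credible moral view" rule, in the spirit of
# Gustafsson and Torpman. The credence numbers are purely illustrative.

def most_confident_view(credences):
    """Pick the moral view with the highest credence, ignoring the stakes."""
    return max(credences, key=credences.get)

credences = {
    "having_a_child_is_permissible": 0.99,  # near-certain
    "anti_natalism": 0.01,                  # tiny chance it's true
}

print(most_confident_view(credences))  # having_a_child_is_permissible
```

Because the rule never multiplies credence by moral stakes, a low-probability theory can never override the view you actually hold, however dire its verdicts.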

This simpler approach to dealing with moral uncertainty might seem straightforward and convincing. But I invite the reader to go back and apply Gustafsson and Torpman’s approach to the two cases discussed earlier, of charity and vegetarianism. Arguably, their approach gives less convincing advice in these cases.

How we should act given moral uncertainty is an important question for the discordant moment in which we are living. Whether we have the correct answer to this question remains far from clear.

On Speaking Up in Polite Company

photograph of place settings at table for Christmas dinner

One of the less joyous aspects of a typical holiday season is breaking bread with family members whose views one finds not merely wrongheaded, but abhorrent. When they choose to air those views around the table, one faces a dilemma: speak up or quietly endure? As with so many choices we encounter in our daily lives, philosophy can help us sort out the good arguments for acting from the bad.

There are three basic positions one could take on this issue: that we always ought to speak up, that we never ought to speak up, and that we sometimes ought to speak up. I will consider these positions in turn, arguing that the last is probably the correct one.

There are at least four arguments for always speaking up. The first is that if you don’t speak up, you are a hypocrite. The second is that if you don’t speak up, then you are choosing to do what is “polite,” rather than what is morally required. But the norms of politeness are always trumped by moral norms, so one ought to always speak up. The third argument is that we are naturally inclined not to speak up, so the best policy — the policy that will ensure that we do the right thing most often — is to always speak up. Finally, the fourth argument is that it is always possible to speak up diplomatically, thereby mitigating any harm that might be done by speaking up.

The hypocrisy argument leads with a false premise and then begs the question. It is simply not the case that if you don’t speak up, you’re a hypocrite. A hypocrite is someone who makes a pretense of conformity to some value or norm for illegitimate reasons. (This is why hypocrisy is a term of opprobrium.) Even if not speaking up always involved making a false impression that one agrees with some sentiment or adheres to some norm, one’s reasons for not speaking up need not be illegitimate. For example, maintaining familial tranquility for the sake of others is not always an illegitimate reason. In any case, the argument also assumes that being a hypocrite is always a morally bad thing. But hypocrisy can be morally justified, at least all-things-considered. For example, it may be permissible for a sexist employer to hire well-qualified female employees in order to impress a progressive female colleague. Here, the employer’s hypocrisy is arguably justified by the good results it produces.

The politeness argument simply assumes that the norms of politeness are not moral norms. But in many cases, etiquette supports morality. The requirement to be courteous, for example, seems to derive its force and legitimacy from the clearly moral requirements to show basic respect or to be kind. As Karen Stohr argues, the conventions of etiquette are the primary means by which we express our moral attitudes and carry out important moral goals. So, in choosing to do what is polite, one does not always depart from the norms of morality. If politeness requires not speaking up, that may be because it is the morally right thing to do.

The claim that always speaking up is the best policy may well be true. After all, most of us are probably seriously biased in favor of not speaking up. So, adopting an inflexible policy of always speaking up may maximize our chances of doing the right thing. But from the fact that the policy of always speaking up will most often lead us to do the right thing it does not follow that speaking up is always the right thing to do. In general, we are sometimes justified in adopting moral policies if they lead us to do the right thing most often, even if they sometimes lead us morally astray. For example, if I know that I am a bad sport at tennis, I may adopt a policy of sprinting away from my opponent after a loss to keep myself from giving him the middle finger. This policy will lead me to refrain from doing the wrong thing most of the time, and so may be the one I ought to adopt, even though there may be instances where my opponent richly deserves the finger.

The fourth argument, that we are always able to speak up diplomatically, can help us see a bit more clearly what speaking up involves. It seems to me that it is impossible to speak up diplomatically. Diplomats try to finesse conflict to the point that it ceases to appear to be conflict. Speaking up means, at minimum, making one’s opposition to another person’s views as clear as possible. So, far from always being able to speak up diplomatically, we are in fact not speaking up if we try to do it diplomatically. What we should perhaps aim at is speaking up civilly, but this just means that we should speak up with politeness or courtesy, by showing basic respect to our opponent. This is different from finessing our conflict with our opponent, and even civil opposition can be highly inflammatory in certain contexts.

The arguments for always speaking up appear to be flawed in various ways. On the other hand, the arguments for never speaking up seem to be even worse. Some people will point out that speaking up will rarely change one’s opponent’s mind. This may well be true, but rarely changing one’s opponent’s mind is not the same as never doing so. More fundamentally, for the argument to work, it must assume that the only purpose of speaking up is to change one’s opponent’s mind. In fact, it seems to me that the reason one should speak up is primarily to signal to others that one does or does not support some sentiment, norm, or value, which may give them comfort, strength, or the courage to voice their own views. For example, if a family member voices strong contempt for homosexuality in front of one’s gay cousin, signaling that one does not agree with that contempt can let the cousin know that she is not alone or unloved, and may empower others in the family to confront the homophobe. The signaling function of speaking up is why I earlier claimed that speaking up means making one’s opposition to another person’s views as clear as possible: one must send a clear signal of one’s opposition in order to comfort or encourage others.

We come, then, to the conclusion that we sometimes ought to speak up. But when should we do it? The answer in abstract is deceptively simple, even simplistic: when doing so would bring about more good than any other option realistically available. In saying this, I am doing nothing more than applying the moral doctrine of consequentialism to a practical problem. Consequentialism tells us that we ought to judge an action’s rightness by its consequences, and I see no reason why this philosophy does not capture every morally relevant feature of the problem of speaking up.

In saying that the right thing to do with respect to speaking up is whatever brings about the most good, however, I am not necessarily recommending that people try to perform a consequentialist calculus whenever they face such situations. In practice it may be difficult to know which options available to us will do more good than others. Our epistemic limitations, together with our own biases against conflict, are reasons why we might be justified from a consequentialist point of view in adopting a policy of always speaking up — even if sometimes this policy will lead us to speak up when doing so will not bring about the most good.

The Politics of Imponderables

photograph of scales made of pebbles balanced on a boulder

As I glance over the front page (or, let’s be real, the home page) of various newspapers, nearly every story is about either COVID-19 or U.S. racial injustice. Here I want to pause and look not at the stories themselves, but at the discourse developing around both stories. I want to look at the moral outrage we feel when others suggest imponderable comparisons. Consider two personal examples.

I was outraged at Lt. Governor Dan Patrick’s suggestion that we should restart our economy even though it increases the chance that our grandparents might die from COVID-19. I was appalled that anyone would compare the merely economic harm of shutdowns to the incalculable and irreplaceable loss of human lives. I felt a similar outrage at people bemoaning the damage done by the looting which followed the murder of George Floyd. I was appalled that anyone would compare the merely economic harm of looting to the inexcusable and incalculable evil of racialized violence in the criminal justice system.

It is this outrage that I want to pause and consider because, while I believe the outrage is appropriate, I worry about the effects of rendering certain comparisons imponderable. In ordinary times I would be horrified by the harm done by a global recession. In ordinary times I would happily condemn looting that exacerbates inner-city food deserts or harms locally owned businesses. Yet, in this context, not only am I not worried about these harms, but I find myself incensed at those who talk about these harms too openly. Do they not realize how insignificant those concerns are given what else is at stake? Do they not realize how talking about property damage trivializes the continuous damage visited on minority communities?

Fortunately, there has been excellent research on the outrage we feel when people consider certain trade-offs. This outrage arises when we see others trading off what the social scientist Philip Tetlock calls ‘sacred’ values for ‘secular’ ones.

To take a commonly cited example, if you present people with a story in which a hospital administrator faces a choice between saving the life of a boy and saving the hospital a million dollars, people won’t just condemn the administrator who chooses the money, they are willing to punish an administrator who even lingered over the question before eventually choosing the child. We are not just upset with those who make the wrong trade, we are outraged with those who even ponder the trade-off we consider taboo. This is true even though a hospital that routinely sacrificed a million dollars anytime it could use that money to save a life would not remain solvent for long.

So why do I find certain trade-offs outrageous? My mind codes the economic liberties of Dan Patrick’s “American way of life” as a merely secular value, not to be compared with the sacred lives of my grandparents. But of course, to many more patriotically inclined citizens, our economic way of life is absolutely a sacred value, and thus something that might sometimes require tragic trade-offs (a ‘tragic trade-off’ is one where we must sacrifice one sacred value for another; a ‘routine trade-off’ is one where we sacrifice one secular value for another).

Once we understand the psychological underpinnings of our moral emotions, those emotions begin to flounder. As I study moral outrage, I learn that what we consider sacred depends, in part, on the peculiarities of how something is presented. Is life insurance a way to bet on a loved one’s death, or a way to secure the financial security of one’s children? Does social security reform break faith with senior citizens, or technically rework bureaucratic infrastructure? A large portion of politics is reframing taboo trade-offs into routine or tragic ones.

By default, I treat life as a sacred value. But if you want to change how I code it, just point out that every year half a million people die of malaria (and well over one million from tuberculosis) and yet I’m not constantly outraged that trillions are not being siphoned from the global economy to malaria eradication. I’m unwilling to admit to myself that I care more about diseases that threaten U.S. lives, and thus, to avoid moral hypocrisy, will swiftly ‘mentally recode’ the millions of deaths caused by preventable disease from a moral atrocity to a routine statistical artifact of a large global population.

Similarly, by default, I’m appalled at the thought that we would send U.S. factory workers back to work where they risk contracting COVID-19 just to jumpstart global supply chains. But if you want to change how I code it, just point me towards the UN University’s recent working paper suggesting the disruption of global supply chains runs the risk of plunging half a billion people back into poverty and undoing a decade of progress toward the UN’s development goal of ending poverty by the year 2030.

Indeed, present me with both of these arguments and suddenly I feel outraged that my fellow U.S. citizens, who are shielded from the worst economic impacts by stimulus checks and a comparatively excellent public health infrastructure, are willing to cripple the economic foundations of the developing world just to avoid a statistically small risk of death.

As Philip Tetlock puts it, the “boundaries of the thinkable ebb and flow as political partisans fend off charges of taboo trade-offs and fire them back at rivals.” So what role should these ‘imponderables’ play in my politics? Are they a recognition of incalculable human dignity, or a tool of self-deception by which I write off the legitimate worries of those of different political persuasions while indulging in the personal catharsis of moral outrage?

Should I do away with my imponderables? According to many great ethicists, the answer is: No. The great Catholic philosopher Elizabeth Anscombe famously said she was uninterested in arguing with anyone who thought it was an open question whether “procuring the judicial execution of the innocent” could be justified, because such people “show a corrupt mind.” Raimond Gaita, emeritus professor of moral philosophy at King’s College London, agrees with Anscombe and argues that even to see certain courses of action as possible, or certain trade-offs as legitimate, is already to have exhibited a deep moral failing.

To these philosophers, moral imponderables are not a mere peculiarity of moral psychology. Rather, they are essential to a healthy moral sensibility. What it is to have a proper regard for justice is to demand justice be done, whatever the economic costs may be. Why? Because humans have something like a sacred value, or an inner dignity, which dictates that justice take precedence even over the social good.

Why is it appropriate to feel outrage when people bring up the injustices done by looting during a national conversation on racial violence? It is not because the injustices of property damage don’t matter, nor simply because they are less important (though of course they are). Rather, it is because to recognize the dignity of George Floyd, Breonna Taylor, or Ahmaud Arbery is to recognize that the tragedy of their deaths is, in a very real sense, incomparable to any other injustice. To bring up other injustices in comparison is already to have missed the incalculable dignity of a human being.

Of course, many disagree with Anscombe and Gaita’s claim that moral sensibility involves a recognition of certain imponderables. Consider this vision of intellectual life offered by Simone Weil:

“The degree of intellectual honesty which is obligatory for me, by reason of my particular vocation, demands that my thought should be indifferent to all ideas without exception – it must be equally welcoming and equally reserved with regard to every one of them. Water is indifferent in this way to the objects which fall into it. It does not weigh them; it is they which weigh themselves, after a certain time of oscillation.”

It is a beautiful picture of unwavering commitment to honest investigation. Yet, for all its beauty, the position seems contradictory. The thought seems to be that the value of truth is so great that one should be able to ask any question and consider any thought, no matter how vile, if it can help one reach the truth. But note what has happened. We’ve rejected all sacred values, made everything thinkable, because of our commitment to the final sanctity of truth. Weil has made it thinkable to transgress any sacred value, but only for the sake of her own sacred value which she privileges above all others.

We cannot escape sacred values. Of course, it is also difficult to put a recognition of sacred value into practice. It sounds nice to say the value of justice is incalculable, but we cannot spend billions on every trial to make absolutely certain that justice is done. The painful reality is that there is only so much we can spend on any given life. Some trade-offs must sometimes be made.

So, what can we conclude about the politics of the imponderables?

I simply want to urge caution. First, caution when we are tempted to quickly condemn others for making comparisons we find inhumane. Recognizing how fickle our own outrage can be should encourage humility and self-reflection. We must remember how our own mental biases might distort what we are willing to consider, and thus seal us off from insight.

However, we also need to be cautious of the opposite temptation. There is a certain seductive temptation in being willing to trade off certain values. There is a “titillation” in thinking “dangerous thoughts.” We love to congratulate ourselves on being brave enough to think the thoughts others refuse to face. We get to feel smugly superior to everyone else who remains unable to remove their moral blinders. But this too is a dangerous and distorting temptation, and it’s a temptation that compromises our ability to appreciate the sacred.

Disagreements in Ethical Reasoning: Opinion and Inquiry

photograph of graffiti image on building with two arms pointing in opposite directions

With the school year about to begin, there are going to be plenty of students entering colleges and universities who have never taken an ethics course before. When I teach introductory philosophy courses, the common response I get when I ask students about ethical issues is “it’s all a matter of opinion.” This is part of a general attitude that when it comes to ethics, no judgment is better than any other. This habit of thinking can be so hard to break that even after an entire semester of talking about moral problems and debating the merits of different moral theories, students will still report that it is all just a matter of opinion. Why is this a problem? The habit of thinking that ethics is just a matter of opinion ultimately serves as a roadblock to ethical thinking and moral inquiry.

Moral relativism can be a complicated topic in philosophy, but for our purposes we can define it as the view that moral judgments are not true or false in the same way as factual judgments. Instead, morality is dependent on groups or cultures, each with their own incompatible ways of understanding the world. J. David Velleman has argued, on the basis of data collected from various communities, that different communities understand moral action differently. Jesse Prinz argues that emotional sentiment plays a strong role in moral judgments; an action is wrong if it stirs a negative sentiment. Moral relativism is also often connected to tolerance: if there are no universal moral principles, the moral principles of one culture are not objectively superior to others, so we should be tolerant of other cultural practices.

Relativism would seem to offer support for the idea that ethics is all a matter of opinion. Being tolerant of other moral worldviews is generally considered a good thing. Often moral issues can strike different emotional chords with people, and it can seem disrespectful to tell people that they are wrong. If ethics is about how we feel about moral problems, then it seems hard to claim that it can rise above mere opinion. However, the view that ethics is all just a matter of opinion and relativism are not necessarily the same. If one believes that morality is dependent on culture, it would not warrant the claim that morality is all a matter of opinion, especially if we are only talking about a single person. Littering is considered a cultural faux pas in North America, so an individual could not claim that littering is morally okay merely because it is their personal opinion that it is.

Indeed, while the justification for the view that ethics is just a matter of opinion and the moral relativist view can overlap, the position that ethics is a mere matter of opinion (especially personal opinion) is especially problematic. For starters, one can be tolerant of other cultures and their moral views without having to believe that ethics is merely a matter of opinion. For instance, a moral pluralist may claim that there are objectively correct and incorrect ways to react to moral problems and that moral answers can vary depending on local concerns. Second, while ethics does contain an emotional component, we are not therefore obligated to accept that ethics is merely emotional. Just because you or many others feel something about a moral issue does not mean that that feeling justifies any possible response.

The biggest problem, however, with the view that ethics is merely a matter of opinion is that more often it becomes an excuse to not think too deeply about moral problems. Consider this example: You have a strong desire to help others and are trying to determine what charities you wish to donate to and how much. You could investigate how effective each charity is, who may need it the most, and how much money you wish to give relative to other financial needs and desires you may have. But instead, you decide to take your cash and shred it.

Certainly, we can debate what might be the right thing to do in this situation, but it would require a fairly idiosyncratic person to decide that shredding money was the moral thing to do. We may not all agree on what the right thing to do in that situation is, but we can establish a fairly broad consensus on what is the wrong thing to do. Someone who is genuinely interested in helping others and is genuinely conflicted about how to do it is not justified in shredding their money. This is because, objectively, shredding the money does nothing to solve their moral problem. In other words, mere opinion is insufficient to justify any possible answer.

Now let’s say that in the same situation I decide that the most moral thing to do is to give money to an animal charity. You may disagree and opt instead for a charity that alleviates hunger. Should we conclude that our disagreement is a mere matter of opinion? Two moral people can come to different conclusions, with each trying to secure different goods and avoid certain problems. Each can also recognize the moral reasoning of the other as being legitimate without having to conclude that the other was morally wrong for doing what they did. This is not merely because the two have a difference of opinion. It is because each appreciates the moral reasoning of the other; they are capable of recognizing the legitimacy of other courses of action. However, they may not recognize the morality of a mere opinion that hasn’t been thought through. Both could agree that shredding your money is a morally wrong action, and both could recognize the importance of moral reasoning as a means of revising and refining a proposed course of action.

American philosopher Charles S. Peirce believed in the importance of inquiry for settling disagreements and disputes of opinion, not only between each other but with ourselves. If we could only inquire long enough, he argued, we could test our ideas in practice. Because of this, he claimed that part of the bedrock of reasoning is that we do not take steps to block the path of inquiry. The instinct to look at any moral problem and claim that it is all a matter of opinion does exactly this. The immediate response that the answer to any moral problem is a matter of opinion cuts off inquiry before it begins. If we accept that there is no better answer, we will not seek one. It is an excuse to not look for a better answer, to not rely on our reasoning, to not discuss our proposed solutions with others, and to not seek consensus by refining our ideas.

The notion that the answer to any moral problem is a matter of opinion, and that is all there is to say about it, is intellectual laziness. If you are a new student taking your first ethics class, I urge you to look beyond such an attitude and to inquire further. We may end up concluding that our answers are nothing more than opinions, but we have no justification for starting with that answer. Instead, we may find that we have missed several better responses that can only come from a willingness to inquire further.

Why Are Political Debates So Difficult?: A Holiday Survival Guide

Group of people gathered around a holiday table

The holiday season is upon us, which often means spending more time with family. For many of us, this also means the risk of heated political disagreements around the dinner table. If you’re like me, you’ve learned that trying to talk politics with family members is more often than not a waste of time: no one ever really changes their mind, and everyone just ends up being mad at each other. So perhaps you’ve adopted a new policy: ignore the debates, or don’t engage, or change the topic as quickly as you can. It’s easier on everyone.

Why do these dinner table arguments seem so futile? I think one reason is that many of our political disagreements come down to an underlying moral disagreement: a disagreement about what’s right and wrong, what kinds of obligations we have to others, or how people should be treated in general. So when you and I disagree about whether, say, we ought to increase the minimum wage, or whether we ought to tax people for services they don’t themselves use, a major part of our disagreement is about when we ought to make sacrifices for the benefit of others. And then it’s up for debate how much of a hit my family and I should take for the well-being of others: some people think we ought to do a lot to help each other out, especially if we have a lot, whereas others think they shouldn’t be asked to make sacrifices, especially when what they have is something they feel they have earned and are entitled to.

While moral debates happen all the time, experience suggests they’re difficult to resolve. Why might this be the case? First, those who disagree with us about moral matters often don’t seem terribly interested in actually listening to what we have to say: they don’t want to change their minds, they just want to hold on to what they already think is right. Second, the fact that someone disagrees with us about a moral matter might lead us to start thinking in “us” versus “them” terms. Thinking in this way can bring with it biases that lead us to conclude that “they” are not worth listening to, or that “their” arguments couldn’t possibly be any good. This happens all the time when we try to talk politics: we start thinking of the other person not as an individual, but as a member of a group we don’t like (those heartless Republicans don’t want to listen to us level-headed Democrats, perhaps, or those hippie Democrats don’t want to listen to us level-headed Republicans).

There are other factors that complicate moral disagreements. Consider first the ways in which we might try to resolve disagreements of different kinds. Say, for example, that you and I disagree about the year a movie was released, or what the capital of Indiana is, or how many feet are in a yard. These disagreements are easily resolved: a quick appeal to the internet will settle the matter. Or maybe we disagree about something more complicated: say we work in construction and disagree about where the best place to build a bridge is. It seems the best way to resolve this debate is for both of us to present our reasons and evidence; then, as long as we’re willing to listen to each other, the better plan will become apparent through our conversation. Not all such debates will go so smoothly, of course, but they certainly seem resolvable, and far more easily than debates about what’s right and wrong.

So here’s where I think part of the problem lies: we can resolve, or at least make progress on, disagreements about movie release dates, the imperial measurement system, state capitals, and even optimal bridge placement by acquiring new knowledge. One of the main reasons we disagree about these matters is that we know, or think we know, different things. To resolve our disagreement, then, we need to get on the same page by knowing the same relevant things. Acquiring this knowledge can be easy, as when we look something up on the internet, or more difficult, as when we must weigh the evidence for competing bridge plans. Either way, we can get this knowledge by listening to others, consulting reputable sources, and considering the evidence.

But this doesn’t appear to be how we resolve our moral debates. I can’t look up online how I ought to balance my personal sacrifices against the possible increased well-being of others. Or rather, I can probably find what someone thinks is an answer to this kind of question on the internet. But pointing to someone online who says “you should care more about others!” won’t settle any debates the way that pointing to Wikipedia’s statement that “Indianapolis is the capital of Indiana” will. It’s also hard to see how I could give you the knowledge I think you’re missing in order to resolve our moral debate: if I think you really should give more to those who need it, and you think you’re doing plenty already, it often seems the best we can do is agree to disagree. But this is not a resolution; it’s a stalemate. As Kayla Chadwick laments, it’s hard to see how we can convince someone of something as basic as the fact that they should care about other people.

So what’s the solution? Here’s a suggestion: perhaps moral debates need to be resolved not just by sharing knowledge with each other, but by seeking out new understanding. This might require helping others see things from a new perspective, or helping them draw connections between their beliefs that they hadn’t considered before, or challenging conclusions they’ve drawn in the past, or helping them have new experiences, or all of the above. Not all of these tasks can be accomplished just by talking to one another: if you’re really not moved by the plight of someone you could easily help, for example, it’s hard to see how I can get you to understand just by giving you information at the dinner table.

Nevertheless, we might still accomplish at least part of the task of conveying understanding by talking to one another: I might use my words to share experiences I’ve had, to challenge assumptions you’ve made, or to help you see relationships between your beliefs that you hadn’t noticed before. What probably won’t work is what works in other kinds of debates: the bald presentation of reasons, or simply telling someone that this is the right way to think about things. The mere fact that you think something is true probably won’t help me understand why it’s true, and so if we’re going to resolve our moral debates, we’ll probably have to work a lot harder.