Is Academic Philosophy Pointless?

photograph of Dead End sign where road meets woodline

Back when I taught philosophy, certain students — often the ones most interested in the subject — would invariably confront me at the end of the semester with the same complaint. “I’ve read brilliant arguments for diametrically opposed positions,” they would say, “and I’ve read brilliant critiques of every argument. Now I don’t know which position to choose. And if I can’t choose a position, what was the point of working through them all?” At the time, I didn’t have a good answer for them. I think I have a better answer now — more on that in a bit — but I fundamentally sympathize with their complaint. There is, indeed, something futile about academic philosophy. Or so I will argue.

I left professional philosophy two years ago for a variety of reasons, but mainly because, after three years on the job market, the prospect of securing a tenure-track position at a decent institution appeared dim. Since then, I have had some time to reflect on what I decided to do with my third decade on Earth. I’ve concluded that I’m very happy to have studied philosophy for over ten years, but that I do not in any way regret leaving the profession. In this column, I will explain why I feel this way. Part of the explanation comes back to my students’ complaint.

First, why was getting a PhD worth it for me? I came to graduate school with a burning desire to answer two questions that had puzzled me since high school: what is the nature of moral facts, and what is the true ethical theory? (I didn’t use this language in high school, of course).

After spending a decade thinking about the various answers philosophers have mooted, I arrived at conclusions that remain reasonably satisfactory to me. Even leaving aside the friends I made, the brilliant people I got to talk to, and the other things I learned, getting those answers alone made the experience worthwhile.

I am, however, all too aware that the answers I’ve come to, and the arguments for them that I find convincing, strike a good proportion of academic philosophers — many much smarter and more able than I — as less than compelling. Some have even said so in print. I would expect no less from philosophers, since they are trained to analyze arguments — particularly to see where they may fail.

This leads me to why I don’t regret leaving the profession. The problem is not that I dislike disagreements. The issue I have with academic philosophy is that most of the discipline’s research questions are inherently unresolvable. By “resolution,” I mean the provision of answers or solutions which the preponderance of the available evidence and arguments favor over all others.

In other words, academic philosophy’s questions do not remain unresolved because they’re hard, or because we just haven’t discovered the best arguments or sufficient evidence yet. They are unresolvable in principle, because of their very nature.

Among my reasons for thinking this is that most of the basic questions in academic philosophy have remained pretty much the same for over 2000 years. I’m not an expert in metaphysics or epistemology, but I can confirm that this is true with respect to the most important questions in ethics. Moreover, many prominent contemporary answers to these ethical questions can be found in some form in the classic ancient texts. Jeremy Bentham may have invented the term “utilitarianism” to describe his ethical theory, but the same basic approach can be found in Platonic dialogues and the gnomic pronouncements of Epicurus. And really, if Bentham, John Stuart Mill, Henry Sidgwick, J.J.C. Smart, G.E. Moore, either of the Peters (Singer and Railton), James Griffin, Walter Sinnott-Armstrong, or Richard Brandt — among many, many others — have not come up with arguments for consequentialism that establish it as the theory more likely to be correct than all the others, how likely could it be that such arguments are still out there, waiting to be discovered?

The fact of continued disagreement over these fundamental questions among some of the most brilliant minds of many generations is at least suggestive that these issues will never be resolved — and not because they’re just hard.

Before I explain why I think this fact may make much of academic philosophy pointless, I must observe that judging by their conversation, some philosophers are not willing to concede the essential irresolvability of philosophical questions. I have actually met Kantians who think deontology is not just the right ethical approach, but obviously the right approach. You’d have to be crazy to be a consequentialist. I don’t know how seriously to take this talk; it may be partly explained by various institutional and cultural incentives to engage in intellectual chest-thumping. Still, the fact of persistent disagreement highlighted in the last paragraph surely makes the view that deontology — or consequentialism or virtue ethics — is obviously the correct approach to ethics somewhat farcical. You’d have to be crazy to think plausible answers to deep philosophical problems are ever obviously true or false.

The reason I think that the irresolvability of philosophical problems makes academic philosophy substantially pointless is that academic disciplines that purport to be in the business of evaluating truth claims should be able, at least in principle, to make progress. By “progress,” I mean nothing other than resolving the research questions or problems that characterize that discipline. Note that this view allows that the research questions themselves might change over time; for example, resolving some questions might raise more questions. But the inability of a truth claim-oriented discipline to resolve its research questions is a problem that has to be addressed.

There are a number of ways an advocate for academic philosophy might respond. First, she might point out that there are other truth claim-oriented disciplines in which unresolvable questions are commonplace. All agree that these disciplines are not pointless, so the inference from unresolvable questions to pointlessness is flawed. I’m unable to fully assess this argument because I’m not sufficiently familiar with every truth claim-oriented discipline, and all the advocate of academic philosophy really needs is one example. But I could imagine her invoking some other humanities discipline, like history. Historical questions are often unresolvable, but history’s value as a discipline seems unassailable.

History, though, is different from philosophy in two ways. First, some of the unresolvable questions in history are questions of how best to interpret sets of historical facts, and it’s not clear that the primary criterion for evaluating historical interpretations is related to truth rather than, say, fruitfulness or explanatory power. Did the Holocaust inevitably flow from the logic of Nazism, or was it not inevitable until it became official state policy sometime in 1941? Historians arguing this question all draw on the same body of evidence: for example, the genocidal implications of Hitler’s Mein Kampf; his 1939 speech in which he threatened that if another world war began, European Jewry would be annihilated; his plan to deport Jews to Madagascar after France fell in 1940; and records of the 1942 Wannsee conference. The debate concerns not what the facts are, or whether we have good reasons for believing them, but rather which interpretation of the facts better or more fruitfully explains the Nazi genocide.

More importantly, to the extent that historical questions concern historical truth claims, their irresolvability is a function of the paucity of evidence, not the nature of the questions themselves.

Looked at one way, the Holocaust question hinges on the motives of the historical actors involved. We may simply be unable to determine those motives by a preponderance of the available evidence. This implies that new evidence could come to light that would resolve this question. By contrast, as I’ve suggested, philosophical questions are not unresolvable because we don’t have enough evidence at the moment. They are unresolvable by nature.

It’s no doubt true that many questions in a wide range of disciplines remain, and perhaps always will remain, unresolved. In general, that’s because we lack the evidence required to prove that a particular answer is more likely to be true than all the others. This does not make these disciplines futile, in part because we can’t know a priori whether sufficient evidence will become available to resolve their research questions. We have to do the research first. Moreover, the fact is that many disciplines do resolve their characteristic questions.

A second argument for academic philosophy is that it makes progress of a sort, even if it cannot resolve its questions. Philosophical progress consists in refining competing answers to philosophical questions, as well as the questions themselves. You can find the fundamental tenets of consequentialism in the ancient texts, but modern philosophers have arguably explored the theory at a much higher level of detail, sophistication, and thoroughness. Similarly, modern philosophers have been able to refine our understanding of a classic question in metaethics — why be moral? — with some even arguing that the question isn’t well-formed. Thus, even if academic philosophy doesn’t resolve its questions, its exploration of the logical space of answers is a good enough reason to support it. (Incidentally, this iterative process of refinement has also led philosophers to develop an elaborate jargon that makes cutting-edge articles in ethics nearly impossible for laypeople to understand, but in my view that’s not objectionable in itself.)

Although I grant that this is a form of progress, and it certainly requires great intellectual ingenuity, I’m not sure continual refinement alone can justify a discipline.

Suppose that the question whether the universe is heliocentric were for some reason unresolvable in principle. In this world, astronomers are doomed to merely add more and more elaborate conceptual curlicues to their preferred heliocentric or geocentric theories for all eternity — and they know it. Would this question still be worth the effort and resources expended to try to answer it?

A third argument is that learning and doing philosophy are valuable in all sorts of ways for those who engage in these activities. Among other things, they help individuals and societies think through problems they may actually confront in real life. This is obviously true for subfields like ethics and political philosophy, but it applies fully to epistemology and metaphysics as well. For example, I have argued that a certain view about the nature of race underlies conservatives’ arguments against affirmative action. The question of what races are is a metaphysical question.

There are other very good reasons to learn and do philosophy. Philosophy is intellectually stimulating. It helps develop critical reasoning skills. It promotes both open-mindedness and a healthy skepticism. It helps us ask better questions and evaluate possible answers.

Academic philosophers do and learn philosophy. They therefore benefit in all of the ways I’ve described, and it might be argued that this justifies the discipline. Obviously, this is a dubious argument, since it seems implausible that benefits to practitioners of the discipline alone can justify a discipline. More compelling is the fact that academic philosophers teach students, thereby enabling and encouraging the latter to do and learn philosophy and reap the benefits.

I do not dispute that it is valuable for academic philosophers to teach philosophy. The trouble is that, in my view, the contemporary discipline of academic philosophy is not primarily focused on pedagogy or public outreach. When I was in graduate school, instruction in pedagogy was, to put it charitably, an afterthought. American Philosophical Association meetings, which largely serve as showcases for new research, remain the most important annual events in the academic philosophy world. Of course, some professional philosophers practice the discipline differently from others. At some colleges, research output does not even factor into tenure decisions, and professors therefore focus more on teaching. Yet no one rises in the profession by winning a teaching award or publishing an opinion piece in The New York Times. Prominence in academic philosophy is primarily a function of publishing books and articles that other professional philosophers admire.

So, the value of learning and doing philosophy fails to justify the discipline of philosophy as currently practiced — or so it seems. But the advocate for academic philosophy may reply that effective teaching or public philosophizing actually requires ongoing philosophical research. Imagine if philosophers had stopped doing research in moral philosophy after G.E.M. Anscombe published her famous article, “Modern Moral Philosophy,” in 1958. (In that article, Anscombe declared that “[i]t is not profitable for us at present to do moral philosophy”). In this world, students could study, and professors teach, only books and articles that are at least sixty years old. They could not, for instance, examine any critiques of the arguments found in that article that were published after it appeared. Wouldn’t that be, well, crummy?

This argument has some visceral force for me. It gains added force when we remember that philosophers certainly make a kind of progress by exploring the logical space of possible answers.

Philosophers can enlighten the public about these possible answers, which we sometimes call “traditions” (e.g., the just war tradition), and thereby help the public think through real-world problems. Because continual research can uncover more possible answers, it can be valuable for this reason.

Does this justify academic philosophy as currently practiced? Frankly, I’m not sure. In my experience, many philosophical articles are written as if aimed at resolving their questions — something I’ve argued they cannot do in principle. As I’ve mentioned, there is also a heavy emphasis on criticizing opposing views. Is this the best way of exploring the logical space of plausible answers? Adam Smith famously observed that “it is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.” His point is that markets work by exploiting self-interest in ways that redound to society’s benefit. Similarly, the defender of academic philosophy might argue that the best way to explore the logical space of answers to a philosophical question is to incentivize philosophers to believe, or at least to argue as if, their preferred answer actually resolves the question. In other words, what looks to me like a mistaken belief among those Kantians who think or at least act as if consequentialism is obviously wrong may redound to the benefit of philosophy as a whole. Perhaps this is true, but I’m just not sure.

To recap, I’ve argued so far that since academic philosophy cannot resolve its research questions, its only hope of justification lies in its ability to disseminate philosophical ideas and modes of thinking to the broader public. Doing this effectively may require a certain amount of research aimed at exploring the logical space of answers and identifying those that seem most plausible. But for me, it is an open question whether the way research is currently conducted is the best way to explore the logical space of answers.

I must conclude, then, that much of academic philosophy as currently practiced may, indeed, be pointless. Curiously, though, I think I have a better answer to my students’ complaint about why they should study philosophy, despite the inherent irresolvability of its questions. A layman who seeks answers to philosophical questions need not wait until arguments are found showing that one answer is more likely to be correct than all the others before endorsing an answer. One can rationally choose whatever answer is most subjectively satisfactory, as long as it is at least as plausible as any other answer. In addition, the value of learning and doing philosophy does not solely consist in finding answers to difficult questions. As Socrates shows us, it also lies in learning how to ask the right questions.

The Good Place and the Good Life

Warning: This article contains spoilers.

Students all across the country have recently found new motivation to be interested in philosophy—NBC’s The Good Place, which aired its final episode on January 30, 2020. The series explicitly engages with philosophy through the storyline of one of the central characters—Chidi Anagonye—who was, in life, a philosophy professor. In the afterlife, Chidi teaches ethics to a group of wayward souls who, as the show progresses, become the best of friends. Chidi provides a useful narrative vehicle for direct discussion of philosophy. But even in the absence of Chidi’s philosophical explanations, the show is inherently philosophical. It demonstrates that, rather than being an exclusively scholarly pursuit, living philosophically is part of what it is to be a flourishing person.

On the face of it, the series appears to be about death. It begins with Eleanor Shellstrop’s arrival in what appears to be heaven, where she meets an angelic architect named Michael. As the show progresses, however, it becomes clear that, far from being about death, the series is actually about what it is to live a good life. Throughout, the audience is left wondering: what makes heaven heavenly? And, by contrast, what would make hell torture? Fundamentally, these are questions about what kinds of things are really worth pursuing and what kinds of things are worth avoiding.

By the end of the first season, the cast of characters come to the realization that they were not, as they had been told, in “The Good Place.” Instead, they are in “The Bad Place.” Michael is not an angelic architect after all, but a fire demon conducting an experiment. Instead of setting people on fire or feeding them to swarms of insects, Michael is attempting to torture departed humans using the particulars of individual personalities and exposure to other people.

One of the reasons that this storyline is surprising is that the people involved seem to be pretty good people. They certainly have their faults, but none of them is the kind of person that we might think deserves to be tortured for all eternity. As David Lewis argues in his paper “Divine Evil,” infinite punishment may be unjust for any finite crime. Even so, if we picture anyone in hell, it tends to be people like Hitler rather than childlike petty criminals like Jason Mendoza. Indeed, Jason’s personality is such that he might be largely forgiven for his bad actions in life, so it seems somewhat unfair that he’s in The Bad Place at all. At this stage, the cast of friends is put in a position to analyze their own behavior. They become reflective agents, considering their virtues and vices. They learn lessons in Aristotelian moderation. Eleanor learns to be less selfish, Chidi less indecisive, and Tahani less concerned with what other people think of her.

The group also learns that it is not possible to live a flawless human life. As this storyline unfolds, the series gently ribs Peter Singer and the Effective Altruism movement (which turns out to be for the best—Kristen Bell contributed her voice to the free audiobook of Singer’s The Life You Can Save). A character based roughly on Singer (or at least his philosophy), Doug Forcett, is celebrated in the afterlife because, during a drug-induced hallucination, he guessed exactly what the structure of life after death is like. To do well on the cosmic scorecard, he spends his entire life avoiding doing any harm and actively trying to do the most good he can do. Nevertheless, in our global culture, in which every consumer choice we make has implications far beyond what we can see, Forcett’s scorecard is still insufficient to get him into The Good Place. The lesson we are left with is that we should do the most good we can, in full recognition that we’ll never be perfect. We can live meaningful lives by actively doing as much good for others as we reasonably can.

In many ways, the series is about living a meaningful life by living a morally good life. Some people understand life’s meaning by appeal to a grand plan set into motion before any being lived on earth, perhaps by a divine authority like God. One interesting feature of The Good Place is that, despite the fact that it is, on its face, a show about the afterlife, it remains remarkably agnostic about religion. We encounter angels and demons, but not God or the Devil. The Good Place is a show about persons; it is a story about moral beings that make choices, act for reasons, have weaknesses, grow, and change. As in Sartre’s No Exit, the afterlife is a matter of interactions with other persons. But The Good Place does not conclude, like Sartre, that “hell is other people.” Instead, the message is quite the opposite: it is our interactions with other people that allow us to grow into the best possible versions of ourselves.

The penultimate episode of the show includes a surprising twist. The group of friends, including Janet and Michael, all make it to The Good Place. Chidi looks forward to meeting the philosophers he is sure he will find there, but he learns that, for various reasons, many of his favorite historical thinkers didn’t make it. He does find Hypatia of Alexandria, the ancient Greek philosopher, played by Lisa Kudrow. The group of friends learns from Hypatia that heaven is not all that it’s cracked up to be. Infinite pleasure for eternity is boring, and it turns one’s mind to mush. As the new architect of The Good Place, Michael comes up with a solution—when residents are satisfied with the experiences they’ve had, they can walk through a door and fade into non-existence. The idea that conscious experience can end lends meaning to existence.

This storyline tracks a classic debate in philosophy: what makes death bad for the person who dies? And, on the other side of the coin, would immortality be desirable? In his famous paper “The Makropulos Case: Reflections on the Tedium of Immortality,” philosopher Bernard Williams argues that death is bad for the person who no longer exists because their desires go unsatisfied. The things they wanted in life can no longer be achieved. That said, according to Williams, immortality is not desirable. If we lived forever, we would either change so much that our future identity would be, from our current perspective, unrecognizable to us, or we would become bored.

The final episode of The Good Place takes some lessons from Williams. Most residents eventually desire existence to come to an end. The real Good Place is a place where you can try everything and perfect every skill. Unlike in Williams’s paper, the desire to bring an end to existence is not motivated by boredom—not exactly. Final departure from The Good Place comes with a sense of peaceful satisfaction. Chidi describes a metaphor from Buddhist philosophy—the person is a wave returning to the ocean.

Though the finale is generally held in very high esteem, some viewers struggled with what they thought of as a dark ending to an otherwise light-hearted comedy. After all, most of the main characters cease to exist! In many ways, though, there is no more fitting way for the show to end. The show was never about heaven or hell; it was always about living a good life with the help of good friends. Crucially, it was about living a philosophical life, which is incomplete without coming to terms with death.

Climate Change and the Philosophical Pitfalls of Grounding Duty to Future Generations

Two young women in the foreground of a protest march, with signs behind them saying "our future our choice"

Reports of mass extinction, extreme weather events, speedily melting ice caps, and the inundation of low-lying islands by rising seas suggest that the environmental disaster scientists and activists have been warning about for several decades has now begun.

On the face of it, there isn’t really a good argument to be made against a moral imperative to fight climate change. Those who voice opposition to, and those who lobby against, climate action generally deny climate change is real rather than argue against an obligation to do something about it.

Governments across the world are nowhere near where they need to be on acting to prevent worst-case scenario outcomes, even where climate change is grudgingly acknowledged by the powers that be and even as its effects start to become difficult to ignore.

In January this year, David Attenborough told a cohort of business and government leaders, diplomats and influential celebrities at the Davos World Economic Forum that “What we do now, and in the next few years, will profoundly affect the next few thousand years,” and “Unless we sort ourselves out in the next decade or so we are dooming our children and our grandchildren to an appalling future.”

Few would now disagree that there is urgency about the issue of climate change; however, the question of our moral relation to future generations is philosophically complex. Does it make sense to claim that we have moral duties to persons who do not exist?  Do future persons, not already alive, have rights? How are the rights of future persons connected with moral duties that we have now?

There are some meta-ethical issues at play here (issues about what we take ethics to be centrally concerned with). A common foundation for morality is how the behavior of persons affects other persons (and sometimes other creatures/entities). The ‘moral community’ is the group to whom we owe moral consideration: those whose well-being makes moral claims on us, or whose interests or rights provide imperatives and checks on our actions.

On one level, using a simple example given by Derek Parfit, we can see how, straightforwardly, the actions of someone now can harm an unknown, hypothetical person in the future. Parfit points out that, if I leave some broken glass in the undergrowth of a wood, which a hundred years later wounds a child, my act harms this child. This is of course familiar and quite intuitive reasoning; it forms the basis of things we tell our children every day, of the ilk: “don’t leave that there, someone could trip over it.”

Is it morally significant how far in the future that harm occurs? On one view it is not, as there is a direct causal relation between the action of leaving the glass and the child’s being cut by it. When I tell my child not to do something because it may harm someone, I am likely not thinking as far into the future as Parfit’s example reaches, but this seems to be a result of the psychology, rather than the morality, of temporal distance.

However, it could be argued that moral accountability is weakened by temporal distance, because part of what it means to be in a moral community is that there is moral accountability, by virtue of reciprocity, between members – so that there is an in-principle possibility of the wronged party holding the other to account.

In response to Parfit’s example, it should also be noted that the person leaving the glass causes a future harm only if the child (or someone else) is later cut by it. That outcome is highly contingent. If the lack of reciprocity between individuals who are not one another’s contemporaries, together with the contingency involved in any particular outcome, is problematic, then it may be even more difficult to make decisions about the effect of a current population’s behaviour on a future population.

The question of how current people’s actions harm, or directly impact, future people encounters a paradox, which Parfit called the non-identity problem: because each person born is the product of so many and such complex contingencies, and all future persons are a product of actions taken in the present, a different set of actions (even one different variable) will produce a different human being.

Imagine two possible scenarios. In the first, no action is taken and climate change produces a disastrous future for all people on the planet. In the second, massive effort is undertaken now, with the outcome that disaster is averted and future generations are able to pursue happiness and have the opportunity to flourish.

Because of this paradox, it isn’t quite right to say that particular future persons will be better off if action is taken, since particular future persons who come into existence in the first scenario, if action is not taken, would not have existed in the second scenario. Can the people of the future in the first scenario really blame us, since had we made different choices they would not exist?

This line of thinking may appear to yield the conclusion that even if we do not conserve the environment for future generations of people, it cannot consistently be said that we have wronged them. But can we cogently argue that they cannot complain since in any other set of circumstances they would never have existed?

This is a difficult moral question – borne out in other problems or areas of practical ethics, such as whether to choose to have a disabled child. It opens up issues of how we value existence, what type of existence we value, and what level of difficulty we would be prepared to accept for the sake of existence itself. I shall not try to resolve this problem here – but it is not necessarily agreed that such future persons, in unbearable hardship, have no right to complain of the actions of their forebears that led to that hardship.

This paradox seems to arise in part where morality is taken to be centrally concerned with how actions of certain individuals affect other individuals – the problem here is that there is too much focus on particular individuals. (Parfit himself thought that we should abandon the principle that morality is centrally concerned with individuals and employ a more impartial principle, grounding morality in actions which produce the most wellbeing.)

But this solution creates another problem, which is encountered in consequentialist ethics of a utilitarian persuasion. Since utilitarianism is based on a principle of maximising happiness or wellbeing, it functions as a calculation of total possible happiness. This produces the counterintuitive outcome, which Parfit dubbed the ‘Repugnant Conclusion,’ that a very large number of people who were borderline miserable would be preferable to a very small number of very happy people. Obviously, this system cannot provide a foundation for a reasonable, binding moral relation to future generations.

An argument from the notion of rights appears to fare better. If we acknowledge universal and inalienable human rights such as a right to life, liberty and security then, by virtue of their universality, we could extend them to future persons by including them in the moral community of holders of rights.

It has been noted by some philosophers, however, that the concept of rights is in some sense morally inadequate – it can fail to capture the moral seriousness of a situation. Imagine having to answer to future persons living with the devastation of our failure to act when we had the means and opportunity. It would not go all the way to the heart (so to speak) of their moral grievance to simply note that their right to live a full human life was violated – in the same way that the moral terribleness of murder is not adequately captured by noting that the murdered person’s right to life has been violated.

A still better grounding might be found in a notion of moral duty as suggested by Immanuel Kant in the principle of universalisation: that we discover our moral duty by asking if we could will any action to be a universal law. Applying this principle, our moral duty to future generations becomes clear when we simply ask what we would have endorsed were we to find ourselves in the same situation.

The window, we are being told by scientists, is closing fast. We may have little more than a decade to avoid unstoppable climate catastrophe. This means that the future is arriving. In a sense, the future is already here. Children born in this decade may be alive at the end of the century, and will be directly affected by our current actions or failure to act. Those future generations that appear in the abstract in the philosophical discussions of the past twenty to thirty years are already here. There are some hopeful signs; the new generation is starting to demand action as high-profile strikes by schoolchildren across the world put pressure on those in power to act to rescue the future they will inherit.

Antinatalism: The Tragedy of Being Born

A baby's hand holding a daffodil petal

On February 7th, Mumbai business executive Raphael Samuel made international headlines by announcing his intent to sue his parents for bringing him into existence. Samuel explained that his parents’ decision to procreate was purely in their own self-interest and never accounted for the likelihood of suffering that he would later endure; just as he might sue someone for causing him physical and mental distress, Samuel believes that his parents’ choice to give birth to him led to essentially the same result as if he had been kidnapped: he was forced to go somewhere against his will. Although he has been unable to find a lawyer to represent him and no judge has indicated a willingness to hear the case, Samuel insists that he is mostly concerned with making a public statement to underline his belief that procreation is not necessarily a good thing – and this is also why he plans to sue his parents for only one dollar.

Samuel affirms what’s known as ‘antinatalism,’ a philosophical position which contends that it is always, in principle, wrong to procreate. Though antinatalism can take a variety of forms, a common thread among its defenders is not simply that an increased population overstresses the environment or that giving birth to people leads to problems for others down the line, but rather that it is bad for the person who is born that they are born – that is to say, antinatalism argues that birth is an inherent harm, not merely an instrumental one.

In the words of philosopher David Benatar, life is “permeated by badness” to a degree that irrevocably tips the scale against any possible assessment in its favor; despite being filled with pleasurable experiences and beautiful things, the world is also home to (literally) every kind of evil and pain – to force someone into such an arena against their will is to expose them to possible goods, but guaranteed harms. Of course, death is also a harm, so Benatar insists that it is only morally permissible to perpetuate a life, not to cause one to either begin or end.

Samuel is also concerned about the impact of humanity on other species; as he told the BBC, “There’s no point to humanity. So many people are suffering. If humanity is extinct, Earth and animals would be happier. They’ll certainly be better off. Also no human will then suffer. Human existence is totally pointless.” By 2050, Samuel’s home country of India alone is predicted to have nearly 1.7 billion residents, a threatening problem that has sparked national conversations about government policies to curtail overpopulation. In Samuel’s mind, antinatalism could serve a functional role to better manage the limited resources of an already-crowded globe.

In 1968, Garrett Hardin published a paper titled “The Tragedy of the Commons” in which he argued that rational agents acting in their own self-interest could easily deplete a shared resource of limited size (for it always makes sense for each individual to take a little bit more, despite the eventual burden placed on the system as a whole). Particularly as questions of climate change, sustainability, and overpopulation loom in the contemporary discourse, Hardin’s illustration of a hillside laid barren by nothing but rational choices resonates more than many would care to admit.

So, although it is unlikely that many will find Raphael Samuel’s nihilistic doctrine or David Benatar’s anti-birth philosophy attractive in itself, a second look at the antinatalist thesis might make more sense than people initially think – even if it might make for some awkward tension at your next family gathering.

New Year’s Resolutions and The Problem of Self-Promising

Photograph of an open notebook with a pen on it; written on the notebook is "New Years Resolutions"

For many people, early January provides an opportunity to reflect on the successes and failures of the previous year with an aim to set goals for improving one’s character going forward. Though difficult to trace, the practice of setting ‘New Year’s resolutions’ may have its roots in ancient religious celebrations thanking various gods for their favor and promising continued faithfulness throughout the coming year. Of the many who christen a new calendar with some hopeful ambition, it is estimated that fewer than 10% actually succeed in setting new habits or breaking old ones. I take it for granted that most readers will, in general, agree that promises should be kept – what, then, should we think about breaking resolutions? As we look forward to 2019, should we be worried that we might be setting ourselves up for additional moral failures if we similarly neglect the gym, the savings plan, the sleep schedule, or the nicotine patch? In short, is it wrong to break a promise to yourself?

Thomas Hobbes, the English political philosopher (and eponymous inspiration for a certain cartoon stuffed tiger), certainly thought not; in Book II, Chapter 26 of his magisterial 1651 work Leviathan, Hobbes writes, “Nor is it possible for any person to be bound to himselfe [sic]; because he that can bind, can release; and therefore he that is bound to himselfe onely [sic], is not bound.” Because any promise requires both a promise-maker and a promise-receiver – where, in virtue of the promise, the former is bound to act in a particular way with respect to the latter – a case of self-promising is, at best, odd. A promise-receiver is always able to release a promise-maker from their obligation to fulfill the promise, so if the receiver and the maker are one and the same, then promising something to yourself — as in the case of a New Year’s resolution — could never actually be binding. If you (as the maker) no longer wish to fulfill the promise, then you (as the receiver) can automatically release yourself from the obligation.

Consider a promise that Calvin has made to Susie about repaying a debt of two dollars; as the promise-maker, Calvin can only be released from his commitment if Susie, the promise-receiver, chooses to free him. Such is, Hobbes thinks, the function of a promise: it guarantees future action even if the agent desires to do otherwise. If Calvin fails to return the two dollars, then he has committed a moral violation — that is to say, he has done something wrong. But what if Calvin has only made a promise to himself? Say, for example, that Calvin has made a New Year’s resolution to eat a healthy breakfast every day, but finds himself tempted to instead eat a bowl of chocolate cereal sometime during the first week of January. As the promise-receiver, Calvin could simply release himself (as the promise-maker) from the obligation to eat a healthy breakfast and continue on without a moral care. Consequently, this lack of any binding obligation led Hobbes to think that self-promises were simply impossible.

However, several contemporary philosophers aren’t so sure: Derek Parfit has argued that a past version of yourself and a future version of yourself are not exactly the same thing as the present version of yourself, so it is not quite right to say that you are simultaneously promise-maker and promise-receiver — to break a resolution, on this view, is to break a promise made to a historical version of yourself. Connie Rosati contends that self-promises serve as a validation of our sense of personal authority, so to break a resolution is to undermine that which gives us confidence in our general ability to make decisions and exercise our autonomy. Rather than see self-promises as impossible, these views present them as gauges of our sense of self over time.

Most interestingly, in arguing against Hobbes, Allen Habib thinks that the power to release yourself from a reflexive promise actually serves as evidence for the normative power behind promises in general, saying “the possibility of promisee release adds flexibility and power to the practice of promising, and this in turn makes the sorts of arrangements that can be made with promises more subtle and useful.” That is to say: Habib rejects Hobbes’ implicit idea that obligations are only truly obligatory if they are unavoidably binding. What happens if a promise somehow becomes impossible to fulfill (Habib uses the example of a casino burning down on Tuesday after I promise to drive you there on Wednesday)? In such a case, whether the promise-receiver intentionally releases the promise-maker or not, the promise itself has become “orphaned” and, thereby, is no longer binding.

In general, Habib argues that it is actually beneficial to maintain this more flexible view about the contextual features of promises writ large, including in the case of self-promises. So, concerning Calvin’s resolution about healthy breakfasts, his obligation to honor his self-promise might be justifiably jettisoned if the context he finds himself in changes sufficiently (say, if he finds himself trapped inside a chocolate cereal factory with nothing else to eat), but might not justifiably change simply on a whim (at least not without Calvin being guilty of shirking the obligation just as much as he would be guilty towards any other promise-receiver).

So, as we look ahead to 2019, if you are the sort of person who is inclined to commit to a project of self-betterment, then know that you have some philosophical heads supporting you in your quest. And if your resolution involves looking for a few new philosophers to read, then any of the names mentioned in this article would be a good place to start.

Trump, Puerto Rico, and the Ethics of Skepticism

Aerial photograph of destruction caused by Hurricane Maria

In September, Donald Trump claimed on Twitter that the number of deaths in Puerto Rico caused by Hurricane Maria was exaggerated by the media: instead of the widely cited number of approximately 3000, Trump claimed that the real death toll was closer to 16. According to Trump, the number was inflated by his political opponents with the intention of making him look bad. To support such a bold claim one would expect to be presented with a significant amount of evidence, but Trump presented none. Instead, it seems that he merely raised the possibility of a conspiracy and appealed to his supporters’ distrust of the political left in order to try to deflect criticism that he did not sufficiently address the problems created by Hurricane Maria.

Many interpreted Trump’s claims as abhorrent: not only was Trump apparently attempting to capitalize on a recent tragedy in order to score political points, he was also expressing a complete disregard for a significant loss of life. Carmen Yulín Cruz, the mayor of San Juan, put the matter succinctly when she tweeted: “Mr Trump you can try and bully us with your tweets BUT WE KNOW OUR LIVES MATTER”.

While many on both sides of the political spectrum repudiated Trump’s claims, responses from some diehard Trump supporters differed. It is common to find comments on articles and tweets praising Trump for what commenters take to be expressions of truth, and chastising what they take to be bias in reporting. Here are some representative responses to Trump’s follow-up to his original tweet:

“I think Puerto Rico needs to show a list of the names .. just like when Hurricane Katrina hit in 2005 … It  only took 5 to 6 days…”

“It’s a rabbit trail designed to distract. Don’t waste time beating that dead horse Mr. President. Time to start tweeting out your MANY success. Let the mainstream media chase its tail.”

“Our President Mr Trump had done nothing wrong when Maria hit Puerto Rico he did exactly what he was supposed to. All the goods he had sent there sat in haulers no one wanted to drive them and you blame this on our President. It was their President that failed them.”

“I am amazed that “fake news” has infiltrated the weather channel. News reporters acting like the wind is about to blow them down with people walking in background at a normal pace. Then, reporters telling about a death toll with so much exaggeration. Shameful scaring of people.”

Of course, these claims are not generally supported with evidence, either. Instead, in the wake of Trump’s tweets, many of his followers have responded in the following way: it is really impossible to know, exactly, how many people died in Hurricane Maria, perhaps because it took a significant amount of time for the death toll numbers to come in, or perhaps because there are suspicions that those in charge of reporting such numbers are corrupt or incompetent. Since his supporters see Trump as trustworthy and his opponents untrustworthy, they claim that it is more plausible that Trump’s numbers are accurate.

It is unclear whether Trump truly believes what he is tweeting, or if he is trying to purposely mislead people. At the very least, what Trump appears to be doing is sowing seeds of doubt in his supporters, in this case by raising the possibility that the officially reported death toll numbers are wrong, solely on the basis of egotism and distrust. He is, then, engaging in a disingenuous form of skepticism. It is sometimes a good thing to be skeptical – we do not want to believe just anything that anyone tells us without thinking about it, and so it is often a good idea to scrutinize information we’re given or to look for additional evidence ourselves. But skepticism without cause, skepticism not based on trying to get to the truth, can be detrimental and, in some cases, even unethical.

When philosophers talk about skeptics they have in mind someone who attempts to convince us that we do not know something (or, in general, that we do not know anything) by reminding us of all the ways that we could be mistaken. For instance, the classic philosophical skeptics challenge us to consider whether we could merely be dreaming, or raise the possibility that we could be living in a computer simulation like the Matrix. Since these are possibilities that I can’t rule out – I really can’t tell whether I’m dreaming right now or whether I’m awake, and if I were in a computer simulation I would never know it – it seems like I’m stuck: for all I know I could very well be wrong about everything I thought that I knew.

In the real world, skepticism is typically much more narrowly focused: someone expresses a belief, and that belief is called into question because of the possibility that someone could be wrong. Again, this can be a good thing: it is a good practice to call one’s beliefs into question and to make sure that one has good reason to believe them. But it can also be unhelpful: when we have a significant amount of evidence, raising the mere possibility of being wrong can be a distraction, something that prevents us from believing what’s true. Conspiracy theories are often based on unfounded skepticism: that the moon landing was faked in a Hollywood studio, or that the roundness of the Earth is a NASA plot are both possibilities, but not ones that most people take seriously. We should only pay attention to the skeptic, it seems, when they have good reasons for their skepticism.

Trump’s skepticism seems to fall squarely into the category of that which we should ignore, as there is significant evidence that the widely reported numbers are accurate – for instance, in the form of an independent report conducted by The George Washington University – and no evidence that they have been fabricated. While it is still possible that the report was conducted incorrectly, and possible that there is a conspiracy at play in an attempt to further discredit Donald Trump, these possibilities are not ones that we really need to take seriously: there is no evidence for these claims, and so much evidence that they are false, that we should not be worried about being wrong.

One worry with Trump’s recent tweets, then, is that he is spreading false information. However, expressing his skepticism in this way has moral consequences, as well. By convincing others that the reported death toll is incorrect, Trump not only makes them less inclined to provide any assistance (say, in the form of donations to those affected by Hurricane Maria), but also threatens to strip from Puerto Ricans the right to seek such assistance. The people of Puerto Rico should be considered victims of a natural disaster, and as such we have certain obligations to help them. Trump’s skepticism, however, attempts to release himself and his followers from any such obligations. The more significant problem behind Trump’s tweets, then, is not merely a dispute about numbers, but rather that an unfounded skepticism of reliable reports can result in lasting damage to people in need of aid.

Just How Useful is the Trolley Problem?

Image of a streetcar in a city.

Philosophy can be perceived as a rather dry, boring subject. Perhaps for that very reason, popularizers have attempted to use stimulating and provocative thought experiments and hypothetical scenarios in order to engage students and get them to think about deep problems.

Surely one of the most popular thought experiments is the so-called “Trolley Problem”, widely discussed across American colleges as a way to introduce ethics. It actually goes back to an obscure paper written by Philippa Foot in the 1960s. Foot wondered if a surgeon could ethically kill one healthy patient in order to give her organs to five sick patients, and thus save their lives. Then, she wondered whether the driver of a trolley on course to run over five people could divert the trolley onto another track on which only one person would be killed.

Continue reading “Just How Useful is the Trolley Problem?”

Why Give $75 Million to Philosophy?

Image of Johns Hopkins University's Main Campus

When Bill Miller, a wealthy businessman, recently made a $75 million donation to the philosophy profession—specifically, to the Johns Hopkins philosophy department—philosophers rejoiced in unison, right? Not exactly. Some rejoiced while others engaged in a debate. Mike Huemer, a philosopher at the University of Colorado, kicked it off in a Facebook post, which was reposted at the What’s Wrong? blog.

Continue reading “Why Give $75 Million to Philosophy?”

What Does Kant Have to Say about Conspiracy Theorists?

An old diagram depicting a scientist's theory about a flat earth.

The Economist reported last week that more and more Americans are coming to believe the Earth is shaped like a pancake and not like a ball. The report comes as California resident Mike Hughes, hoping to prove our home planet is flat, is finalizing plans to fling himself 1,800 feet into the atmosphere above the desert in a homemade rocket in order to take a snapshot of Earth.

These are just the latest in a recent flurry of flat-Earth blips on our national radar. In January 2016, Atlanta rapper B.o.B. unloosed a torrent of tweets insisting the Earth is flat, attracting the ultimately unheeded Twitter refutations of prominent astrophysicist Neil deGrasse Tyson.

Continue reading “What Does Kant Have to Say about Conspiracy Theorists?”

The Kilogram: A Case Study in How We Think about Norms

A photo of the kilogram standard encased in a glass jar.

To establish a standard of physical measurement — the meter, the newton, the kilogram — is to establish the possibility of objective answers to questions such as, “Am I allowed to carry my luggage on,” “How much will I have to work to cram my overstuffed suitcase into the overhead bin,” and “Is my luggage allowed in the cargo hold?” But what about the standards themselves? Is there, say, an objective answer to the question, “Is it good to use this definition of the kilogram?”

Continue reading “The Kilogram: A Case Study in How We Think about Norms”

What Happened at Ramjas: Tyranny of the Nation

The censorship, riots, and public outcry surrounding the events at Ramjas College in Delhi, India, sparked public debate about the future of India as a democracy. What happened at Ramjas – as explained in the first article of this series, “What Happened at Ramjas: A Voiceless India” – was a clear violation of Indians’ right to free speech in the name of nationalism. Identifying the philosophical structures used to justify actions on both sides will help us gain a better understanding of a pressing issue facing modern-day India.

Continue reading “What Happened at Ramjas: Tyranny of the Nation”

Should We Avoid or Engage Moral Dilemmas?

It is common for parents to teach their children to avoid moral danger. Parental advice includes avoiding certain peer groups, adhering to a curfew, and ensuring that responsible adult supervision is always present.  Parents tend to think that these kinds of policies make it more likely that their children won’t encounter situations in which they might make bad decisions.  

Continue reading “Should We Avoid or Engage Moral Dilemmas?”

What Does John Stuart Mill Have to Say about the Hijab?

The European Union’s highest court has recently ruled that companies are allowed to ban hijabs in their workplaces. The ruling is a response to two cases: Samira Achbita, a woman working for a company in Belgium, was fired over her refusal to take off her veil at work; Asma Bougnaoui was likewise fired by a company in France for the same reason.

This is yet another battle in the long hijab wars that have been fought in Europe over the last 20 years. As usual, there is a political alignment on this issue: the far right welcomes such bans, the multicultural left vehemently opposes them, and the rest of the parties are either undecided, or simply confused, about where they stand.

Continue reading “What Does John Stuart Mill Have to Say about the Hijab?”

Will Robots Ever Deserve Moral and Legal Rights?

Twenty-one years ago (February 10, 1996), Deep Blue, an IBM supercomputer, defeated Russian grandmaster Garry Kasparov in a game of chess. Kasparov ultimately won the overall match, but a rematch in May of 1997 went to Deep Blue. About six years ago (February 14-15, 2011), another IBM creation named Watson defeated champions Ken Jennings and Brad Rutter in televised Jeopardy! matches.

The capabilities of computers continue to expand dramatically and surpass human intelligence in certain specific tasks, and computing power may develop in the next several decades to match human capacities in areas of emotional intelligence, autonomous decision making, and artistic imagination. When machines achieve cognitive capacities that make them resemble humans as thinking, feeling beings, ought we to accord them legal rights? What about moral rights?

Continue reading “Will Robots Ever Deserve Moral and Legal Rights?”

Hey Hey, Ho Ho: Does Western Civ Have to Go?

Is colonialism a bad thing? It is fashionable to think so, and with good reason. Genocide, racism, slavery, depredation, epidemics, cultural inferiority complexes, etc., are all traceable to Europe’s colonial expansion beginning in the 16th Century. It would be naïve to think it is over, even if the United Nations’ list of non-self-governing territories is rather short. Colonialism persists. Whether it is America invading Iraq to get its oil, or Nike setting up sweatshops in Bangladesh, colonialism is alive and kicking, and it continues to cause great damage to people of color.

Continue reading “Hey Hey, Ho Ho: Does Western Civ Have to Go?”

Reckoning with the Legacy of Derek Parfit

Philosopher Derek Parfit died on January 1st. Let us hope he will go to heaven. Will he? Parfit, who was an agnostic, was not much concerned with the existence of heaven or hell. But he did famously argue that, even if such places do exist, the person going there would not be the same person who previously died. And thus, someone would be punished or rewarded for the deeds of another person. This is deeply unjust, as unfair as sending someone to prison because of the crimes committed by his identical twin brother.

Continue reading “Reckoning with the Legacy of Derek Parfit”

When Is It Rational to Trust a Stranger?

This post originally appeared October 13, 2015.

A father hands over the keys to his house to a stranger, his children fast asleep upstairs. Two grandparents share their living room with a traveling salesman in town for the week. A young woman falls asleep in the guest room of a man she has never met before that night. While such scenarios may sound like the beginning of a horror film, millions of individuals in over 190 countries now rely on online services to rent lodgings, most often in private homes. The broader sharing economy encompasses, among other things, the sharing of workspace, the sharing of automobiles, and even the sharing of home-cooked meals. In some cases what is shared is publicly owned, as in most bicycle-sharing schemes. But typically one party owns what is shared in an exchange between strangers.

All this cooperative activity between strangers is taking place in an age when parents feel the need to install “nanny-cams” in their children’s rooms, companies monitor their employees’ web surfing, and a proliferation of online services allows anyone to order a background check on anyone else. Do these apparently divergent cultural trends point to a more fundamental polarization of values? Or do they simply represent differential responses to varying social circumstances?

To the skeptic, the trustful enthusiasm of the sharing economy is a symptom of naïveté. The only cure is a painful experience with a cynical breach of faith. Recent cases like the alleged sexual assault of an airbnb guest are the canaries in the coal mine. To the optimist, these sensational cases are remarkable precisely because of their rarity. What the amazingly rapid growth of the sharing economy teaches us is that human beings, in aggregate, are much more trustworthy than previously imagined.

I think that both the skeptic and the optimist have got it wrong. On the one hand, it’s silly to think that involvement in the sharing economy confers upon its participants the esteemed moral character trait of trustworthiness. On the other hand, the trusting attitudes manifested in many corners of the sharing economy are both rational and prudent, under the right conditions.

Borrowing a term from Princeton philosopher Philip Pettit, I will refer to these as the conditions for trust-responsiveness. In a paper delightfully entitled “The Cunning of Trust,” Pettit makes the case that you can have reason to trust others even if you have no antecedent knowledge of their reliability. This is because you can make them responsive to your trust simply by communicating that you trust them. This might seem like pulling a rabbit out of a hat. But on reflection, the dynamic is not unfamiliar.

Pettit’s analysis rests on a relatively uncontroversial psychological claim: human beings care very much about their standing in the eyes of others, and are often moved by love of regard. In his Theory of Moral Sentiments, Adam Smith, the father of modern economics and a keen psychologist, went so far as to say that Nature has endowed Man “with an original desire to please, and an original aversion to offend his brethren. She taught him to feel pleasure in their favorable, and pain in their unfavorable regard. She rendered their approbation most flattering and most agreeable to him for its own sake; and their disapprobation most mortifying and offensive.”

What other people think of us matters to us a great deal. In particular, a reputation for trustworthiness has both intrinsic and instrumental value. Pettit notices that when people decide to rely on us, they signal to others that they regard us as trustworthy. We are motivated to be responsive to their trust because we want that signal to be broadcast. As Pettit puts it, “The act of trust will communicate in the most credible currency available to human beings – in the gold currency of action, not the paper money of words – that the trustor believes the trustee to be truly trustworthy, or is prepared to act on the presumption that he is.”

When a person is widely perceived to be trusted, he or she gains a highly valuable status. When we manifest trusting reliance, we give the person we rely on an incentive to do the very thing we rely on them to do, because they want to cultivate and maintain that status. This is why the trust of strangers can be a rational gamble. It is never a sure bet, but it is a good bet more often than one might imagine. And this is why the skeptic is wrong. The dramatic growth of the sharing economy is predicated on fundamental facts about human psychology.

But the optimist gets something wrong as well. There is no necessary connection between ubiquitous sharing and the dawn of a new age of trustworthiness. Trust-responsiveness and trustworthiness are altogether different animals. A trustworthy person will do what he or she is trusted to do regardless of whether anyone else is watching. This is why we hold trustworthy people in esteem and think that trustworthiness is a morally desirable trait of character. In contrast, trust-responsiveness is predicated on a desire for good opinion and is therefore, at best, morally neutral. Moreover, trust-responsiveness will only survive under certain institutional conditions.

It’s worth noting that these conditions exist par excellence in many corners of the sharing economy. The oxygen in which this economy exists is the collection and dissemination of reviews. On airbnb, for example, hosts who meet certain criteria of responsiveness, commitment, and five-star reviews are granted the coveted status of “superhost,” which is signified by a red and yellow badge of approval on their profile. This status may increase demand for bookings, thereby providing a financial incentive to hosts looking to juice their profits. It also works because it flatters people who self-identify as open, warm, and hospitable.

But we shouldn’t be too cynical about all this. Aristotle noticed that moral virtue could be acquired by cultivating good habits. It may be that exercise of the dispositions of trust-responsiveness can help cultivate the morally desirable trait of genuine trustworthiness. Maybe. I think the jury is still out on that one.

Our judgments about whether to expose ourselves to the hazard of trust are influenced both by our beliefs and by arational factors. Sometimes we just have a bad feeling about someone: we don’t like the cut of their jib. These kinds of knee-jerk responses can be wiser than our reflective selves, which are prone to rationalization. But just as often our “intuitive” judgments reflect unexamined biases. A 2014 Harvard Business School study found that “non-black hosts are able to charge approximately 12% more than black hosts, holding location, rental characteristics, and quality constant. Moreover, black hosts receive a larger price penalty for having a poor location score relative to non-black hosts.” Clearly we have a long way to go in learning how to trust well and without prejudice.

My own family rents out a garden-level apartment in our house on airbnb. We’ve met many interesting people, including one guest who eventually became a co-author of mine. And the money we earn helps to pay a hefty daycare bill. When we tell our friends and family that we have lent our keys to scores of people, they sometimes respond with horror and disbelief. And to be honest, in some frames of mind, we feel pretty nervous ourselves. But overall I think we are making a rational bet, and not one that presupposes a Pollyannaish faith in humanity. Of course, a truly malicious person can always rack up sterling reviews with the express purpose of lowering his victim’s defenses. But this kind of evil, like moral virtue, is rare.

My other work on trust and promises can be found here.

Pleasure (and Happiness and Good Lives)

Philosophers known as hedonists, and probably some slightly more normal people as well, have held that pleasure is the only thing we desire in itself, that pleasure is the only thing good in itself, and that it is the only thing that makes a person’s life good. To evaluate these claims, we must distinguish three types of pleasure: sensory, or the pleasure of a massage or caress; intentional, or taking pleasure in some object or activity, as when I take pleasure in a round of golf or in the new car that I own; and pure feeling, or the warm glow we get when learning of some award or accomplishment.

Philosophers these days often seek to reduce the first and third types to the second. Sensory pleasure is supposed to be simply a sensation we take pleasure in. But this will not do. We can have sensory pleasures that we take no pleasure in, if they are guilty or addictive pleasures. Then we have sensory pleasure but no intentional pleasure. And masochists take pleasure in sensory pains. Then they have intentional pleasure but sensory pain. Nor can the reduction go the other way, attempts at which used to be more common. We can take pleasure in various activities without having any particular sensations. The pure feeling type falls between the other two: it is a bodily feeling, but without specific location in the body, and it takes objects, as does the intentional kind.

Now we can ask whether any of these types fills the exalted bill of the hedonists. We sometimes do aim at sensory pleasures, as in sex, food, and music, and they are good. We might feel frustrated or impoverished without them. But unless we are Don Giovanni, Falstaff, or Mozart, they are not the cornerstones of a good life. Pure feeling pleasures, or warm glows, are far rarer and are not aimed at directly. Intentional pleasures are more diverse and numerous. They are therefore the best candidates for sources of goodness in our lives and goals of our desires. Focusing on intentional pleasure thus makes hedonism more plausible, while naive attacks on hedonism most often implicitly target sensory pleasures.

Nevertheless, even the more sophisticated versions of hedonism, which view intentional pleasure as our ultimate goal and/or source of value, do not in my view survive close reflection. We take pleasure in many different kinds of objects and activities. But we do not aim directly at taking pleasure. Instead, we aim to engage in the activities and experience the objects, the pleasure being a byproduct of their successful pursuit. In fact, we find activities most pleasurable when we are fully “in the flow,” and are therefore directly aware of the pleasure we take in them only in retrospect or in future prospect. When we desire an object, we typically have pleasant thoughts about it, but we desire the object, not the pleasant thoughts or the pleasure we will take in fulfilling that desire. Thus, pleasure of the intentional type is not the typical aim or object of our desires.

So what’s all the fuss about pleasure, especially among philosophers? One explanation is the equation, or I would say confusion, of pleasure with happiness. But happiness is not of fundamental importance to a good life either. In my view happiness is a judgment, most often implicit, that one’s life is going well, a judgment that sometimes produces a feeling of pleasure. What is really important is that one’s life is going well: that one’s rational desires, by which I mean prioritized, coherent, and informed desires, are being satisfied. Desires are coherent when the satisfaction of one does not frustrate the satisfaction of more or deeper desires. They are informed when one knows what it would be like to satisfy them. The satisfaction of such desires is a measure of one’s welfare, of how good one’s life is overall or at a given time. Nonsensory pleasures are both effects and symptoms of a high degree of welfare. But they come from fulfilling desires for more important things. It is the fulfillment of rational desires that brings value to our lives, not the pleasure we take in doing so, although the pleasure is a sign or reflection of that value.

It has been my pleasure to produce this post. But the important thing is that I have produced it and that you have read it, whether or not you took pleasure in doing so (but I hope you did).

Is Envy Always Malicious? (Part One)


This post originally appeared on November 18, 2014.

Imagine you check your email and find a congratulatory message from your boss announcing that your colleague has just been promoted. This colleague joined the company at approximately the same time as you did, and works in your sector. You were in line for the same promotion and were anxiously waiting for the outcome. How do you feel?

It’s reasonable to suppose that you might feel a burning, intense, painful bout of an emotion hard to confess even to yourself: envy. It doesn’t matter that you like this person, that she works hard, that she is brilliant and competent, and that she fully deserves this honor. If anything, being fully aware of her merits is likely to make you feel worse. Even if you acknowledge that it is an objectively just outcome, you can’t help but ask “Why not me?!”

Let me assure you that you are in good company: envy can be found at all ages, in all genders, and in all strata of society. According to anthropologists (Foster 1972, Lindholm 2008), it is a panhuman phenomenon whose disquieting traces can be found everywhere in human history and culture. The Bible, the sacred text for Jews and Christians, is filled with stories of envy, most notably that of Cain and Abel. The fall of Adam and Eve, Satan’s rebellion against God, and Christ’s crucifixion have all been interpreted as caused by envy (Aquaro 2004, Schimmel 2008). All major religions condemn envy, and most peasant and tribal societies share the superstition of the evil eye, a destructive power emanating—usually involuntarily—from the look of an envious person. Many publicized crimes and intergroup conflicts have been attributed to envy (Schoeck 1969, Beck 1999). Young children are warned about the evils of envy when hearing the fables of Cinderella, Snow White, and many others. There are countless literary tales of envy and the misdeeds it provokes, but we do not need to look to fiction to see what envy looks like. As advertisers, economists, and psychoanalysts all know, envy populates our daily interactions (Belk 2008, Vidaillet 2008, Zizzo 2008).

Notwithstanding its ubiquity, envy may be the only “deadly sin” that is still considered unforgivable and difficult to confess openly and straightforwardly. As François de La Rochefoucauld vividly put it: “We can often be vain of our passions, even the guiltiest ones; but envy is so sneaking and shameful that we never dare confess it” (Maxims). Even though not everybody frames it in moral terms, feeling and expressing envy is still stigmatized and treated as a social taboo. And yet, feeling and acting on envy is as widespread as ever, as the popularity of novels, self-help books, and editorials that deal with the topic shows. This is unsurprising: class inequality has not only not disappeared, but has arguably increased in some affluent societies, like the United States. Furthermore, there will always be scarcity of some goods, such as honor and “coolness,” and hence competition for them, and envy of those who succeed in securing them.

Envy thus has an important positive signaling value: it reveals to us what we care about, what we feel we lack, and what we are prepared to do to get it. But is it otherwise always bad? In my work I argue that while envy undoubtedly shows a dark side of human nature—our tendency to covet the possessions and talents of our neighbors, cast an evil eye on them, and rejoice in their misfortune—it also presents a more luminous one: our tendency to work hard in order to reach and surpass those neighbors, and to strive for excellence.

Furthermore, I don’t believe that envy necessarily involves hostility and aggression toward the envied. Social psychologists and philosophers are divided about whether a non-malicious emotion can be appropriately categorized as envy. I defend the view that what I call “emulative envy,” while being a kind of envy proper, is neither morally nor prudentially bad. In my next post I will describe what this emotion looks like. In the meantime, what do you think: is envy always malicious? Have you ever felt a “benign” kind of envy?

Citations
Aquaro, G. R. A. 2004, Death by Envy: The Evil Eye and Envy in the Christian Tradition, Lincoln, NE: iUniverse.
Beck, A. 1999, Prisoners of Hate: The Cognitive Basis of Anger, Hostility, and Violence, New York: HarperCollins.
Belk, R. W. 2008, “Marketing and Envy,” in Smith, R. H. (ed.), 211–226.
Foster, G. M. 1972, “The Anatomy of Envy: A Study in Symbolic Behavior,” Current Anthropology, 13: 165–202.
Lindholm, C. 2008, “Culture and Envy,” in Smith, R. H. (ed.), 227–244.
Schimmel, S. 1997, The Seven Deadly Sins: Jewish, Christian, and Classical Reflections on Human Psychology, Oxford: Oxford University Press.
Schimmel, S. 2008, “Envy in Jewish Thought and Literature,” in Smith, R. H. (ed.), 16–38.
Schoeck, H. 1969, Envy: A Theory of Social Behaviour, San Diego: Harcourt, Brace & World.
Smith, R. H. (ed.) 2008, Envy: Theory and Research, Oxford: Oxford University Press.
Vidaillet, B. 2008, “Psychoanalytic Contributions to Understanding Envy: Classical and Contemporary Perspectives,” in Smith, R. H. (ed.), 267–289.
Zizzo, D. J. 2008, “The Cognitive and Behavioral Economics of Envy,” in Smith, R. H. (ed.), 190–210.

To Be Ethical Beings: The Refugee Crisis and Europe

An opinion piece in The New York Times by Aaron James Wendland details what we owe each other – specifically, regarding refugees – if we are to be ethical beings. He uses the works of Jewish philosopher and Holocaust survivor Emmanuel Levinas, whose family was killed by the Nazis, to explain “the concrete source of ethical relations between human beings: our ability to respond to the wants and needs of others.”

Continue reading “To Be Ethical Beings: The Refugee Crisis and Europe”