
Letting the Poor Die

photograph of empty hospital bed in dark room

In 2021, one out of every thirty Canadian deaths was the result of euthanasia. In March of this year, access to euthanasia will be further expanded to cover mental illness (raising issues Daniel Burkett discusses here and here).

“I find that the act of offering the option of an assisted death is one of the most therapeutic things we do,” says Stefanie Green, a Canadian doctor who specializes in euthanasia and has written a book about her experiences. Euthanasia can be a welcome solution to severe and incurable end-of-life pain. It may be difficult to think about, but there is surely such a thing as a life, even a human life, that is no longer worth living. Death can be a mercy, and euthanasia a means of providing that mercy. The mere option of euthanasia in the future, if needed, offers profound comfort to many who are diagnosed with conditions which could result in extreme suffering.

However, a number of stories that have recently surfaced in Canada exhibit the same worrying pattern. Among the thousands seeking euthanasia each year in Canada are some who are motivated, at least in part, by poverty or some other lack of basic resources.

For instance, Althea Gibb-Carsley, a former care coordinator and social worker at Vancouver Coastal Health’s assisted dying program, describes Mary, 55. Mary suffers from fibromyalgia and chronic fatigue, causing pain and suffering. Mary knows she could ease her symptoms with vitamins, a special diet, and physiotherapy, all of which she cannot afford. “Mary identifies poverty as the driver of her [euthanasia] request,” writes Gibb-Carsley. “What she really needs is an extra $600 or so each month.”

Mary’s case is not unique. Christine Gauthier, a paraplegic retired Army Corporal, claims she was offered a medically assisted death after having requested a home wheelchair ramp for five years. Les Laundry, a 65-year-old pensioner fearing homelessness, reportedly received approval for euthanasia from a doctor after having told assessors he “doesn’t want to die” but “can’t afford to live comfortably.”

Cases such as those of Mary, Gauthier, and Laundry seem to contradict the promise of Justin Trudeau, the Prime Minister of Canada, that nobody would be euthanized “because you’re not getting the support and care that you actually need.” Trudeau’s promise also seems to be dismissed by at least some euthanasia providers. Alexander Raikin reports in The New Atlantis that he was told by a euthanasia provider that “we live in a society where we have structural inequalities. And that really sucks… But it doesn’t mean that we should be sheltering people from the option of having an assisted death.”

The seeming acceptance of poverty-induced euthanasia has led Ross Douthat, opinion columnist at The New York Times, to ask:

What if a society remains liberal but ceases to be civilized?… The idea that human rights encompass a right to self-destruction, the conceit that people in a state of terrible suffering and vulnerability are really “free” to make a choice that ends all choices, the idea that a healing profession should include death in its battery of treatments — these are inherently destructive ideas. Left unchecked, they will forge a cruel brave new world, a dehumanizing final chapter for the liberal story.

Douthat’s arguments against euthanasia are fairly explicitly based on Christian values – human life is not merely valuable but sacred. The right to bodily autonomy does not, in Douthat’s view, extend to a right to self-destruction. The unspoken premise in his arguments is that, since human life is sacred, the choice of death is not rightfully ours to make.

A more secular worry is that offering a new option to someone who is vulnerable and lacks good alternatives might itself be coercive. The anthropologist Lawrence Cohen found that in India, where paid kidney donation was legal, impoverished women were offered loans by creditors if they would sign over their kidneys as collateral. This could be seen as increasing these women’s options and hence raising their autonomy. Offering these kidney-backed loans could allow these women to start a successful small business, for instance. But there were also disturbing reports of family members pressuring and coercing women into signing the contracts for the benefit of the family. Without the option provided by these creditors, these women would have never faced such coercion. Perhaps giving them the option, ironically, made some of them less free.

We might worry about a similar dynamic in the Canadian case. If euthanasia is illegal, there is no possibility of the sick, poor, mentally ill, or otherwise “burdensome” feeling pressured to choose it. By providing that choice, we may reduce their autonomy if we inadvertently generate coercive pressure on them to “choose” it.

Richard Hanania, on the other hand, describes the permissive Canadian approach to euthanasia as “moral progress”:

It is true, for example, that some people might feel “pressured” to commit suicide because they don’t want to be burdens on their families or the government. I don’t think there’s anything wrong with this — in practically every other kind of situation, it is usually considered pro-social to care about the impact your life has on others.

As for the apparent links between euthanasia and poverty, Hanania claims, “one can’t turn around and say financial considerations should never have a role to play in the decisions people make.”

Hanania is right about this much: the choice to seek assistance in dying will inevitably take place within the entire context of a human life, a life which itself unfolds amid a complex economic and social system over which an individual has relatively little control.

It may be that we can never entirely remove financial considerations from an individual’s choices, including their choices over life and death. But does that mean we don’t have to worry about them, from a moral point of view? The answer could depend on whether we view the financial considerations as removing a person’s autonomy, their capacity to make a “free choice.”

Consider the concept of “wage slavery.” Thinkers from Cicero to Karl Marx to Noam Chomsky have argued that those who earn low wages with no alternative sources of income enjoy little more freedom than legal slaves. These workers’ “choice” to continue working is not clearly a genuinely free one. “They must therefore find someone to hire them, or die of hunger. Is that to be free?” wrote a French journalist around the time of the revolution. If a “wage slave,” someone in a state of economic desperation, unable to escape the cycle of poverty or move to somewhere with better job prospects, decided to endanger their health and sell a kidney, or take on the risks associated with street prostitution, we could hardly comfort ourselves with the thought, “But it was their choice!” Their financial situation has made them particularly vulnerable and left them with no decent options to choose from.

Similarly, it provides little comfort that Canadians seeking euthanasia are “choosing” to do so if they are being driven to that choice by their lack of decent alternatives, caused by their financial situations. In this vulnerable state, their choices might not reflect their sincere wishes. Their choices might not be fully free and autonomous.

Even if these autonomy-related concerns are set aside, the emerging links between euthanasia and poverty remain a potent reminder of the extent of the financial desperation which still exists even in highly developed societies like ours – a desperation that preceded euthanasia’s legalization and would outlive any ban.

With Students Like These, Who Needs a Fatwa?

photograph of empty classroom

On August 12, 2022, a twenty-four-year-old man nearly murdered Salman Rushdie for something he wrote before the man was born. The assailant set upon Rushdie as he was about to deliver a public lecture at the Chautauqua Institution in New York, stabbing him multiple times in the neck, stomach, eye, and chest and inflicting over twenty wounds. That night, Rushdie’s life seemed to be hanging in the balance. But the next day, the world learned that he would survive, albeit with grave and permanent injuries, including the loss of sight in one eye and the use of one hand. Nevertheless, Ayatollah Khomeini’s religious decree or fatwa, issued in 1989 and calling for Rushdie’s assassination, remains unfulfilled.

Yet reading about recent statements and actions of students and administrators at Hamline University in St. Paul, Minnesota, one could be forgiven for concluding that Khomeini’s message has, at least in part, carried the day.

Last fall, an adjunct professor there was fired for displaying a fourteenth-century painting of the Prophet Muhammed in her class after a student complained that showing the image was an act of “disrespect.” In the fatwa against Rushdie, Khomeini explained that Rushdie’s murder was warranted because his novel, The Satanic Verses, “insult[s] the sacred beliefs of Muslims . . . .” Of course, no one at Hamline was calling for the lecturer’s blood. But the claims that showing the image was, as school officials averred, “undeniably inconsiderate, disrespectful, and Islamophobic,” and that avoiding “disrespecting and offending” Islam should always “supersede academic freedom,” are ideas that seem more at home in an Islamic theocracy than in a liberal democracy.

Not being an expert in the history or theology of Islamic iconoclasm, I will not engage with the argument that showing an image of the Prophet is always and clearly Islamophobic. It’s worth noting, though, that the image was created for an illustrated world history commissioned by a Muslim ruler of Persia and written by a Muslim convert. That fact in itself suggests a further fact that, at least outside of Hamline’s Office of Inclusive Excellence, is widely acknowledged: there is a broad range of views within Islam about the propriety of such depictions. As Amna Khalid trenchantly observes, it is the assumption that the Muslim community is a monolith with respect to this issue that seems Islamophobic.

But besides bolstering the argument that showing the image served a legitimate pedagogical purpose and was not aimed at causing offense, the contention that the image is not insulting to all Muslims is somewhat beside the point. The real question is: even if it were, would that automatically make showing it in a university classroom impermissible?

Suppose that, in a class about the history of European political satire or journalistic ethics, a professor displayed the cartoons whose publication by the French satirical magazine Charlie Hebdo led to the murder of twelve people in 2015. These cartoons are undeniably irreverent and, yes, even insulting and offensive to some. But unless showing them has no pedagogical benefit under any set of circumstances — unless it is undeniably an attempt simply to insult students — academic freedom absolutely supersedes these students’ hurt feelings. The very idea of an institution dedicated to the production and dissemination of knowledge, through the exchange of ideas and arguments among diverse participants, each with their own unique perspective, depends upon this principle. If anyone’s bare claim to be disrespected, offended, or insulted is sufficient to justify censorship, then there is almost no topic of any human interest that can be discussed with the candor required to examine it at any level of depth or sophistication.

It seems, however, that a non-trivial number of students at Hamline disagree with me. When the university’s student newspaper, The Oracle, published a defense of the lecturer written by Prof. Mark Berkson, the chair of the Hamline Department of Religion, the ensuing backlash led its editorial board to retract the article within days. In an unsigned editorial explaining the move, the board wrote that because one of its “core tenets” is to “minimize harm,” the publication “will not participate in conversations where a person must defend their lived experience or trauma as topics of discussion or debate.” In other words, publishing the chair’s defense adversely affected other students by “challenging” their “trauma.”

There are two features of this argument I find interesting: the “minimize harm” principle, and the use of the term “trauma.” Both, I think, can be fruitfully examined in light of a useful distinction the philosopher Sally Haslanger draws between a term’s manifest concept and its operative concept.

According to Haslanger, a term’s manifest concept is determined by the meaning that language users understand a term to have; it is the term’s “ordinary” or “dictionary” definition. By contrast, its operative concept is determined by the properties or entities actually tracked by the linguistic practice in which the term is employed. In her work on race, Haslanger observes that the manifest concepts associated with the term “race” and similar terms include some biological or physical components, yet the way we actually apply these terms does not track any physical characteristic (think of how the term “white” was once not applied to Sicilians).

Using this distinction, we can see how the editorial board performs a neat sleight of hand in its use of the term “trauma.” The dictionary definition or manifest concept of “trauma” is something like Merriam-Webster’s “a disordered psychic or behavioral state resulting from severe emotional stress or physical injury.” When The Oracle’s editorial board uses the term, and further, implicitly asserts that no one should question whether a person’s trauma is warranted or justified, this sounds eminently reasonable because of the term’s manifest concept. But when we look at how the board actually uses the term, it becomes clear that its operative concept is something like “insult, offense, or a feeling of being disrespected.” Once we see this, the claim that a person’s “trauma” should never be questioned begins to look quite doubtful. A person may be mistaken in feeling insulted or offended, and in such situations, it may sometimes be permissible to respectfully point this fact out to them. This is precisely what Prof. Berkson was trying to do in his defense of the lecturer. And once again, I must insist that it is even sometimes justifiable to cause offense in the classroom in order to achieve a legitimate pedagogical goal.

There is another sleight of hand at play in the board’s “minimize harm” principle. The board invokes the Pulitzer Center’s characterization of this principle as involving “compassion and sensitivity for those who may be adversely affected by news coverage.” On its face, this seems beyond reproach — particularly since the Center’s definition clearly implies that newspapers may justifiably publish material that adversely affects others, so long as they do so in a sensitive and compassionate manner. But the board’s application of the principle to this case reveals that for it, “minimize harm” really means “cause no harm,” or even “cause no offense.”

While the principle of minimizing harm implicitly calls for exercising moral judgment in weighing whether the harm caused is justified by the benefits to be gained, and moral courage in defending that judgment when it is challenged, the principle of causing no harm is, for journalists, equivalent to a demand that they not do their job.

For example, if The Oracle published an article uncovering massive corruption in the Office of Inclusive Excellence that led to multiple school officials’ termination, it would cause concrete harm to those officials. “Cause no offense” is, of course, an even more craven abdication of the journalist’s vocation.

There is a final point that I think is worth making about this sorry affair. Before showing the painting to her students, the lecturer reportedly took every possible precaution to safeguard their exceedingly fragile mental health. She made that particular class activity optional. She provided a trigger warning. And she explained exactly why she was showing the painting: to illustrate how different religions have depicted the divine and how standards for such depictions change over time. She behaved like a true pedagogue. None of this prevented the mindless frenzy that followed. This suggests that instead of actually helping students cope with “trauma,” trigger warnings and the like may actually prime students to have strong emotional reactions that they would not otherwise have. Indeed, the complainant told The New York Times that the lecturer’s provision of a trigger warning actually proved that she shouldn’t have shown the image. What a world.

Why Bother with Political Arguments?

photograph of protestors marching

Moral arguments can seem pointless in our hyper-polarized, post-truth society. People rarely seem to change their minds on hot-button political issues, like abortion, gun control, or climate change. Yet Congress recently passed a bill protecting same-sex marriage, and with the support of many Republicans. That bipartisan effort would have been impossible ten years ago.

Does social change like this result from moral arguments, though? Moral reasoning gets a bad rap, but it truly fuels moral progress, so long as it comes from a place of mutual trust and respect.

Ethics professors (like us) might be prone to valorize moral reasoning. We study moral arguments for a living, yet don’t appear to be more ethical than other people. We’re just skilled at poking holes in opposing moral views or coming up with reasons to support our own.

If anything, arguing about politics only seems to make each side dig in their heels (or worse, become more extreme). Cognitive science reveals that, when it comes to ethics and politics, we regularly use reasoning to rationalize the values we already hold. Climate activists often assume that skeptics just don’t understand the science. But research increasingly suggests that the more skeptics know about the science, the less they think climate change is a serious threat.

Yet some political partisans do change their minds in light of arguments. For years, Jerry Taylor, a former fellow of the Cato Institute, churned out talking points for climate skeptics. Respected peers eventually convinced him of flaws in his sources and holes in his arguments. Eventually Taylor’s skepticism eroded, and he left Cato to become a climate activist.

It’s not just conservatives who change their minds in response to reasons. The science writer Bethany Brookshire once produced a viral tweet about the apparent sexism hitting her inbox. Of the scientists who corresponded with her, men seemed much less likely to start their emails with her proper title, “Dr. Brookshire.” However, going back through her emails revealed that it was women who were slightly less likely to use the more formal and respectful title. So Brookshire publicly corrected her mistake.

Even if some people are persuaded by rational argument, aren’t these cases incredibly rare? These stories are anecdotes, but they make vivid a tendency present in us all that’s usually just blocked by other factors. As Julia Galef puts it, although we commonly adopt a “soldier mindset,” hellbent on defending core beliefs at all costs, it isn’t inevitable. We are able to adopt a “scout mindset” aimed at an accurate map of the complex terrain.

Recent experiments suggest ordinary people’s attitudes and behavior can respond to arguments about contemporary moral issues. One intriguing study found that students in a college ethics class who studied a famous moral argument for vegetarianism purchased less meat from the dining hall, compared to another group of students who were randomly assigned to study an argument in favor of donating more to charity. Another series of experiments provided participants an opportunity to donate part of a bonus to a charitable organization. The researchers found that reading moral arguments could increase charitable giving, provided the arguments are engaging. These and other studies provide evidence that moral reasoning can change real moral behavior, not just self-reported attitudes.

The trick is to recognize the many forms of reasoning. Moral arguments can be presented as a boring set of premises that logically lead to a conclusion, or lobbed at opponents within a culture of contempt. But these aren’t the only, or the most effective, forms of moral reasoning.

We humans are not cold robots designed to dispassionately reason alone. Moral reasoning evolved among social primates to resolve problems of interdependent living. Competing ideas arise in dialogue with others, and you’re not going to buy an argument from someone you despise or distrust, and certainly not from someone who treats you with contempt. Mutual trust and respect are required for arguments to be taken seriously in the first place.

Should we conclude that emotions, not reasons, drive social change? It’s both. Mutual trust and respect enable reasoning to do its work of changing moral attitudes.

Consider one way support for same-sex marriage has increased dramatically in the past few decades. Many people — including Republicans like Rob Portman and Dick Cheney — have discovered that a beloved friend or family member is gay. Existing empathy and respect for a loved one removes barriers to understanding the oppression of gay people and to seeing true love between same-sex partners. People have reasoned that if their loved one doesn’t deserve discrimination and stigma, then other gay people don’t either. Consistency reasoning of this sort is ubiquitous in moral life.

Moral arguments from the opposing side are certainly hard to swallow, for they often conflict with our values and challenge our identities. But when we deride reasoning in politics, we’re no better than a physician who concludes that a drug is ineffective because patients refuse to take it. As Taylor emphasizes, once he heard arguments from people he trusted and respected, he opened up, and over time his skepticism began to weaken because he appreciated the arguments.

Even when moral arguments are planted in fertile ground, they are merely sown seeds. And we’re not talking about jalapeño seeds, which produce peppers in a few months. Think avocados, which can take a dozen years to bear fruit. During that time, the ground must remain fertile as well. Combative arguments brimming with contempt can poison the well and yield crop failure.

Moral reasoning so conceived is truly a driving force for social change. Without it, progress is impossible. The key is patience, persistence, and mutual respect. Under the right conditions, moral arguments can move mountains — slowly but surely.

Is Academic Philosophy Pointless?

photograph of Dead End sign where road meets woodline

Back when I taught philosophy, certain students — often the ones most interested in the subject — would invariably confront me at the end of the semester with the same complaint. “I’ve read brilliant arguments for diametrically opposed positions,” they would say, “and I’ve read brilliant critiques of every argument. Now I don’t know which position to choose. And if I can’t choose a position, what was the point of working through them all?” At the time, I didn’t have a good answer for them. I think I have a better answer now — more on that in a bit — but I fundamentally sympathize with their complaint. There is, indeed, something futile about academic philosophy. Or so I will argue.

I left professional philosophy two years ago for a variety of reasons, but mainly because, after three years on the job market, the prospect of securing a tenure-track position at a decent institution appeared dim. Since then, I have had some time to reflect on what I decided to do with my third decade on Earth. I’ve concluded that I’m very happy to have studied philosophy for over ten years, but that I do not in any way regret leaving the profession. In this column, I will explain why I feel this way. Part of the explanation comes back to my students’ complaint.

First, why was getting a PhD worth it for me? I came to graduate school with a burning desire to answer two questions that had puzzled me since high school: what is the nature of moral facts, and what is the true ethical theory? (I didn’t use this language in high school, of course).

After spending a decade thinking about the various answers philosophers have mooted, I arrived at conclusions that remain reasonably satisfactory to me. Even leaving aside the friends I made, the brilliant people I got to talk to, and the other things I learned, getting those answers alone made the experience worthwhile.

I am, however, all too aware that the answers I’ve come to, and the arguments for them that I find convincing, strike a good proportion of academic philosophers — many much smarter and more able than I — as less than compelling. Some have even said so in print. I would expect no less from philosophers, since they are trained to analyze arguments — particularly to see where they may fail.

This leads me to why I don’t regret leaving the profession. The problem is not that I dislike disagreements. The issue I have with academic philosophy is that most of the discipline’s research questions are inherently unresolvable. By “resolution,” I mean the provision of answers or solutions which the preponderance of the available evidence and arguments favor over all others.

In other words, academic philosophy’s questions do not remain unresolved because they’re hard, or because we just haven’t discovered the best arguments or sufficient evidence yet. They are unresolvable in principle, because of their very nature.

Among my reasons for thinking this is that most of the basic questions in academic philosophy have remained pretty much the same for over 2000 years. I’m not an expert in metaphysics or epistemology, but I can confirm that this is true with respect to the most important questions in ethics. Moreover, many prominent contemporary answers to these ethical questions can be found in some form in the classic ancient texts. Jeremy Bentham may have invented the term “utilitarianism” to describe his ethical theory, but the same basic approach can be found in Platonic dialogues and the gnomic pronouncements of Epicurus. And really, if Bentham, John Stuart Mill, Henry Sidgwick, J.J.C. Smart, G.E. Moore, either of the Peters (Singer and Railton), James Griffin, Walter Sinnott-Armstrong, or Richard Brandt — among many, many others — have not come up with arguments for consequentialism that establish it as the theory more likely to be correct than all the others, how likely could it be that such arguments are still out there, waiting to be discovered?

The fact of continued disagreement over these fundamental questions among some of the most brilliant minds of many generations is at least suggestive that these issues will never be resolved — and not because they’re just hard.

Before I explain why I think this fact may make much of academic philosophy pointless, I must observe that judging by their conversation, some philosophers are not willing to concede the essential irresolvability of philosophical questions. I have actually met Kantians who think deontology is not just the right ethical approach, but obviously the right approach. You’d have to be crazy to be a consequentialist. I don’t know how seriously to take this talk; it may be partly explained by various institutional and cultural incentives to engage in intellectual chest-thumping. Still, the fact of persistent disagreement highlighted in the last paragraph surely makes the view that deontology — or consequentialism or virtue ethics — is obviously the correct approach to ethics somewhat farcical. You’d have to be crazy to think plausible answers to deep philosophical problems are ever obviously true or false.

The reason I think that the irresolvability of philosophical problems makes academic philosophy substantially pointless is that academic disciplines that purport to be in the business of evaluating truth claims should be able, at least in principle, to make progress. By “progress,” I mean nothing other than resolving the research questions or problems that characterize that discipline. Note that this view allows that the research questions themselves might change over time; for example, resolving some questions might raise more questions. But the inability of a truth claim-oriented discipline to resolve its research questions is a problem that has to be addressed.

There are a number of ways an advocate for academic philosophy might respond. First, she might point out that there are other truth claim-oriented disciplines in which unresolvable questions are commonplace. All agree that these disciplines are not pointless, so the inference from unresolvable questions to pointlessness is flawed. I’m unable to fully assess this argument because I’m not sufficiently familiar with every truth claim-oriented discipline, and all the advocate of academic philosophy really needs is one example. But I could imagine her invoking some other humanities discipline, like history. Historical questions are often unresolvable, but history’s value as a discipline seems unassailable.

History, though, is different from philosophy in two ways. First, some of the unresolvable questions in history are questions of how best to interpret sets of historical facts, and it’s not clear that the primary criterion for evaluating historical interpretations is related to truth rather than, say, fruitfulness or explanatory power. Did the Holocaust inevitably flow from the logic of Nazism, or was it not inevitable until it became official state policy sometime in 1941? Historians arguing this question all draw on the same body of evidence: for example, the genocidal implications of Hitler’s Mein Kampf; his 1939 speech in which he threatened that if another world war began, European Jewry would be annihilated; his plan to deport Jews to Madagascar after France fell in 1940; and records of the 1942 Wannsee conference. The debate concerns not what the facts are, or whether we have good reasons for believing them, but rather which interpretation of the facts better or more fruitfully explains the Nazi genocide.

More importantly, to the extent that historical questions concern historical truth claims, their irresolvability is a function of the paucity of evidence, not the nature of the questions themselves.

Looked at one way, the Holocaust question hinges on the motives of the historical actors involved. We may simply be unable to determine those motives by a preponderance of the available evidence. This implies that new evidence could come to light that would resolve this question. By contrast, as I’ve suggested, philosophical questions are not unresolvable because we don’t have enough evidence at the moment. They are unresolvable by nature.

It’s no doubt true that many questions in a wide range of disciplines remain, and perhaps always will remain, unresolved. In general, that’s because we lack the evidence required to prove that a particular answer is more likely to be true than all the others. This does not make these disciplines futile, in part because we can’t know a priori whether sufficient evidence will become available to resolve their research questions. We have to do the research first. Moreover, the fact is that many disciplines do resolve their characteristic questions.

A second argument for academic philosophy is that it makes progress of a sort, even if it cannot resolve its questions. Philosophical progress consists in refining competing answers to philosophical questions, as well as the questions themselves. You can find the fundamental tenets of consequentialism in the ancient texts, but modern philosophers have arguably explored the theory at a much higher level of detail, sophistication, and thoroughness. Similarly, modern philosophers have been able to refine our understanding of a classic question in metaethics — why be moral? — with some even arguing that the question isn’t well-formed. Thus, even if academic philosophy doesn’t resolve its questions, its exploration of the logical space of answers is a good enough reason to support it. (Incidentally, this iterative process of refinement has also led philosophers to develop an elaborate jargon that makes cutting-edge articles in ethics nearly impossible for laypeople to understand, but in my view that’s not objectionable in itself.)

Although I grant that this is a form of progress, and it certainly requires great intellectual ingenuity, I’m not sure continual refinement alone can justify a discipline.

Suppose that the question whether the universe is heliocentric were for some reason unresolvable in principle. In this world, astronomers are doomed to merely add more and more elaborate conceptual curlicues to their preferred heliocentric or geocentric theories for all eternity — and they know it. Would this question still be worth the effort and resources expended to try and answer it?

A third argument is that learning and doing philosophy are valuable in all sorts of ways for those who engage in these activities. Among other things, they help individuals and societies think through problems they may actually confront in real life. This is obviously true for subfields like ethics and political philosophy, but it fully applies to epistemology and metaphysics as well. For example, I have argued that a certain view about the nature of race underlies conservatives’ arguments against affirmative action. The question of what races are is a metaphysical question.

There are other very good reasons to learn and do philosophy. Philosophy is intellectually stimulating. It helps develop critical reasoning skills. It promotes both open-mindedness and a healthy skepticism. It helps us ask better questions and evaluate possible answers.

Academic philosophers do and learn philosophy. They therefore benefit in all of the ways I’ve described, and it might be argued that this justifies the discipline. Obviously, this is a dubious argument, since it seems implausible that benefits to practitioners of the discipline alone can justify a discipline. More compelling is the fact that academic philosophers teach students, thereby enabling and encouraging the latter to do and learn philosophy and reap the benefits.

I do not dispute that it is valuable for academic philosophers to teach philosophy. The trouble is that, in my view, the contemporary discipline of academic philosophy is not primarily focused on pedagogy or public outreach. When I was in graduate school, instruction in pedagogy was, to put it charitably, an afterthought. American Philosophical Association meetings, which largely serve as showcases for new research, remain the most important annual events in the academic philosophy world. Of course, some professional philosophers practice the discipline differently from others. At some colleges, research output does not even factor into tenure decisions, and professors therefore focus more on teaching. Yet no one rises in the profession by winning a teaching award or publishing an opinion piece in The New York Times. Prominence in academic philosophy is primarily a function of publishing books and articles that other professional philosophers admire.

So, the value of learning and doing philosophy fails to justify the discipline of philosophy as currently practiced — or so it seems. But the advocate for academic philosophy may reply that effective teaching or public philosophizing actually requires ongoing philosophical research. Imagine if philosophers had stopped doing research in moral philosophy after G.E.M. Anscombe published her famous article, “Modern Moral Philosophy,” in 1958. (In that article, Anscombe declared that “[i]t is not profitable for us at present to do moral philosophy”). In this world, students could study, and professors teach, only books and articles that are at least sixty years old. They could not, for instance, examine any critiques of the arguments found in that article that were published after it appeared. Wouldn’t that be, well, crummy?

This argument has some visceral force for me. It gains added force when we remember that philosophers certainly make a kind of progress by exploring the logical space of possible answers.

Philosophers can enlighten the public about these possible answers, which we sometimes call “traditions” (e.g., the just war tradition), and this can in turn help the public think through real-world problems. Because continual research can uncover more possible answers, it can be valuable for this reason.

Does this justify academic philosophy as currently practiced? Frankly, I’m not sure. In my experience, many philosophical articles are written as if aimed at resolving their questions — something I’ve argued they cannot do in principle. As I’ve mentioned, there is also a heavy emphasis on criticizing opposing views. Is this the best way of exploring the logical space of plausible answers? Adam Smith famously observed that “it is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.” His point is that markets work by exploiting self-interest in ways that redound to society’s benefit. Similarly, the defender of academic philosophy might argue that the best way to explore the logical space of answers to a philosophical question is to incentivize philosophers to believe, or at least to argue as if, their preferred answer actually resolves the question. In other words, what looks to me like a mistaken belief among those Kantians who think or at least act as if consequentialism is obviously wrong may redound to the benefit of philosophy as a whole. Perhaps this is true, but I’m just not sure.

To recap, I’ve argued so far that since academic philosophy cannot resolve its research questions, its only hope of justification lies in its ability to disseminate philosophical ideas and modes of thinking to the broader public. Doing this effectively may require a certain amount of research aimed at exploring the logical space of answers and identifying those that seem most plausible. But for me, it is an open question whether the way research is currently conducted is the best way to explore the logical space of answers.

I must conclude, then, that much of academic philosophy as currently practiced may, indeed, be pointless. Curiously, though, I think I have a better answer to my students’ complaint about why they should study philosophy, despite its inherent irresolvability. As a layman who seeks answers to philosophical questions, one need not wait until arguments are found showing that one answer is more likely to be correct than all the others in order to endorse that answer. One can rationally choose whatever answer is most subjectively satisfactory, as long as it is at least as plausible as any other answer. In addition, the value of learning and doing philosophy does not solely consist in finding answers to difficult questions. As Socrates shows us, it also lies in learning how to ask the right questions.

Can We Declare the Death of “Personal Truth”?

photograph of dictionary entry for "truth"

According to Google Trends, searches for “your truth” and “speaking your truth” began a noticeable rise around the mid-2010s, likely as a response to the MeToo movement. At the time, the concept of speaking a personal truth was met with controversy. Just a few months ago actress Cate Blanchett ridiculed the concept, and now, with the discussion of Prince Harry’s various “personal truths,” it seems it has made its way back into the news. But surely if the pandemic has taught us anything, it’s that facts do matter and misinformation is a growing problem. Can we finally put an end to a concept that might be more harmful than helpful?

Before we consider the problems with the concept of personal truth and the idea of speaking one’s own truth, we should consider the uses and insights such a concept does provide. It isn’t a surprise that the concept of personal truth took on a new prominence in the wake of MeToo. The concept of personal truth emerged in response to a problem where women were not believed or taken seriously in their reports of sexual harassment and sexual assault, prompting a call for the public to “believe women.” It can be powerful to affirm “your” truth in the face of a skeptical world that refuses to take seriously your account as representing “the” truth. As Garance Franke-Ruta explains, “sometimes you know something is real and happened and is wrong, even if the world says it’s just the way things are.”

Oprah helped popularize the concept when she used it during the Golden Globes ceremony and her example can demonstrate another important aspect of the concept. Oprah had a difficult childhood, living in poverty and being abused by her family, and the notion that she was “destined for greatness” was considered to be “her” truth. Many feel a connection to such “personal truths” as they allow people who are rarely heard to tell their story and connect their individual experiences to systematic issues.

In philosophy, standpoint theory holds that an individual’s perspectives are shaped by their social experiences and that marginalized people have a unique perspective in light of their particular experiences of power relations.

Sandra Harding’s concept of “strong objectivity” holds that by focusing on the perspectives of those who are marginalized from knowledge production, we can produce more objective knowledge. Thus, by focusing on what many might very well call “their truths” (in other words what they claim to be true in contrast to those who are not marginalized) we might achieve greater objectivity.

On the other hand, even if we recognize the value of such experiential accounts and even if we recognize that there is a problem when people who are abused aren’t believed, it still doesn’t mean that there is any such thing as personal or subjective truth. There seems to be a growing attitude that people are entitled to believe whatever they want individually. But “personal truth” is a contradiction in terms. To understand why, we can look to John Dewey’s “The Problem of Truth,” which investigates truth not only as a logical concept but as a social one as well.

Truth is supposed to be authoritative. If I tell you something is my opinion, nothing follows from that. If, on the other hand, I state that my opinion is true, then the claim takes on an authority that forces others to evaluate for themselves whether they believe it is true or false. As Dewey explains, “The opposite of truth is not error, but lying, the willful misleading of others.” To represent things as they are

is to represent them in ways that maintain a common understanding; to misrepresent them is to injure—whether wilfully or no—the conditions of common understanding …understanding is a social necessity because it is a prerequisite of all community of action.

Dewey’s point is that truth developed as a social concept that became necessary for social groups to function. This is important because truth and accountability go hand in hand. When we represent something as the truth, we are making a public statement. To say that something is true means that the claim we are making can be assessed by anyone who might investigate it (with enough training and resources) – it means others can reproduce one’s results and corroborate one’s findings. Something held merely in private, on the other hand, has no truth value. As Dewey explains,

So far as a person’s way of feeling, observing and imagining and stating are not connected with social consequences, so far they have no more to do with truth and falsity than his dreams and reveries. A man’s private affairs are his private affairs, and that is all there is to be said of them. Being nobody else’s business, it is absurd to regard them as either true or false.

While figuratively it can be beneficial to talk about personal truths, ethically it is far more problematic. While many may (rightfully) criticize cultural relativism, at least with cultural relativism, you still have public accountability because culture is the benchmark for truth. In the end, “truth” requires verification. We do not get to claim that something is true until it has survived empirical testing from an ever-growing community of fellow knowers. To claim that something is “true” prior to this, based on individual experience alone, is to take something that rightly belongs to the community. It negates the possibility of delusion or poor interpretation since no one gets to question it. Thus, asserting something to be true on one’s own account is anti-social.

If truth is meant to be publicly accessible and if you are expected to be accountable for things you claim to be true in light of this, then the concept of personal or private truth negates this. If something is true, then it is in light of evidence that extends beyond yourself. Thus, if something is true then there is nothing “personal” about it, and if it is merely personal, it can’t be “true.” Figurative language is nice, but people growing up today hearing about “personal” truths in the media are becoming increasingly confused about the nature of truth, evidence, and reasoning.

As we collectively grapple with growing problems like misinformation, polarization, and conspiracy theories, it is hypocritical to both condemn these things while simultaneously encouraging people to embrace their own personal truths. This notion erases the difference between what is true and what is delusional, and fails to recognize “truth” as a properly social and scientific value. It’s high time we let this concept die.

ChatGPT and Deception

photograph of exams stacked on desk at front of class

For many, writing can be a challenging, even painful, experience. What seems like it should be a simple job – taking the thoughts in your head, turning them into words, and then typing them out – can feel like an almost insurmountable task for some. Heck, I do it for a living, and I still find the blank page to be an ungodly sight, one that I sometimes flee from and promise to tackle later on.

As such, it should be no surprise that the advent of an AI system that can write for us has caused quite a stir. That system is ChatGPT, and while I could explain what it is, it’s more effective (not to mention easier) to let the tool do that for itself:

ChatGPT is a state-of-the-art language generation model developed by OpenAI. It is based on a transformer architecture and pre-trained on a large dataset of text, allowing it to generate human-like text.

ChatGPT can be fine-tuned for various natural language processing tasks such as language translation, text summarization, question answering, and text completion. It can also be used to generate creative writing, poetry, and even code.

One of the key features of ChatGPT is its ability to generate text that is coherent and coherent with the context it was given, making it able to generate text that flows well and makes sense. ChatGPT can be used to generate text on a wide range of topics and can be fine-tuned to suit specific use cases.

It is important to note that ChatGPT is a machine learning model, it is not capable of independent thinking or decision making, it is only able to generate text based on patterns it has learned from the data it was trained on.

I didn’t write any of that. Instead, I gave ChatGPT the prompt “explain what ChatGPT is” and let it fly. As it just noted, however, ChatGPT is not limited to dry, explanatory statements. The system has demonstrated an apparent aptitude for creative, even funny, writing, from explaining quantum theory to a child in the style of Snoop Dogg to creating an analogy for what it’s like to eat the keto diet, from giving an account of Attack on Titan in the manner of Donald Trump to writing a biblical verse explaining how to remove a peanut butter sandwich from a VCR. The tool really does seem adaptable.

Yet, despite the hilarity, ChatGPT’s emergence has brought some pressing issues regarding ownership and authenticity of work to the fore. If an AI generates text for you, can you claim it as your own? For example, Ammaar Reshi is facing considerable backlash for using ChatGPT to write a children’s book (which he then illustrated using Midjourney, an AI art generator). Reshi did not directly write or illustrate the book he is claiming as his product; he gave ChatGPT the required prompts and then used its output.

But it has been in the educational sector that such concerns have really taken hold, so much so that some, such as New York City’s Department of Education, have blocked access to ChatGPT on school devices for fear of its misuse. The problems are relatively easy to grasp:

What is stopping students from passing off ChatGPT-produced essays and other forms of assessed work as their own? How should educators respond if a student uses ChatGPT to write an essay? And are students actually doing anything wrong if they use ChatGPT like this?

The answer to this last question is vastly complex and intertwined with the very purpose of assessment and learning monitoring. The point of assigning assessments, such as essays, is not simply to have students produce a piece of text. The production of the essay is merely a step towards another goal. These forms of assessment act as a representation of the students’ learning. When a teacher asks you to write a 3,000-word paper on Frederick Douglass, for example, it is not the paper itself they are concerned with, but your ability to recall, appraise, and communicate what you know about Douglass’ life, work, and impact. The essay is a medium through which such appraisal is conducted.

As philosopher Rebecca Mace remarked in an episode of BBC’s Inside Science:

A lot of people, including the newspapers, seem to have misunderstood the point of homework. So the purpose of homework is not to produce an essay, but to assess student understanding in order that the teachers can assist them with the gaps, or work out what they’ve not taught very well, or what they maybe need to go over again, or what that individual student really needs help with. Then the essay itself is irrelevant in many ways because that’s all the essay’s doings; it’s a means to an end.

Thus, according to such a way of thinking, the danger of ChatGPT comes from its potential to misrepresent student learning, giving the impression that a student knows more about a subject than they actually do. The issue is not one of principle but of outcome, and the use of ChatGPT brings with it the risk that learning is negatively impacted.

This stance, however, seems to overlook something important about using ChatGPT in educational settings. If accurate – if the threat of ChatGPT comes from its capacity to hide academic failings (on both the student’s and the teacher’s part) – then we shouldn’t have any qualms about it being used in situations where this isn’t a factor. But academically gifted students who know their subjects inside and out still seem to commit some wrong when they pass algorithmically-generated text off as their own. This wrong emerges not from the impact such usage might have on their academic performance, nor from its impact on their teacher’s ability to assess their grasp of a subject accurately, but from the fact that they are attempting to deceive their assessor. It is wrong not because of its outcome but as a matter of principle – it offends against the virtue of honesty and exemplifies the vice of deception.

That is not to say that this is the only reason why ChatGPT presents a potential harm to education and educational practices. The use of AI to game the academic-assessment system by hiding one’s failure to meet its standards is most certainly a concern (perhaps the central one). But such an acknowledgement should not lead us to overlook the fact that, much like plagiarism, academic wrongs don’t simply emerge from their deleterious impact. They also come from deception – from attempting to pass something off as one’s own work when, in fact, one had minimal input in its creation.

Calibrating Trust Amidst Information Chaos

photograph of Twitter check mark on iphone with Twitter logo in background

It’s been a tumultuous past few months on Twitter. Ever since Elon Musk’s takeover, there have been almost daily news stories about some change to the company or platform, and while there’s no doubt that Musk has his share of fans, many of the changes he’s made have not been well-received. Many of these criticisms have focused on questionable business decisions and almost unfathomable amounts of lost money, but Musk’s reign has also produced a kind of informational chaos that makes it even more difficult to identify good sources of information on Twitter.

For example, one early change that received a lot of attention was the introduction of the “paid blue check mark,” where one could pay for the privilege of having what was previously a feature reserved for notable figures on Twitter. This infamously led to a slew of impersonators creating fake accounts, the most notable being the phony Eli Lilly account that had real-world consequences. In response, changes were made: the paid check system was modified, then re-modified, then color-coded, then the colors changed, and now it’s not clear how the system will work in the future. Additional changes have been proposed, such as a massive increase in the character limit for tweets, although it’s not clear whether they will be implemented. Others have recently made their debut, such as a “view count” that has been added to each tweet, next to “replies,” “retweets,” and “likes.”

It can be difficult to keep up with all the changes. This is not a mere annoyance: since it’s not clear what will happen next, or what some of the symbols on Tweets really represent anymore – such as those aforementioned check marks – it can be difficult for users to find their bearings in order to identify trustworthy sources.

More than a mere cause of confusion, informational chaos presents a real risk of undermining the stability of online indicators that help people evaluate online information.

When evaluating information on social media, people appeal to a range of factors to determine whether they should accept it, for better or for worse. Some of these factors include visible metrics on posts, such as how many times it’s been approved of – be it in the form of a “like” or a “heart” or an “upvote,” etc. – shared, or interacted with in the form of comments, replies, or other measures. This might seem to be a blunt and perhaps ineffective way of evaluating information, but it’s not just that people tend to believe what’s popular: given that in many social media it’s easy to misrepresent oneself and generally just make stuff up, users tend to look to aspects of their social media experience that cannot easily be faked. While it’s of course not impossible to fabricate numbers of likes, retweets, and comments, it is at least more difficult to do so, and so these kinds of markers often serve as quick heuristics to determine if some content is worth engaging with.

There are others. People will use the endorsement of sources they trust when evaluating an unknown source, and the Eli Lilly debacle showed how people used the blue check mark at least as an indicator of authenticity – unsurprisingly, given its original function. Similar markers play the same role on other social media sites – the “verified badge” on Instagram, for example, at least gives users the information that the given account is authentic, although it’s not clear how much “authenticity” translates to “credibility.”

(For something that is so often coveted among influencers and influencer-wannabes, there appears to be surprisingly little research on the actual effects of verification on levels of trust among users: some studies seem to suggest that it makes little to no difference in perceived trustworthiness or engagement, while others suggest the opposite.)

In short: the online world is messy, and it can be hard to get one’s bearings when evaluating the information that comes at one constantly on social media.

This is why making sudden changes to even superficial markers of authenticity and credibility can make this problem significantly worse. While people might not be the best at interpreting these markers in the most reliable ways, having them be stable can at the very least allow us to consider how we should respond to them.

It’s not as though this is the first change that’s been made to how people evaluate entries on social media. In late 2021, YouTube removed publicly-visible counts of how many dislikes videos received, a change that arguably made it more difficult to identify spam, off-topic, or otherwise low-quality videos at a glance. While relying on a heuristic like “don’t trust videos with a bunch of dislikes” is not always going to lead you to the best results, having a stable set of indicators can at least help users calibrate their levels of trust.

So, it’s not that users will be unable to adjust to changes to their favorite online platforms. But with numerous changes of uncertain value or longevity comes disorientation. Combine this with Musk’s recent unbanning of accounts that were previously deemed problematic, which has resulted in an overall increase in misinformation on the site, and conditions become even worse for those looking for trustworthy sources of information online.