
Year in Review

blurred photograph of fireworks in night sky

What’s your favorite read from 2021?
Daniel Burkett: “Background Checks for Alcohol” by Tim Hsiao

One of the most important aims of moral philosophy is ensuring we have consistency across our ethical beliefs, and there’s nothing more exciting (or infuriating) than discovering that we may, in fact, hold two grossly inconsistent positions. This piece did precisely that for me, and – I assume – will for many others.

Richard Gibson: “The Moral Dimension of Literary Translation” by Beatrice Harvey

I suspect that, like most monolinguists, I had never really considered the intricacies of literary translation. Harvey’s piece does a superb job of introducing the reader to the process’s intractable ethical issues while also drawing in themes of imperialism and colonialism, topics which don’t get enough coverage given their pervasive impact on practically every aspect of society.

Giles Howdle: “Should Speech Have Consequences?” by Benjamin Rossi

What I particularly enjoyed about this piece was how it manages to elegantly transcend the typical slogan-hurling “free speech debate.” The article’s consequentialist approach to the topic weighs up an abundance of interesting and relevant considerations, many of which I hadn’t consciously considered before, while retaining a great sense of coherence and clarity.

Benjamin Rossi: “Praise and Resentment: The Moral of ‘Bad Art Friend’” by Megan Fritts

I loved the story upon which this column was based, and I thought Megan did a wonderful job drawing out some of the ethical quandaries the story raises.

Jake Wojtowicz: “Background Checks for Alcohol” by Tim Hsiao

I found Tim’s piece fascinating. He argued that alcohol is more dangerous than guns, so if we want to put checks in place for buying guns we’d better also put background checks in place for buying alcohol (or there shouldn’t be checks for either). It was a provocative and interesting piece arguing for a wild conclusion that really got me thinking.

Rachel Robison-Greene: “Parents’ Rights and Public Education” by Tucker Sechrest

The article does an excellent job at identifying the various values at stake: autonomy, paternalism, rights, knowledge, and the public good. It makes a compelling case for the view that education isn’t and can’t be just a transmission of the personal values of each individual parent. The primary obligation of our public school system is to care for and create reasonable citizens and responsible, autonomous adults who are armed with the necessary information to operate in a world in which a significant body of knowledge is increasingly necessary. Some of this material will be social and historical. We can’t stop teaching about race or ban books on the topic simply because parents don’t want their children to know about it. This piece is important and timely.

A.G. Holdier: “Destroy the ‘Mona Lisa’ for an NFT?” by Megan Fritts

This analysis of NFTs as aesthetic objects goes beyond a treatment of a fleeting cultural hiccough to explore an important point about artistic experience itself: namely, the value of (something like) reality that we often take for granted in our discussions about art. While it remains to be seen whether or not “non-fungible tokens” will develop beyond a pandemic-era quirk, their present cachet deserves the kind of philosophical reflection that Megan aptly demonstrates here.

Megan Fritts: “On Journalistic Malpractice” by A.G. Holdier
Marshall Bierson: “Losing Ourselves in Others” by Rachel Robison-Greene

This piece does a wonderful job of articulating the ways in which our psychological need to defend our identity can get in the way of clear moral understanding. I think it both helps us empathize with those whom we think are blind to important moral realities and encourages our own humility, as we recognize that our own moral thinking might well be blinkered by in-group bias.

Losing Ourselves in Others

illustration of Marley's ghost in A Christmas Carol

The end of the year is a time when people often come together in love and gratitude. Regardless of religion, many gather to share food and drink or perhaps just to enjoy one another’s company. It’s a time to celebrate the fact that, though life is hard and dangerous, we made it through one more year with the help of kindness and support from one another.

Of course, this is why the end of the year can also be really hard. Many people didn’t survive the pandemic and have left enormous voids in their wake. Even for families and friend groups who were lucky enough to avoid death, many relationships didn’t survive.

Deep differences of opinion about the pandemic, race, and government have created chasms of frustration, distrust, and misunderstanding. If such chasms can open between people who cared deeply for one another, they are even less likely to be bridged between casual acquaintances, or between members of our communities who only come to know one another through attempts to craft social policy. This time of year can amplify our already significant sense of grief, loss, and loneliness — the comfort of community is gone, and we feel what is missing acutely. How ought we to deal with these differences? Can we deal with them without incurring significant changes to our identities?

Moral philosophy throughout the course of human history has consistently advised us to love our neighbors. Utilitarianism tells us to treat both the suffering and the happiness of others impartially — to recognize that each sentient being’s suffering and happiness deserves to be taken seriously. Deontology advises us to recognize the inherent worth and dignity of other people. Care ethics teaches us that our moral obligations to others are grounded in care and in the care relationships into which we enter with them. Enlightenment moral philosophers like Adam Smith have argued that our moral judgments are grounded in sympathy and empathy toward others. We are capable of imaginatively projecting ourselves into the lives and experiences of other beings, and that provides the grounding for our sense of concern for them.

Moral philosophers have made fellow-feeling a key component in their discussions of how to live our moral lives, yet we struggle (and have always struggled) to actually empathize with fellow creatures. At least one challenge is that there can be no imaginative projection into someone else’s experiences and worldview if doing so is in conflict with everything a person cares about and with the most fundamental things with which they identify.

“Ought implies can” is a contentious but common expression in moral philosophy. It suggests that any binding moral obligation must be achievable; if we ought to do something, then we realistically can do the thing in question. If you tell me that I ought to have done more to end world hunger, for instance, that implies that it was possible for me to have done more to end world hunger (or, at least, that you believe that it was possible for me to have done so).

But there are different senses of “can.” One sense is that I “can” do something only if it is logically possible. Or, perhaps, I “can” do something only if it is metaphysically possible. Or, in many of the instances that I have in mind here, a person “can” do something only if it is psychologically possible. It may be the case that empathizing with one’s neighbor, even in light of all of the advice offered by wise people, may be psychologically impossible to do, or close to it. The explanation for this has to do with the ways in which we construct and maintain our identities over time.

Fundamental commitments make us who we are and make life worth living (when it is). In fact, the fragility of those commitments, and thus the fragility of our very identities, causes some philosophers to argue that immortality is undesirable. In Bernard Williams’ now-famous paper “The Makropulos Case: Reflections on the Tedium of Immortality,” he describes a scene from The Makropulos Affair, an opera by Czech composer Leoš Janáček. The main character, Elina, is given the opportunity to live forever — she just needs to keep taking a potion to extend her life. After many, many years of living, she decides to stop taking the potion, even though she knows that if she does so she will cease to exist. Williams argues that anyone who takes such a potion — anyone who chooses to extend their life indefinitely — would either inevitably become bored or would change so much that they lose their identity — they would, though they continue to live, cease to be who they once were.

One of the linchpins of Williams’ view is that, if a person puts themselves in countless different circumstances, they will take on desires, preferences, and characteristics that are so unlike the “self” that started out on the path that they would become someone they no longer recognize. One doesn’t need to be offered a vial of magical elixir to take on the potential for radical change — one has simply to take a chance on opening oneself up to new ideas and possibilities. To do so, however, is to risk becoming unmoored from one’s own identity — to become someone that an earlier version of you wouldn’t recognize. While it may frustrate us when our friends and loved ones are not willing to entertain the evidence that we think should change their minds, perhaps this shouldn’t come as a surprise — we sometimes see change as an existential threat.

Consider the case of a person who takes being patriotic as a fundamental part of their identity. They view people who go into professions that they deem as protective of the country — police officers and military members — to be heroes. If they belong to a family which has long held the same values, they may have been habituated to have these beliefs from an early age. Many of their family members may be members of such professions. If this person were asked to entertain the idea that racism is endemic in the police force, even in the face of significant evidence, they may be unwilling and actually incapable of doing so. Merely considering such evidence might be thought of, consciously or not, as a threat to their very identity.

The challenge that we face here is more significant than might be suggested by the word “bias.” Many of these beliefs are reflective of people’s categorical commitments and they’d rather die than give them up.

None of this is to say that significant changes to fundamental beliefs are impossible — such occurrences are often what philosophers call transformative experiences. That language is telling. When we are able to entertain new beliefs and attitudes, we express a willingness to become new people. This is a rare enough experience to count as a major plot point in a person’s life.

This leaves us with room for hope, but not, perhaps, for optimism. Events of recent years have laid bare the fundamental, identity-marking commitments of friends, family, and members of our community. Reconciling these disparate commitments, beliefs, and worldviews will require nothing less than transformation.

The Vice of Ingratitude; or, How I’m Bad at Christmas

photograph of Santa Claus offering a gift and money

Christmas is, without question, my favorite holiday. I love the hymns, the flavors, and the decorations; I find the incarnation the most moving element of Christianity; I love the chance to spend time with family.

However, there is one part of Christmas I dislike: the presents.

Now, I could tell some philosophically sophisticated story to justify my dislike.

  • I could argue that the ‘commercialism’ of presents trivializes the spiritual realities that we should focus on during Christmas.
  • I could argue that the whole idea of a season where you are expected to give a gift makes real gift-giving impossible. I could point out that it is much more meaningful to receive a gift merely out of thoughtful good will rather than out of thoughtful good will in light of social expectation.
  • I could argue that the gift process tends to ‘individualize’ the celebration, with each person getting things ‘for themselves.’ This is in contrast to, say, Christmas dinner where a family really does share a meal. (We ‘share’ gifts in a sense. Namely, I share a gift with you. But still each person gets their own gifts, gifts they primarily use on their own.)
  • I could argue that mutual gift giving is economically inefficient, since we are better at buying things for ourselves than for others.
  • I could argue that presents normalize a troubling dishonesty. You are expected to express joy at the gift that you receive, completely irrespective of how much you want it. It is considered impolite to be honest about how much you liked a gift.
  • I could argue that the norms surrounding giving presents create a social pressure for people to spend money they don’t have and thus lead people into debt.

I really think each of these arguments is right, and really do believe we should stop giving presents for Christmas.

But none of these are the real reason I dislike presents. My deep dislike of presents stems from my struggles with gratitude. I find receiving presents awkward, and really struggle to appreciate what people get for me. (There are exceptions: one year my sister got me a Christmas ornament made from Plato’s Republic; it was an awesome gift, and I remain extremely appreciative.)

So why do I find it awkward to receive presents?

My Struggle with Receiving Gifts

Here is my basic struggle with receiving gifts. Financially, I am perfectly well-off. So, if there is something that I think is worth the money, then I purchase it myself. If I think something is not worth the money, then I won’t purchase it for myself. But if I won’t purchase it myself, then why in the world would I want someone else to purchase it for me? The reason that I don’t want to waste money — namely that the money can be better spent on other things — applies just as much to other people.

I suspect that this is my real issue with presents because, if it is, it explains why I am OK with some gifts. For example, I find it easy to appreciate gifts that someone makes or bakes for me. I can’t buy delicious homemade brownies or apple butter, so they make total sense as a gift, assuming the homemade stuff really is better than the store-bought alternative. Similarly, I find it easier to appreciate gifts when someone gives me something that I would have bought myself had I known it existed (like the Republic Christmas ornament).

The thing is, this objection only makes sense because I lack the virtue of gratitude.

The Vice of Ingratitude

To understand why, let’s consider a passage from Seneca’s On Benefits:

A benefit cannot be touched with one’s hand; the business is carried out with one’s mind. There is a big difference between the raw material of a benefit and the benefit itself. Consequently, the benefit is not the gold, the silver, or any of the things which are thought to be most important; rather, the benefit is the intention of the giver.

. . .

So what is a benefit? It is a well-intentioned action that confers joy and in so doing derives joy, inclined towards and willingly prepared for doing what it does. And so it matters not what is done or what is given, but with what attitude, since the benefit consists not in what is done or given but rather in the intention of the giver or agent. . . . If benefits consisted in the things and not precisely in the intention of the benefactor, then the benefits would be greater to the extent that the things received are greater. But that is not the case. For often we are more obliged to the person who gave us a small gift in grand manner, who “matched the wealth of kings with his intention,” who gave little but did so freely, who ignored his own poverty while showing concern for mine, who was not just willing to help out but eager to do so, who felt like he was receiving a benefit because he was giving one, who gave as though he would later receive and received as though he had not given, who watched for, even sought out, an opportunity to serve. By contrast, as I have already said, benefits that have to be extracted from the donor or that fall carelessly from him—these benefits are not appreciated even if they seem large in bulk and in appearance . . .. What this other person gave is great; but he was hesitant, he put it off, he moaned while he was giving, he gave arrogantly, and he paraded the fact that he was giving, and did not intend to give pleasure to the recipient. He gave to his own ambition and not to me.

Seneca’s point is that, when one looks with the ‘eyes of gratitude,’ it really is the ‘thought that counts.’ What the grateful person appreciates is not the actual thing given, but the good will that the gift expresses. We appreciate, not the token of love, but the love itself.

To the grateful person, the book they receive as a gift is more valuable than the same book purchased on Amazon. That is because the book acts as an outward sign of another’s good will, something much more valuable than the book itself.

The ungrateful person might appreciate a gift. But in contrast to the grateful person, they appreciate it merely for the value of the item. The ungrateful person is just as pleased at finding an abandoned book on the street as they are at being given the book by a friend or mentor.

Ingratitude as the Source of My Problem

If one values a gift merely for the item one receives, then my struggle with gifts makes perfect sense. If I’d sufficiently valued the item, I’d just buy it for myself. However, this is also a vicious way to value gifts. To value gifts merely for the item one receives is to value gifts in an ungrateful and miserly way.

If, instead, one values the gift in a grateful way — as an expression of another’s love and good will — then it’s impossible to purchase the gift for oneself. Were I a more grateful person, were my attention to move naturally from the item to the other’s act of giving, then I’d have no problem appreciating the gifts that I receive (so long as the gifts really do express love and good will, rather than a mere social expectation to give around Christmas time).

Ingratitude, in my case, presents as an awkwardness in receiving presents. But that is not the only way it can present itself. Ungrateful people find gifts awkward when they also feel content with what they have. Because I have the items I want, and because I value gifts merely for the items that they are, I struggle to value the gifts people get for me.

However, ingratitude can also be present in people who focus too much on gifts. If an ungrateful person is not content with what they have, then they will value gifts. However, they will value the gift merely as the item they want, and not primarily as an expression of the other’s love and good will.

There are ways to test for this sort of ingratitude. For example, when you think about a gift, do you tend to also think about the person who gave it to you? Similarly, when you anticipate gifts, do you hope for particular sorts of things? If so, that suggests that you value gifts primarily for what is given, and not for the good will the gift expresses.

Conclusion

The grateful person appreciates gifts as an expression of good will. As such, grateful people care less about what they receive, and instead care more about the person who gave them the gift.

Of course, many of the norms that surround gift giving make it harder to appreciate gifts properly. For example, the expectation to give gifts for the holidays — or to give gifts to those who gave something to you — makes it harder to appreciate gifts as an expression of good will. We all know that many people buy gifts partly out of social expectation or because they know they are likely to receive a gift from us.

However, there are other ways to express love and good will beyond the giving of gifts. So while I doubt I can convince everyone to stop giving Christmas presents, perhaps I can encourage some change in our Christmas practices. Perhaps, given the various ways the gift-giving process has become corrupted, I can convince people to give fewer gifts, to spend less money, and to focus on other ways to express good will and love.

Can Santa Know Who’s Naughty or Nice?

photograph of child with present watching silhouette of Santa's sleigh

Reliable sources indicate that every year, Santa makes a list of boys and girls he deems “naughty” or “nice.” The stakes could not be higher: on the basis of this classification, a child receives either wonderful gifts or coal in his or her stocking. Thus, it would appear that a serious inquiry into Santa’s methods is in order. In short: how does Santa know who’s naughty or nice?

There are actually two parts to this inquiry. The first concerns Santa’s definition of “naughty” and “nice.” I’ll leave this interesting question to the serious normative ethicists. The issue I’m interested in is this: even if you’ve been naughty, I assume that Santa will not judge you deserving of coal unless you are responsible for your naughtiness. Naughtiness and responsibility for naughtiness are distinct. After all, some people behave naughtily but are blameless for doing so: for example, those who are blamelessly ignorant of what they are doing. So, the question I want to focus on is how Santa knows who is responsible for being naughty, thus deserving coal in their stockings.

Most philosophers agree that responsibility for wrongdoing has two components: a control component and a mental component. Plausibly, you are not responsible for what is not under your control. Likewise, you are not responsible if you don’t know what you’re doing. So, responsibility requires at least some sort of awareness of what one does and some sort of control over what one does. (There is much more to be said about both components, but for our purposes this will suffice).

However, as the philosopher Michael Zimmerman has observed, if you are only responsible for what is under your control, then it would appear that those who attempt naughtiness are no more and no less responsible (read: blameworthy) than those who are naughty. (Most of what follows is taken from Zimmerman’s book.) Consider Sue and Sara. Sue throws a baseball at a neighbor’s window, shattering it. Sara throws a baseball at a neighbor’s window, but a strong gust of wind blows the baseball off course and it lands harmlessly in the neighbor’s yard. Is Sue more to blame than Sara? If we are responsible only for what is under our control, the answer appears to be “no.” After all, Sara would have shattered the window with the baseball had it not been for something entirely outside of her control: namely, the direction and strength of the wind. Arguably, if a person would have done something blameworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as blameworthy as if she had done that thing. A pithier way of making the same point is to say that Sara was lucky that she didn’t hit the window, but since her not hitting the window was a matter of luck, Sara’s blameworthiness is the same as Sue’s.

If Santa accepts this reasoning, he will put those who attempted naughtiness on the naughty list with those who succeeded. Perhaps this expansion of the list is tolerable to the old man. The problem is that this same line of reasoning threatens to scramble the whole system. Consider first that what we do is determined not only by what we decide to do, but also by the opportunities and character that we happen to have. Consider Susanna, whose parents have refused to buy her a baseball; and Shirley, who would never dream of throwing a baseball at a window because her parents have effectively taught her never to destroy other people’s property. It may be true that Susanna would have shattered her neighbor’s window had it not been for something outside of her control: whether or not her parents had bought her a baseball. And it may be true that Shirley would have shattered her neighbor’s window had her parents raised her differently, which is also something outside of her control. Once again, if a person would have done something blameworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as blameworthy as if she had done that thing. Thus, the same reasoning that consigned those who attempted naughtiness to the naughty list also seems to consign those who lack opportunity for naughtiness and those whose characters preclude naughtiness to the naughty list.

Even worse, the same line of reasoning implies that everyone is blameless and, indeed, praiseworthy. Just as it may be true that Sara, Susanna, and Shirley would have shattered the window had they been free to do so were it not for some factor outside of their control, so it is also true that the three girls would have performed praiseworthy acts that they did not actually perform were it not for lack of opportunity or their own bad characters, all of which are beyond their control. If a person would have done something praiseworthy had she been free to do so, and what prevented her from being free to do so was something outside of her control, then this person is just as praiseworthy as if she had done that thing.

Now we can see why, if the argument is sound, Santa’s whole system is in trouble. The claims we have been making about the three girls — that they would have done such-and-such, were they free to do so — are sometimes called “counterfactuals of freedom.” According to the argument just developed, for any child there are an indefinite number of “counterfactuals of freedom” that make that child blameless, praiseworthy, and blameworthy. Santa will never know all of these counterfactuals, so he is never in a position to weigh them up to make a final judgment on the child’s naughtiness or niceness.

So, what is poor Santa to do? Zimmerman thinks that he should put everyone on the nice list, since it is far worse to punish the innocent than to not punish the guilty. As the ultimate niceness or naughtiness of a child is unknowable to Santa, he should assume that all children are nice to avoid punishing some nice children.

But there is an alternative response. For utilitarians, blame and praise (as well as punishment and reward) are tools that we ought to use to bring about good effects – such as deterrence or moral growth – rather than ways of registering a person’s desert. From this perspective, it would make no sense to blame or punish someone for something they would have done in some counterfactual circumstance of which we have no knowledge. Such punishment would be arbitrary. Even if we somehow could know the truth of some counterfactual of freedom, people can’t choose to avoid being the kind of person who would commit naughty acts under some counterfactual circumstance, so there is no deterrence value in punishing them for being that kind of person. By contrast, it does make sense from this perspective to punish someone for something they have actually done — not because that person is more deserving, but because in response to such punishment people (including the punishee) can choose to avoid committing naughty acts in the future.

So, if Zimmerman’s argument is sound, then Santa has at least two choices: put everyone on the nice list, or turn utilitarian. I recommend the latter.

Wrapping Christmas Presents: Some Lessons from Ancient Philosophy

photograph of presents wrapped in gold before a Christmas tree

So here’s a question: why do we spend time and effort nicely wrapping Christmas presents? Sure, the surprise factor created by wrapping is cool, but there are much easier ways to conceal the present until the moment of unveiling.

Now, I’m not actually interested in the real explanation for why we wrap presents. No doubt most people wrap presents because it is traditional, or as a way to show off, or because it provided a nice excuse to procrastinate on dissertation work after an annoyingly long semester (though now that I think about it… that last one might be a bit parochial). Instead, what I want to know is whether there is anything good about wrapping gifts. I’m interested in the goodness of gift-wrap, because it is a particular instance of a more general moral question: what is the importance of outward appearance?

There is profound truth in the banal moralisms about not judging by outward appearance. Don’t judge a book by its cover! Which is to say, when dating you should care about the character of a person, not about how attractive they are. Which is to say, when hiring you should care about the quality of a person’s work and not that person’s height or weight. Which is to say, when voting you should care about the content of what a politician says and not the power of the rhetoric with which they say it. I think all of this is right. I can’t help but sound cliché when I make this point, but the point is true nonetheless: what matters is the reality on the inside, not the appearance on the outside. But if that is right, is it shallow to care how presents are wrapped? Is the beautifying of the appearance merely a sop to our vanity?

Here, I want to use some ancient philosophy to defend the wrapping of presents. I think Plato and Aristotle can help us understand the role of wrapping in our lives. Plato will help us understand what exactly outward appearances are, and Aristotle will help us understand why they might be important.

A Platonic Distinction Between Appearances and the Good

In his work The Gorgias — incidentally, my favorite work on ethics — Plato explains the nature of rhetoric by distinguishing the proper good from the apparent good. He starts with the example of food. According to Plato, there is a good proper to food — namely healthfulness — and an art proper to the good of food — namely nutrition science. Our reason, by the use of careful study, is capable of identifying which foods really are good for us. But there is also an apparent good of food, and that is the tastiness of food. We evolved to like foods that are good to eat. We like to eat what tastes good, and so we hope that what is tasty is good for us.

For Plato, when you find one food tastier than another, that itself does not make the food better. Rather, that is the food appearing better to your tongue. When I look at a Müller-Lyer illusion, the fact that one line looks longer does not make the line longer; rather, the line appears longer to my eyes. The tastiness of food is an ‘outward appearance’. It is not itself a good of food; rather, it is a way for the food to appear good. Sometimes that appearance is accurate (after all, our tastes did evolve so that we would like food that is good for us), but often the appearance is systematically distorted (as it seems to be in our calorie-rich, junk-food-laden society).

Plato points out that if you had a nutritionist and a pastry chef each cook food for children, the children would reliably think the pastry chef’s food is better. That is because the children are misled by taste, thinking the worse food is in fact the better. Plato argues that in many similar contexts we mistakenly prioritize the appearance of good over the actual good, especially when the actual good is difficult to identify. The good proper to ideas is truth. However, those skilled in rhetoric can package their ideas so that they appear true even when they are false. The good proper to soap is its ability to clean. However, most cleaning companies focus on proper perfuming so that things at least smell clean whether or not they are.

This division between the true good and the apparent good exists, according to Plato, because we are not only rational creatures but are embodied rational creatures. We don’t just have a rational nature which can recognize the good of things, we also have animalistic appetites which cannot track goodness directly and so instead perceive goodness by way of proxies.

Physical attraction is the proxy our animalistic body uses to decide who to marry, even though the correlation between physical attraction and spouse quality is weak at best. Taste is the proxy our animalistic body uses to decide what food is good to eat, even though in our environment the correlation between taste and health is often inverted.

This distinction of Plato’s is, I think, a useful way for thinking about outward appearance. The ‘inward reality’ concerns the actual good of the thing, and the outward appearance describes how that goodness appears to our appetites.

Aristotle’s Insight on the Second Good of Activity

Once you have Plato’s distinction in mind, you might think that wrapping presents is clearly vain. After all, it involves a focus on outward appearances, and even worse, a focus on outward appearances that have nothing to do with the primary good of the gift. You can make food taste better by adding salt, but at least salt is also an important nutrient to human health! If you wrap Christmas presents nicely, it does nothing to improve the actual good of the inner gift. So to understand why wrapping presents might still be valuable, we now need to turn to the insights of Aristotle.

Plato tended to be pretty harsh on our physical bodies. He, at times, wrote as though we are rational selves trapped in a physical body that, for the most part, just gets in the way. It is thus, perhaps, not surprising that he didn’t see value in our animalistic appetites seeing things as good.

Aristotle, however, had a somewhat more balanced view of the integration of body and soul. For Aristotle, it would not only be a mistake to think of ourselves as souls trapped in a body; he would not even think it possible for a soul to be trapped in a body. The body and soul (or matter and form) are inextricably linked together; they don’t make sense without one another.

This led Aristotle to pay more attention to how we want to integrate our animalistic appetites with the judgment of our reason. For Aristotle, pleasure is not a mere distraction, rather it can perfect other already good activities. And it is this idea of perfecting which will help us understand why it might be good to wrap presents.

To understand Aristotle’s notion of perfection, we need to distinguish between two different goods. The first is the good object of an activity – the good object of eating is healthy food. The second is the good activity itself. Not only is food good, but it is also good to eat food. Not only is a person good, but it is also good to befriend or marry that person. Not only is an idea good (that is true), but it is also good to believe or understand that idea.

Aristotle’s central insight, then, is that the good of outward appearances does nothing for the good of the object. But it does make easier the good of the activity. And the reason it makes that activity easier is because we are not just rational souls, we are also physical bodies, and the outer appearances make it easier for our whole bodies to enter into the activity.

That food is tasty does not make the food better for you, but it certainly makes it easier to eat the food. It allows you to enter into the activity of eating more fully. Similarly, that someone is physically attractive is not a good reason to marry someone. But it is still a good thing if you find your spouse attractive, because it makes it easier to care for and love your spouse. Your animalistic appetites cooperate with, rather than fight with, your reason.

It is this role that Aristotle has in mind when he says that pleasure perfects our activities. I can pursue the good even if I don’t enjoy it, but when I enjoy what I am doing I am able to enter into the activity more fully.

Aristotle’s insight is that, as embodied creatures, the outward appearances which give rise to bodily pleasure help us enter our whole selves (and not just our rational selves) into an activity. Rhetoric can be used to mislead, as it is often used in government propaganda. But it can also be used to help people more deeply appreciate what is true (as is the case in Martin Luther King Jr.’s rhetoric in “Letter from a Birmingham Jail”).

So then, the reason we wrap presents need not be a simple sop to our vanity. Rather, it can be a way to recognize that the person you are giving a gift to is not just a rational soul but a human person — someone who has not just an intellect that can tell what is good about a gift, but someone with eyes which can be drawn in by the beauty of a present. Beautifying the exterior, while it can be vain or deceptive, can also be an appropriate way to help one fully enter into and appreciate the inner good.

Who Is Accountable for Inductive Risk in AI?

computer image of programming decision trees

Many people are familiar with algorithms and machine learning from applications like social media or advertising, but it can be hard to appreciate the full range of areas to which machine learning has been applied. For example, in addition to regulating all sorts of financial transactions, an algorithm might be used to evaluate teaching performance, or, in the medical field, to help identify illness or those at risk of disease. With this large array of applications comes a large array of ethical factors that become relevant as more and more real-world consequences are at stake. For example, machine learning has been used to train AI to detect cancer. But what happens when the algorithm is wrong? What are the ethical issues when it isn’t completely clear how the AI is making decisions and there is a very real possibility that it could be wrong?

Consider the use of machine learning to predict whether someone charged with a crime is likely to be a recidivist. Because of massive backlogs in various court systems, many have turned to such tools to move defendants through the courts more efficiently. Criminal risk assessment tools consider a number of details of a defendant’s profile and then produce a recidivism score. Lower scores will usually mean a more lenient sentence, while higher scores will usually produce harsher sentences. The reasoning is that if you can accurately predict criminal behavior, resources can be allocated more efficiently for rehabilitation or for prison sentences. Also, the thinking goes, decisions are better made on the basis of data-driven recommendations than on the personal feelings and biases that a judge may have.
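
To make the mechanics concrete, here is a minimal, purely hypothetical sketch of how such a tool might turn a defendant’s profile into a recidivism score and a sentencing recommendation. Every feature name, weight, and cutoff below is invented for illustration and does not correspond to any real risk-assessment instrument.

```python
# Hypothetical sketch only: feature names, weights, and cutoffs are invented.
from math import exp

WEIGHTS = {
    "prior_convictions": 0.30,
    "age_at_first_police_contact": -0.02,  # earlier contact -> higher score
    "unemployed": 0.15,
    "unstable_housing": 0.10,
}

def recidivism_score(profile: dict) -> float:
    """Map a defendant's profile to a score in (0, 1); higher means 'riskier'."""
    linear = sum(w * profile.get(feature, 0) for feature, w in WEIGHTS.items())
    return 1 / (1 + exp(-linear))  # logistic squashing

def sentencing_band(score: float) -> str:
    """Translate a score into a recommendation; the cutoffs are arbitrary."""
    if score < 0.3:
        return "lenient"
    if score < 0.7:
        return "standard"
    return "harsh"

profile = {"prior_convictions": 1, "age_at_first_police_contact": 14, "unemployed": 1}
score = recidivism_score(profile)
print(round(score, 2), sentencing_band(score))  # 0.54 standard
```

Even in a toy sketch like this, a feature such as age at first police contact quietly does a lot of work, and features of that kind are exactly where the worries about bias discussed below enter the picture.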

But these tools have significant downsides as well. As Cathy O’Neil discusses in her book Weapons of Math Destruction, statistics show that in certain counties in the U.S. a Black defendant is three times more likely to receive a death sentence than a white defendant, and the computerized risk models intended to reduce such prejudice are no less prone to bias. As she notes, “The question, however, is whether we’ve eliminated human bias or simply camouflaged it with technology.” She points out that questionnaires used in some models include questions such as “the first time you ever were involved with the police,” which are likely to yield very different answers depending on whether the respondent is white or Black. As she explains, “if early ‘involvement’ with the police signals recidivism, poor people and racial minorities look far riskier.” So, the fact that such models are susceptible to bias also means they are not immune to error.

As mentioned, researchers have also applied machine learning in the medical field. Again, the benefits are not difficult to imagine. Cancer-detecting AI has been able to identify cancers that humans could not. Faster detection of a disease like lung cancer allows for quicker treatment and thus the ability to save more lives. Right now, about 70% of lung cancers are detected at late stages, when they are harder to treat.

AI has the potential not only to save lives but also to increase the efficiency of medical resources. Unfortunately, just like the criminal justice applications, applications in the medical field are subject to error. For example, hundreds of AI tools were developed to help deal with the COVID-19 pandemic, but a study by the Turing Institute found that they had little impact. In a review of 232 algorithms for diagnosing patients, a recent medical journal paper found that none of them were fit for clinical use. Despite the hype, researchers are “concerned that [AI] could be harmful if built in the wrong way because they could miss diagnoses and underestimate the risk for vulnerable patients.”

There are many reasons why an algorithm designed to detect or sort things might make errors. Machine learning requires massive amounts of data, so an algorithm’s ability to perform correctly depends on the quality of the data it is trained on. As O’Neil has pointed out, a problematic questionnaire can lead to biased predictions. Similarly, incomplete training data can cause a model to perform poorly in real-world settings. As Koray Karaca’s recent article on inductive risk in machine learning explains, creating a model requires precise methodological choices to be made. But these decisions are often driven by background assumptions – plagued by simplification and idealization – that create problematic uncertainties. Different assumptions can create different models and thus different possibilities of error. Moreover, there is always a gap between a finite amount of empirical evidence and an inductive generalization, meaning that there is always an inherent risk in using such models.

If an algorithm determines that I have cancer and I don’t, it could dramatically affect my life in all sorts of morally salient ways. On the other hand, if I have cancer and the algorithm says I don’t, it can likewise have a harmful moral impact on my life. So is there a moral responsibility involved and if so, who is responsible? In a 1953 article called “The Scientist Qua Scientist Makes Value Judgments” Richard Rudner argues that “since no scientific hypothesis is completely verified, in accepting a hypothesis the scientist must make the decision that evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis…How sure we need to be before we accept a hypothesis will depend on how serious a mistake would be.”

These considerations regarding the possibility of error and the threshold for sufficient evidence represent calculations of inductive risk. For example, we may judge the consequences of asserting that a patient does not have cancer when they actually do to be far worse than the consequences of asserting that a patient does have cancer when they actually do not. Because of this, and given our susceptibility to error, we may accept a lower standard of evidence for determining that a patient has cancer but a higher standard for determining that a patient does not have cancer, in order to mitigate and minimize the worst consequences if an error occurs. But how do algorithms do this? Machine learning involves optimization of a model by testing it against sample data. Each time an error is made, a learning algorithm updates and adjusts parameters to reduce the total error, which can be calculated in different ways.
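
To see how this kind of asymmetry might be built into a deployed system, here is a small illustrative sketch, assuming a model that outputs a probability of cancer: the bar for flagging a possible cancer is set low, while the bar for declaring a patient clear is set much higher, because a missed diagnosis is judged the worse error. The thresholds are invented for illustration and are not clinical guidance.

```python
# Illustrative only: the two thresholds encode a value judgment that a missed
# cancer (false negative) is far worse than a false alarm (false positive).
def classify(prob_cancer: float,
             positive_threshold: float = 0.30,
             negative_threshold: float = 0.05) -> str:
    """Turn a model's predicted probability into a recommendation."""
    if prob_cancer >= positive_threshold:
        return "flag for biopsy"          # low bar: tolerate more false alarms
    if prob_cancer <= negative_threshold:
        return "no follow-up needed"      # high bar: only clear cases are cleared
    return "refer for further screening"  # the uncertain middle is not dismissed

for p in (0.02, 0.15, 0.40):
    print(p, "->", classify(p))
# 0.02 -> no follow-up needed
# 0.15 -> refer for further screening
# 0.4 -> flag for biopsy
```

The probabilities themselves come from the model; where the two bars sit is a human judgment about which mistakes we are more willing to live with.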

Karaca notes that optimization can be carried out either in cost-sensitive or -insensitive ways. Cost-insensitive training assigns the same value to all errors, while cost-sensitive training involves assigning different weights to different errors. But the assignment of these weights is left to the modeler, meaning that the person who creates the model is responsible for making the necessary moral judgments and preference orderings of potential consequences. In addition, Karaca notes that inductive risk concerns arise for both the person making methodological choices about model construction and later for those who must decide whether to accept or reject a given model and apply it.
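
As a deliberately simplified illustration of the cost-sensitive option, many standard libraries let the modeler attach explicit weights to different kinds of error. The sketch below uses scikit-learn’s class_weight parameter on synthetic data; the ten-to-one weighting is an invented value judgment rather than a recommended figure, but it shows where in the code the modeler’s moral choice lives.

```python
# Sketch of cost-sensitive vs. cost-insensitive training on synthetic data.
# The class_weight dict is where the modeler's value judgment enters.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: label 1 plays the role of "patient has cancer".
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Cost-insensitive: every error counts the same.
plain = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Cost-sensitive: missing a positive case is treated as ten times worse.
weighted = LogisticRegression(max_iter=1000,
                              class_weight={0: 1, 1: 10}).fit(X_train, y_train)

for name, model in [("cost-insensitive", plain), ("cost-sensitive", weighted)]:
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    print(f"{name}: false negatives={fn}, false positives={fp}")
# Typically the weighted model trades extra false positives for fewer false negatives.
```

The particular numbers do not matter; what matters is that someone has to choose them, and that choice silently encodes a judgment about which consequences are worse.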

What this tells us is that machine learning inherently involves making moral choices, and that these choices bear out in evaluations of the acceptable risk of error. The question of how “successful” a model is is bound up with our own concerns about risk. But this only raises a further question: where is the accountability in such a system? Many companies hide the results of their models or even their existence. Yet, as we have seen, moral accountability in the use of AI is of paramount importance. At each stage of assessment, we encounter an asymmetry of information that forces the victims of such AI to “prove” the algorithm wrong against whatever evidence is marshaled to demonstrate how “successful” the model is.

Juvenile Justice: Charging Minors as Adults

close-up photograph of youth in handcuffs

Children should be treated differently from adults by the criminal justice system. They should be treated more leniently, and this includes how they are treated by the police, judged by the courts, and – most obviously – the punishments they are given.

I take it that most people believe this, at least to some extent. Nobody thinks that a five-year-old who steals a Mars bar deserves the same punishment as an adult. But when a child commits a heinous crime, we are often far too quick to abandon our commitment to treating kids fairly.

Just last week, another school shooting saw four children die in Michigan. The shooter, a fifteen-year-old boy, has been charged as an adult. Even though rates of children being charged as adults are falling, estimates say that 50,000 children a year are still charged as adults. Recently, in upstate New York, Eric Smith was released after 27 years in prison. Smith was charged in the juvenile courts, yet while headlines mentioned that he was a child killer, they failed to mention that he, too, was a child. If kids should be treated more leniently, then the reactions in these cases by the prosecutor and press are flawed.

One might think that sometimes crimes are so heinous that kids deserve to be tried as adults. Perhaps some crimes are so egregious that they deserve the strictest possible punishment, and perhaps the state needs to communicate that some crimes are hideous, or perhaps some crimes suggest someone is beyond rehabilitation. But I also suspect that, often, calls to punish kids as adults are based on a mistake: that because the act is heinous, it deserves the severest punishment. And I think we should push back against this. Let me try to explain, by exploring why we should be lenient to kids in the first place.

The obvious starting point is that the criminal justice system is part of the state, and the state treats kids differently from adults all the time. Thirteen-year-olds can’t drive, seventeen-year-olds can’t smoke, twenty-year-olds can’t drink (though, absurdly, they can give their lives for their country). And why are kids treated differently by the state in all these other domains? The obvious answer is: kids are less mature; they can’t be trusted to make informed decisions about the risks involved in smoking, drinking, and driving. They aren’t quite as able to see why what they are doing is wrong and they possess less self-control.

If kids are less mature – as they so often are – then shouldn’t they be treated more leniently for their crimes? If we think that, roughly, how much you should be blamed should match how responsible you were for the bad thing that you did, and if we think that kids are less mature and thus less responsible, the answer seems to be obvious: yes, of course kids should be treated more leniently.

We can nuance this argument further. One point worth discussing is that our brains do not mature until we are 25, which might even suggest we should be lenient to people who are traditionally classed as adults. But we might want to allow for a cut-off here: we might think that, although they are not fully mature, 23-year-olds are mature enough to be treated like adults. When is a kid mature enough to be punished as an adult? That’s a point for debate, and it might also be true that some kids should be punished more like adults than like kids. But I take it for granted here that if we have a distinction between punishing as a juvenile and punishing as an adult, we can roughly sort offenders into one of two camps.

It’s also worth noting that this argument does not – as is briefly discussed here – claim that kids do not know right from wrong. That would be absurd: kids clearly do sometimes know they are doing wrong. Yet kids might know something is wrong but not fully appreciate its wrongness; they might not quite grasp why it is wrong. And kids might know something is wrong but be less able to resist doing it than adults.

But what about if a kid is as mature as an adult? Gideon Yaffe thinks that we should start from the supposition that kids should be treated more leniently, yet the above argument fails since it allows for some kids not to be treated more leniently. For instance, if a kid is morally mature, according to this argument they should be punished just the same. (Yaffe’s answer is interesting: kids can’t vote, and he thinks criminal responsibility tracks how much say we have over the law. But that is too much to go into here.)

I don’t think I buy Yaffe’s argument. It seems plausible to me that if a particular kid is fully mature, then perhaps they should be punished like an adult. Though there is an argument that it might be too hard to discern if a kid is mature enough, and we should just always treat kids as though they are not mature enough for full criminal punishment. (And it is also worth noting that such a consideration does not apply to the cases we are discussing. The Michigan shooter was plausibly suffering from mental disorders, as was Eric Smith – so his lawyer argued. So charging either of those as adults seems flawed, especially when mental disorders also tend to require more lenient punishments.)

Yet even if there is an exception so that mature kids are punished like adults, there is no principled exception to say that kids who do awful things should be punished like adults.

But, the retort goes, what they did was so heinous they must be punished as strongly as possible. This rests on a mistake. Our argument does not turn around what was done but who did it. In one of the greatest pieces of modern moral philosophy, P.F. Strawson made this point. If you tread on my toe, the pain is the same, but the level of blame I give you will differ depending on whether you accidentally trod on me or whether you did it maliciously.

We can grant that a murder is heinous, whilst still holding that children who perpetrate these acts are owed separate consideration. The thing they did was heinous, but their doing it was less heinous than if an adult had done it. To charge kids as adults requires more than just pointing to the fact that what they have done is awful. It requires showing that they deserve to be punished like adults, despite the prevailing assumption that they are less mature, and so a lesser punishment is appropriate.

Curfews and the Liberty of Cats

photograph of cat silhouette at night

Starting in April 2022, the city of Knox, Australia, will impose a ‘cat curfew’ requiring pet cats to be kept on their owners’ premises at all times. The curfew has sparked a great deal of controversy, with many cat owners not only arguing that it’s perfectly acceptable to let their cats roam freely, but that it’s morally wrong to force them to remain indoors.

In order to properly analyze this issue, it’s important to understand why the Knox City Council has resorted to such extreme measures. On average, a well-fed free-roaming domestic cat will kill around 75 animals per year. As a result, pet cats are responsible for the deaths of around 200 million native Australian animals annually. But that’s only the tip of the iceberg. The refusal of negligent cat owners to spay or neuter their pets has led to an explosion of the feral cat population (currently estimated to be somewhere between 2.1 million and 6.3 million) in Australia. A feral cat preys on wildlife at a much higher rate than a domestic cat, killing around 740 animals per year. Because of this, feral cats are responsible for the deaths of an additional 1.4 billion native Australian animals annually.

Many may look at these numbers and see little to complain about. Animals kill other animals – it’s the circle of life. But it’s not that straight-forward. Despite their enormous importance as companions and family members, the sad truth is that in Australia – as in many countries – cats are a major invasive species. As a result, cats have already been directly responsible for the extinction of 25 species of mammal found only in Australia. This accounts for more than two-thirds of all Australian mammal extinctions over the past 200 years. Cats are currently identified as a further threat to an additional 74 species of mammals, 40 birds, 21 reptiles and four amphibians.

Australia is currently pursuing a number of strategies to control the feral cat population. But this will largely be for naught if the contributions of domestic cats are not also addressed. And this is precisely what Knox’s curfew seeks to do. But is it morally wrong to keep our cats indoors? One way to answer this question is through a simple cost/benefit analysis – what is often referred to as ‘consequentialism’ by philosophers.

So how does a cat curfew stack up on a consequentialist analysis? At bottom, the point of this policy is to (1) reduce the number of native animals being killed by domestic cats, and (2) stem the flow of feral cats resulting from the free-roaming recreational activities of unspayed and unneutered domestic cats. The results of doing this include not only the protection of individual native animals, but the preservation of entire species. And there are further benefits outside conservation. The curfew will also curb other undesirable behaviors like spraying, fighting, and property damage, and limit the spread of a number of parasites that can infect many mammals (including humans) but that are only spread by cats.

A consequentialist argument for the curfew would need to show that these benefits outweigh the costs to those cats that are now forced to stay indoors. Given the above considerations, there are compelling reasons to think that this might be the case. But these reasons can be made even stronger when we realize that the costs to cats are nowhere near as great as we think.

Free-roaming cats are vulnerable to all kinds of risks, including everything from getting hit by a car, to feline leukemia, to wild animal attacks. As a result, the life expectancy of an outdoor cat is only 2-5 years, while indoor cats live for an average of 10-15 years. Given this, we might argue that even if forcing a cat to stay indoors does reduce its quality of life, this may be made up for by the fact that it gets to experience far more of it. But there’s little evidence to even suggest that such a reduction in quality of life does occur. While it might be easier for an owner to keep a cat enriched by allowing it outside, experts state that it’s still possible for a cat to be just as happy indoors without all of the associated risks of a free-roaming life. What is required is careful, attentive pet ownership with a focus on providing indoor enrichment. If this is done, then the benefits of a cat curfew can be achieved at no cost whatsoever to the cats being forced to stay home.

Nevertheless, the consequentialist analysis isn’t the only approach we might take. There are, in fact, a number of scenarios in which it might lead us to unsavory conclusions – like justifying animal testing where doing so would lead to the development of a drug that would save millions of lives. An alternative approach can be found by focusing on the rights of the animals in question, and refusing to violate those rights regardless of what kinds of benefits might be achieved by doing so. What, then, might a rights-based approach make of the cat curfew?

Clearly, the biggest concerns arise around a cat’s right to liberty. Cats should be free to roam, and any restriction on that ability is an infringement of their right to liberty. But let’s unpack that a little bit. Firstly, we need to figure out the content of this right. Put another way, we need to know what a cat requires in order to have this right respected. Clearly it would be wrong to keep a cat in a two-square-foot cage. How much space does it need, then? Is a reasonable-sized apartment sufficient? How about a two-story townhouse? Or must it have access to at least a football-field sized territory to roam? One simple answer might be to say that respecting a cat’s right to liberty involves allowing it to go wherever it wants to. But this seems to overstate the right considerably. When a cat wanders down to the river bank, we are not obliged to fetch a boat and ferry it to the other side so that it might continue to roam unhampered.

Even if we are able to explain the content of a cat’s right to liberty, we must then consider in what circumstances it might be overridden by competing rights. Among the other rights possessed by a cat is, presumably, the right to life. And the cat curfew does a lot to ensure the preservation of this right – extending a cat’s life expectancy by 2 to 5 times. Seen in this way, the curfew is no longer a case of violating a cat’s right to liberty, but of balancing that right against the cat’s more fundamental right to life.

Cat curfews, then, appear to be morally acceptable on both a consequentialist approach (saving the lives of native animals and preserving endangered species at no cost to the wellbeing of cats), and a rights-based approach (maximizing respect for a cat’s right to life at a small cost to its right to liberty). As such, it seems that – even in the absence of such laws – we all have strong reasons to rein in the murderous urges of our cuddly companions by keeping our cats indoors.

The Ethical and Epistemic Consequences of Hiding YouTube Dislikes

photograph of computer screen displaying YouTube icon

YouTube recently announced a major change to its platform: while the “like” and “dislike” buttons would remain, viewers would only be able to see how many likes a video had, with the total number of dislikes being viewable only by the creator. The motivation for the change is explained in a video released by YouTube:

Apparently, groups of viewers are targeting a video’s dislike button to drive up the count. Turning it into something like a game with a visible scoreboard. And it’s usually just because they don’t like the creator or what they stand for. That’s a big problem when half of YouTube’s mission is to give everyone a voice.

YouTube thus seems to be trying to protect its creators from certain kinds of harms: not only can it be demoralizing to see that a lot of people have disliked your video, but it can also be particularly distressing if those dislikes have resulted from targeted discrimination.

Some, however, have questioned YouTube’s motives. One potential motive, addressed in the video, is that YouTube is removing the public dislike count in response to some of their own videos being overwhelmingly disliked (namely, the “YouTube Rewind” videos and, ironically, the video announcing the change itself). Others have proposed that the move aims to increase viewership: after all, videos with many more dislikes than likes are probably going to be viewed less often, which means fewer clicks on the platform. Some creators have even posited that the move was made predominantly to protect large corporations, as opposed to small creators: many of the most disliked videos belong to corporations, and since YouTube has an interest in maintaining a good relationship with them, they would also have an interest in restricting people’s ability to see how disliked their content is.

Let’s say, however, that YouTube’s motivations are pure, and that it really is primarily intending to prevent harms by removing the public dislike count on videos. A second criticism concerns the loss of informational value: the number of dislikes on a video can give the viewer a sense of whether its content is accurate. The dislike count is, of course, far from a perfect indicator of video quality, because people can dislike a video for reasons that have nothing to do with its content: again, where there have been targeted efforts to dislike a video, dislikes won’t tell you whether it’s really a good video or not. On the other hand, there do seem to be many cases in which looking at the dislike count can let you know that you should stay away: videos that are clickbait, misleading, or generally poor quality can often be quickly and easily identified by an unfavorable ratio of likes to dislikes.

A worry, then, is that without this information, one may be more likely not only to waste one’s time watching low-quality or inaccurate videos, but also to be exposed to misinformation. For instance, consider a class of clickbait videos prevalent on YouTube in which people appear to make impressive-looking crafts or food through a series of improbable steps. Seeing that a video of this type has received a lot of dislikes helps the viewer contextualize it as something that’s perhaps just for entertainment value and should not be taken seriously.

Should YouTube continue to hide dislike counts? In addressing this question, we are perhaps facing a conflict in different kinds of values: on the one hand, you have the moral value of protecting small or marginalized creators from targeted dislike campaigns; on the other hand, you have the epistemic disvalue of removing potentially useful information that can help viewers avoid believing misleading information (as well as the practical value of saving people the time and effort of watching unhelpful videos). It can be difficult to try to balance different values: in the case of the removal of public dislike counts, the question becomes whether the moral benefit is strong enough to outweigh the epistemic detriment.

One might think that the epistemic detriments are not, in fact, too significant. In the video released by YouTube, this issue is addressed, if only very briefly: referring to an experiment conducted earlier this year in which public dislike counts were briefly removed from the platform, the spokesperson states that they had considered how dislikes give viewers “a sense of a video’s worth.” He then states that,

[W]hen the teams looked at the data across millions of viewers and videos in the experiment they didn’t see a noticeable difference in viewership regardless of whether they could see the dislike count or not. In other words, it didn’t really matter if a video had a lot of dislikes or not, they still watched.

At the end of the video, they also stated, “Honestly, I think you’re gonna get used to it pretty quickly and keep in mind other platforms don’t even have a Dislike button.”

These responses, however, are non sequiturs: whether viewership increased or decreased says nothing about whether people are able to judge a video’s worth without a public dislike count. Indeed, if anything, the finding reinforces the concern that people will be more likely to consume content that is misleading or of low informational value. The appeal to other platforms is also irrelevant: that other social media platforms lack a dislike button may very well just mean that it is difficult to evaluate the quality of information present on those platforms. Furthermore, users on platforms such as Twitter have found other ways to signal that a given piece of information is of low value, for example by ensuring that a tweet has a high ratio of replies to likes, something that seems much less likely to be effective on a platform like YouTube.

Even if YouTube does, in fact, have the primary motivation of protecting some of its creators from certain kinds of harms, one might wonder whether there are better ways of addressing the issue, given the potential epistemic detriments.

What Does a Post-Literate World Look Like?

photograph of billboards and crowds at Times Square

The written word has never been as accessible as it is today; one estimate from 2016 puts the global literacy rate at around 86 percent, a figure that would have been unthinkable just a few centuries ago. But at the same time, The Washington Post found in 2018 that American adults seem to be reading less for leisure, and a recent study conducted by Stanford found that the pandemic had a strong negative impact on childhood literacy rates. Has a post-literate future already arrived, and if so, what will fill the void left by books?

While the omnipresence of social media has lent a new sense of urgency to these questions, the anxieties behind them are hardly new. In the post-war period, television, radio, and cameras invaded the American home, and began to insidiously reshape the way we interact with our world. In the 1970s, literary critic Susan Sontag wrote despairingly of this new visual culture in On Photography; in her view, language and the fine arts were being supplanted by photographs, which claimed to present an objective view of reality in a way that drops of ink and splotches of paint could only dream of. However, Sontag believed that an image-saturated world was a politically insipid one. She wrote that

The limit of photographic knowledge of the world is that, while it can goad conscience, it can, finally, never be ethical or political knowledge . . . The very muteness of what is, hypothetically, comprehensible in photographs is what constitutes their attractiveness and provocativeness. The omnipresence of photographs has an incalculable effect on our ethical sensibility. By furnishing this already crowded world with a duplicate one of images, photography makes us feel that the world is more available than it really is. 

Sontag saw the post-literate world as a visual one, but another philosopher proposed a different view. Media theorist Marshall McLuhan explained in 1962 that before the advent of 20th-century technology, literate people in the Western world thought sequentially. Political treatises, newspapers, and novels followed a clear structure (beginning, middle, and end) and contributed to a broad sense of progress, whether on a micro or macro scale. But technology, McLuhan argued, had swept all of that away. As scholar Mark Cuenco explains, McLuhan believed that “A society becomes post-literate when electronic media compresses its experience of literacy to such an extreme degree that the simultaneity of the oral replaces the sequentalism of the typographic as the dominant pattern of thought and sense-making.” McLuhan, who predicted the advent of the Internet decades in advance, believed that literate culture had already been supplanted.

Through radio and television, oral culture – not visual culture – became dominant, and Cuenco argues that this is still the case.

Though staring at a screen is technically a visual experience and there is reading involved—be it of a Tweet, a Facebook post, or a cable news scroll—the fundamentally dynamic, ever-fleeting, and disjointed character of the content on the screen delivers indigestible volumes of information all at once, without much sequence or structure.

Visual culture, he argues, “operates on the principle of focus or linear sequence,” much in the same way that the written word does. While we do read a Tweet or the caption on a TikTok, the experience is so radically different from that of reading a book (one locks us into an endless scroll, while the other has a definitive start and end point) that the two experiences are hardly comparable. If McLuhan is right, a post-literate oral culture may lose the ability to create sustained political change through sequential planning. The here-and-now immediacy of oral culture may, ultimately, pose the same dangers that Sontag saw in visual culture.

While Sontag and McLuhan provide compelling critiques of technological advancement, it may be too soon to sound the death knell for literacy. Americans may spend less time reading for pleasure, but online retailers like Amazon prove that there is still a market for books, shrinking though it may be. It might be more accurate to say that we’re in a period of transition, neither entirely literate nor entirely oral/visual, and with drastic educational reform, this trend is still subject to change.

Faramir’s Insight

photograph of Lord of the Rings book atop Middle-Earth map

When I last reread J.R.R. Tolkien’s Lord of the Rings trilogy, one line in particular stood out to me. It was something that Faramir says when talking with Frodo about whether to abandon Gollum. I’ll quote the line in context:

‘Frodo, I think you do very unwisely in this,’ said Faramir. ‘I do not think you should go with this creature. It is wicked.’

‘No, not altogether wicked,’ said Frodo.

‘Not wholly, perhaps,’ said Faramir; ‘but malice eats it like a canker, and the evil is growing. He will lead you to no good. If you will part with him, I will give him safe-conduct and guidance to any point on the borders of Gondor that he may name.’

‘He would not take it,’ said Frodo. ‘He would follow after me as he long has done. And I have promised many times to take him under my protection and to go where he led. You would not ask me to break faith with him?’

‘No,’ said Faramir. ‘But my heart would. For it seems less evil to counsel another man to break troth than to do so oneself, especially if one sees a friend bound unwitting to his own harm.’

The last line rings true to my experience. It is one thing for me to take on a sacrifice to do what is right. It is another thing entirely to advise someone else to make that sacrifice.

It is easy to be frugal about stuff I buy for myself. Harder to be frugal about stuff I buy for my wife. Why? Because it is harder to ask someone else to sacrifice to do the right thing. I would never spend much money on jewelry for myself. And yet I felt tempted to spend too much money on an engagement ring for my wife. (Fortunately, my wife is as frugal as I am, and so independently insisted on a cheap $50 engagement/wedding ring.)

I want to talk about this phenomenon of expecting less of others than we expect of ourselves. I want to talk about it, because it first appears as almost noble. But the more I’ve thought about it, the more convinced I’ve become that it is actually a vice. It involves a neglected failure of character, a failure to take seriously the moral personality of others.

To see this, let’s consider possible explanations of the phenomenon: two that I’m sure are wrong, and three that might be right.

Wrong Hypothesis 1: Different Obligations

One possible explanation is that I really think my moral obligations are stricter than the obligations of others. Faramir thinks it would really be wrong for him to break his promise to Gollum, but that it would not be wrong for Frodo to do the same.

While I’m sure this sometimes happens, it does not explain the phenomenon I’m trying to understand. The difference I’m responding to is not one of ‘weaker’ vs. ‘stronger’ obligations, but rather of self vs. other. I will hold my close friends to higher standards than strangers, even if I think the strangers are better people. Likewise, I am willing to hold myself to a higher standard than my close friends, even though I know many of my close friends are more virtuous than I am.

Wrong Hypothesis 2: Selflessness

So could it be selflessness? Could it be that I just care more about the sacrifices of others than I do about my own sacrifices?

Perhaps this occurs for some people, but it is not the explanation in my case. For one thing, I’m not nearly that selfless. I spend far more time working on my own job applications than helping friends with their applications. Why? Because I selfishly care about my own future more than the future of my friends.

For another thing, the closer I am to someone, the more, not the less, willing I am to ask them to sacrifice. If a stranger asks me for advice, I’m very unlikely to tell them to make the radical sacrifice. But the closer the friend, the more comfortable I am asking them to sacrifice. As I will argue below, this is because the closer a friend, the more deeply I appreciate their moral personhood.

Possible Hypothesis 1: Moral Uncertainty

One possible explanation is that the hesitancy follows from our moral uncertainty combined with a fear of responsibility.

Suppose someone held a gun to my head and told me to renounce Christianity. I’d like to think I’d have the courage to die rather than commit apostasy – that is, the abandonment of my personal beliefs. (After all, I believe that to commit apostasy is one of the worst things a human can do.) But the thing is, I’m not sure it’s really that bad. After all, I was being coerced. Everyone would be able to tell I don’t really mean it. So is it really committing apostasy? Or is it instead just playacting to pacify the criminal?

Those replies ring hollow when I imagine facing the situation myself. But they ring less hollow when I imagine a friend facing a similar situation. If a friend asked me what to do, I’d be much more tempted to tell them to “just say what the person wants to hear, you don’t have to mean it.” If a friend felt guilty about committing apostasy in such a situation, I’d be tempted to tell them not to feel bad, they didn’t do anything wrong.

The uncertainties loom larger when I consider a friend making the sacrifice. But not only then. They also loom larger when I consider making the choice myself but sacrificing another. These same worries would ring less hollow if someone held a gun to another person’s head and told me to commit apostasy. The significance of my uncertainty is magnified when others are at risk, because if someone else dies as a result of my choice, it’s not just sad but a moral tragedy. It’s not similarly a moral tragedy if I die as a consequence of my own choice. It’s sad if I die while trying to free climb a mountain, but it’s not a moral tragedy. It’s a moral tragedy if I kill someone else after taking a risk. Ultimately, I don’t want to give advice to others, because if I’m wrong I’ll feel responsible for that choice.

If this explanation is right, then the phenomenon is vicious. It denies the moral personality of others. I’m failing to recognize that the choice is still up to the other person, even when I advise them what I think is right.

This is easiest to see if we think of ourselves as the one getting advice. If I ask for the advice of someone else, I recognize it is still up to me whether to listen. It is myself, not them, who is ultimately responsible for the choice. As such, I don’t want the other person managing me with their advice in order to assuage their own conscience. Rather, I want them to be honest so that I can fully take responsibility for my own choice.

But if I’d want others to be honest with me so that I can make my own choice, then, if I fully appreciated the moral personality of others, I wouldn’t be tempted to be dishonest with them in order to shield myself from responsibility.

Possible Hypothesis 2: Shared Values

My wife’s theory is that the phenomenon is explained by the role of shared values in moral advice. The more someone shares your values, the more appropriate it is for them to act on your advice. You share your own values perfectly, you share your close friends’ values extensively, but you only partially share the values of a stranger. So, if moral testimony involves giving advice in light of shared values, then the more uncertain you are that values are shared, the less certain you are whether to recommend sacrifice.

Of all the plausible explanations of the phenomenon, this one looks closest to being noble. But even here I think there is something vicious about the phenomenon. If you believe the person is under the same moral obligation, but that they don’t share your values, to that extent you must think their values are mistaken. But if they are mistaken, and if you take seriously their moral personality, that gives you reason to explain as best you can the value you see in the sacrifice. It does not, if you take the moral personality of another seriously, give you reason to shrug your shoulders and consign them to vice. That would be to treat them as a moral lost cause.

Possible Hypothesis 3: Inside/Outside Tension

A final possibility is that the phenomenon is explained by a tension between the deliberative and external perspectives on action.

When you deliberate about your acts, you look at them from the inside. Because human death is a terrible thing, you recognize it would be terrible to kill one human to save three. You can tell, from the inside, that it would be wrong to kill someone even to save more lives.

But we can also take the external perspective. We can see, not a choice to kill, but rather a choice between two outcomes. It is not a choice to kill, but rather a choice between one killing and three deaths. And from that external perspective, we recognize that one killing is not worse than three deaths.

The force of moral constraints emerges only inside the deliberative perspective; it doesn’t appear when you look merely at the outside comparison of results. And this may explain our phenomenon: from the deliberative perspective, Faramir can see that it is an evil thing to break a promise. But from the outside perspective, a broken promise is far less evil than Frodo’s death and betrayal.

Once again, the phenomenon would reveal an inability to appreciate the moral personality of others. It stems from a failure to see the other as a deliberating person; instead, it looks at them merely from the outside, as a sort of object. The phenomenon emerges when you look at a person, rather than when you look vicariously with a person at the choice they face.

I’m not sure which explanation is right. But I’m inclined to think all three plausible explanations reveal just how far I have to go before I really appreciate the full personhood of others.

Chris Cuomo, Brotherhood, and Morality

side-by-side photographs of Andrew and Chris Cuomo

On Thursday, CNN suspended its prime-time news anchor Chris Cuomo “indefinitely, pending further evaluation.” By Saturday, he had been terminated. The move comes shortly after documents revealed a “cozy and improper” relationship between Chris Cuomo and the political aides of his brother, Andrew Cuomo, former governor of New York.

Andrew Cuomo recently resigned as governor amid numerous allegations of sexual misconduct. Shortly before that scandal became public, Chris Cuomo held regular friendly, even comic, interviews with his brother. He also reportedly held “strategic discussions” with his brother about how best to respond to the allegations. A few days ago, the New York attorney general’s office released more documents showing Cuomo used media sources to uncover information about those who were accusing his brother of sexual misconduct. This triggered CNN’s move to suspend the anchor. CNN claimed these documents showed “a greater level of involvement in his brother’s efforts than we previously knew.”

Even before these latest revelations, many of Cuomo’s colleagues regarded his behavior as violating journalistic ethical standards, which aim to maintain a healthy barrier between those who report and those who are being reported upon. “This is a no-brainer,” says Mark Feldstein, chair of the broadcast journalism department at the University of Maryland and former staff member at CNN. “Journalism Ethics 101: journalists should never cover family members. It’s a glaring conflict of interest.” Cuomo himself now calls the discussions with his brother and his aides “inappropriate” and a “mistake.”

There is little dispute that Chris Cuomo’s behavior violated journalistic and impersonal ethical standards in seeking to help his brother. But, and here is the philosophical puzzle, was he, at the same time, also being a good brother? Cuomo explained that he was “family first; job second” and singularly committed to “be there for my family, which I must.” If Cuomo was simply being a good brother, does this mean that being a good brother can conflict with being moral? In such a case, which should we choose? Looking for answers to these questions forces us to take sides in a philosophical debate about the bounds of morality.

As the joke has it, a friend will help you move, but a good friend will help you move a body. But at the heart of the joke is a serious point. Sometimes, as Cuomo has discovered, the demands of morality seem to conflict with the demands that our personal relationships put on us.

Some think that this apparent potential for conflict between the demands of morality and those of our personal relationships is just an illusion. Regarding friendship, Aristotle thought that good friends take an interest in their friends’ moral development. We want our friends to be better people — the best versions of themselves. A good friend wouldn’t help you to indulge in immoral behavior or to avoid facing the consequences. A good friend would encourage you to face up to what you ought to do. We might think the same about Cuomo. Perhaps a good brother would not help his sibling to navigate a series of sexual misconduct allegations. Perhaps, then, Cuomo was being both unethical and a bad brother. On this view, there is no real conflict of values. The standards of both morality and brotherhood condemn Chris Cuomo’s behavior.

But this Aristotelian approach is arguably an overly moralized conception of friendship, or, in our case, brotherhood. Contrary to the Aristotelian view, it certainly seems possible for good friends to not particularly care about each other’s moral development. Imagine two kids who enjoy getting into mischief together. The Aristotelian view implies that they simply aren’t good friends, but that doesn’t seem correct. So perhaps the Aristotelian view is false and there really is a tension between the demands of morality and those of our personal relationships.

So, we’re back to the idea that there is a genuine conflict between being a good brother and a morally good person. Let’s examine that apparent conflict more closely.

We generally think morality is impartial. When something is the moral thing to do, it’s the moral thing to do for everybody. At least, this is what the two most famous moral theories — Kantianism and Utilitarianism — claim. According to Kant, we’re acting morally so long as we’re treating people’s humanity not merely as a means, but (also) as an end. According to Utilitarianism, we’re acting morally so long as our actions produce the best outcome of the available options. These are both wholly impartial theories of morality. Everyone counts the same as everyone else. It doesn’t matter who the person in front of you is, or what your relationship with them is; morally, you just treat them the same as anyone else!

On the other hand, the demands of friendship and family are clearly not impartial. If you are a good friend, the fact that your friend is your friend means you will treat her better than you would a stranger. We often think we shouldn’t treat our friends or family just like we treat everyone else. So, the conflict between morality and friendship/family can be thought of as a conflict between acting impartially and morally, on the one hand, and acting partially, in favor of our friends and family, on the other.

Here is another reason to think there is, contrary to appearances, no real conflict between morality and brotherhood. We might be going wrong in thinking of morality as totally impersonal. Maybe the partial demands of friendship and family are genuine moral demands too. This idea is called “moral pluralism.”

Moral pluralists agree that the impersonal values that morality generally focuses upon — such as promoting general well-being — are important moral values. But they don’t think they are the only important moral values. According to the moral pluralist, Cuomo didn’t face a conflict between the demands of brotherhood and those of morality. Instead, the moral pluralist would say Cuomo faced a conflict within morality, between two important moral values — of impersonal morality and of brotherhood. This provides a very different picture of Cuomo’s dilemma.

On this moral pluralist view, Chris Cuomo had good moral reasons to try to help his brother. He also had good moral reasons to maintain his distance and journalistic independence. This leaves us with a difficult question: which should he have done? Here, the moral pluralist faces the task of weighing these reasons against each other to form an overall, all-things-considered judgment.

The kind of dilemma Cuomo faced, between taking particular care of those you are closest to and living up to impersonal ethical standards, is not rare. The same (apparent) conflict can be found in choosing whether to donate to an effective charity or buy a Christmas present for someone you love, or in choosing whether to let your friend copy your answers in a school test. These apparent dilemmas force us to confront some particularly tricky philosophical puzzles — puzzles about the nature of friendship, of familial bonds, and about which values we include in our conception of morality.

Why Vaccinating the World Is the Best Thing for Australia

photograph of multiple arrows in target's bullseye

With COVID vaccine boosters ready to be rolled out across Australia, a debate has arisen about the ethics of taking a third vaccine shot when nearly half of the world population has yet to receive their first. Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization, has invoked the principle of health equity in suggesting that booster shots should be restricted until the world’s poorest have access to a vaccine. Daniel Burkett has argued in this very venue that the ethical choice is to boycott the booster – sending a message that we refuse to partake in ill-gotten gains – although A.G. Holdier replies that any such attempt might ultimately prove futile. On the other hand, proponents of the shot highlight evidence of its efficacy in reducing infections and symptoms of COVID, and suggest that our moral obligation to get vaccinated might well extend to taking booster shots. In this case, however, we need not choose between self-interest and morality: for those of us in Australia, the two are aligned.

It might be argued that vaccines should be directed, not to the poorest countries, but to those with the highest case fatality rates (CFRs). Research has shown that COVID-19 is more deadly in countries with increased prevalence of obesity, diabetes, and cardiovascular issues, and in those with older populations: all apt descriptors for Australia. Such a simplistic take, however, would overlook the quality of medical care already available in Australia and our medical system’s strong (albeit still limited) capacity to deal with significant medical emergencies. Furthermore, this analysis relies on questionable data, with severe undertesting for COVID skewing statistics in poorer countries. And as noted by Burkett, any analysis based on CFR would have to consider not only the initial risk of COVID, but the diminishing returns in vaccine effectiveness. Whereas the first dose of a Pfizer vaccine can reduce the likelihood of developing symptoms by around 50%, a second dose adds around 40% more protection and a third only 10-15% when compared to baseline.

So, it is hard to make the case that we need booster shots more than those in the developing world need a first or second dose. A better argument might be that boosters would provide a net benefit to both health and the economy, and, as we can get them, we should take them. This is a kind of medical lifeboat ethics: we have the capacity to save ourselves, and we shouldn’t squander that chance on the naïve hope of saving others. After all, the responsibility of government is to look out for the interests of its citizens. Politically, vaccine boosters are local, immediate and – perhaps most importantly for the current government in the run-up to an election – highly marketable to an understandably worried voter base. And while bioethicist Nancy Jecker points out that countries like Tanzania, Chad, and Haiti are hovering at a disastrous 1% vaccination rate, this comparison is neither apt nor helpful. The issue there is not so much with vaccine supply (although this may be a contributing factor) as with the potent mix of conflict, corruption, and political instability preventing acquisition and effective distribution of vaccines. In Australia, we have boosters available (and more on the way), and we have the capacity to administer them. So why shouldn’t we?

A more enlightening comparison would look not at the least vaccinated nations, but at those where COVID has taken a significant toll: countries like India, Brazil, and South Africa, with stable governments, better record-keeping, and active – albeit imperfect – vaccination programs. There, the picture is grim. India’s actual COVID death toll could be well over two million (compared to the official figure of less than 500 thousand), while vaccination lags at just over 50% first dose. In South Africa, official figures suggest less than 100 thousand COVID-related deaths. However, excess mortality since the start of 2020 exceeds 250 thousand, with an estimated 95% of these deaths due to the disease. Just over a quarter of the population have received a vaccine dose, and even fewer have received two. In Brazil, somewhere around 600 thousand people have died from COVID, while less than 60% of the population are fully vaccinated (although a more impressive 75% have received at least one dose). Most disturbingly, each of these countries has seen the emergence of new, dangerous variants, with Beta coming out of South Africa, Gamma originating in Brazil, and the now-dominant Delta strain starting in India before taking over the world. This is before mentioning the fast-encroaching specter of the new, ominously named Omicron.

Here is the crucial point. On this issue, we don’t have to choose between morality and self-interest, as suggested by Nicholas Bugeja at The Interpreter. Instead, this is one of those (increasingly rare) cases where morality and self-interest align. The biggest threat to Australia is not a resurgent Delta variant attacking those whose vaccines have diminished in efficacy. The biggest threats are of new variants, incubated in highly mobile, densely packed, and largely unvaccinated populations. Every time COVID-19 is transmitted – wherever in the world that transmission occurs – there is the chance of a potentially dangerous mutation emerging. The best way to deal with this threat is mass vaccination of the unvaccinated, not marginal gains for the already protected.

Nor must we choose between vaccinating the world and protecting our most vulnerable. We can do both. Booster vaccines should be prioritized for those most at risk – the elderly and those with pre-existing conditions – while an increasing share is sent overseas to boost immunity and reduce the threat of new variants. CSL is already contracted to produce about 30 million more doses of the AstraZeneca vaccine, but the government should reconsider its decision to stop production after the current run. We might not be able to vaccinate the whole world, but we can certainly play our part.

It is important to note that this analysis takes a myopic Australian viewpoint and may not apply to much of the world. Here, despite our slow start, vaccination rates are approaching 80% of the total population (and continue to rise), while case numbers remain low. For Holdier, living in Arkansas where vaccine hesitancy (and rampant misinformation) has kept the vaccination rate below 50%, the personal likelihood of catching COVID skews the moral calculus in favor of boosters and away from admirable, but possibly futile, political posturing. There, the risk of catching a current strain may well outweigh the dangers of a new variant.

But in Australia, as borders reopen, international students are welcomed back, and international travel returns, so too does the threat of a new, more virulent, or more deadly strain hitting our shores (the much-hyped Mu variant might have been kept at bay, but Omicron is already here). Hotel quarantine wasn’t enough to keep Delta out last time, although it was a huge help. This time, we will be without our first and most effective line of defense. If we demand people get vaccinated before coming to Australia – and we should – then the onus is on us to make those vaccinations accessible. It’s in our best interest.

On Booster Shot Boycotts and Participatory Democracy

photograph of lone wooden figurine holding sign

Recently, Daniel Burkett argued here at The Prindle Post that many people in the United States have a good reason to conscientiously abstain from receiving a booster-dose of the COVID-19 vaccine until others around the world have had a fair chance to get their initial shots. As Burkett explains, as is often the case with limited resources, the Global North has received a disproportionately high amount of the various vaccines recently developed to combat the global pandemic; for multiple reasons, ranging from duties of international care to utilitarian calculations of good-maximization to pragmatic concerns about potential virus mutations, Burkett contends that many of us have positive obligations to forgo our third jab. According to Burkett, “By refraining from taking the COVID-19 booster — at least until those in poorer nations have had the opportunity to receive their initial vaccine — we send a clear message to our governments that we will not partake in ill-gotten gains.”

Certainly, Burkett is right to identify the problem of global vaccine disparity for what it is: an injustice born from centuries of preferential treatment and abuse. In many ways, those of us in rich countries harbor obligations to reconsider how our privileged positions affect the citizens of poorer nations. So, I do not aim to disagree here with what I take Burkett’s main point to be: namely, that the COVID-19 vaccine (along with, to be frank, plenty of other things) should be made more readily available to people living outside the borders of the U.S., U.K., and EU.

I just think that a booster shot boycott is not, on its own, sufficient to provoke such a change.

For example, my current home state of Arkansas made headlines last summer when 80,000 doses of its vaccine stock expired before being administered. Despite the vaccine being readily available for months, Arkansas had one of the lowest statewide vaccination rates in the country, with just barely over a third of the population counting as “fully vaccinated.” According to CDC data, as of November 23rd, Arkansas (along with nine other states) has still not broken the halfway point to full-vaccination status for its nearly three million citizens. Despite pleas from the governor, local doctors, and the families of those affected by the disease, many people in Arkansas have simply refused to take advantage of the opportunity to protect themselves and their community from the novel coronavirus that has shaped so much of the last two years of our lives.

So, let’s imagine that someone in Arkansas grows convinced that the global vaccine supply chain is importantly unjust and therefore elects to forgo their booster shot as a form of protest: how might the state’s governor interpret such a choice? Even if large numbers of people join together and do this, without some clear kind of messaging or explanation defending their rationale for the boycott, it seems likely that the governor and other officials will simply believe that low booster-shot rates are additional symptoms of the already-clear problem of vaccine hesitancy in general — not that anyone is, say, protesting Moderna’s business practices. And I think similar interpretations would hold around the country, given the wide-ranging difficulties we’ve seen promoting vaccine uptake over the last few months.

That is to say, in order for a booster shot boycott to be effective at actually helping people in other countries receive the vaccine, it not only needs to be large enough to attract the attention necessary to provoke action, but it also needs to be clearly articulated in terms that will be relevant to the policy-makers who hold the power to effect the desired changes. At present, one key problem for global vaccine distribution involves the legal protections for pharmaceutical intellectual property; without considerable coordinated effort, it’s not clear how anyone’s individual choice to abstain from a third shot will make a difference to whether or not Pfizer or Johnson & Johnson choose to give up potential corporate profits for the sake of global well-being (or, conversely, for governments to force them to do so).

In short, in order for boycotts to be effective, they must operate within a robust sense of community engagement akin to how philosopher John Dewey understood participatory democracy to function in general. According to Dewey, democracies are not simply governments structured via the institution of citizens’ periodic voting, but manifest via the regular interaction of well-informed people sharing ideas, confronting problems, and encouraging each other to work together to develop solutions; as he says in his 1916 book Democracy and Education, “a democracy is more than a form of government; it is primarily a mode of associated living, of conjoint communicated experience.” Without explicitly communicating the motivations for the boycott — perhaps by organizing loudly and publicly around the kinds of institutional challenges regarding booster shot limitations levied by WHO Director General Tedros Adhanom Ghebreyesus — it’s unlikely that the potential boycott could substantively contribute to its intentions being actualized precisely because the other agents in our democracy would fail to realize the “mode of living” out of which the action stems.

And this is all bracketing the important question about the long-term efficacy of “full vaccination” status without a later booster: particularly with the still-live threat of breakthrough infections and high rates of unvaccinated individuals in local communities, the wisdom of a booster shot boycott should also be measured against its potential contribution to already-concerning winter forecasts.

In any case, while political activity can take many forms, misinterpretation of one’s political choices is always a risk that political agents face — preparing for and mitigating such possibilities is an important part of political organization. Without doing that kind of collective work, we wouldn’t be “protesting global injustice” by individually boycotting our booster shots; in fact, it’s not clear that we’d be communicating anything at all.

Hotline Ping: Chatbots as Medical Counselors?

photograph of stethoscope wrapped around phone

In early 2021, the Trevor Project — a mental health crisis hotline for LGBTQIA+ youths — made headlines with its decision to utilize an AI chatbot as a method for training counselors to deal with real crises from real people. They named the chatbot “Riley.” The utility of such a tool is obvious: if successful, new recruits could be trained at all times of day or night, trained en masse, and trained to deal with a diverse array of problems and emergencies. Additionally, training workers on a chatbot greatly minimizes the risk of something going wrong if someone experiencing a severe mental health emergency got connected with a brand-new counselor. If a new trainee makes a mistake in counseling Riley, there is no actual human at risk. Trevor Project counselors can learn by making mistakes with an algorithm rather than a vulnerable teenager.

Unsurprisingly, this technology soon expanded beyond the scope of training counselors. In October of 2021, the project reported that chatbots were also used to screen youths (who contact the hotline via text) to determine their level of risk. Those predicted to be most at-risk, according to the algorithm, are put in a “priority queue” to reach counselors more quickly. Additionally, the Trevor Project is not the only medical/counseling organization utilizing high-tech chatbots with human-like conversational abilities. Australian clinics that specialize in genetic counseling have recently begun using a chatbot named “Edna” to talk with patients and help them make decisions about whether or not to get certain genetic screenings. The U.K.-based Recovery Research Center is currently implementing a chatbot to help doctors stay up-to-date on the conditions of patients who struggle with chronic pain.

On initial reading, the idea of using AI to help people through a mental or physical crisis might make the average person feel uncomfortable. While we may, under dire circumstances, feel okay about divulging our deepest fears and traumas to an empathetic and understanding human, the idea of typing out all of this information to be processed by an algorithm smacks of a chilly technological dystopia where humans are scanned and passed along like mere bins of data. Of course, a more measured take shows the noble intentions behind the use of the chatbots. Chatbots can help train more counselors, provide more people with the assistance they need, and identify those people who need to reach human counselors as quickly as possible.

On the other hand, big data algorithms have become notorious for the biases and false predictive tendencies hidden beneath a veneer of objectivity. Algorithms themselves are no more useful than the data we put into them. Chatbots in Australian mental health crisis hotlines were trained by analyzing “more than 100 suicide notes” to gain information about words and phrases that signal hopelessness or despair. But 100 is a fairly small number. On average, there are more than 130 suicides every day in the United States alone. Further, only 25-30% of people who commit suicide leave a note at all. Those who do leave a note may be having a very different kind of mental health crisis than those who leave no note, meaning that these chatbots would be trained to recognize only the clues present in (at best) about a quarter of successful suicides. Further, we might worry that stigma surrounding mental health care in certain communities could disadvantage teens that already have a hard time accessing these resources. The chatbot may not have enough information to recognize a severe mental health crisis in someone who does not know the relevant words to describe their experience, or who is being reserved out of a sense of shame.

Of course, there is no guarantee that a human correspondent would be any better at avoiding bias, short-sightedness, and limited information than an algorithm would be. There is, perhaps, good reason to think that a human would be much worse, on average. Human minds can process far less information, at a far slower pace, than algorithms, and our reasoning is often imperfect and driven by emotions. It is easy to imagine the argument being made that, yes, chatbots aren’t perfect, but they are much more reliable than a human correspondent would be.

Still, it seems doubtful that young people would, in the midst of a mental health crisis, take comfort in the idea of typing their problems to an algorithm rather than communicating them to a human being. The fact is that most consumers strongly prefer talking with humans over chatbots, even when the chatbots are more efficient. There is something cold about the idea of making teens — some in life-or-death situations — pass through a chatbot screening before being connected with someone. Even if the process is extremely short, it can still be jarring. How many of us avoid calling certain numbers just to avoid having to interact with a machine?

Yet, perhaps a sufficiently life-like chatbot would neutralize these concerns and make those who call or text in to the hotline feel just as comfortable as if they were communicating with a person. Research has long shown that humans are able to form emotional connections with AI extremely quickly, even if the AI is fairly rudimentary. And more people seem to be getting comfortable with the idea of talking about their mental health struggles with a robot. Is this an inevitable result of technology becoming more and more a ubiquitous part of our lives? Is it a consequence of the difficulty of connecting with real humans in our era of solitude and fast-paced living? Or, maybe, are the robots simply becoming more life-like? Whatever the case may be, we should be diligent in ensuring that these chatbots rely on algorithms that help overcome deep human biases, rather than further ingrain them.