
Is It Okay to Be Mean to AI?

Fast-food chain Taco Bell recently replaced drive-through workers with AI chatbots at a select number of locations across America. The outcome was perhaps predictable: numerous videos went viral on social media showing customers becoming infuriated with the AI’s mistakes. People also started to see what they could get away with, including one instance where a customer ordered 18,000 waters, temporarily crashing the system.

As AI programs start to occupy more mundane areas of our lives, more and more people are getting mad at them, are being mean to them, or are just trying to mess with them. This behavior has apparently become so pervasive that AI company Anthropic announced that its chatbot Claude would now end conversations when they were deemed “abusive.” Never one to shy away from offering his opinion, Elon Musk went to Twitter to express his concerns, remarking that “torturing AI is not okay.”

Using terms like “abuse” and “torture” already risks anthropomorphizing AI, so let’s ask a simpler question: is it okay to be mean to AI?

We asked a similar question at the Prindle Post a few years ago, when chatbots had only recently become mainstream. That article argued that we should not be cruel to AIs, since by acting cruelly towards one thing we might get into the habit of acting cruelly towards other things, as well. However, chatbots and our relationships with them have changed in the years since their introduction. Is it still the case that we shouldn’t be mean to them? I think the answer has become a bit more complicated.

There is certainly still an argument to be made that, as a rule, we should avoid acting cruelly whenever possible, even if it is towards inanimate objects. Recent developments in AI have, however, raised a potentially different question regarding the treatment of chatbots: whether they can be harmed. The statements from Anthropic and Musk seem to imply that they can, or at least that there is a chance that they can be, and thus that you shouldn’t be cruel to chatbots because doing so at least risks causing harm to the chatbot itself.

In other words, we might think that we shouldn’t be mean to chatbots because they have moral status: they are the kinds of things that can be morally harmed, benefitted, and evaluated as good or bad. There are lots of things that have moral status – people and other complex animals are usually the things we think of first, but we might also think about simpler animals, plants, and maybe even nature. There are also lots of things that we don’t typically think have moral status: inanimate objects, machines, single-cell organisms, things like that.

So how can we determine whether something has moral status? Here’s one approach: whether something has moral status depends on certain properties that it has. For example, we might think that the reason people have moral status is because they have consciousness, or perhaps because they have brains and a nervous system, or some other property. These aren’t the only properties we can choose. For example, 18th-century philosopher Jeremy Bentham argued that animals should be afforded many more rights than they were at the time, not because they have consciousness or the ability to reason, per se, but simply because they are capable of suffering.

What about AI chatbots, then? Despite ongoing hype, there still is no good reason to believe any chatbot is capable of reasoning in the way that people are, nor is there any good reason to believe that they possess “consciousness” or are capable of suffering in any sense. So if it can’t reason, isn’t conscious, and can’t suffer, should we definitively rule out chatbots from having moral status?

There is potentially another way of thinking about moral status: instead of thinking about the properties of the thing itself, we should think about our relationship with it. Philosopher of technology Mark Coeckelbergh considers cases where people have become attached to robot companions, arguing that, for example, “if an elderly person is already very attached to her Paro robot and regards it as a pet or baby, then what needs to be discussed is that relation, rather than the ‘moral standing’ of the robot.” According to this view, it’s not important whether a robot, AI, or really anything else has consciousness or can feel pain when thinking about moral status. Instead, what’s important when considering how we should treat something is our experiences with and relationship to it.

You may have had a similar experience: we can become attached to objects and feel that they deserve consideration that other objects do not. We might also ascribe more moral status to some things than to others, depending on our relationship with them. For example, someone who eats meat can recognize that their pet dog or cat is comparable in terms of relevant properties to a pig, insofar as they are all capable of suffering, have brains and complex nervous systems, etc. Yet although they have no problem eating a pig, they would likely be horrified if someone suggested they eat their pet. In this case, they might ascribe some moral status to a pig, but would ascribe much more moral status to their pet because of their relationship with it.

Indeed, we have also seen cases where people have become very attached to their chatbots, in some cases forming relationships with them or even attempting to marry them. In such cases, we might think that there is a meaningful moral relationship, regardless of any properties the chatbot has. If we were to ascribe a chatbot moral status because of our relationship with it, though, its being a chatbot is incidental: it would be a thing that we are attached to and consider important, but that doesn’t mean that it thereby has any of the important properties we typically associate with having moral status. Nor would our relationship be generalizable: just because one person has an emotional attachment to a chatbot does not mean that all relationships with chatbots are morally significant.

Indeed, we have seen that not all of our experiences with AI have been positive. As AI chatbots and other programs occupy a larger part of our lives, they can make our lives more frustrating and difficult, and thus we might establish relationships with them that do not hold them up as objects of our affection or care, but as obstacles and even detriments to our wellbeing. Are there cases, then, where a chatbot might not be deserving of our care, but rather our condemnation?

For example, we have all likely been in a situation where we had to deal with frustrating technology. Maybe it was an outdated piece of software you were forced to use, or an appliance that never worked as it was supposed to, or a printer that constantly jammed for seemingly no good reason. None of these things have the properties that make them a legitimate subject of moral evaluation: they don’t know what they’re doing, have no intentions to upset anyone, and have none of the obligations that we would expect from a person. Nevertheless, it is the relationship we’ve established with them that seems to make them an appropriate target of our ire. Yelling profanities at the office printer after its umpteenth failure to complete a simple printing task is not only cathartic, it is justified.

When an AI chatbot takes the place of a person and fails to work properly, it is no surprise that we would start to have negative experiences with it. While failing to properly take a Taco Bell order is, all things considered, not a significant indignity, it is symptomatic of a larger trend of problems that AI has been creating, ranging from environmental impact, to job displacement, to overreliance resulting in cognitive debt, to simply creating more work for us than before they existed. Perhaps, then, ordering 18,000 waters in an attempt to crash an unwelcome AI system is not so much cruel as it is a righteous expression of indignation.

The dominant narrative around AI – perpetuated by tech companies – is that it will bring untold benefits that will make our lives easier, and that it will one day be intelligent in the way human beings are. If these things were true, then it would be easier to be concerned with the so-called “abuse” of AI. However, given that AI programs do not have the properties required for moral status, and that our relationships with them are frequently ones of frustration, perhaps being mean to an AI isn’t such a big deal after all.

Perspective, Persistence, Patience: Lessons from “The Oner”

Seth Rogen and Evan Goldberg recently won an Emmy for their directing in the show The Studio, which took home an impressive 13 awards. The episode they won for, “The Oner,” was an obvious pick for production enthusiasts and its meta elements were a delight to watch. The episode follows a film crew trying to pull off a complicated single-shot sequence at sunset. Shots like this are aptly called oners. The creators of this episode demonstrate their prowess by shooting the episode in a single take. (Though Rogen and Goldberg admit that it is technically several oners stitched together, their efforts and creativity are still praiseworthy.)

Since its airing, I have rewatched the episode a few times. Perhaps one of the reasons I am attracted to this episode is because shooting a continuous take is technically demanding. It requires thoughtful planning, rehearsing, and execution by a team of people working in perfect sync. While film and TV are inherently collaborative, pulling off a oner increases the stakes. Its technical achievement is, in itself, impressive, and so I rewatch it to enjoy the spectacle of it all. But, I think its significance goes beyond its technical achievement.

In today’s media environment, we are bombarded with short-form content. The pervasiveness of Instagram Reels, TikTok, YouTube Shorts, and quick posts on X or Facebook trains us to consume information in small, decontextualized bursts. And the way we are presented with content matters. In Amusing Ourselves to Death, Neil Postman warns us that television has eroded our public discourse because it does not require the discipline and concentration demanded by print. Reading requires patience. One must move sentence by sentence, holding ideas in mind until they resolve themselves at the end of a section, chapter, or book. Television, by contrast, shows us images that are easier to consume but harder to organize meaningfully.

While I do not fully agree with Postman about all of his comparisons between reading and television, I think he makes a compelling argument about attending to the form of our media. The dominant form of media in one’s culture will shape the kind of information one consumes and the way one understands the information it tries to convey. Postman notes that the idea of “daily news” did not make sense before the arrival of the telegraph, because prior to it news could travel only as fast as a train, roughly 20-30 mph.

What fascinates me about the oner, however, is that it resists standard television logic. Unlike much of the form, it demands sustained attention. Most shows are cut-heavy, forcing us to switch perspectives. We shift from over-the-shoulder to wide shots, and between a variety of third-person vantage points, all of which are used to serve important storytelling purposes. However, these cuts also fragment our viewing experience.

Contrast this with a continuous, uncut shot. With a oner, we are allowed to occupy a singular perspective that flows with the action of the scene in real time. This allows us to be a part of the reality we are witnessing, mimicking the way we actually experience the world. Watching Seth Rogen’s character, Matt Remick, infiltrate the set and disrupt the production, I felt as if I were also there on set, perhaps as a silent PA in the background. The format draws us into the scene and allows us to feel the events that unfold more naturally.

What is striking about the long, unbroken shot in “The Oner” is that it asks us to practice something that resembles the skill of reading. Consider one of the jokes made at the beginning of the episode. Remick suggests to the director that the actress should be smoking in the scene because it will create a nice bookend, and who doesn’t love a nice bookend? Of course, this suggestion is just one of the many things Remick does that ultimately thwart the success of the shot. The joke for us as viewers is that the episode itself contains a bookend: Remick parks his car in the driveway at the very beginning, only to be told at the very end that his car is blocking the shot. The gag only works if you exercise sustained attention, much like the attention that Postman tells us is required for reading.

The oner can do more than just engage our attention. When executed properly, it has the power to bring us closer to the truth. This idea resonates with what Heidegger says in The Question Concerning Technology, where he attempts to uncover the nature of technology. He argues that technology is a kind of revealing or uncovering, which he traces to the ancient Greek notion of alētheia, or unconcealment.

The video camera is undoubtedly an expression of technology. But its ability to bring us closer to truth depends on the way it is used. When we consider the nature of film and TV, we cannot do so without considering the edit and the final cut. Every cut, every juxtaposition of one shot with the next, conveys one version of reality or another. Cuts can be used to bring us closer to something true or to push us further from it.

When presented with a single-shot scene, the viewer’s willingness to accept what is happening as true increases, because the alternative – a cut-up, edited scene – implies a certain kind of concealment. In the heavily edited scene, we must infer what happens in between cuts. Even if the cuts appear to happen instantly, we lose something in the movement between shots.

The oner, however, does not blink. It attempts to show us an unbroken version of reality. This is why a long shot can feel more believable when it’s a part of a fictional story and why videos of real-world events often feel more trustworthy when there are fewer cuts. The longer the camera runs, the more we are forced to accept what it shows. The oner is powerful because it refuses to hide reality.

This is not to say that every single, continuous shot is inherently more truthful than an edited sequence, nor is it to suggest that one cannot fake the truth of an unbroken shot. Indeed, many one-shot takes in film and television are cheated (both Birdman and 1917 are edited to look as though they were shot in one continuous take, when in fact they were not). Even if the shot is genuinely done in one take, it is still limited by its own perspective in time and space. But the oner stands out to me because, in today’s media landscape of short attention spans and fast payoffs, the single-shot sequence asks us to sit down, wait for the scene to unfold, and hold our judgment until the very end.

Memory Erasure and Forgetting Who You Are

Earlier this month, MSN News detailed new research coming out of Japan that may revolutionize how we approach memory. The technology – being developed at Tohoku University – allows for the selective deletion of traumatic memories. While it has only been tested on mice so far, human applications could provide widespread benefits – particularly for those who suffer from post-traumatic stress disorder (PTSD). But, as with the advent of any new technology, it’s important to pause and consider the implications of such a development. Might we have moral reason not to remove certain memories – even the really bad ones?

Immanuel Kant had much to say on matters like these. For Kant, our rationality – our ability to reach reasoned decisions – was of paramount importance. This is why he saw the circumvention of our rational processes as one of the worst things we could do. Kant’s fundamental moral rule – the “Categorical Imperative” – demands that we always treat people (including ourselves) as ends in themselves, and never merely as a means to an end. It’d be wrong, for example, to befriend a lawyer merely for free legal advice. This formulation of Kant’s rule also creates strong prohibitions against lying and other forms of deception. Why? Because feeding someone with false information will necessarily derail their ability to make informed – and therefore rational – decisions. That’s what’s so egregious about the blatant spread of misinformation by politicians and pundits.

It’s worth noting that, according to Kant, deception isn’t wrong just some of the time, but – rather – all of the time. And, for many, this might seem too strict. Consider, for example, the concept of “white lies.” Many of us believe that it’s morally permissible to engage in occasional harmless deceits – especially where doing so spares the feelings of others. Suppose that I was to play you a piece of music on my mandolin (an instrument I’m only just beginning to learn) and ask you what you thought of it. Suppose, further, that – owing to my inexperience – the performance wasn’t very good. It would be tempting to tell a white lie, and compliment my performance; but – according to Kant – it would be wrong for you to do so. And maybe we can see why. Among my ends, we might assume, is the desire to be a good mandolin player – but I won’t be able to achieve that end if I believe that my amateurish fumblings already sound excellent. In order to rationally decide how much practice I need, I require honest feedback – even if it comes at the cost of my feelings.

Let’s return, then, to the technology offered by Tohoku University. Removing one’s memories might be seen as a sort of “self-deception” – a lie we tell ourselves in order to feel better. But, if Kant’s approach is correct, then it’s morally impermissible for us to do this. Our experiences – even those that are traumatic – inform our rational decision-making. Put another way, they allow us to make better decisions going forward. To remove such memories might therefore be a case of failing to respect our own ends, instead treating ourselves as a mere means to the end of greater happiness.

Of course, subscribers to certain other ethical theories would say that this is precisely the point. Hedonistic utilitarians, for example, claim that an action is right so long as it maximizes pleasure (or, at the very least, minimizes pain). Utilitarians of this stripe are all for lying if it achieves some greater good. On this approach, then, the removal of one’s memories would be the right thing to do if it genuinely made the person happier.

But whether it’s morally permissible to remove one’s memories is only half the concern here. A more troubling consideration is whether we can even make sense of you removing your memories in the first place.

Philosophers spend a lot of time thinking about the problem of “personal identity” – that is, what makes someone the same person over time. To be clear, there are many ways in which we aren’t the same person over time. We obviously change – inside and out. We grow, we age, we learn, and we change our minds about a great many things. But in spite of all these qualitative changes, there is a sense in which we still persist as the same person over time. When I look at a photo of my ten-year-old self, I know that’s me – even though he looks, acts, and thinks very differently from the me who sits here now. What, then, makes that ten-year-old the same person as me? This is the problem of personal identity.

A number of different answers have been given to this question (including the answer that there is no good answer). But one of the most popular solutions suggests that it’s psychological continuity – specifically, our memories – that make us the same person over time. In other words, the ten-year-old in that picture is me simply because I remember being him.

What this means, then, is that if I lost all of my memories, I would cease to exist. Sure, there would still be someone sitting here in this body, but that person wouldn’t be me. For many, this fits well with their intuitions regarding personal identity. What’s unclear, however, is how many of our memories we can afford to lose while continuing to be the same person. It seems that the answer is, at least, “some.” I have no memory of writing my first piece for The Prindle Post, but I can still confidently claim that the person who wrote that piece was me. But there is, it seems, a “critical mass” of memories (especially important, character-forming memories) that, if lost, would mean that I had gone out of existence. There’s a chance, then, that utilizing – or, at least, over-utilizing – technology like that being developed at Tohoku University might not merely raise moral concerns, but threaten our continued existence altogether.

Striking First?: Morality and Defensive Violence

Last week, President Donald Trump announced that the United States military had completed an operation – a lethal strike on a boat traveling through international waters carrying 11 Venezuelans. The Trump administration claims that those onboard the boat were drug smugglers and members of Tren de Aragua, a Venezuelan gang, with the president going so far as to refer to them as “terrorists.” The administration has not, at least publicly, provided evidence to substantiate these accusations. Nor did it brief any members of Congress on its intentions prior to the attack.

Information about the strike is still trickling out, and in some cases it is shifting. Secretary of State Marco Rubio initially stated the vessel was likely headed to Trinidad and Tobago or another Caribbean nation, yet dropped this claim after Trump said it was headed towards the U.S. The New York Times now reports that the boat had turned around prior to the strike after those on it spotted military aircraft. Senator Rand Paul informed The Intercept that the military carried out the attack via drone strike, and the same outlet reports that follow-up strikes killed survivors of the initial attack. Senator Jack Reed, the ranking Democrat on the Senate Armed Services Committee, said following a congressional briefing that the Pentagon has “offered no positive identification that the boat was Venezuelan, nor that its crew were members of Tren de Aragua or any other cartel.”

Many have criticized this act as violating both national and international law. To put it bluntly, the U.S. military killed 11 foreign citizens based on an accusation of wrongdoing. Part VII of the United Nations Convention on the Law of the Sea establishes that international waters shall be reserved for peaceful purposes, and that force should be deployed only as a last resort. While the U.S. is not a signatory to the convention, as the Senate never formally considered the treaty, the military’s position across several administrations has been to comply with its parameters. The strike also likely violates the U.N. Charter, as the vessel did not appear to be engaged in an attack against the U.S. Article 6 of the International Covenant on Civil and Political Rights – to which the U.S. is a signatory – affirms that the right to life is protected by law and that no one shall be killed arbitrarily. Furthermore, the U.S. Code labels the murder of civilians a war crime, even if those civilians are in the midst of lawbreaking. So, the act appears illegal even under U.S. law governing wartime conduct.

However, I am not a lawyer. So I cannot assess legality beyond mere appearance. But I am an ethicist! The law and morality often diverge; one may have legal permission to do morally abhorrent things, while at other times righteous actions may be illegal. So, instead, I want to assess whether this operation can be morally justified.

I have elsewhere critiqued actions undertaken by this administration without due process and argued for the moral importance of this concept to punishment. I will not repeat that argument here, other than simply to emphasize the following. Without due process, no one’s rights are safe; the mere accusation that one is a member of some group purported to be engaged in illicit activity becomes sufficient for punishment. We should adopt a stance of serious moral skepticism toward any punishment doled out without a process intended to ensure the accusations are correct.

So, for our purposes let us set aside concerns about process and instead focus on outcome. Suppose that all of the administration’s accusations are true: the ship was occupied by gang members, intent on smuggling drugs into the U.S. Would this justify the use of lethal force?

Members of the administration have argued that the attack was a matter of national defense. Rubio declared that the U.S. has the right “to eliminate imminent threats,” and Vance claimed that the attack “kill[ed] cartel members who poison our fellow citizens” while praising this as the “highest and best use of our military.” According to the CDC, there were 105,007 documented drug overdose deaths in the U.S. during 2023 (31.3 per 100,000 people). But for the presence of those drugs, those people would presumably be alive. Thus, according to this argument, those who smuggle drugs into the U.S. pose potentially lethal threats to American citizens. So, the act of striking the alleged smugglers was an attempt to pre-emptively save the lives of those who would otherwise die of an overdose.

To assess the merit of this justification, we must consider some general features of the morality of defense. Just war theorists, those who study the moral justification of military force, often analyze questions of national defense by first determining what justifies the use of lethal defensive force in the context of individual self-defense. They then attempt to “scale up” that justification to the level of a nation.

To avoid some of the complexities of this case – that the individuals killed were not members of a uniformed military, that the U.S. is not in a declared war against enemy combatants, that the strike occurred in international waters rather than within a nation’s borders – let us leave the analysis at the level of self-defense. My assumption is that, if the argument from individual self-defense fails here, then it will certainly fail once the waters are muddied by the further details involved in analyzing this as a military act.

Back in August, Matthew Silk discussed the conditions for justified self-defense in the context of Canadian law. Among the conditions Silk raised is the idea that self-defense must generally be an appropriate response to the threat. He gives the example of responding to someone pushing you by striking them with a tire iron as an egregious case. While Silk considers the reasonableness of this response, as Canadian law codifies it, philosophers discuss this idea in terms of proportionality: the force with which you respond ought to be of a similar magnitude to the threat you face.

I agree with Silk that responding to a push with a weapon is unreasonable and disproportionate. But I want to mine Silk’s example further. Notice that while a push is hardly a severe assault, it is still potentially lethal. There is a chance, however remote, that one could die from a push – annually, several thousand people in the U.S. die from traumatic brain injuries caused by falls.

Nonetheless, striking a pusher with a tire iron would be too much. The remote chance of death does not justify a disproportionate response. This case does help illuminate, though, that threats we face are all a matter of risk. Suppose a stranger shouts that he wants to kill me and pulls out a gun. At this moment, it is still a matter of chance that I even suffer injury; perhaps this is some kind of sick joke, he may miss the shot, the gun could misfire, etc. However, even though it is not guaranteed that I suffer harm, I’d still be justified in drawing my own gun and firing to defend myself (provided I’m a quicker draw than the Waco Kid). The stranger is liable to lethal defensive harm on my part because of the magnitude and severity of the risk he imposes upon me. The pusher, however, is not liable to the same defensive harm; although a push could be lethal, the chances of its being so are quite low.

With this in mind, we can reassess the risk posed by the alleged drug boat to American citizens. The earlier figure of overdose deaths was due to all drugs in the United States. To be clear, each of these deaths is an avoidable tragedy. But it is not clear that a single drug smuggling operation is sufficient to significantly raise the risk of death for any specific person – in each of the last three years, U.S. Customs and Border Protection reports that it seized between 550,000 and 660,000 pounds of drugs. Keep in mind that these are drugs seized; the amount that enters the U.S. is likely an order of magnitude greater. Given the scale of the operation (a speedboat containing 11 people) and the scope of drug use in the U.S., it seems unlikely that the risk imposed here is sufficient to make lethal force proportional. It is analogous to destroying a cargo ship and its crew who are knowingly importing faulty, aftermarket brake pads; it has the potential to contribute to a larger problem, in this case automobile deaths, but the risk of death is small and distant enough that we would not be justified in using lethal force to prevent it.

However, even if one believes that the risk of harm posed by additional drugs arriving in the U.S. would be sufficient to justify lethal force, this is only one part of the moral calculus. Proportionality is a necessary condition to justify lethal force, but so is necessity. Suppose in my earlier case that the man trying to kill me was the victim of reverse-Manchurian candidate style programming, where speaking an activation word causes him to fall unconscious. Suppose further that I know the activation word. If my options are to utter it and make him fall unconscious, or shoot him dead, it seems I am no longer justified in shooting him. Once there are viable non-lethal options available to remove a threat, the use of lethal force is no longer morally justified.

And it is here that the moral justification of the attack unravels. To strike the vessel, the administration must have been tracking its movements. If it was traveling to the U.S., they would have known the precise moment it entered U.S. territorial waters. At that point, the U.S. Coast Guard would have had the authority to intercept the ship.

There is much to find deeply troubling about this act. The process by which this course of action was decided upon has been wholly opaque. Its legality is, at best, highly dubious. Further, even if we take every claim by members of the administration at its word, the act fails to meet even the most basic standards of moral justification. The lethality of the threat posed by these individuals was both small and remote, and there were non-lethal means available to eliminate it. Transnational drug smuggling operations, and the cartels that manage them, do pose a serious risk to the safety of innocents throughout the world. But our approach to dismantling these threats should not involve abandoning our moral standards.

The Feelings of Chickens

Last month, I discussed how recent findings on the ability of fish to feel pain should cause us to reconsider our relationship with the most widely consumed meat in the world. I argued that, since a standard tuna sandwich is the product of around 100 seconds of intense animal suffering, we have strong moral reasons to opt for a better (i.e., plant-based) alternative. But is this the only way?

I regularly judge ethics competitions for students ranging from elementary to high-school age. Discussions about the ethics of eating meat often arise, and – when they do – most students tend to approach the problem in a remarkably similar way: They argue that eating meat is an excusable necessity, one dictated by human biology. But they’re not indifferent to the suffering of animals. In fact, it tends to be at the forefront of their thoughts on this topic. Their argument isn’t that we should stop eating meat altogether; but rather that we should do all we can to minimize the suffering associated with the fulfillment of our dietary needs.

Are they on to something?

A study released last week on the experiences of farmed chickens might provide us with a way of understanding how this should be done. Getting people to care about fish is hard – but I’d hope that chickens are an easier sell. Our relationship with birds is, after all, more intimate. We attract them with feeders, we keep them as pets, and we adopt them as national icons. But while we might have endless concern for hummingbirds, or parakeets, or bald eagles, our regard for the humble chicken is severely lacking. The average American consumes approximately 53 kilograms of chicken every year. Around 9 billion chickens are killed annually to satisfy this voracious appetite – with another half billion dying before making it to slaughter. And the lives of these chickens are short and brutal. Breeding practices have seen the standard chicken triple in size since the 1950s, causing the birds to suffer lameness, heart failure, and debilitating pain. What’s more, the standard broiler chicken lives for only 6-7 weeks before slaughter – a fraction of its expected lifespan of five to ten years.

The European Chicken Commitment (ECC) is an attempt to reduce this brutality. A voluntary pledge, the ECC emphasizes the use of slower-growing breeds and the improvement of welfare standards. But here’s the astounding thing: the research released last week indicates that adopting the ECC guidelines would prevent between 15 and 100 hours of intense pain per bird, at a cost of just $1 more per kilogram of meat. Put another way: avoiding an hour of animal suffering could cost as little as one-hundredth of a cent.

Suppose that – like those aforementioned students – we adopt the premise that eating meat is necessary, but that we have an obligation to minimize the harm caused in the process. On this basis, adoption of the ECC seems like a no-brainer. Yet, despite this, welfare improvements in the poultry industry are moving at a snail’s pace. But why? As I see it, there are three possible explanations.

Firstly, the persistently abysmal conditions for chickens might simply be the result of ignorance. We just don’t realize how bad chickens have it – nor how easily conditions might be improved. If that’s the case, then research like that published last week takes on new importance – as it’s precisely the kind of thing that might lead to swift reform.

A second – and perhaps more likely – explanation is that the extra cost of improving chicken welfare simply isn’t seen as worth it by consumers or producers. We’re in a cost-of-living crisis, and the idea of paying even more for necessities will be anathema to most. But there seems to be some bad-faith reasoning in such an argument. If cost is what really matters, then chicken is a poor choice of protein. At the time of writing, a four-pack of chicken breasts – yielding a total of 100 grams of protein – will set you back $7.99. Obtaining the same amount of protein through tofu, on the other hand, would cost only $6.92. Chickpeas provide an even more economical alternative, at only $5.66 per 100g of protein. Put simply: if you’re already splurging on chicken as your source of protein, then it seems somewhat disingenuous to claim that cost is a reason not to improve the welfare of those same chickens.

This leads to the third possibility: that it’s not a matter of ignorance or cost, but simply a lack of will. Most of us simply don’t care to reduce chicken suffering. Why? Because we don’t see that suffering as morally important. Chickens aren’t like us; they’re unintelligent, uncommunicative, and – unlike cats or dogs – far removed from our experiences. So it’s easy to dismiss their suffering. Of course, this is – for reasons I’ve noted before – an incredibly bad argument, and it smacks of the speciesism that Peter Singer warned us about. Suffering is suffering. If we would happily pay one-hundredth of a cent to avoid an hour of intense suffering by, say, our pet dog, or cat, or bird, then consistency demands we do the same for the animals that give their lives to feed us. There is – for the reasons outlined above – no reason not to.

Robot Kitchens, AI Cooks, and the Meaning of Food

I knew that I was very probably not going to die, of course. Very few people get ill from pufferfish in restaurants. But I still felt giddy as I took my first bite, as though I could taste the proximity of death in that chewy, translucent flesh. I swilled my sake, squeezed some lemon onto the rest of my sashimi, and looked up. Through the serving window I could see the chef who held my life in his busy hands. We made eye contact for a moment. I took another bite. This is absurd. I am absurd. I pictured the people I love, across the ocean in sleeping California, stirring gently in their warm, musky beds.

My experience in Tokyo eating pufferfish, a delicacy known as fugu, was rich and profound. Fugu has an unremarkable taste. But pufferfish is poisonous; it can be lethal unless it is prepared in just the right way by a highly trained chef. My experience was inflected with my knowledge of the food’s provenance and properties: that this flesh in my mouth was swimming in a tank a few minutes ago and was extracted from its lethal encasement by a man who has dedicated his life to this delicate task. Seconds ago, it was twitching on my plate. And now it might bring me a lonely death in an unfamiliar land. This knowledge produced a cascade of emotions and associations as I ate, prompting reflections on my life and the things I care about.

Fugu is an unfamiliar illustration of the familiar fact that our eating experiences are often constituted by more than physical sensations and a drive for sustenance. Attitudes relating to the origin or context of our food (such as a belief that this food might kill me, or that this food was made with a caring hand) often affect our eating experiences. There is much more to food, as a site of human experience and culture, than sensory and nutritional properties.

You would be hard pressed to find someone who denies this. Yet we are on the cusp of societal changes in food production that could systematically alter our relationship to food and, consequently, our eating experiences. These changes are part of broader trends apparent across nearly all spheres of life resulting from advances in artificial intelligence and other automation technologies. Just as an AI system can now drive your taxi, process your loan application, and write your emails, so AI and related automation tools can now make your food, at home or in a restaurant. Many technologists in Silicon Valley are trying to make automated food production ubiquitous. One CEO of a successful company I spoke with said he expects that almost no human beings will be cooking in thirty years’ time, much as very few humans today make soap, toys, or clothing by hand. It may sound ridiculous, but I’ve found that this vision is common in influential industry spaces.

What might life look like if this technological vision were to come about? This question can appear trivial relative to louder questions about autonomous weapons systems, AI medicine, or the existential threat of a superintelligence. It is not a question of life and death. But I think the question points to a more insidious possibility: that our technological advances might quietly erode the conditions that enable us to experience our day-to-day lives as meaningful.

On the one hand, the struggle for sustenance is a universal feature of human life, and everyone is a potential beneficiary of technology that streamlines food production, like AI that invents recipes or performs kitchen managerial work and robots that prepare food. Home cooking robots could save people time and effort that would be better spent elsewhere. A restaurant that staffs fewer humans could save on labor costs and pass these savings on to customers. Robots could mitigate human errors relating to hygiene or allergies. And then there is the possibility of automated systems that can personalize food to each consumer’s specific tastes and dietary requirements. Virtually every technologist I have spoken to in this industry is excited about a future where every diner can receive a bespoke meal that leaves them totally satisfied and healthy, every time.

Automation brings interesting aesthetic possibilities, too. AI can augment human creativity by helping pioneer unusual flavor pairings. The knowledge that your food was created by a sexy robot could enhance your eating experience, especially if the alternative would be a miserable and underpaid laborer.

These are nice possibilities. But one thing that automation tends to do is create distance between humans and the things that are automated. Our food systems already limit our contact with the sources of our food. For example, factory farming hides the processes through which meat is produced, concealing moral problems and detracting from pleasures of eating that are rooted in participation in food production. AI and robotics could create even more distance between us and our food. Think of the Star Trek replicator as an extreme case; the diner calls for food, and it simply appears via a wholly automated process.

Why is the prospect of losing touch with food processes concerning? For some it might not be. There are many sources of value in the world, and there is no one right way to relate to food. But, personally, I find the prospect of losing touch with food concerning because my most memorable food experiences have all been conditioned by my contact with the processes through which my food came to be.

I have a sybaritic streak. I enjoy being regaled at fancy restaurants with diseased goose livers, spherified tonics, perfectly plated tongues, and other edible exotica. But these experiences tend to pass for me like a kaleidoscopic dream, filled with rarefied sensations that can’t be recalled upon waking. The eating experiences I cherish most are those in which my food is thickly connected to other things that I care about, like relationships, ideas, and questions that matter to me. These evocative connections are established through contact with the process through which my food was made.

I’ve already mentioned one example, but I can think of many others. Like when, in the colicky confusion of graduate school, Sam and I slaughtered and consumed a chicken in the living room of his condo so that we might, as men of principle, become better acquainted with the hidden costs of our food. Or when I ordered tripas tacos for Stephen, my houseguest in Santa Barbara, which he thoroughly enjoyed until, three tacos in, he asked me what ‘tripas’ meant. Or when I made that terrible tuna-fish casserole filled with glorious portions of shredded cheese and Goldfish crackers for Amy, Jacob, and Allison so that they might become sensuously acquainted with a piece of my childhood. Or when Catelynn and I sat in that tiny four-seat kitchen overlooking the glittering ocean in Big Sur and were served sushi, omakase style, directly from the chef’s greasy, gentle hands, defining a shared moment of multisensory beauty.

These experiences fit into the fabric of my life in unique and highly meaningful ways. They are mine, but you probably have some like them. The thing to notice is that these sorts of experiences would be inaccessible without contact with the provenance of food. They would not be possible in a world where all food was produced by a Star Trek replicator. This suggests that food automation threatens to erode an important source of human meaning.

Really, there are all sorts of concerns you might have about AI and robotics in the culinary sphere. Many of these have been identified by my colleague Patrick Lin. But for me, the erosion of meaning is worth emphasizing in discussions about technology because this kind of cost resists quantification, making it easy to overlook. It’s the sort of thing that might not show up in the cost-benefit analysis of a tech CEO who speaks glibly about eliminating human cooking.

The point I’m making is not that we should reject automation. The point is that as we augment and replace human labor in restaurants, home kitchens, and other spheres of life, we need to be attentive to how the processes we hope to automate away may enrich our lives. An increase in efficiency according to quantifiable criteria (time, money, waste) can diminish squishier but no less important things. Sometimes this provides a reason to insist on an alternative vision in which humans remain in contact with the processes in question. I would argue this is true in the kitchen; humans should retain robust roles in the processes through which our food comes to be.

After my meal in Tokyo, I used my phone to find an elevated walkway on which to smoke. I took a drag on a cigarette and watched a group of men under an overpass producing music, in the old way, by a faint neon light. I could feel the fugu in my belly, and my thoughts flashed to my loves and hopes. One of the men playing a guitar looked up. We made eye contact for a moment. I took another drag. This is nice. I am happy.

 

Note: This material is based upon work supported by the National Science Foundation under Award No. 2220888.  Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

Can an AI Be Your Friend?

According to a July report by the World Health Organization, one in six people worldwide is experiencing loneliness and social isolation – a condition with serious public health consequences ranging from anxiety to chronic illness. This builds on enduring concerns about a “loneliness epidemic,” especially among young men in developed economies. Although some take issue with the “epidemic” language, arguing that it misframes longstanding loneliness concerns as new and spreading ones, the threat is real and persistent.

Meanwhile, large language model chatbots such as ChatGPT, Claude, and Gemini, as well as AI companions such as Replika and Nomi, have emerged as sources of digital support and friendship. Many teens report social interactions with AI companions, although only 9% explicitly consider them friends. But the numbers may grow, with 83% of Gen Z believing they can form emotional ties with AI companions, according to an (admittedly self-interested) report by the Girlfriend.ai platform.

Should AI chatbots be part of the solution to the loneliness epidemic?

Of course, AI as a tool can be part of the solution. One can ask ChatGPT about social events in their city, to help them craft a text asking someone out, or to propose some hobbies based on their interests. This is using AI as a writing aid and search tool. But the ethical issue I’m concerned with is whether an AI friend or companion should be part of the solution.

One place to start is with what we want friendship to be. In the Nicomachean Ethics, Aristotle distinguishes three kinds of friendship: friendships of utility, of pleasure, and of virtue. In a friendship of utility, a friend provides something useful but does not care about your well-being. A friendship of pleasure involves mutual activities or enjoyment. Finally, a friendship of virtue involves genuine mutual care for each other’s well-being and growth. Aristotle considered the friendship of virtue to be true friendship.

An AI chatbot can provide utility, and one may derive pleasure from interacting with a chatbot or AI companion, so it can provide some of the functions of friendship, but current AI chatbots cannot genuinely care about someone’s well-being. At least from an Aristotelian perspective, then, AI cannot be a true friend.

This does not rule out the value of AI companionship. Humans often have asymmetric relationships that nonetheless provide great satisfaction – for example, relationships with pets or parasocial relationships with celebrities. (Granted, many would allege that at least some pets, like dogs and cats, can care about others’ well-being, even if they cannot help one grow as a person.) The human tendency to anthropomorphize has led to a long legacy of relationships with completely mindless entities, from pet rocks to digital pets like Tamagotchi. And then, of course, there are imaginary friends.

But none of those are seriously proposed as solutions to loneliness. Plausibly, a surge of emotional support through pet rocks, imaginary friends, or, more realistically, dogs, is more a symptom of loneliness than an actual solution.

Moreover, there seems to be something distinct about chatbots. A dog may provide some of the intimacy of human friendship, but the dog will never pretend to be a human. By contrast, chatbots and AI companions are designed to act like human friends. Or, well, not quite human friends – there’s a key difference.

AI companions are programmed to “listen” attentively, respond generously, and support and affirm the beliefs of those communicating with them. This provides a particularly cotton candy-esque imitation of friendship, based on agreement and validation. AI sycophancy, it is sometimes called. Undoubtedly, this feels good. But does it do us good?

This August, police reported one of the first cases of an AI chatbot potentially leading to murder. ChatGPT usage continually reinforced 56-year-old Stein-Erik Soelberg’s paranoia about his mother drugging him. Ultimately, he killed her and then himself.

The parents of 16-year-old Adam Raine similarly allege that ChatGPT contributed to his suicide, and are now suing OpenAI, the company behind ChatGPT.

While these are extreme examples, in both cases the endless affirmations of ChatGPT emerge as a concern. Increasingly, psychologists are seeing “AI psychosis,” where the incredibly human-like, flattering, and supportive nature of chatbots can suck people further into delusion. By contrast, a virtuous friend (on Aristotle’s account) is interested in your well-being, but not necessarily in people-pleasing. They can tell you to snap out of a negative spiral, or that you are the problem.

Can better programming fix this? On August 26, OpenAI published a blog post, “Helping people when they need it most,” discussing some of the safeguards built into ChatGPT and where the company is still trying to improve. These include avoiding providing guidance on self-harm and working with physicians and psychologists on mental health protections.

However, programming can only solve technical problems. No amount of safety tweaks will make a large language model care about someone’s well-being; it can merely help it better pretend.

Ultimately, AI companies and the virtuous friend have very different aims and motivations. At some level, the purpose of an AI company is to turn a profit. What the precise business model(s) will be has yet to emerge, as currently most AI is still burning through investors’ money. But whatever strategy eventually arises – whether nudging customers towards buying certain products or maximizing engagement and subscription fees – it will be distinct from the sincere regard of Aristotelian friendship. Worse, to the extent that AI chatbots and companions can alleviate loneliness, they rely on this loneliness in the first place to generate demand for the product.

AI companions may be able to fill some of the functions of friendship – offering a steady hand or a kind word. However, they fundamentally cannot deliver the mutual caring that we expect from the truest form of friendship. Advances in replicating depth and sincerity will no doubt be made, but what will remain constant is the lack of genuine empathy. Instead of a cure for our loneliness and isolation, the turn to large language models may simply mark the next stage of the disease.