
Is It Okay to Be Mean to AI?

By Kenneth Boyd
29 Sep 2025

Fast-food chain Taco Bell recently replaced drive-through workers with AI chatbots at a select number of locations across America. The outcome was perhaps predictable: numerous videos went viral on social media showing customers becoming infuriated with the AI’s mistakes. People also started to see what they could get away with, including one instance where a customer ordered 18,000 waters, temporarily crashing the system.

As AI programs start to occupy more mundane areas of our lives, more and more people are getting mad at them, are being mean to them, or are just trying to mess with them. This behavior has apparently become so pervasive that AI company Anthropic announced that its chatbot Claude would now end conversations when they were deemed “abusive.” Never one to shy away from offering his opinion, Elon Musk went to Twitter to express his concerns, remarking that “torturing AI is not okay.”

Using terms like “abuse” and “torture” already risks anthropomorphizing AI, so let’s ask a simpler question: is it okay to be mean to AI?

We asked a similar question at the Prindle Post a few years ago, when chatbots had only recently become mainstream. That article argued that we should not be cruel to AIs, since by acting cruelly towards one thing we might get into the habit of acting cruelly towards other things, as well. However, chatbots and our relationships with them have changed in the years since their introduction. Is it still the case that we shouldn’t be mean to them? I think the answer has become a bit more complicated.

There is certainly still an argument to be made that, as a rule, we should avoid acting cruelly whenever possible, even towards inanimate objects. Recent developments in AI have, however, raised a potentially different question regarding the treatment of chatbots: whether they can be harmed. The statements from Anthropic and Musk seem to imply that they can, or at least that there is a chance they can be, and thus that you shouldn’t be cruel to chatbots because doing so risks harming the chatbot itself.

In other words, we might think that we shouldn’t be mean to chatbots because they have moral status: they are the kinds of things that can be morally harmed, benefitted, and evaluated as good or bad. There are lots of things that have moral status – people and other complex animals are usually the things we think of first, but we might also think about simpler animals, plants, and maybe even nature. There are also lots of things that we don’t typically think have moral status: inanimate objects, machines, single-cell organisms, things like that.

So how can we determine whether something has moral status? Here’s one approach: whether something has moral status depends on certain properties that it has. For example, we might think that the reason people have moral status is because they have consciousness, or perhaps because they have brains and a nervous system, or some other property. These aren’t the only properties we can choose. For example, 18th-century philosopher Jeremy Bentham argued that animals should be afforded many more rights than they were at the time, not because they have consciousness or the ability to reason, per se, but simply because they are capable of suffering.

What about AI chatbots, then? Despite ongoing hype, there is still no good reason to believe that any chatbot is capable of reasoning in the way that people are, nor is there any good reason to believe that they possess “consciousness” or are capable of suffering in any sense. So if chatbots can’t reason, aren’t conscious, and can’t suffer, should we definitively rule them out from having moral status?

There is potentially another way of thinking about moral status: instead of thinking about the properties of the thing itself, we should think about our relationship with it. Philosopher of technology Mark Coeckelbergh considers cases where people have become attached to robot companions, arguing that, for example, “if an elderly person is already very attached to her Paro robot and regards it as a pet or baby, then what needs to be discussed is that relation, rather than the ‘moral standing’ of the robot.” According to this view, it’s not important whether a robot, AI, or really anything else has consciousness or can feel pain when thinking about moral status. Instead, what’s important when considering how we should treat something is our experiences with and relationship to it.

You may have had a similar experience: we can become attached to objects and feel that they deserve consideration that other objects do not. We might also ascribe more moral status to some things than to others, depending on our relationship with them. For example, someone who eats meat can recognize that their pet dog or cat is comparable in terms of relevant properties to a pig, insofar as they are all capable of suffering, have brains and complex nervous systems, etc. Nevertheless, although they have no problem eating a pig, they would likely be horrified if someone suggested they eat their pet. In this case, they might ascribe some moral status to a pig, but would ascribe much more moral status to their pet because of their relationship with it.

Indeed, we have also seen cases where people have become very attached to their chatbots, in some cases forming relationships with them or even attempting to marry them. In such cases, we might think that there is a meaningful moral relationship, regardless of any properties the chatbot has. If we were to ascribe a chatbot moral status because of our relationship with it, though, its being a chatbot would be incidental: it would be a thing that we are attached to and consider important, but that doesn’t mean that it thereby has any of the important properties we typically associate with having moral status. Nor would our relationship be generalizable: just because one person has an emotional attachment to a chatbot does not mean that all relationships with chatbots are morally significant.

Not all of our experiences with AI have been positive, however. As AI chatbots and other programs occupy a larger part of our lives, they can make our lives more frustrating and difficult, and thus we might establish relationships with them that hold them up not as objects of our affection or care, but as obstacles and even detriments to our wellbeing. Are there cases, then, where a chatbot might deserve not our care, but our condemnation?

For example, we have all likely been in a situation where we had to deal with frustrating technology. Maybe it was an outdated piece of software you were forced to use, or an appliance that never worked as it was supposed to, or a printer that constantly jammed for seemingly no good reason. None of these things have the properties that make them legitimate subjects of moral evaluation: they don’t know what they’re doing, have no intentions to upset anyone, and have none of the obligations that we would expect from a person. Nevertheless, it is the relationship we’ve established with them that seems to make them an appropriate target of our ire. It is not only cathartic to yell profanities at the office printer after its umpteenth failure to complete a simple printing task; it is justified.

When an AI chatbot takes the place of a person and fails to work properly, it is no surprise that we would start to have negative experiences with it. While failing to properly take a Taco Bell order is, all things considered, not a significant indignity, it is symptomatic of a larger pattern of problems that AI has been creating, ranging from environmental impact, to job displacement, to overreliance resulting in cognitive debt, to simply creating more work for us than we had before. Perhaps, then, ordering 18,000 waters in an attempt to crash an unwelcome AI system is not so much a cruelty as a righteous expression of indignation.

The dominant narrative around AI – perpetuated by tech companies – is that it will bring untold benefits that will make our lives easier, and that it will one day be intelligent in the way human beings are. If these things were true, then it would be easier to be concerned with the so-called “abuse” of AI. However, given that AI programs lack the properties required for moral status, and that our relationships with them are frequently ones of frustration, perhaps being mean to an AI isn’t such a big deal after all.

Ken Boyd holds a PhD in philosophy from the University of Toronto. His philosophical work concerns the ways that we can best make sure that we learn from one another, and what goes wrong when we don’t. You can read more about his work at kennethboyd.wordpress.com