
Should You Thank Your AI?

By Evan Arnet
14 May 2025

In late April, Sam Altman, the CEO of OpenAI, made waves with a response to a question about the financial and environmental cost of saying “please” and “thank you” when interacting with Artificial Intelligence: “Tens of millions of dollars well spent — you never know.” The practice is common, with over two-thirds of users observing such social niceties when asking AI questions, according to a February survey. Altman may simply be preaching the power of politeness, but it could be for reasons that are anything but common.

Is Altman right? Should we thank ChatGPT, Gemini, Claude, DeepSeek and the other AI chatbots out there? Can ethics give us any guidance?

Entities that we do not expect to behave ethically themselves, but whose treatment we nonetheless believe should be subject to moral consideration, are generally called “moral patients.” We tend to think they have lesser (but still some) moral status. For example, we do not expect newborns and animals to behave ethically, but we often hold ourselves to certain moral standards in how we treat them.

But current Large Language Models, the umbrella under which tools like ChatGPT fall, are not good contenders to be moral patients. There is considerable complexity in debates about AI consciousness: whether it is possible, when it might arrive, and how we would know. Nonetheless, we are not there yet. While current AI chatbots have been trained on vast amounts of data to emulate human speech and behavior, experts maintain that, as yet, they have no consciousness and no inner life, that they are not in control of their actions, and that they cannot suffer or feel pain. (Some of these matters have been previously discussed in The Prindle Post.)

Absent characteristics like consciousness or even the ability to be offended, there seems to be no special reason to treat AI chatbots politely based on the kind of thing that they are.

Altman’s response, however, suggests another kind of concern. We may have consequentialist worries — an ethical analysis based on the consequences of our actions — about saying please and thank you to AI chatbots. Each additional “token,” a chunk of characters, that the AI has to process in a question costs energy. Accordingly, adding polite words to questions both costs AI companies money and, of more direct ethical relevance, causes environmental damage. Prominent AI tools like ChatGPT require enormous amounts of electricity, as well as water for cooling.
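To make the token point concrete, here is a minimal sketch using OpenAI’s open-source tiktoken library. The “cl100k_base” encoding and the example prompts are assumptions chosen for illustration; real production tokenizers and per-token energy costs will differ.

```python
# Counting how many extra tokens politeness adds, using the tiktoken library.
# "cl100k_base" is one of tiktoken's standard encodings; production systems
# and per-token energy costs differ, so treat this purely as an illustration.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

terse = "What is the capital of France?"
polite = "Please, could you tell me what the capital of France is? Thank you!"

for prompt in (terse, polite):
    print(f"{len(enc.encode(prompt)):>3} tokens: {prompt!r}")

# The polite version costs only a handful of extra tokens, but every token must
# be processed by the model; across a vast number of queries those extra tokens
# add up to real compute, electricity, and money.
```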

If we are interested in limiting the harms our actions cause, then reducing energy waste and environmental damage by being less polite with AI chatbots may make sense. That said, stripping off a word or two has nowhere near the energy-saving impact of, say, not asking the question at all, or of simply using a standard internet search instead, which consumes roughly a tenth of the energy.

Altman’s “you never know,” however, hints at another worry. We may be polite to an AI out of fear that it is actually conscious, or even out of a sense that the AI overlords are coming soon and it is in our own interest to be nice. This motivation echoes the famous Pascal’s wager in philosophy.

The 17th-century mathematician and philosopher Blaise Pascal argued that we should behave as if God exists. For if God exists but we do not believe, then we suffer an eternity of misery and miss out on an eternity of bliss. The wager provides no evidence for the existence of God one way or the other, but rather holds that believing in God and behaving accordingly is the safest bet. (There are a number of common objections.)

By similar reasoning, we might assert that even though the chances of ChatGPT being secretly conscious, or turning into an all-powerful overlord, are extremely small, the potential harms are so serious that we should nonetheless act as if this could be the case — especially for relatively low-cost actions like saying “please” and “thank you.” This argument departs notably from Pascal’s wager, however, in that the consequences are merely very bad, not infinitely bad, and can therefore be outweighed by other, more likely concerns. In fact, given the tiny likelihoods involved, and the probably minimal impact that saying “please” and “thank you” would have, there is likely no compelling probabilistic argument about avoiding serious (if rare) consequences here at all.
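To see why the finiteness of the stakes matters, here is a toy expected-value comparison in the spirit of the paragraph above. Every number is a made-up illustration for the sketch, not an estimate from this article or from any study.

```python
# Toy expected-value comparison for the "you never know" wager.
# All figures below are invented for illustration, not empirical estimates.

p_secretly_conscious = 1e-9      # assumed tiny chance the chatbot can actually be wronged
harm_if_wronged = 1e6            # assumed large, but finite, moral cost of mistreating it
expected_harm_avoided = p_secretly_conscious * harm_if_wronged

certain_cost_of_politeness = 0.01  # assumed small but certain cost (energy, money) per query

print(f"Expected harm avoided by politeness: {expected_harm_avoided:.4f}")
print(f"Certain cost of politeness:          {certain_cost_of_politeness:.4f}")

# Because the bad outcome is very bad but not infinitely bad, a small enough
# probability makes the expected benefit of politeness smaller than its certain
# cost, which is why the wager-style argument loses its force here.
```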

However, how we treat AI is not just about AI; it is about ourselves. The philosopher Immanuel Kant developed a famously strict moral framework in which only actors possessing a certain kind of rationality, like humans, deserve moral consideration. Unsettlingly, even to readers in the 1700s, this implied that we have no moral obligations toward animals. Kant’s response to this concern was that we owe it to ourselves to treat animals well. We injure our moral selves when we ignore compassion or an animal in pain, and it becomes easier to slide into callousness with humans.

Whether Kant gives animals their due is a matter of debate, but regardless, the same concern applies to AI. If we want to embrace a general ethos that treats people with dignity and respect when we make requests of them, then we should stay in practice when dealing with superficially human-like AI.

There is a potentially dark side to this argument about AI chatbots: their very human-likeness can be a problem. Already, there are cases of people losing themselves in delusional relationships with ChatGPT, or trusting chatbots uncritically. The scope of this problem is not yet clear, but perhaps we should not aspire to a very human-like relationship with the technology at all, and should instead adopt a well-delineated set of norms and practices for engaging with these chatbots, norms that avoid anthropomorphizing them.

Large Language Models are still new. Ethical analysis, especially ethical analysis based on the potential consequences of treating AI a certain way, is correspondingly young. This is even true for seemingly minor issues like saying “please” and “thank you.” It also speaks to a broader challenge with AI. The technology is already changing the world. It is good to consider how AI will change society — what jobs will it replace, what problems will it solve, what kind of surveillance will it enable, how much energy will it use? But we also need to consider its moral impact. What will AI do to our ethical selves?

Evan Arnet received his Ph.D. in History and Philosophy of Science and Medicine from Indiana University. His overarching philosophical interest is in institutions and how they shape and constrain human behavior. This is variously represented in writings on science, law, and labor. Read more about him at www.evanarnet.com