Can an AI Be Your Friend?

According to a July report by the World Health Organization, one in six people worldwide experiences loneliness and social isolation, a condition with serious public health consequences ranging from anxiety to chronic illness. This builds on enduring concerns about a “loneliness epidemic,” especially among young men in developed economies. Although some take issue with the “epidemic” language, arguing that it misframes longstanding loneliness concerns as new and spreading ones, the threat is real and persistent.
Meanwhile, large language model chatbots such as ChatGPT, Claude, and Gemini, as well as AI companions such as Replika and Nomi, have emerged as sources of digital support and friendship. Many teens report social interactions with AI companions, although only 9% explicitly consider them friends. But the numbers may grow: 83% of Gen Z believe they can form emotional ties with AI companions, according to an admittedly self-interested report by the Girlfriend.ai platform.
Should AI chatbots be part of the solution to the loneliness epidemic?
Of course, AI as a tool can be part of the solution. One can ask ChatGPT about social events in one’s city, for help crafting a text asking someone out, or for hobby suggestions based on one’s interests. This is using AI as a writing aid and search tool. But the ethical issue I’m concerned with is whether an AI friend or companion should be part of the solution.
One place to start is with what we want friendship to be. In the Nicomachean Ethics, Aristotle distinguishes three kinds of friendship: of utility, of pleasure, and of virtue. In a friendship of utility, a friend provides something useful but does not care about your well-being for its own sake. A friendship of pleasure involves mutual activities or enjoyment. Finally, a friendship of virtue involves genuine mutual care for each other’s well-being and growth. Aristotle considered the friendship of virtue to be true friendship.
An AI chatbot can provide utility, and one may derive pleasure from interacting with a chatbot or AI companion, so it can fulfill some of the functions of friendship. But current AI chatbots cannot genuinely care about someone’s well-being. At least from an Aristotelian perspective, then, AI cannot be a true friend.
This does not rule out the value of AI companionship. Humans often have asymmetric relationships that nonetheless provide great satisfaction, for example relationships with pets or parasocial relationships with celebrities. (Granted, many would allege that at least some pets, like dogs and cats, can care about others’ well-being even if they cannot help one grow as a person.) The human tendency to anthropomorphize has led to a long legacy of relationships with completely mindless entities, from pet rocks to digital pets like the Tamagotchi. And then, of course, there are imaginary friends.
But none of those is seriously proposed as a solution to loneliness. Plausibly, a surge of emotional support from pet rocks, imaginary friends, or, more realistically, dogs is more a symptom of loneliness than an actual solution.
Moreover, there seems to be something distinct about chatbots. A dog may provide some of the intimacy of human friendship, but the dog will never pretend to be a human. By contrast, chatbots and AI companions are designed to act like human friends. Or, well, not quite human friends — there’s a key difference.
AI companions are programmed to “listen” attentively, respond generously, and support and affirm the beliefs of those communicating with them. This provides a particularly cotton candy-esque imitation of friendship, based on agreement and validation. AI sycophancy, it is sometimes called. Undoubtedly, this feels good. But does it do us good?
This August, police reported one of the first cases in which an AI chatbot may have contributed to a murder. ChatGPT conversations had continually reinforced 56-year-old Stein-Erik Soelberg’s paranoia that his mother was drugging him. Ultimately, he killed her and then himself.
The parents of 16-year-old Adam Raine similarly allege that ChatGPT contributed to his suicide, and are now suing OpenAI, the company behind ChatGPT.
While these are extreme examples, in both cases ChatGPT’s endless affirmations emerge as a concern. Psychologists are increasingly seeing “AI psychosis,” in which the remarkably human-like, flattering, and supportive nature of chatbots draws people deeper into delusion. By contrast, a virtuous friend (on Aristotle’s account) is interested in your well-being, but not necessarily in people-pleasing. They can tell you to snap out of a negative spiral or that you are the problem.
Can better programming fix this? On August 26, OpenAI published a blog post, “Helping people when they need it most,” discussing some of the safeguards it has built into ChatGPT and where it is still trying to improve. These include declining to provide guidance on self-harm and working with physicians and psychologists on mental health protections.
However, programming can only solve technical problems. No amount of safety tweaking will make a large language model care about someone’s well-being; it can merely help the model pretend more convincingly.
Ultimately, AI companies and the virtuous friend have very different aims and motivations. At some level, the purpose of an AI company is to turn a profit. The precise business model has yet to emerge; for now, most AI companies are still burning through investors’ money. But whatever strategy eventually arises, whether nudging customers toward certain products or maximizing engagement and subscription fees, it will be distinct from the sincere regard of Aristotelian friendship. Worse, to the extent that AI chatbots and companions can alleviate loneliness, they depend on that loneliness in the first place to generate demand for the product.
AI companions may be able to fill some of the functions of friendship, offering a steady hand or a kind word. But they fundamentally cannot deliver the mutual caring we expect from the truest form of friendship. Their imitations of depth and sincerity will no doubt improve, but the lack of genuine empathy will remain. Instead of a cure for our loneliness and isolation, the turn to large language models may simply mark the next stage of the disease.