
Resurrection Through Chatbot?

By Rachel Robison-Greene
27 Sep 2021

There is nothing that causes more grief than the death of a loved one; it can inflict an open wound that never fully heals, even if we can temporarily forget that it’s there. We are social beings, and our identities aren’t contained within our own human-shaped space. Who we are is a matter of the roles we take on, the people we care for, and the relationships that allow us to practice and feel love. The people we love are part of who we are, and when one of them dies, it can feel like part of us dies as well. For many of us, the idea that we will never interact with our loved one again is unbearable.

Some entrepreneurs see any desire as an opportunity, even the existential impulses and longings that come along with death. In response to the desire to have loved ones back in our lives, tech companies have found a new use for their deepfake technology. Typically used to simulate the behavior of celebrities and politicians, deepfake techniques are now being applied by some startups to program chatbots that behave like dead loved ones. The companies that create these bots harvest data from the deceased person’s social media accounts. Artificial intelligence is then used to predict what the person in question would say in a wide range of circumstances. A bereaved friend or family member can then chat with the resulting intelligence and, if the simulation succeeds, find it indistinguishable from the person who passed away.

Some people are concerned that this is just another way for corporations to exploit grieving people. On this view, producers of the chatbots aren’t interested in the well-being of their clients; they’re only concerned with making money. It may be that the practice is inherently manipulative, and in the worst of ways. How could it possibly be acceptable to profit from people experiencing the lowest points in their lives?

That said, the death industry is thriving, even without the addition of chatbots. Companies sell burial plots, coffins, flowers, cosmetic services, and all sorts of other products to the survivors of the deceased. Customers can decide for themselves which goods and services they’d like to pay for. The same is true of a chatbot. No one is forced to strike up a conversation with a simulated loved one; people do so only if they have decided for themselves that it is a good idea.

In addition to these objections concerning exploitation and coercion, there are objections concerning the autonomy of the people being simulated. If it’s possible to harm the dead, then in some cases that may be what’s going on here. We don’t know what the chatbot is going to say, and it may be difficult for the person interacting with the bot to maintain the distinction between the bot and the real person they’ve lost. The bot may take on commitments or express values that the living person never had. The same principle is at play when artificial intelligence is used to create digital versions of actors to play roles: the real person may never have consented to say or do the things that the manufactured version of them says or does. Presumably, the deceased person, while living, had a set of desires related to their legacy and the ways in which they wanted other people to think of them. We can’t control what’s in the heads of others, but perhaps others’ memories of us should not be tarnished, nor our posthumous desires frustrated, by people looking to resurrect our psychologies for some quick cash.

In response, some might argue that dead people can’t be harmed. As Epicurus said, “When we exist, death is not; and when death exists, we are not. All sensation and consciousness ends with death and therefore in death there is neither pleasure nor pain.” There may be some living people who are disturbed by what the bot is doing, but that harm doesn’t befall the dead person — the dead person no longer exists. It’s important to respect autonomy, but such respect is only possible for people who are capable of exercising it, and dead people can’t.

Another criticism of the use of chatbots is that they make it more difficult for people to arrive at some form of closure. Instead of letting go, the bereaved prolong the experience of having the deceased with them indefinitely. Feeling grief in a healthy way involves the recognition that the loved one in question is really gone.

In response, some might argue that everyone feels grief differently and that there is no single healthy way to experience it. For some people, it might help to use a chatbot to say goodbye, to express love to a realistic copy of their loved one, or to unburden themselves of some sentiment that they always needed to share but never got the chance.

Other worries about chatbot technology are not unique to bots that simulate the responses of people who have passed on. Instead, the concern is about the role that technology, and artificial intelligence in particular, should play in human lives. Some people will, no doubt, opt to continue a relationship with the chatbot. This motivates the question: can we flourish as human beings if we trade in our interpersonal relationships with other sentient beings for relationships with realistic, but nevertheless non-sentient, artificial intelligence? Human beings help one another achieve the virtues that come along with friendship, the parent-child relationship, mentorship, and romantic love (to name just a few). It may be that developing interpersonal virtues involves responding to the autonomy and vulnerability of creatures with thoughts and feelings who can share in the familiar sentiments that make it beautiful to be alive.

Care ethicists offer the insight that when we enter into relationships, we take on role-based obligations that require care, and care can only take place when the parties to the relationship are capable of caring. In recent years we have experimented with robotic health care providers, robotic sex workers, and robotic priests. Critics of this kind of technological encroachment doubt that such roles ought to be filled by robots incapable of caring. Living a human life requires give and take, expressing and responding to need, and this dynamic is not fully present when these roles are filled by machines.

Some may respond that we have yet to imagine the range of possibilities that relationships with artificial intelligence may provide. In an ideal world, everyone would have loving, caring companions, and people would help one another live healthy, flourishing lives. In the world in which we live, however, some people are desperately lonely. Such people benefit from affectionate behavior, even if the affection is not coming from a sentient creature. For them, it may be better to have lengthy conversations with a realistic chatbot than to have no conversations at all.

What’s more, our response to affection between human beings and artificial intelligence may say more about our biases against the unfamiliar than it does about the permissibility of these kinds of interactions. Our experiences with the world up to this point have motivated reflection on the kinds of experiences that are virtuous, valuable, and meaningful, and that reflection has required rejecting certain myopic ways of drawing the boundaries of meaningful experience. We may be at the start of a riveting new chapter on the forms of possible engagement between carbon and silicon. For all we know, these interactions may be great additions to the narrative.

Rachel is an Assistant Professor of Philosophy at Utah State University. Her research interests include the nature of personhood and the self, animal minds and animal ethics, environmental ethics, and ethics and technology. She is the co-host of the pop culture and philosophy podcast I Think Therefore I Fan.