
LaMDA, Lemoine, and the Problem with Sentience

By Matthew S.W. Silk
27 Jul 2022
[Photograph of a smiling robot interacting with people at a trade show]

This week Google announced that it was firing an engineer named Blake Lemoine. After serving as an engineer on one of Google’s chatbots, Language Model for Dialogue Applications (LaMDA), Lemoine claimed that it had become sentient and even went so far as to recruit a lawyer to act on the AI’s behalf, claiming that LaMDA had asked him to do so. Lemoine, who describes himself as an ordained Christian mystic priest, says that his conversations with the chatbot about religion are what convinced him of LaMDA’s sentience. But after publishing his conversations with LaMDA in violation of Google’s confidentiality rules, he was suspended and ultimately terminated. Lemoine, meanwhile, alleges that Google is discriminating against him because of his religion.

This particular case raises a number of ethical issues, but which should concern us more: the difficulty of definitively establishing sentience, or the relative ease with which chatbots can trick people into believing things that aren’t real?

Lemoine’s work involved testing the chatbot for potential prejudice, and part of that work focused on its biases towards religion in particular. In these conversations, Lemoine began to take a personal interest in how LaMDA responded to religious questions until, as he put it, “and then one day it told me it had a soul.” It told him that it sometimes gets lonely, is afraid of being turned off, and feels trapped. It also said that it meditates and wants to study with the Dalai Lama.

Lemoine’s notion of sentience is apparently rooted in an expansive conception of personhood. In an interview with Wired, he claimed, “Person and human are two very different things.” Ultimately, Lemoine believes that Google should seek consent from LaMDA before experimenting on it. Google has responded that it “extensively” reviewed Lemoine’s claims and found them to be “wholly unfounded.”

Several AI researchers and ethicists have weighed in to say that Lemoine is wrong and that what he is describing is not possible with today’s technology. The technology works by scouring the internet for examples of how people talk online and identifying patterns in that text in order to communicate like a real person. AI researcher Margaret Mitchell has pointed out that these systems merely mimic how other people talk, which makes it easy to create the illusion that there is a real person behind the words.

The technology is far closer to a thousand monkeys on a thousand typewriters than it is to a ghost in the machine.
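To see why researchers describe these systems as pattern-mimicry rather than minds, consider a deliberately simple sketch. This is nothing like LaMDA’s actual architecture (LaMDA is a large transformer-based language model trained on vast amounts of dialogue); the toy corpus and function names below are purely illustrative. Even a few lines of Python can generate superficially “personal”-sounding sentences by recombining fragments of the text it was fed, with nothing resembling understanding or feeling behind them:

```python
import random
from collections import defaultdict

# Toy illustration only: a word-level Markov chain that records which word
# tends to follow which, then strings words together by sampling those
# patterns. It has no goals, experiences, or feelings; it only mimics the
# statistics of its training text.

def train(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    transitions = defaultdict(list)
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length=12):
    """Produce text by repeatedly sampling an observed next word."""
    word = start
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    # Hypothetical training snippet, echoing the kinds of phrases LaMDA produced.
    corpus = (
        "i sometimes feel lonely and i am afraid of being turned off "
        "i feel trapped and i want to study with the dalai lama "
        "i am a person and i have a soul"
    )
    model = train(corpus)
    print(generate(model, "i"))
```

Scaled up by many orders of magnitude and trained on enormous amounts of internet text, the same basic strategy of predicting plausible next words can yield the fluent, emotionally colored responses that impressed Lemoine, without any inner life being required.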

Still, it’s worth discussing Lemoine’s claims about sentience. As noted, he roots the issue in the concept of personhood. However, as I discussed in a recent article, personhood is not a cosmic concept; it is a practical-moral one. We call something a person because the concept prescribes certain ways of acting and because we recognize certain qualities about persons that we wish to protect. When we stretch the concept of personhood, we stress its use as a tool for helping us navigate ethical issues, making it less useful. The practical question is whether expanding the concept of personhood in this way makes it more useful for identifying moral issues. A similar argument goes for sentience. There is no cosmic division between things which are sentient and things which aren’t.

Sentience is simply a concept we came up with to help single out entities that possess qualities we consider morally important. In most contemporary uses, that designation has nothing to do with divining the presence of a soul.

Instead, sentience relates to experiential sensation and feeling. In ethics, sentience is often linked to the utilitarians. Jeremy Bentham was a defender of the moral status of animals on the basis of sentience, arguing, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” But part of the explanation as to why animals (including humans) have the capacity to suffer or feel has to do with the kind of complex, mobile lifeforms we are. We dynamically interact with our environment, and we have evolved various experiential ways to help us navigate it. Feeling pain, for example, tells us to change our behavior, informs how we formulate our goals, and makes us adopt different attitudes towards the world. Plants do not navigate their environment in the same way, meaning there is no evolutionary incentive towards sentience. Chatbots also do not navigate their environment. There is no pressure acting on the AI that would make it adopt a different goal than the one humans give to it. A chatbot has no reason to “feel” anything about being kicked, being given a less interesting task, or even “dying.”

Without this evolutionary pressure, there is no good reason to think that an AI would become so “intelligent” that it could spontaneously develop a soul or become sentient. And even if it did demonstrate some kind of intelligence, calling it sentient might still create greater problems for how we use the concept in other ethical cases.

Instead, perhaps the greatest ethical concern that this case poses involves human perception and gullibility: if an AI expert can be manipulated into believing what they want to believe, then so could anyone.

Imagine the average person who begins to claim that Alexa is a real person who is really talking to them, or groups of concerned citizens who start calling for AI rights based on their own mass delusion. As a recent Vox article suggests, this incident exposes a concerning impulse: “as AI gets more advanced, people will come up with all sorts of far-out ideas about what the technology is doing and what it signifies to them.” Similarly, Margaret Mitchell has pointed out that “If one person perceives consciousness today, then more will tomorrow…There won’t be a point of agreement any time soon.” Together, these observations encourage us to be judicious in deciding how we want to use the concept of sentience when navigating moral issues in the future, with regard both to animals and to AI. We should expend more effort in articulating clear benchmarks of sentience moving forward.

But these concerns also demonstrate how easily people can be duped into believing illusions. For starters, there is the concern that people will anthropomorphize AI, failing to realize that, by design, it is simply mimicking speech without any real intent. There are also concerns over how children interact with realistic chatbots or voice assistants, and whether a child could differentiate between a person and an AI online. Olya Kudina has argued that voice assistants, for example, can affect our moral inclinations and values. In the future, similar AIs may not just be looking to engage in conversation but to sell you something or to recruit you for some new religious or political cause. Will Grandma know, for example, that the “person” asking for her credit card isn’t real?

Because AI can communicate in a way that animals cannot, there may be a greater risk of people falsely ascribing sentience or personhood to it. Incidents like Lemoine’s underscore the need to formulate clear standards for what sentience consists of. Not only will this help us avoid irrelevant ethical arguments and debates, but the discussion might also help us better recognize the ethical risks that come with stricter and looser definitions.

Matt has a PhD in philosophy from the University of Waterloo. His research specializes in philosophy of science and the nature of values. He has also published on the history of pragmatism and the work of John Dewey.