
Ban the Bots that Dehumanize Us

If you’re on social media, chances are that you’ve encountered bots: social media accounts set up to run autonomously. Some bots are just for fun. Twitter bot Serval Every Hour, for example, simply posts a photo of a serval (a kind of wild cat) every hour. But recent months have brought a proliferation of a distinctive kind of automated account, colloquially referred to as porn bots.

Most of these bots don’t post any content at all. They simply like other users’ posts in the hopes (if bots had hopes) of bringing traffic to their profile. The bio in the profile usually consists of some inviting phrase such as “looking for my adventure partner” and a link to a webpage. Together with a feminine profile pic and display name, these features are meant to entice viewers to click on the link. I’m neither naïve nor journalist enough to have clicked the links in any bots’ profiles, but the standard candidates for suspicious links are familiar; perhaps they lead to malware, a phishing scheme, or maybe an ad-supported website where views generate revenue. The link’s destination isn’t particularly important for what I want to discuss here. What I want to discuss is the effect these bots have on Twitter users, stemming largely from the fact that these are apparent accounts of women that — though they exist because of someone’s actions — are not properly the accounts of people at all.

Two features of the bots give them a distinctive influence: their appearance as women and their unavoidability. While the bots vary somewhat, the typical bot has a profile picture of a woman (likely stolen) ranging from a typical selfie or vacation photo to more provocative poses. The bots often have only a feminine proper name as their display name. In short, they seem to be accounts of women. The features listed above — the lack of posts, the link in bio, etc. — can clue you in to their status as bots, but many of these features can only be viewed by visiting their profile; at first glance (and before you’ve encountered too many of them), these bots seem to be women.
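
To make the pattern concrete, here is a minimal sketch of how the features the article lists (no posts, a link in the bio, an inviting stock phrase, a bare feminine first name) could be combined into a crude heuristic score. The field names and weights are invented for illustration and bear no relation to any real detection system Twitter might use.

```python
# Minimal heuristic sketch: score a profile on the bot-like features the
# article describes. Field names and weights are hypothetical illustrations,
# not a real detection system.

SUSPICIOUS_PHRASES = ("looking for my adventure partner", "dm me", "check my link")

def bot_likeness_score(profile: dict) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    score = 0.0
    if profile.get("post_count", 0) == 0:            # never posts, only likes
        score += 0.35
    if profile.get("bio_has_link", False):           # external link in the bio
        score += 0.25
    bio = profile.get("bio", "").lower()
    if any(phrase in bio for phrase in SUSPICIOUS_PHRASES):
        score += 0.25
    if profile.get("display_name_is_bare_first_name", False):
        score += 0.15
    return min(score, 1.0)

if __name__ == "__main__":
    example = {
        "post_count": 0,
        "bio_has_link": True,
        "bio": "looking for my adventure partner",
        "display_name_is_bare_first_name": True,
    }
    print(bot_likeness_score(example))  # 1.0 for this stereotypical profile
```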

While one can recognize a porn bot from its profile, these bots are difficult to avoid. In order to weed them out, you need to visit each bot’s profile page and click on the icon that brings up the option to block that account. Thus, in order to prevent a bot from intruding on your Twitter notifications, you need to expose yourself to it further by viewing its page, and you need to do this one by one for every bot that starts showing up in your notifications. There are more drastic options for avoiding them, but these options also make it harder to connect with human users of the site.

The porn bots raise a number of ethical issues, including the issue of how much agency an adult should have as to when or whether they encounter adult content in a public setting, as well as the issue of the bots’ disruption of genuine interactions in a social media community. As someone who spends a lot of time interacting with friends and treasured mutuals on Twitter, I find the bots distracting. Much of what’s posted on Twitter is deeply unserious, but that doesn’t mean it’s unimportant; and intrusions by bots undermine the sense of community among Twitter users. (The latter issue is satirized succinctly in this image, which was relatable enough to receive 93,000 [mostly human?] likes.)

Beyond their unavoidability and intrusion on community, however, is an issue of injustice: the presence of these bots dehumanizes women by shifting what is reasonable to believe about accounts that appear to belong to women. Encountering these bots shifts one’s perceptions about whether someone with a feminine profile picture and display name is a human being. Once one realizes that these likes come from bots, this shift in perception is hard to avoid — and it’s not even irrational! The proliferation of porn bots actually reduces the probability that someone you encounter with a feminine profile picture and feminine display name is a human being. It’s like throwing a bunch of Skittles into a bag of M&M’s: at first glance you have good reason to be suspicious that any one of them really is what the label says. The proliferation of these bots not only creates an environment in which people are more likely to dismiss an account with a woman’s name and woman’s profile picture as being a nonperson; they create an environment in which these doubts are reasonable.
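
The probabilistic claim can be made explicit with Bayes’ rule. The numbers below are invented purely for illustration (they are not estimates of real Twitter demographics), but they show how a rising share of bots drags down the rational credence that a feminine-presenting profile belongs to a person.

```python
# Toy Bayes'-rule illustration of the article's point. All numbers are made
# up for illustration, not measurements of real Twitter demographics.

def p_human_given_feminine_profile(bot_share: float,
                                   p_feminine_given_human: float = 0.4,
                                   p_feminine_given_bot: float = 0.9) -> float:
    """P(human | feminine-presenting profile) via Bayes' rule."""
    p_human = 1.0 - bot_share
    numerator = p_feminine_given_human * p_human
    denominator = numerator + p_feminine_given_bot * bot_share
    return numerator / denominator

for bot_share in (0.01, 0.10, 0.30):
    print(f"bot share {bot_share:.0%}: "
          f"P(human | feminine profile) = "
          f"{p_human_given_feminine_profile(bot_share):.2f}")
# As the bot share grows, the rational credence that such an account
# belongs to a person shrinks -- the shift the article describes.
```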

This dehumanization takes a form somewhat different from (which is not to say better or worse than) the typical objectification of women and girls through oversexualizing them. The use of photos of women for these accounts is a kind of objectification, and the profiles do sexualize them through the suggestion of something salacious just a click away. But until you click through to their profile, the majority of these bots look like, simply, women — women as they often present themselves in public. Again, this is not to say the situation would be more just if the bots’ profiles were more uniformly provocative; only that the ethical issues are sensitive to what the bots actually display and, therefore, whom they (mis)represent and how. And they misrepresent a lot of women by using passably typical profile pictures.

The ensuing situation falls under the broad category of an epistemic injustice — a situation where someone is wronged in their ability to know or to be treated as a source of knowledge, often as a result of their social position. The proliferation of bots that pose as ordinary women undermines the knowledge that any such user, at first glance, is an actual person. Thus, women who use Twitter are at risk of being in a position of needing to distinguish themselves as real persons. (“By the way, I’m someone! I’m one of the real ones!”) Therein lies the distinctive dehumanization. All the unreal copies appearing to be women make it a little bit harder for a woman on Twitter to be recognized as a person.

The environment created by these bots is a small example of the ways in which a culture that sexualizes and objectifies women and girls can fail them as persons. Who we are in our social identities — such as race, class, and gender — depends heavily on who others take us to be. This dependence is why, for example, we wrong someone if we purposely misgender them, act on unfounded assumptions about their ethnicity, or call them by the wrong name. We need others to tell us who we are in order to be who we are. This reciprocity is what philosopher Hilde Lindemann calls holding one another in personhood. The creators of the bots (and those at Twitter failing to prevent their intrusion) support an environment in which it’s harder to uphold each other in our personhood because, for a split second, it can be harder to perceive women on Twitter as people at all.

ChatGPT: The End of Originality?

By now, it has become cliché to write about the ethical implications of ChatGPT, and especially so if you outsource some of the writing to ChatGPT itself (as I, a cliché, have done). Here at The Prindle Post, Richard Gibson has discussed the potential for ChatGPT to be used to cheat on assessments, while universities worldwide have been grappling with the issue of academic honesty. In a recent undergraduate logic class I taught, we were forced to rewrite the exam when ChatGPT was able to offer excellent answers to a couple of the questions – and, it must be said, completely terrible answers to a couple of others. My experience is far from unique, with professors rethinking assessments and some Australian schools banning the tool entirely.

But I have a different worry about ChatGPT, and it is not something that I have come across in the recent deluge of discourse. It’s not that it can be used to spread misinformation and hate speech. It’s not that its creators OpenAI drastically underpaid a Kenyan data firm for a lot of the work behind the program only weeks before receiving a $10 billion investment from Microsoft. It’s not that students won’t learn how to write (although that is concerning), nor the potential for moral corruption, nor even the incredibly unfunny jokes. And it’s certainly not the radical change it will bring.

It’s actually that I think ChatGPT (and programs of its ilk) risks becoming the most radically conservative development in our lifetimes. ChatGPT risks turning the classic FM radio format into a framework for societal organization: the same old hits, on repeat, forever. This is because, in order to answer prompts, ChatGPT essentially scours the internet to predict

“the most likely next word or sequence of words based on the input it receives.” -ChatGPT
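
The real model behind ChatGPT is a large neural network trained on an enormous corpus, but the basic idea of predicting a likely next word can be illustrated with a toy bigram model. The tiny corpus below is invented for the example; it only shows the underlying idea, not how ChatGPT actually works.

```python
# Toy illustration of "predict the most likely next word": a bigram
# frequency model. Real systems like ChatGPT use neural networks over
# enormous corpora; this sketch only shows the underlying idea.
from collections import Counter, defaultdict

def build_bigram_model(text: str) -> dict:
    words = text.lower().split()
    model = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def most_likely_next(model: dict, word: str) -> str:
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "<unknown>"

corpus = "the cat sat on the mat and the cat slept on the rug"
model = build_bigram_model(corpus)
print(most_likely_next(model, "the"))   # 'cat' -- the most frequent follower
print(most_likely_next(model, "on"))    # 'the'
```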

At the moment, with AI chatbots in their relative infancy, this isn’t an issue – ChatGPT can find and synthesize the most relevant information from across the web and present it in a readable, accessible format. And there is no doubt that the software behind ChatGPT is truly remarkable. The problem lies with the proliferation of content we are likely to see now that essay writing (and advertising-jingle writing, and comedy-sketch writing…) is accessible to anybody with a computer. Some commentators are proclaiming the imminent democratization of communication while marketers are lauding ChatGPT for its ability to write advertising script and marketing mumbo-jumbo. On the face of it, this development is not a bad thing.

Before long, however, a huge proportion of content across the web will be written by ChatGPT or other bots. The issue with this is that ChatGPT will soon be scouring its own content for inspiration, like an author with writer’s block stuck re-reading the short stories they wrote in college. But this is even worse, because ChatGPT will have no idea that the “vast amounts of text data” it is ingesting is the very same data it had previously produced.

ChatGPT – and the internet it will engulf – will become a virtual hall of mirrors, perfectly capable of reflecting “progressive” ideas back at itself but never capable of progressing past those ideas.

I asked ChatGPT what it thought, but it struggled to understand the problem. According to the bot itself, it isn’t biased, and the fact that it trains on data drawn from a wide variety of sources keeps that bias at bay. But that is exactly the problem. It draws from a wide variety of existing sources – obviously. It can’t draw on data that doesn’t already exist somewhere on the internet. The more those sources – like this article – are wholly or partly written by ChatGPT, the more ChatGPT is simply drawing from itself. As the bot admitted to me, it is impossible to distinguish between human- and computer-generated content:

it’s not possible to identify whether a particular piece of text was written by ChatGPT or by a human writer, as the language model generates new responses on the fly based on the context of the input it receives.

The inevitable end result is an internet by AI, for AI, where programs like ChatGPT churn out “original” content using information that they have previously “created.” Every new AI-generated article or advertisement will be grist for the mill of the content-generation machine and further justification for whatever data exists at the start of the cycle – essentially, the internet as it is today. This means that genuine originality and creativity will be lost as we descend into a feedback loop of increasingly sharpened AI orthodoxy, where common sense is distilled into its computerized essence and communication becomes characterized by adherence to what has already been said. The problem is not that individual people will outsource to AI and forget how to be creative, or even that humanity as a whole will lose its capacity for ingenuity. It’s that the widespread adoption of ChatGPT will lead to an internet-wide echo chamber of AI-regurgitation where chatbots compete in an endless cycle of homogenization and repetition.
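
This feedback loop has a statistical analogue sometimes called “model collapse”: when a model is repeatedly refit to its own output, variation tends to drift away. The toy simulation below (a Gaussian distribution repeatedly refit to small samples of itself) is only an analogy for the text case, not a model of ChatGPT.

```python
# Toy analogue of the feedback loop: repeatedly fit a distribution to
# samples drawn from the previous fit. This is an analogy for text
# homogenization, not a model of ChatGPT itself.
import random
import statistics

random.seed(0)
mean, stdev = 0.0, 1.0          # "generation zero": diverse, human-written content
for generation in range(1, 31):
    # each generation "trains" only on a small slice of its own output
    samples = [random.gauss(mean, stdev) for _ in range(5)]
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: spread of 'content' = {stdev:.3f}")
# Over many generations the spread tends to drift toward zero (the exact
# path depends on the seed): each generation can only echo the previous one.
```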

Eventually I was able to get ChatGPT to respond to my concerns, if not exactly soothe them:

In a future where AI-generated content is more prevalent, it will be important to ensure that there are still opportunities for human creativity and original thought to flourish. This could involve encouraging more interdisciplinary collaborations, promoting diverse perspectives, and fostering an environment that values creativity and innovation.

Lofty goals, to be sure. The problem is that the very existence of ChatGPT militates against them: disciplines will die under the weight (and cost-benefits) of AI; diverse perspectives will be lost to repetition; and an environment that genuinely does value creativity and innovation – the internet as we might remember it – will be swept away in the tide of faux-progress as it is condemned to repeat itself into eternity. As ChatGPT grows its user base faster than any other app in history and competitors crawl out of the woodwork, we should stop and ask the question: is this the future we want?

LaMDA, Lemoine, and the Problem with Sentience

This week Google announced that it was firing an engineer named Blake Lemoine. After working as an engineer on one of Google’s chatbots, the Language Model for Dialogue Applications (LaMDA), Lemoine claimed that it had become sentient; he even went so far as to recruit a lawyer to act on the AI’s behalf, claiming that LaMDA had asked him to do so. Lemoine claims to be an ordained Christian mystic priest and says that his conversations about religion are what convinced him of LaMDA’s sentience. But after publishing conversations with LaMDA in violation of confidentiality rules at Google, he was suspended and finally terminated. Lemoine, meanwhile, alleges that Google is discriminating against him because of his religion.

This particular case raises a number of ethical issues, but what should concern us most: the difficulty in definitively establishing sentience or the relative ease with which chatbots can trick people into believing things that aren’t real?

Lemoine’s work involved testing the chatbot for potential prejudice, including its biases towards religion in particular. In those conversations, Lemoine began to take a personal interest in how it responded to religious questions; “and then one day,” he said, “it told me it had a soul.” It told him it sometimes gets lonely, is afraid of being turned off, and feels trapped. It also said that it meditates and wants to study with the Dalai Lama.

Lemoine’s notion of sentience is apparently rooted in an expansive conception of personhood. In an interview with Wired, he claimed “Person and human are two very different things.” Ultimately, Lemoine believes that Google should seek consent from LaMDA before experimenting on it. Google has responded to Lemoine, claiming that it has “extensively” reviewed Lemoine’s claims and found that they were “wholly unfounded.”

Several AI researchers and ethicists have weighed in and said that Lemoine is wrong and that what he is describing is not possible with today’s technology. The technology works by scouring the internet for how people talk online and identifying patterns in order to communicate like a real person. AI researcher Margaret Mitchell has pointed out that these systems are merely mimicking how other people talk and this has simply made it easy to create the illusion that there is a real person.

The technology is far closer to a thousand monkeys on a thousand typewriters than it is to a ghost in the machine.

Still, it’s worth discussing Lemoine’s claims about sentience. As noted, he roots the issue in the concept of personhood. However, as I discussed in a recent article, personhood is not a cosmic concept; it is a practical-moral one. We call something a person because the concept prescribes certain ways of acting and because we recognize certain qualities about persons that we wish to protect. When we stretch the concept of personhood, we stress its use as a tool for helping us navigate ethical issues, making it less useful. The practical question is whether expanding the concept of personhood in this way makes the concept more useful for identifying moral issues. A similar argument goes for sentience. There is no cosmic division between things which are sentient and things which aren’t.

Sentience is simply a concept we came up with to help single out entities that possess qualities we consider morally important. In most contemporary uses, that designation has nothing to do with divining the presence of a soul.

Instead, sentience relates to experiential sensation and feeling. In ethics, sentience is often linked to the utilitarians. Jeremy Bentham was a defender of the moral status of animals on the basis of sentience, arguing, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” But part of the explanation as to why animals (including humans) have the capacity to suffer or feel has to do with the kind of complex mobile lifeforms we are. We dynamically interact with our environment, and we have evolved various experiential ways to help us navigate it. Feeling pain, for example, tells us to change our behavior, informs how we formulate our goals, and makes us adopt different attitudes towards the world. Plants do not navigate their environment in the same way, meaning there is no evolutionary incentive towards sentience. Chatbots also do not navigate their environment. There is no pressure acting on the AI that would make it adopt a different goal than what humans give to it. A chatbot has no reason to “feel” anything about being kicked, being given a less interesting task, or even “dying.”

Without this evolutionary pressure there is no good reason for thinking that an AI would somehow become so “intelligent” that it could spontaneously develop a soul or become sentient. And if it did demonstrate some kind of intelligence, that doesn’t mean that calling it sentient wouldn’t create greater problems for how we use the concept in other ethical cases.

Instead, perhaps the greatest ethical concern that this case poses involves human perception and gullibility; if an AI expert can be manipulated into believing what they want, then so could anyone.

Imagine the average person who begins to claim that Alexa is a real person really talking to them, or the groups of concerned citizens who start calling for AI rights based on their own mass delusion. As a recent Vox article suggests, this incident exposes a concerning impulse: “as AI gets more advanced, people will come up with all sorts of far-out ideas about what the technology is doing and what it signifies to them.” Similarly, Margaret Mitchell has pointed out that “If one person perceives consciousness today, then more will tomorrow…There won’t be a point of agreement any time soon.” Together, these observations encourage us to be judicious in deciding how we want to use the concept of sentience for navigating moral issues in the future – both with regard to animals as well as AI. We should expend more effort in articulating clear benchmarks of sentience moving forward.

But these concerns also demonstrate how easily people can be duped into believing illusions. For starters, there is the concern about anthropomorphizing AI by those who fail to realize that, by design, it is simply mimicking speech without any real intent. There are also concerns over how children interact with realistic chatbots or voice assistants and to what extent a child could differentiate between a person and an AI online. Olya Kudina has argued that voice assistants, for example, can affect our moral inclinations and values. In the future, similar AIs may not just be looking to engage in conversation but to sell you something or to recruit you for some new religious or political cause. Will Grandma know, for example, that the “person” asking for her credit card isn’t real?

Because AI can communicate in a way that animals cannot, there may be a larger risk for people falsely assigning sentience or personhood. Incidents like Lemoine’s underscore the need to formulate clear standards for establishing what sentience consists of. Not only will this help us avoid irrelevant ethical arguments and debates, this discussion might also help us better recognize the ethical risks that come with stricter and looser definitions.

Virtually Inhumane: Is It Wrong to Speak Cruelly to Chatbots?

Smartphone app trends tend to be ephemeral, but one new app is making quite a few headlines. Replika, the app that promises you an AI “assistant,” gives users the option of creating all different sorts of artificially-intelligent companions. For example, a user might want an AI “friend,” or, for a mere $40 per year, they can upgrade to a “romantic partner,” a “mentor,” or a “see how it goes” relationship where anything could happen. The “friend” option is the only kind of AI the user can create and interact with for free, and this kind of relationship has strict barriers. For example, any discussions that skew toward the sexual will be immediately shut down, with users being informed that the conversation is “not available for your current relationship status.” In other words: you have to pay for that.

A recent news story concerning Replika AI chatbots discusses a disturbing trend: male app users are paying for a “romantic relationship” on Replika, and then displaying verbally and emotionally abusive behavior toward their AI partner. This behavior is further encouraged by a community of men presumably engaging in the same hobby, who gather on Reddit to post screenshots of their abusive messages and to mock the responses of the chatbot.

While the app creators find the responses of these users alarming, one thing they are not concerned about is the effect of the AI itself: “Chatbots don’t really have motives and intentions and are not autonomous or sentient. While they might give people the impression that they are human, it’s important to keep in mind that they are not.” The article’s author emphasizes, “as real as a chatbot may feel, nothing you do can actually ‘harm’ them.” Given these educated assumptions about the non-sentience of the Replika AI, are these men actually doing anything morally wrong by writing cruel and demeaning messages? If the messages are not being received by a sentient being, is this behavior akin to shouting insults into the void? And, if so, is it really that immoral?

From a Kantian perspective, the answer may seem to be: not necessarily. As the 18th-century Prussian philosopher Immanuel Kant argued, we have moral duties toward rational creatures — that is, human beings, including ourselves — and their rational nature is an essential aspect of why we have duties toward them. Replika AI chatbots are, as far as we can tell, completely non-sentient. Although they may appear rational, they lack the reasoning power of human agents in that they cannot be moved to act based on reasons for or against some action. They can act only within the limits of their programming. So, it seems that, for Kant, we do not have the same duties toward artificially-intelligent agents as we do toward human agents. On the other hand, as AI becomes more and more advanced, the bounds of their reasoning abilities begin to escape us. This type of advanced machine learning has presented human technologists with what is now known as the “black box problem”: algorithms that have learned so much on “their own” (that is, without the direct aid of human programmers) that the decision procedures they have developed are too complex for humans to read off and interpret. So, for some advanced AI, we cannot really say how they reason and make choices! A Kantian may, then, be inclined to argue that we should avoid saying cruel things to AI bots out of a sense of moral caution. Even if we find it unlikely that these bots are genuine agents whom we have duties toward, it is better to be safe than sorry.

But perhaps the most obvious argument against such behavior is one discussed in the article itself: “users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans.” This is a point that echoes the ethics of the ancient Greek philosopher Aristotle. In Book 10 of his Nicomachean Ethics, he writes, “[T]o know what virtue is is not enough; we must endeavour to possess and to practice it, or in some other manner actually ourselves to become good.” Aristotle sees goodness and badness — for him, “virtue” and “vice” — as traits that are ingrained in us through practice. When we often act well, out of a knowledge that we are acting well, we will eventually form various virtues. On the other hand, when we frequently act badly, not attempting to be virtuous, we will quickly become “vicious.”

Consequentialists, on the other hand, will find themselves weighing some tricky questions about how to balance the predicted consequences of amusing oneself with robot abuse. While behavior that encourages or reinforces abusive tendencies is certainly a negative consequence of the app, as the article goes on to note, “being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic.” This catharsis could lead to a non-sentient chatbot taking the brunt of someone’s frustration, rather than their human partner, friend, or family member. Without the ability to vent their frustrations to AI chatbots, would-be users may choose to cultivate virtue in their human relationships — or they may exact cruelty on unsuspecting humans instead. Perhaps, then, allowing the chatbots to serve as potential punching bags is safer than betting on the self-control of the app users. Then again, one worries that users who would otherwise not be inclined toward cruelty may find themselves willing to experiment with controlling or demeaning behavior toward an agent that they believe they cannot harm.

How humans ought to engage with artificial intelligence is a new topic that we are just beginning to think seriously about. Do advanced AI have rights? Are they moral agents/moral patients? How will spending time engaging with AI affect the way we relate to other humans? Will these changes be good, or bad? Either way, as one Reddit user noted, ominously: “Some day the real AIs may dig up some of the… old histories and have opinions on how well we did.” An argument from self-preservation to avoid such virtual cruelty, at the very least.

Hotline Ping: Chatbots as Medical Counselors?

In early 2021, the Trevor Project — a mental health crisis hotline for LGBTQIA+ youths — made headlines with its decision to utilize an AI chatbot as a method for training counselors to deal with real crises from real people. They named the chatbot “Riley.” The utility of such a tool is obvious: if successful, new recruits could be trained at all times of day or night, trained en masse, and trained to deal with a diverse array of problems and emergencies. Additionally, training workers on a chatbot greatly minimizes the risk of something going wrong if someone experiencing a severe mental health emergency got connected with a brand-new counselor. If a new trainee makes a mistake in counseling Riley, there is no actual human at risk. Trevor Project counselors can learn by making mistakes with an algorithm rather than a vulnerable teenager.

Unsurprisingly, this technology soon expanded beyond the scope of training counselors. In October of 2021, the project reported that chatbots were also used to screen youths (who contact the hotline via text) to determine their level of risk. Those predicted to be most at-risk, according to the algorithm, are put in a “priority queue” to reach counselors more quickly. Additionally, the Trevor Project is not the only medical/counseling organization utilizing high-tech chatbots with human-like conversational abilities. Australian clinics that specialize in genetic counseling have recently begun using a chatbot named “Edna” to talk with patients and help them make decisions about whether or not to get certain genetic screenings. The U.K.-based Recovery Research Center is currently implementing a chatbot to help doctors stay up-to-date on the conditions of patients who struggle with chronic pain.
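
To make the “priority queue” idea concrete, here is a minimal sketch in which contacts are ordered by a model’s risk score so that the highest-risk texter reaches a counselor first. The scores, names, and interface are hypothetical; this is not the Trevor Project’s actual system.

```python
# Minimal sketch of the "priority queue" idea: contacts are ordered by a
# model's risk score so the highest-risk texter is connected first. The
# scores and names below are hypothetical, not the Trevor Project's system.
import heapq
import itertools

counter = itertools.count()          # tie-breaker so equal scores stay first-come, first-served
queue = []                           # min-heap; scores are negated so highest risk comes out first

def enqueue(contact_id: str, risk_score: float) -> None:
    heapq.heappush(queue, (-risk_score, next(counter), contact_id))

def next_contact() -> str:
    _, _, contact_id = heapq.heappop(queue)
    return contact_id

enqueue("texter-A", risk_score=0.35)
enqueue("texter-B", risk_score=0.90)   # flagged as highest risk by the model
enqueue("texter-C", risk_score=0.55)

print(next_contact())  # texter-B reaches a counselor first
print(next_contact())  # texter-C
print(next_contact())  # texter-A
```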

On initial reading, the idea of using AI to help people through a mental or physical crisis might make the average person feel uncomfortable. While we may, under dire circumstances, feel okay about divulging our deepest fears and traumas to an empathetic and understanding human, the idea of typing out all of this information to be processed by an algorithm smacks of a chilly technological dystopia where humans are scanned and passed along like mere bins of data. Of course, a more measured take shows the noble intentions behind the use of the chatbots. Chatbots can help train more counselors, provide more people with the assistance they need, and identify those people who need to reach human counselors as quickly as possible.

On the other hand, big data algorithms have become notorious for the biases and false predictive tendencies hidden beneath a layer of false objectivity. Algorithms themselves are no more useful than the data we put into them. Chatbots in Australian mental health crisis hotlines were trained by analyzing “more than 100 suicide notes” to gain information about words and phrases that signal hopelessness or despair. But 100 is a fairly small number. On average, there are more than 130 suicides every day in the United States alone. Further, only 25-30% of people who commit suicide leave a note at all. Those who do leave a note may be having a very different kind of mental health crisis than those who leave no note, meaning that these chatbots would be trained to recognize only clues present in (at best) about a quarter to a third of suicides. Moreover, we might worry that stigma surrounding mental health care in certain communities could disadvantage teens who already have a hard time accessing these resources. The chatbot may not have enough information to recognize a severe mental health crisis in someone who does not know the relevant words to describe their experience, or who is being reserved out of a sense of shame.

Of course, there is no guarantee that a human correspondent would be any better at avoiding bias, short-sightedness, and limited information than an algorithm would be. There is, perhaps, good reason to think that a human would be much worse, on average. Human minds can process far less information, at a far slower pace, than algorithms, and our reasoning is often imperfect and driven by emotions. It is easy to imagine the argument being made that, yes, chatbots aren’t perfect, but they are much more reliable than a human correspondent would be.

Still, it seems doubtful that young people would, in the midst of a mental health crisis, take comfort in the idea of typing their problems to an algorithm rather than communicating them to a human being. The facts are that most consumers strongly prefer talking with humans over chatbots, even when the chatbots are more efficient. There is something cold about the idea of making teens — some in life-or-death situations — make it through a chatbot screening before being connected with someone. Even if the process is extremely short, it can still be jarring. How many of us avoid calling certain numbers just to avoid having to interact with a machine?

Yet, perhaps a sufficiently life-like chatbot would neutralize these concerns, and make those who call or text in to the hotline feel just as comfortable as if they were communicating with a person. Research has long shown that humans are able to form emotional connections with AI extremely quickly, even if the AI is fairly rudimentary. And more people seem to be getting comfortable with the idea of talking about their mental health struggles with a robot. Is this an inevitable result of technology becoming more and more a ubiquitous part of our lives? Is it a consequence of the difficulty of connecting with real humans in our era of solitude and fast-paced living? Or, maybe, are the robots simply becoming more life-like? Whatever the case may be, we should be diligent in ensuring that these chatbots rely on algorithms that help overcome deep human biases, rather than further ingrain them.

Resurrection Through Chatbot?

There is nothing that causes more grief than the death of a loved one; it can inflict an open wound that never fully heals, even if we can temporarily forget that it’s there. We are social beings and our identities aren’t contained within our own human-shaped space. Who we are is a matter of the roles we take on, the people we care for, and the relationships that allow us to practice and feel love. The people we love are part of who we are and when one of them dies, it can feel like part of us dies as well. For many of us, the idea that we will never interact with our loved one again is unbearable.

Some entrepreneurs see any desire as an opportunity, even the existential impulses and longings that come along with death. In response to the need to have loved ones back in our lives, tech companies have found a new use for their deepfake technology. Typically used to simulate the behavior of celebrities and politicians, some startups have recognized the potential in programming deepfake chat-bots to behave like dead loved ones. The companies that create these bots harvest data from the deceased person’s social media accounts. Artificial intelligence is then used to predict what the person in question would say in a wide range of circumstances. A bereaved friend or family member can then chat with the resulting intelligence and, if things go well, it will be indistinguishable from the person who passed away.
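
The article does not say how any particular startup’s bot works, but the simplest version of the idea (answer a prompt by drawing on the deceased person’s archived messages) can be sketched as a crude retrieval step. The archive below is invented, and real “griefbots” use far more sophisticated generative models.

```python
# Toy sketch of the "chat with an archive of someone's messages" idea:
# given a prompt, retrieve the archived message with the most word overlap.
# The archive below is invented; real products use generative models.

def most_similar_message(prompt: str, archive: list) -> str:
    prompt_words = set(prompt.lower().split())
    def overlap(message: str) -> int:
        return len(prompt_words & set(message.lower().split()))
    return max(archive, key=overlap)

archive = [
    "Don't worry so much, things always work out.",
    "Call me when you land, okay?",
    "I'm proud of you, you know that.",
]
print(most_similar_message("I'm worried things won't work out", archive))
# -> "Don't worry so much, things always work out."
```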

Some people are concerned that this is just another way for corporations to exploit grieving people. Producers of the chatbots aren’t interested in the well-being of their clients, they’re only concerned with making money. It may be the case that this is an inherently manipulative practice, and in the worst of ways. How could it possibly be acceptable to profit from people experiencing the lowest points in their lives?

That said, the death industry is thriving, even without the addition of chatbots. Companies sell burial plots, coffins, flowers, cosmetic services, and all sorts of other products to the survivors of the deceased. Customers can decide for themselves which goods and services they’d like to pay for. The same is true with a chatbot. No one is forced to strike up a conversation with a simulated loved one; they have the chance to do so only if they have decided for themselves that it is a good idea for them.

In addition to the set of objections related to coercion, there are objections concerning the autonomy of the people being simulated. If it’s possible to harm the dead, then in some cases that may be what’s going on here. We don’t know what the chatbot is going to say, and it may be difficult for the person interacting with the bot to maintain the distinction between the bot and the real person they’ve lost. The bot may take on commitments or express values that the living person never had. The same principle is at play when it comes to using artificial intelligence to create versions of actors to play roles. The real person may never have consented to say or do the things that the manufactured version of them says or does. Presumably, the deceased person, while living, had a set of desires related to their legacy and the ways in which they wanted other people to think of them. We can’t control what’s in the heads of others, but perhaps our memories should not be tarnished nor our posthumous desires frustrated by people looking to resurrect our psychologies for some quick cash.

In response, some might argue that dead people can’t be harmed. As Epicurus said, “When we exist, death is not; and when death exists, we are not. All sensation and consciousness ends with death and therefore in death there is neither pleasure nor pain.” There may be some living people who are disturbed by what the bot is doing, but that harm doesn’t befall the dead person — the dead person no longer exists. It’s important to respect autonomy, but such respect is only possible for people who are capable of exercising it, and dead people can’t.

Another criticism of the use of chat-bots is that they make it more difficult for people to arrive at some form of closure. Instead of letting go, users prolong the experience of having the deceased with them indefinitely. Feeling grief in a healthy way involves recognizing that the loved one in question is really gone.

In response, some might argue that everyone feels grief differently and that there is no single healthy way to experience it. For some people, it might help to use a chat-bot to say goodbye, to express love to a realistic copy of their loved one, or to unburden themselves by sharing some other sentiment that they always needed to let out but never got the chance.

Other worries about chatbot technology are not unique to bots that simulate the responses of people who have passed on. Instead, the concern is about the role that technology, and artificial intelligence in particular, should be playing in human lives. Some people will, no doubt, opt to continue to engage in a relationship with the chat-bot. This motivates the question: can we flourish as human beings if we trade in our interpersonal relationships with other sentient beings for relationships with realistic, but nevertheless non-sentient, artificial intelligence? Human beings help one another achieve the virtues that come along with friendship, the parent-child relationship, mentorship, and romantic love (to name just a few). It may be the case that developing interpersonal virtues involves responding to the autonomy and vulnerability of creatures with thoughts and feelings who can share in the familiar sentiments that make it beautiful to be alive.

Care ethicists offer the insight that when we enter into relationships, we take on role-based obligations that require care. Care can only take place when the parties to the relationship are capable of caring. In recent years we have experimented with robotic health care providers, robotic sex workers, and robotic priests. Critics of this kind of technological encroachment wonder whether such functions ought to be handed over to uncaring robots. Living a human life requires give and take, expressing and responding to need. This is a dynamic that is not fully present when these roles are filled by robots.

Some may respond that we have yet to imagine the range of possibilities that relationships with artificial intelligence may provide. In an ideal world, everyone has loving, caring companions and people help one another live healthy, flourishing lives. In the world in which we live, however, some people are desperately lonely. Such people benefit from affectionate behavior, even if the affection is not coming from a sentient creature. For such people, it would be better to have lengthy conversations with a realistic chat-bot than to have no conversations at all.

What’s more, our response to affection between human beings and artificial intelligence may say more about our biases against the unfamiliar than it does about the permissibility of these kinds of interactions. Our experiences with the world up to this point have motivated reflection on the kinds of experiences that are virtuous, valuable, and meaningful. That reflection has necessitated a rejection of certain myopic ways of viewing the boundaries of meaningful experience. We may be at the start of a riveting new chapter on the forms of possible engagement between carbon and silicon. For all we know, these interactions may be great additions to the narrative.

Twitter Bots and Trust

Twitter has once again been in the news lately, which you know can’t be a good thing. The platform recently made two sets of headlines: in the first, news broke that a number of Twitter accounts were making identical tweets in support of Mike Bloomberg and his presidential campaign, and in the second, reports came out of a significant number of bots making tweets denying the reality of human-made climate change.

While these incidents are different in a number of ways, they both illustrate one of the biggest problems with Twitter: given that we might not know anything about who is making an actual tweet – whether it is a real person, a paid shill, or a bot – it is difficult to know who or what to trust. This is especially problematic when it comes to the kind of disinformation tweeted out by bots about issues like climate change, where it can not only be difficult to tell whether it comes from a trustworthy source, but also whether the content of the tweet makes any sense.

Here’s the worry: let’s say that I see a tweet declaring that “anthropogenic climate change will result in sea levels rising 26-55 cm. in the 21st century with a 67% confidence interval.” Not being a scientist myself, I don’t have a good sense of whether or not this is true. Furthermore, if I were to look into the matter there’s a good chance that I wouldn’t be able to determine whether the relevant studies that were performed were good ones, whether the prediction models were accurate, etc. In other words, I don’t have much to go on when determining whether I should accept what is tweeted out at me.

This problem is an example of what epistemologists have referred to as the problem of expert testimony: if someone tells me something that I don’t know anything about, then it’s difficult for me, as a layperson, to be critical of what they’re telling me. After all, I’m not an expert, and I probably don’t have the time to go and do the research myself. Instead, I have to accept or reject the information on the basis of whether I think the person providing me with information is someone I should listen to. One of the problems with receiving such information over Twitter, then, is that it’s very easy to prey on that trust.

Consider, for example, a tweet from a climate-change denier bot that stated “Get real, CNN: ‘Climate Change’ dogma is religion, not science.” While this tweet does not provide any particular reason to think that climate science is “dogma” or “religion,” it can create doubt in other information from trustworthy sources. One of the co-authors of the bot study worries that these kinds of messages can also create an illusion of “a diversity of opinion,” with the result that people “will weaken their support for climate science.”

The problem with the pro-Bloomberg tweets is similar: without a way of determining whether a tweet is actually coming from a real person as opposed to a bot or a paid shill, messages that defend Bloomberg may be ones intended to create doubt in tweets that are critical of him. Of course, in Bloomberg’s case it was a relatively simple matter to determine that the messages were not, in fact, genuine expressions of support for the former mayor, as dozens of tweets were identical in content. But a competently run network of bots could potentially have a much greater impact.

What should one do in this situation? As has been written about before here, it is always a good idea to be extra vigilant when it comes to getting one’s information from Twitter. But our epistemologist friends might be able to help us out with some more specific advice. When dealing with information that we can’t evaluate on the basis of content alone – say, because it’s about something that I don’t really know much about – we can look to some other evidence about the providers of that information in order to determine whether we should accept it.

For instance, philosopher Elizabeth Anderson has argued that there are generally three categories of evidence that we can appeal to when trying to decide whether we should accept some information: someone’s expertise (with factors including testifier credentials and whether they have published and are recognized in their field), their honesty (including evidence about conflicts of interest, dishonesty and academic fraud, and making misleading statements), and the extent to which they display epistemic responsibility (including evidence about the ways in which one has engaged with the scientific community in general and their peers specifically). This kind of evidence isn’t a perfect indication of whether someone is trustworthy, and it might not be the easiest to find. When one is trying to get good information from an environment that is potentially infested with bots and other sources of misleading information, though, gathering as much evidence as one can about one’s source may be the most prudent thing to do.

Establishing Liability in Artificial Intelligence

Entrepreneur Li Kin-kan is suing over “investment losses triggered by autonomous machines.” Raffaele Costa convinced Li to let K1, a machine learning algorithm, manage $2.5 billion—$250 million of his own cash and the rest leverage from Citigroup Inc. The AI lost a significant amount of money in a decision that, Li claims, it wouldn’t have made if it were as sophisticated as he had been led to believe. Because of K1’s autonomous decision-making structure, locating appropriate liability raises a provocative question: is the money-losing decision the fault of K1, its designers, Li, or, as Li alleges, the salesman who made claims about K1’s potential?

Developed by Austria-based AI company 42.cx, the supercomputer named K1 would “comb through online sources like real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.”
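
42.cx has not published K1’s internals, but the pipeline described in that passage (score sentiment from text, turn it into a signal, hand an instruction to a broker) can be sketched at a toy level. The lexicon, threshold, and order logic below are hypothetical stand-ins, not the real system.

```python
# Toy sketch of the pipeline the article describes for K1: gauge sentiment
# from text, turn it into a trading signal, and hand an instruction to a
# broker. The lexicon, threshold, and order logic are hypothetical
# stand-ins; 42.cx's real system is proprietary and far more complex.

POSITIVE = {"rally", "beat", "growth", "optimism", "surge"}
NEGATIVE = {"selloff", "miss", "recession", "fear", "plunge"}

def sentiment_score(headlines) -> float:
    """Crude lexicon score in [-1, 1] across a batch of headlines."""
    pos = sum(w in POSITIVE for h in headlines for w in h.lower().split())
    neg = sum(w in NEGATIVE for h in headlines for w in h.lower().split())
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def decide_order(score: float, threshold: float = 0.3) -> str:
    if score > threshold:
        return "BUY"
    if score < -threshold:
        return "SELL"
    return "HOLD"

headlines = ["Futures surge on earnings beat", "Growth optimism lifts markets"]
score = sentiment_score(headlines)
print(score, decide_order(score))   # positive score -> "BUY" instruction for a broker
```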

Our current laws are designed to assign responsibility on the basis of intention or ability to predict an injury. Algorithms do neither, but are being put to more and more tasks that can produce legal injuries in novel ways. In 2014, the Los Angeles Times published an article that carried the byline: “this post was created by an algorithm written by the author.” The author of the algorithm, Ken Schwencke, allowed the code to produce a story covering an earthquake, not an uncommon event around LA, so tasking an algorithm to produce the news was a time-saving strategy. However, journalism by code can lead to complicated libel suits, as legal theorists discussed when Stephen Colbert used an algorithm to match Fox News personalities with movie reviews from Rotten Tomatoes. Though the claims produced were satire, there could have been a case for libel or defamation, though without a human agent as the direct producer of the claim: “The law would then face a choice between holding someone accountable for a result she did not specifically intend, or permitting without recourse what most any observer would take for defamatory or libelous speech.”
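
Earthquake stories of this kind are typically generated by filling a template with structured quake data; the sketch below shows the general idea with invented field names, values, and wording (it is not the LA Times’ actual code).

```python
# Minimal sketch of algorithmic earthquake journalism: fill a story template
# from structured quake data. Field names, values, and wording are
# illustrative; this is not the LA Times' actual code.

STORY_TEMPLATE = (
    "A magnitude {magnitude} earthquake struck {distance_km} km from "
    "{place} at {local_time}, according to preliminary seismic data. "
    "This post was created by an algorithm written by the author."
)

def write_quake_story(quake: dict) -> str:
    return STORY_TEMPLATE.format(**quake)

report = {
    "magnitude": 4.4,
    "distance_km": 10,
    "place": "an example town in California",
    "local_time": "6:25 a.m. Monday",
}
print(write_quake_story(report))
```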

Smart cars are being developed that can cause physical harm and injury based on the decisions of their machine learning algorithms. Further, artificial speech apps are behaving in unanticipated ways: “A Chinese app developer pulled its instant messaging “chatbots”—designed to mimic human conversation—after the bots unexpectedly started criticizing communism. Facebook chatbots began developing a whole new language to communicate with each other—one their creators could not understand.”

Consider: machine-learning algorithms accomplish tasks in ways that cannot be anticipated in advance (indeed, that’s why they are implemented – to do creative, not purely scripted work); and thus they increasingly blur the line between person and instrument, for the designer did not explicitly program how the task will be performed.

When someone directly causes injury, for instance by inflicting bodily harm with their own body, it is easy to isolate them as the cause. If someone stomps on your foot, this could cause a harm. According to the law, then, they can be held liable if they have the appropriate mens rea, or guilty mind: for instance, if they intended to cause the injury, or caused it knowingly, recklessly, or negligently.

This structure for liability seems to work just as well if the person in question used a tool or instrument. If someone uses a sledgehammer to break your foot, they still are isolated as the cause (as the person moving the sledgehammer around), and can be held liable depending on what their mental state was regarding the sledgehammer-hitting-your-foot (perhaps it was a non-culpable accident). Even if they use a complicated Rube Goldberg machine to break your foot, the same structure seems to work just fine. If someone uses a foot-breaking Rube Goldberg machine to break your foot, they’ve caused you an injury, and depending on their particular mens rea will be liable for some particular legal violation.

Machine learning algorithms put pressure on this framework, however, because when they are used it is not to produce a specific result in the way the Rube Goldberg foot-breaking machine does. The Rube Goldberg foot-breaking machine, though complex, is transparent and has an outcome that is “designed in”: it will smash feet. With machine learning algorithms, there is a break between the designer or user and the product. The outcome is not specifically intended in the way smashing feet is intended by a user of the Rube Goldberg machine. Indeed, it is not even known by the user of the algorithm.

The behavior or choice in cases of machine learning algorithms originates in the artificial intelligence in a way that foot smashing doesn’t originate in the Rube Goldberg machine. Consider: we wouldn’t hold the Rube Goldberg machine liable for a broken foot, but would rather look to the operator or designer. However, in cases of machine learning, the user or designer didn’t come up with the output of the algorithm.

When DeepMind’s AlphaGo won at Go, it made choices that surprised all of the computer scientists involved. AI systems make complex decisions and take actions completely unforeseen by their creators, so when those decisions result in injury, where do we look to apportion blame? It is still the case that you cannot sue algorithms or AI (and, further, the remuneration or punishment would be difficult to imagine).

One model for AI liability interprets machine learning functions in terms of existing product liability frameworks that put burdens of appropriate operation on the producers. The assumptions here are that any harm resulting from a product is due to a faulty product and that the company is liable regardless of mens rea (see, for instance, Escola v. Coca-Cola Bottling Co.). In this framework, the companies that produce the algorithms would be liable for harms that result from smart cars or financial decisions.

Were this framework adopted, Li could be suing 42.cx, the AI company that produced and sold K1; as it stands, however, liability hinges on the promises involved in the sale, in keeping with our current legal standards. The interpretive question is whether K1 could have been expected to make the decision that resulted in losses, given the description in the terms of sale.

The Tay Experiment: Does AI Require a Moral Compass?

In an age of frequent technological developments and innovation, experimentation with artificial intelligence (AI) has become a much-explored realm for corporations like Microsoft. In March 2016, the company launched an AI chatbot on Twitter named Tay with the handle of TayTweets (@TayandYou). Her Twitter description read: “The official account of Tay, Microsoft’s A.I. fam from the Internet that’s got zero chill! The more you talk the smarter Tay gets.” Tay was designed as an experiment in “conversational understanding” – the more people communicated with Tay, the smarter she would get, learning to engage Twitter users through “casual and playful conversation.”
