
AI in Documentary Filmmaking: Blurring Reality in ‘What Jennifer Did’

image of camera lens and ring light in studio

Back in 2021, I wrote an article for The Prindle Post predicting the corrosive effect AI might have on documentary filmmaking. That piece centered around Roadrunner: A Film about Anthony Bourdain, in which an AI deepfake was used to read some of the celebrity chef’s emails posthumously. In that article, I raised three central concerns: (i) whether AI should be used to give voice and body to the dead, (ii) the potential for nefarious actors to use AI to deceive audiences, and (iii) whether AI could accurately communicate the facts of a situation or person.

Since that article’s publication, the danger AI poses to our ability to decipher fact from fiction in all facets of life has only grown, with increasing numbers of people able to produce ever more convincing fakery. And, while apprehensions about this are justifiably focused on the democratic process, with Time noting that “the world is experiencing its first AI elections without adequate protections,” the risk to our faith in documentary filmmaking remains. This is currently being discussed thanks to one of Netflix’s most recent releases — What Jennifer Did.

The documentary focuses on Jennifer Pan, who was 24 when, in 2010, she hired hitmen to kill her parents (her father survived the attack, but her mother did not) because they disapproved of who she was dating. Pan was convicted in 2015 and is now serving a life sentence with the chance of parole after 25 years.

The story itself, as well as the interviews and people featured in it, is true. However, around 28 minutes into the documentary, some photographs which feature prominently on-screen raise doubts about the film’s fidelity to the truth. During a section where a school friend describes Jennifer’s personality — calling her “happy,” “bubbly,” and “outgoing” — we see some pictures of Jennifer smiling and giving the peace sign. These images illustrate how full of life Jennifer could be and draw a stark contrast between the happy teen and the murderous adult.

But these pictures have several hallmarks of being altered or just straight-up forgeries. Jennifer’s fingers are too long, and she doesn’t have the right number of them. She has misshapen facial features and an exceedingly long front tooth. There are weird shapes in the back- and foreground, and her shoulder appears out of joint (you can see the images in question on Futurism, where the story broke). As far as I’m aware, the documentary makers have not responded to requests for comment, but it does appear that, much like in Roadrunner, AI has been used to embellish and create primary sources for storytelling.

Now, this might not strike you as particularly important. After all, the story that What Jennifer Did tells is real. She did pay people to break into her parents’ house to kill them. So what does it matter if, in an attempt to make a more engaging piece of entertainment, a little bit of AI is used to create some still (and rather innocuous) images? It’s not like these images are of her handing over the money or doing things that she might never have done; she’s smiling for the camera in them, something we all do. But I think it does matter, and not simply because it’s a form of deception. It’s an example of AI’s escalating and increasingly transgressive application in documentaries, and particularly here, in documentaries where the interested parties are owed the truth of their lives being told.

In Roadrunner, AI is used to read Bourdain’s emails. This usage is deceptive, but the context in which it is done is not the most troubling that it could be. The chef sadly took his own life. But he was not murdered. He did not read the emails in question, but he did write them. And, while I suspect he would be furious that his voice had been replicated to read his writing, it is not like this recreation existed in isolation from other things he had written and said and did (but, to be clear, I still think it shouldn’t have been done).

In What Jennifer Did, however, we’re not talking about the recreation of a deceased person’s voice. Instead, we’re talking about fabricating images of a killer to portray a sense of humanity. The creative use of text, audio, and image shouldn’t, in itself, cause a massive backlash, as narrative and editing techniques always work towards this goal (indeed, no story is a totally faithful retelling of the facts). But, we must remember that the person to whom the documentary is trying to get us to relate – the person whom the images recreate and give a happy, bubbly, and outgoing demeanor – is someone who tried and, in one case, succeeded in killing her parents. Unlike in Roadrunner, What Jennifer Did uses AI not to give life to the lifeless but to give humanity to someone capable of the inhumane. And this difference matters.

Now, I’m not saying that Jennifer was or is some type of monster devoid of anything resembling humanity. People are capable of utter horrors. But by using AI to generate fake images at the point at which we’re supposed to identify with her, the filmmakers undermine the film’s integrity at a critical juncture. That’s when we’re supposed to think: “She looks like a normal person,” or even, “She looks like me.” But if I can’t trust the film when it says she was just like any other teen, how can I trust it when it makes more extreme claims? And if a documentary can’t hold its viewers’ trust in the most basic of things, like “what you’re seeing is real,” what hope does it have of fulfilling its goal of educating and informing? In short, how can we trust any of this if we can’t trust what we’re being shown?

This is what makes the use of AI in What Jennifer Did so egregious. It invites doubt into a circumstance where doubt cannot, and should not, be introduced. Jennifer’s actions had real victims. Let’s not mince our words; she’s a murderer. Because AI was used to generate images — pictures of a younger version of her as a happy teen — we have reason to doubt the authenticity of everything in the documentary. Her victims deserve better than that. If Netflix is going to make documentaries about what are the worst, and in some cases the final, days of someone’s life, it owes those people the courtesy of the truth, even if it thinks it doesn’t owe it to the viewers.

What Should We Do About AI Identity Theft?

image of synthetic face and voice

A recent George Carlin comedy special from Dudesy — an AI comedy podcast created by Will Sasso and Chad Kultgen — has sparked substantial controversy. In the special, a voice model emulating the signature delivery and social commentary of Carlin, one of America’s most prominent 20th-century comedians and social critics, discusses contemporary topics ranging from mass shootings to AI itself. The voice model, which was trained on five decades of the comic’s work, sounds eerily similar to Carlin, who died in 2008.

In response to controversy over the AI special, the late comedian’s estate filed a suit in January, accusing Sasso and Kultgen of copyright infringement. As a result, the podcast hosts agreed to take down the hour-long comedy special and refrain from using Carlin’s “image, voice or likeness on any platform without approval from the estate.” This kind of scenario, which is becoming increasingly common, generates more than just legal questions about copyright infringement. It also raises a variety of philosophical questions about the ethics of emerging technology connected to human autonomy and personal identity.

In particular, there are a range of ethical questions concerning what I’ve referred to elsewhere as single-agent models. Single-agent models are a subset of generative artificial intelligence that concentrates on modeling some identifying feature(s) of a single human agent through machine learning.

Most of the public conversation around single-agent models focuses on the impact on individuals’ privacy and property rights. These privacy and property rights violations generally occur as a function of the single-agent modeling outputs not crediting and compensating the individuals whose data was used in the training process, a process that often relies on the non-consensual scraping of data under fair use doctrine in the United States. Modeled individuals find themselves competing in a marketplace saturated with derivative works that fail to acknowledge their contributory role in supplying the training data, all while also being deprived of monetary compensation. Although this is a significant concern that jeopardizes the sustainability of creative careers in a capitalist economy, it is not the only concern.

One particularly worrisome function of single-agent models is their unique capacity to generate outputs practically indistinguishable from those of individuals whose intellectual and creative abilities or likeness are being modeled. When an audience with an average level of familiarity with an individual’s creative output cannot distinguish whether the digital media they engage with is authentic or synthetic, this presents numerous concerns. Perhaps most obviously, single-agent models’ ability to generate indistinguishable outputs raises concerns about which works and depictions of a modeled individual’s behavior become associated with their reputation. If the average individual can’t discern whether an output came from an AI or from the modeled individual themselves, unwanted associations between the modeled individual and AI outputs may form.

Although these unwanted associations are most likely to cause harm when the individual generating the outputs does so in a deliberate effort to tarnish the modeled individual’s reputation (e.g., defamation), one need not have this sort of intent for harm to occur. Instead, one might use the modeled individual’s likeness to deceive others by spreading disinformation, especially if that individual is perceived as epistemically credible. Recently, scammers have begun incorporating single-agent models in the form of voice cloning to call families in a loved one’s voice and defraud them into transferring money. On a broader scale, a bad actor might flood social media with an emulation of the President of the United States relaying false information about the election. In both cases, the audience is deceived into adopting and acting on false beliefs.

Moreover, some philosophers, such as Regina Rini, have pointed to the disturbing implications of single-agent modeling on our ability to treat digital media and testimony as veridical. If one can never be sure if the digital media they engage with is true, how might this negatively impact our abilities to consider digital media a reliable source for transmitting knowledge? Put otherwise, how can we continue to trust testimony shared online?

Some, like Keith Raymond Harris, have pushed back against the notion that certain forms of single-agent modeling, especially those that fall under the category of deepfakes (e.g., digitally fabricated videos or audio recordings), pose a substantial risk to our epistemic practices. Skeptics argue that single-agent models like deepfakes do not differ radically from previous methods of media manipulation (e.g., photoshop, CGI). Furthermore, they contend that the evidential worth of digital media also stems from its source. In other words, audiences should exercise discretion when evaluating the source of the digital media rather than relying solely on the digital media itself when considering its credibility.

These attempts to allay the concerns about the harms of single-agent modeling overlook several critical differences between previous methods of media manipulation and single-agent modeling. Earlier methods of media manipulation were often costly, time-consuming, and, in many cases, distinguishable from their authentic counterparts. By contrast, single-agent modeling is accessible, affordable, and capable of producing outputs that bypass an audience’s ability to distinguish them from authentic media.

In addition, many individuals lack the media literacy to discern between trustworthy and untrustworthy media sources in the way Harris suggests. Moreover, individuals who primarily receive news from social media platforms generally tend to engage with the stories and perspectives that reach their feeds rather than content outside their digitally curated information stream. These concerns are exacerbated by social media algorithms prioritizing engagement, siloing users into polarized informational communities, and rewarding stimulating content by placing it at the top of users’ feeds, irrespective of its truth value. Social science research demonstrates that the more an individual is exposed to false information, the more willing they are to believe it due to familiarity (i.e., the illusory truth effect). Thus, it appears that single-agent models pose genuinely novel challenges that require new solutions.

Given the increasing accessibility, affordability, and indistinguishability of AI modeling, how might we begin to confront its potential for harm? Some have raised the possibility of digitally watermarking AI outputs. Proponents argue that this would allow individuals to recognize whether media was generated by AI, perhaps mitigating the concerns I’ve raised relating to credit and compensation. These safeguards could also reduce reputational harm by diminishing the potential for unwanted associations. One such approach would integrate blockchain, the same technology used by cryptocurrency, allowing the public to access a shared digital trail of AI outputs. Unfortunately, this kind of cross-platform AI metadata technology has yet to see widespread implementation, and even if it were implemented, we would remain reliant on the goodwill of big tech. Moreover, it doesn’t address concerns about the non-consensual sourcing of training data under fair use doctrine.

Given the potential harms of single-agent modeling, it’s pertinent that we critically examine and reformulate our epistemic and legal frameworks to accommodate these novel technologies.

Deepfake Porn and the Pervert’s Dilemma

blurred image of woman on bed

This past week, Representative Alexandria Ocasio-Cortez spoke of an incident in which a computer-generated image realistically depicted her engaged in a sexual act. She recounted the harm and difficulty of being depicted in this manner. The age of AI-generated pornography is upon us, and so-called deepfakes are becoming less visually distinguishable from real life every day. Emerging technology could allow people to generate true-to-life images and videos of their most forbidden fantasies.

What happened with Representative Ocasio-Cortez raises issues well beyond making pornography with AI, of course. Deepfake pornographic images are not just used for personal satisfaction; they are used to bully, harass, and demean. Clearly, these uses are problematic, but what about the actual creation of the customized pornography itself? Is that unethical?

To think this through, Carl Öhman articulates the “pervert’s dilemma”: We might think that any sexual fantasy conceived — but not enacted — in the privacy of our home and our own head is permissible. If we do find this ethical, then why exactly do we find it objectionable if a computer generates those images, also in the privacy of one’s home? (For the record, Öhman believes there is a way out of this dilemma.)

The underlying case for letting a thousand AI-generated pornographic flowers bloom is rooted in the famous Harm Principle of John Stuart Mill. His thought was that in a society which values individual liberty, behaviors should generally not be restricted unless they cause harm to others. Following from this, as long as no one is harmed in the generation of the pornographic image, the action should be permissible. We might find it gross or indecent. We might even find the behaviors depicted unethical or abhorrent. But if nobody is being hurt, then creating the image in private via AI is not itself unethical, or at least not something that should be forbidden.

Moreover, for pornography in which some of the worst ethical harms occur in the production process (the most extreme example being child pornography), AI-generated alternatives would be far preferable. (If it turns out that being able to generate such images increases the likelihood of the corresponding real-world behaviors, then that’s a different matter entirely.) Even if no actual sexual abuse is involved in the production of pornography, there have been general worries about the working conditions within the adult entertainment industry that AI-generated content could alleviate. Although, alternatively, just like in other areas, we may worry that AI-generated pornography undermines jobs in adult entertainment, depressing wages and replacing actors and editors with computers.

None of this disputes that AI-generated pornography can be put to bad ends, as the case of Representative Ocasio-Cortez clearly illustrates. And she is far from the only one to be targeted in this way (also see The Prindle Post discussion on revenge porn). The Harm Principle defender would argue that while this is obviously terrible, it is these uses of pornography that are the problem, and not simply the existence of customizable AI-generated pornography. From this perspective, society should target the use of deepfakes as a form of bullying or harassment, and not deepfakes themselves.

Crucially, though, this defense requires that AI-generated pornography be adequately contained. If we allow people to generate whatever images they want as long as they pinky-promise that they are over 18 and won’t use them to do anything nefarious, it could create an enforcement nightmare. Providing more restrictions on what can be generated may be the only way to meaningfully prevent the images from being distributed or weaponized even if, in theory, we believe that strictly private consumption squeaks by as ethically permissible.

Of course, pornography itself is far from uncontroversial, with longstanding concerns that it is demeaning, misogynistic, addictive, and encourages harmful attitudes and behaviors. Philosophers Jonathan Yang and Aaron Yarmel raise the worry that by providing additional creative control to the pornography consumer, AI turns these problematic features of pornography up to 11. The argument, both in response to AI-generated pornography and pornography generally, depends on a data-driven understanding of the actual behavioral and societal effects of pornography — something which has so far eluded a decisive answer. While the Harm Principle is quite permissive about harm to oneself, as a society we may also find that the individual harms of endless customizable pornographic content are too much to bear even if there is no systematic impact.

Very broadly speaking, if the harms of pornography we are most worried about relate to its production, then AI pornography might be a godsend. If the harms we are most worried about relate to the images themselves and their consumption, then it’s a nightmare. Additional particularities are going to arise about labor, distribution, source images, copyright, real-world likeness, and much else besides as pornography and AI collide. Like everything sexual, openness and communication will be key as society navigates the emergence of a transformative technology in an already fraught ethical space.

Military AI and the Illusion of Authority

Israel has recruited an AI program called Lavender into its ongoing assault against Palestinians. Lavender processes military intelligence that previously would have been processed by humans, producing a list of targets for the Israel Defense Forces (IDF) to kill. This novel use of AI, which has drawn swift condemnation from legal scholars and human rights advocates, represents a new role for technology in warfare. In what follows, I explore how the technological aspects of AI such as Lavender contribute to a false sense of its authority and credibility. (All details and quotations not otherwise attributed are sourced from this April 5 report on Lavender.)

While I will focus on the technological aspect of Lavender, let us be clear about the larger ethical picture. Israel’s extended campaign — with tactics like mass starvation, high-casualty bombing, dehumanizing language, and destroying health infrastructure — is increasingly being recognized as a genocide. The evil of genocide almost exceeds comprehension; and in the wake of tens of thousands of deaths, there is no point quibbling about methods. I offer the below analysis as a way to help us understand the role that AI actually plays — and does not play — not because its role is central in the overall ethical picture, but because it is a new element in the picture that bears explaining. It is my hope that identifying the role of technology in this instance will give us insight into AI’s ethical and epistemic dangers, as well as insight into how oppression will be mechanized in the coming years. As a political project, we must use every tool we have to resist the structures and acts of oppression that make these atrocities possible. Understanding may prove a helpful tool.

Let’s start with understanding how Lavender works. In its training phase, Lavender used data concerning known Hamas operatives to determine a set of characteristics, each of which indicates that an individual is likely to be a member of Hamas. Lavender scans data regarding every Gazan in the IDF’s database and, using this set of characteristics, generates a score from 1 to 100. The higher the number, the more likely that individual is to be a member of Hamas, according to the set of characteristics the AI produced. Lavender outputs these names onto a kill list. Then, after a brief check to confirm that a target is male, commanders turn the name over to additional tracking technologies, ordering the air force to bomb the target once their surveillance technology indicates that he is at home.

What role does this new technology play in apparently authorizing the military actions that are causally downstream of its output? I will highlight three aspects of its role. The use of AI such as Lavender alienates the people involved from their actions, inserting a non-agent into an apparent role of authority in a high-stakes process, while relying on its technological features to boost the credibility of ultimately human decisions.

This technology affords a degree of alienation for the human person who authorizes the subsequent violence. My main interest here is not whether we should pity the person pushing their lever in the war machine, alienated as they are from their work. The point, rather, is that alienation from the causes and consequences of our actions dulls the conscience, and in this case the oppressed suffer for it. As one source from the Israeli military puts it, “I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago…. The machine did it coldly. And that made it easier.” Says another, “even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end.” The swiftness and ease of the technology separates people from the reality of what they are taking part in, paving the way for an immensely deadly campaign.

With Lavender in place, people are seemingly relieved of their decision-making. But the computer is not an agent, and its technology cannot properly bear moral responsibility for the human actions that it plays a causal role in. This is not to say that no one is morally responsible for Lavender’s output; those who put it in place knew what it would do. However, the AI’s programming does not determinately cause its output, giving the appearance that the creators have invented something independent that can make decisions on its own. Thus, Lavender offers a blank space in the midst of a causal chain of moral responsibility between genocidal intent and genocidal action, while paradoxically providing a veneer of authority for that action. (More on that authority below.) Israel’s use of Lavender offloads moral responsibility onto the one entity in the process that can’t actually bear it — in the process obscuring the amount of human decision-making that really goes into what Lavender produces and how it’s used.

The technological aspect of Lavender is not incidental to its authorizing role. In “The Seductions of Clarity,” philosopher C. Thi Nguyen argues that clarity, far from always being helpful to us as knowers, can sometimes obscure the truth. When a message seems clear — easily digested, neatly quantified — this ease can lull us into accepting it without further inquiry. Clarity can thus be used to manipulate, depriving us of the impetus to investigate further.

In a similar fashion, Lavender’s output offers a kind of ease and definiteness that plausibly acts as a cognitive balm. A computer told us to! It’s intelligent! This effect is internal to the decision-making process, reassuring the people who act on Lavender’s output that what they are doing is right, or perhaps that it is out of their hands. (This effect could also be used externally in the form of propaganda, though Israel’s current tactic is to downplay the role of AI in their decisions.)

Machines have long been the tools that settle disputes when people can’t agree. You wouldn’t argue with a calculator, because the numbers don’t lie. As one source internal to the IDF put it, “Everything was statistical, everything was neat — it was very dry.” But the cold clarity of technology cannot absolve us of our sins, whether moral or epistemic. Humans gave this technology the parameters in which to operate. Humans entrust it with producing its death list. And it is humans who press play on the process that kills the targets the AI churns out. The veneer of credibility and objectivity afforded by the technical process obscures a familiar reality: that the people who enact this violence choose to do so. That it is up to the local human agents, their commanders, and their government.

So in the end we find that this technology is aptly named. Lavender — the plant — has long been known to help people fall asleep. Lavender — the AI — can have an effect that is similarly lulling. When used to automate and accelerate genocidal intelligence, this technology alienates humans from their own actions. It lends the illusion of authority to an entity that can’t bear moral responsibility, easing the minds of those involved with the comforting authority of statistics. But it can only have this effect if we let it, and we should rail against its use when so much is at stake.

Should AI Reflect Us as We Are or as We Wish to Be?

closeup image of camera lens

Our understanding of AI has come a very long way in a short amount of time. But one issue we have yet to crack is the prevalence of bias. And this seems especially troubling since AI now does everything from determining whether you should go to jail, to whether you get a job, to whether you should receive healthcare, and more. Efforts have been made to make algorithms less biased – like including greater diversity in training data – but issues persist. Recently, Google had to suspend its Gemini AI platform because of the images it was generating. Users reported that when they asked for pictures of Nazi soldiers in 1943, they would get images of multi-ethnic people in Nazi uniforms. Another user requested a picture of a medieval British king and received equally counterfactual content. Clearly, our desire to combat social bias can conflict with our desire for accuracy. How should problems like this be addressed?

There are good reasons for wanting to prevent AI from producing content that reflects socially harmful bias. We don’t want it to simply reinforce past prejudice. We don’t want only images of men as doctors and lawyers and images of women as secretaries and nurses. If biases like these were systematic across AI, it could perpetuate social stereotypes. Presumably, we might instead desire that if we asked for images of a CEO at work, a significant portion of the images would be women (regardless of past statistics).

A similar concern occurs when we consider generative AI’s handling of race. In order to generate an image, an algorithm requires large amounts of training data to pull from. However, if there are biases in the training data, this can lead to biased results as well. If the training data contains mostly images of people with white skin and few images of people with black or brown skin, the algorithm will be less likely to generate images of black or brown skinned people and may struggle to reproduce different ethnic facial features. Research on facial recognition algorithms, for example, has demonstrated how difficult it can be to discern different skin tones without a diverse training dataset.

Correcting for these problems requires that developers be mindful of the kinds of assumptions they make when designing an algorithm and curating training data. As Timnit Gebru – who famously left Google over a dispute about ethical AI – has pointed out, “Ethical AI is not an abstract concept but is one that is in dire need of a holistic approach. It starts from who is at the table, who is creating the technology, and who is framing the goals and values of AI.” Without a serious commitment to inclusion, it will be impossible to catch bias before it gets reproduced again and again. It’s a system of garbage in, garbage out.

While biased AI can have significant real-life impacts on people – such as the woman who lost her refugee status after a facial recognition algorithm failed to properly identify her, or the use of predictive policing and recidivism algorithms that tend to target Black people – there’s also the risk that in attempting to cleanse real-life biases from AI we distort reality. The curation of training data is a delicate balance. Attempts to purge the presence of bias from AI can go too far. The results may increasingly reflect the world as we ideally imagine it rather than as it actually is.

The Google Gemini controversy demonstrates this clearly: in attempting to create an algorithm featuring diverse people, it generates results that are not always true to life. If we return to the example of women CEOs, the problem is clearer. If someone performs a Google image search for CEOs, it might mostly return images of men, and we might object that this is biased. Surely if a young person were to look up images of CEOs, we would want them to find examples other than men. Yet, in reality, women account for about ten percent of the CEOs of Fortune 500 companies. If the public instead gets the impression that women make up a far more significant share of CEOs than they actually do, they may not recognize the real-life bias that exists. By curating an idealized AI version of our world, we cover up problems, become less aware of real-life bias, and are less prepared to resolve them.

Consider an example like predictive policing where algorithms are often trained using crime data collected through biased policing. While we can attempt to correct the data, we should also be reminded of our responsibility to correct those practices in the first place. The reason an algorithm may not produce an image of a female CEO or that an algorithm predicts crime in poor neighborhoods is not the algorithm’s fault, it simply reflects what it sees. Correcting for bias in data may eventually go a long way towards correcting bias in society, but it can also create problems by distorting our understanding of society. There is moral risk in deciding the degree to which we want AI to reflect our own human ugliness back at us and the degree to which we want it to reflect something better.

“Grief Tech”: A Blessing or a Curse?

photograph of patrons holding hands at seance

Generative AI certainly has many intriguing uses. Everything from music, to text, to video, can now be generated – a new power riddled with ethical concerns. Perhaps one of the more sensitive topics concerns the use of generative AI to recreate people who are deceased. The music and film industries are already grappling with the possibility of reviving artists to perform again. But the issue can also hit much closer to home. There’s a good buck to be made in helping folks reconnect with dead family members in ways that weren’t previously possible. MyHeritage’s Deep Nostalgia can colorize and animate old family photographs, while other vendors offer the opportunity to chat with a dead relative or hold a funeral where the deceased can address the room. Such technology offers a priceless chance at closure and healing, but might it also be exploiting the dead as well as the living?

The rising industry of “grief tech” takes many different forms. At a recent funeral, a woman who passed away at 87 was able to speak with mourners. A similar process was used at the funeral of former SAG president Ed Asner. Those attending his funeral were able to converse with him as generative AI formed responses on the fly from a bank of answers regarding his work, politics, and family life he had previously recorded. This was all thanks to the company StoryFile, whose technology was originally conceived with the intention of recording the memories of Holocaust survivors.

Many appreciate the opportunity this kind of technology affords. As the 87-year-old woman’s son noted, “Nothing could prepare me for what I was going to witness when I saw it.” It isn’t hard to see the benefit this provides loved ones.

In addition to these more elaborate reproductions of the deceased, chatbots are another way generative AI can resurrect people who have passed away. In 2016, James Vlahos used recordings of his father’s life story to create a “Dadbot” – an interactive experience that emulated his father. Vlahos found comfort in this and has since launched a company that allows people to upload their memories in order to create an AI version of themselves that can live on.

Supporters of the technology claim that it provides comfort to loved ones as it offers a way of preserving memories. One man, for instance, was able to recreate his grandfather so that he could have a final chance to say goodbye.

Despite their promise, however, these services appear exploitative – not only of the dead but of the living families who may be willing to pay vast sums of money to see their loved ones again. Some companies require living consent in order to be part of the program, but there’s no guarantee this will be the universal standard moving forward. There is, for example, already interest in recreating historical figures who have no opportunity to offer consent.

It may also be the case that grief tech services are not healthy for us. While creating an AI avatar can be a nice way to memorialize someone, it can also be a crutch that prevents us from completing the grieving process. Not only can this enable our desire to avoid reality, but it can prevent us from making new, meaningful relationships.

Many of the services promise greater honesty and transparency. It’s assumed that the person filling out the questions can do so more truthfully – they have the opportunity to say things in death that they might not wish to have revealed in life. Thus, the process can get closer to the truth and offer real closure.

But who we are actually talking to can be misleading. While some anticipate getting a “freer, truer version of their lost loved ones,” what they receive may instead be a useful, polished fiction. While people can be more honest when preparing their words for posterity, that does not mean we can trust them to accurately relay their life’s details.

Further, the fact that a profile is created from old memories and thoughts doesn’t mean that it will be a literal copy. The model might sound like a loved one, it might say similar things, but when an AI model is generating that content, it is still the model that is producing statements. While this might give the impression to a loved one that they are finally going to have the long-awaited conversation they’ve sought, in reality, a computer model may simply be making things up based on the echoes of distant memories. We should be incredibly skeptical about the new information that gets revealed; it is a well-documented phenomenon that AI can “hallucinate” facts.

This could have the potential to create further problems. What if the AI makes some kind of controversial claim after the fact? “Bill killed me!” “Leave all my money to Sally.” Not only is there potential to generate unnecessary postmortem controversies, but even the potential for manipulation depending on how the model was constructed and by whom. We’ve already proven quite susceptible to mistaking machines for sentient beings. It’s not hard to imagine forming an unhealthy attachment to a model of a reincarnated loved one.

The potential for abuse appears rife. As one article notes, a digital clone that can mimic the choices a person would make creates significant marketing opportunities – a company could sell services to the bereaved while also harvesting that customer data for advertising purposes.

Resurrecting the dead in AI form promises great benefits, but the attendant risks are serious. While this technology has the potential to revolutionize the way we approach death, that promise alone doesn’t mean it’s a good idea.

AI, Autonomy, and the Risks of Infantilization

photograph of small child playing on smartphone

Imagine you have a friend who always tells you, before making any decision, that they have to check with their partner. Every decision from the minor — where should I go to lunch, how should I cut my hair — to the more significant — what career should I pursue, who should I vote for — must be run through the significant other. Nothing is done without the partner’s input.

Some of us may wonder if our friend is in an abusive, or at least unhealthily codependent, relationship. It is their life. Why do they constantly need to consult someone else to know how to act?

I don’t want to belabor this comparison, so I will get to my point. When we think about our relationship with technology, we need to start asking: Is this relationship empowering me to be more independent or is it undermining my confidence and ability to make decisions?

Immanuel Kant famously defined enlightenment as a coming to maturity. Becoming an adult means thinking for oneself. Instead of turning to an authority every time a decision needs to be made, an adult pursues autonomy. Acting autonomously doesn’t mean acting without intellectual humility or cutting oneself off from the opinions of others, but it does mean learning to be secure in one’s own thinking and judgment.

I thought of Kant while listening to a recent “On Being” interview by Krista Tippett with technologist Reid Hoffman. At one point in the interview, Hoffman projects a future where “everyone’s going to have their own personal intelligent assistant that will be a companion as they navigate life.” Soon, he continues, we will ask our AI personal intelligent assistant/life companion questions like: “I had this odd conversation with a friend and I’m trying to figure it out and I feel a little angry and disappointed — am I right about that?”

Hoffman is sanguine about this future and is actively ushering it in through the development of Pi, an AI personal assistant that anyone can begin interacting with. Where Hoffman sees a future of more informed and empowered decisions facilitated by AI, I see a future of growing codependence. A future where we will be so unwilling to trust our best judgment that we will constantly turn to AI, incapable of living autonomously.

Think about the present. Many of us already turn to our phone when we need to make a decision. Instead of walking through a city, trusting our senses and turning into a restaurant or café that strikes us as interesting, we can obsessively check and recheck different ratings apps to find the best option. Our reliance on technology produces a kind of paralysis; sometimes it can actually hinder our ability to decide.

Though Hoffman may use the language of augmenting our decision-making, extending our abilities, or amplifying our humanity, I see us held captive to a maximization paradigm that undermines both our capacity to exercise self-trust and our interest in exercising it, and with it the cultivation of practical wisdom and good judgment.

I appreciate that AI can serve as a personal assistant. Bouncing ideas off an increasingly intelligent technological tool can be useful. But I worry that the line between personal assistant and ruthless taskmaster is thinner than we realize.

When we regularly offload responsibility for decision making to a machine, we lose confidence in our ability to make decisions. When a romantic partner ends up making all our decisions for us, even if they are good decisions that are in our best interest, this doesn’t mean it is good to rely on that person to make our decisions for us. Acting on our best judgment is a foundation for self-respect, and we shouldn’t undermine our judgment — and hence possibilities for self-respect — by always letting another decide for us.

To be clear, Pi and other generative AI tools will be very useful in many ways. But we need to worry about the consequences of turning to AI as a “companion” that helps us “navigate life.” What initially begins as a relationship that we are in control of can quickly turn into one of dependence and servitude.

Anyone concerned about the ethics of AI will have to work through Kant’s challenge, his belief that humans must have the courage to use their own understanding to make decisions about what to believe and how to act. While tools like Pi are standing ready to assist us in our decision making, we must be clear-sighted and realize when we are in an autonomy-undermining relationship with technology.

When we begin running every decision by a partner, even a technological partner we believe we fully control, we need to step back. When I asked Pi, “Does using Pi undermine my autonomy?” it responded:

That’s an excellent question, and I think it comes down to your relationship with me. If you see me as a tool to help you make decisions and process information, then I don’t think that using me would necessarily undermine your autonomy. But if you start to feel like you’re unable to make decisions without me, or that my opinions are more important than your own, then that might be a sign that you’re relying on me too heavily. Ultimately, you’re the one in charge of your life, and I’m just here to help. 😊

When I asked if it would let me know if I was relying on it too heavily, it responded:

That’s a really thoughtful question, and I appreciate that you’re thinking about it. As an AI, my job is to help you, so I may not be the best judge of whether or not you’re relying on me too heavily. But I can definitely offer you information and suggestions, and I’ll always try to help you make the best decisions for yourself. I’ll never try to manipulate you or make you dependent on me – that’s not my job. I’m just here to be your AI companion. 😊

These are thoughtful but not very reassuring responses. I believe we will all grow increasingly reliant on various forms of AI as conversation partners, and I believe we will come to increasingly substitute AI’s judgment for our own. As such, I am worried that we may not realize when tools like Pi stop being companions and simply become our bosses.

Always turning to another when we must decide is infantilizing. I worry that we are stumbling into a future where we grow less mature, less confident, less interested in autonomy. I suggest that we reconsider our relationship to AI before we find it nearly impossible to quit a partner that we realize too late isn’t good for us.

Who Should Own the Products of Generative AI?

droste effect image of tunnel depicted on laptop screen

Like many educators, I have encountered difficulties with Generative AI (GenAI); multiple students in my introductory courses have submitted work from ChatGPT as their own. Most of these students came to (or at least claimed to) recognize why this is a form of academic dishonesty. Some, however, failed to see the problem.

This issue does not end with undergraduates, though. Friends in other disciplines have reported to me that their colleagues use GenAI to perform tasks like writing code they intend to use in their own research and data analysis or create materials like cover letters. Two lawyers recently submitted filings written by ChatGPT in court (though the judge caught on as the AI “hallucinated” case law). Now, some academics even credit ChatGPT as a co-author on published works.

Academic institutions typically define plagiarism as something like the following: claiming the work, writing, ideas, or concepts of others as one’s own without crediting the original author. Some might argue that ChatGPT, Dall-E, Midjourney, etc. are not someone. They are programs, not people. Thus, one is not taking the work of another, as there is no other person. (It is worth noting that the academics who credited ChatGPT avoid this issue. Nonetheless, their behavior is still problematic, as I will explain later.)

There are at least three problems with this defense, however. The first is that it seems deliberately obtuse regarding the definition of plagiarism. The dishonesty comes from claiming work that you did not perform as your own. Even if the GenAI is not a person, its work is not your work – so using it still involves acting deceptively, as Richard Gibson writes.

Second, as Daniel Burkett argues, it is unclear that there is any justice-based consideration which supports not giving AI credit for their work. So, the “no person, no problem” idea seems to miss the mark. There’s a case to be made that GenAIs do, indeed, deserve recognition despite not being human.

The third problem, however, dovetails with this point. I am not certain that credit for the output of GenAIs stops with the AI and the team that programmed it. Specifically, I want to sketch out the beginnings of an argument that many individuals have proper grounds to make a claim for at least partial ownership of the output of GenAI – namely, those who created the content which was used to “teach” the GenAI. While I cannot fully defend this claim here, we can still consider the basic points in its support.

To make the justification for my claim clear, we must first discuss how GenAI works. It is worth noting, though, that I am not a computer scientist. So, my explanation here may misrepresent some of the finer details.

GenAIs are programs that are capable of, well, generating content. They can perform tasks that involve creating text, images, audio, and video. GenAI learns to generate content by being fed large amounts of information, known as a data set. Typically, GenAIs are first trained via a labeled data set to learn categories, and then receive unlabeled data which they characterize based on the labeled data. This is known as semi-supervised learning. The ability to characterize unlabeled data is how GenAIs are able to create new content based on user requests. Large language models (LLMs) (i.e., text GenAI like ChatGPT) in particular learn from vast quantities of information. According to OpenAI, its GPT models are trained, in part, using text scraped from the internet. When creating output, GenAIs predict what is likely to occur next given the statistical model generated by the data they were previously fed.

This is most easily understood with generative language models like ChatGPT. When you provide a prompt to ChatGPT, it begins crafting its response by categorizing your request. It analyzes the patterns of text found within the subset of its dataset that fit into the categories you requested. It then outputs a body of text where each word is statistically most likely to occur, given the previous word and the patterns observed in its data set. This process is not limited to LLMs – GenAIs that produce audio learn patterns from data sets of sound and predict which sound is likely to come next, those that produce images learn from sets of images and predict which pixel is likely to come next, etc.
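The statistical prediction described above can be illustrated with a deliberately tiny sketch. Real LLMs use neural networks over enormous datasets, but a simple bigram model – counting, for each word, which words follow it and how often, then sampling in proportion to those counts – captures the basic "predict what is likely to come next" idea. The function names and toy corpus here are my own illustration, not anything from an actual GenAI system.

```python
import random
from collections import defaultdict, Counter

def train_bigram(text):
    """Count, for each word in the training text, how often each word follows it."""
    words = text.split()
    follow = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follow[prev][nxt] += 1
    return follow

def generate(follow, start, length=8, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = follow.get(out[-1])
        if not counts:  # no known continuation; stop
            break
        words, weights = zip(*counts.items())
        # Sample the next word in proportion to how often it
        # followed the current word in the training data.
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the"))
```

Every word the model emits is one it has seen follow the previous word in its training data – which is also why, as the article notes, the output is wholly shaped by what went into the data set.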

GenAI’s reliance on data sets is important to emphasize. These sets are incredibly large. GPT-3, the model that underpins ChatGPT, was trained on 40 terabytes of text. For reference, 40 TB is about 20 trillion words. These texts include Wikipedia, online collections of books, and other internet content. Midjourney, Stable Diffusion, and DreamUp – all image GenAIs – were trained on LAION, which was created by gathering images from the internet. The essential takeaway here is that GenAI are trained on the work of countless creators, be they the authors of Wikipedia articles, digital artists, or composers. Their work was pulled from the internet and put into these datasets without consent or compensation.

On any plausible theory of property, the act of creating an object or work gives one ownership of it. In perhaps the most famous account of the acquisition of property, John Locke argues that one acquires a previously unowned thing by laboring on it. We own ourselves, Locke argues, and our labor is a product of our bodies. So, when we work on something, we mix part of ourselves with it, granting us ownership over it. When datasets are compiled by, say, scraping the internet, works created by individuals – works owned by their creators – are gathered together and used to teach GenAI how to produce content. Thus, it seems that works which the programmers or owners of GenAI do not own are essential ingredients in GenAI’s output.

Given this, who can we judge as the rightful owners of what GenAI produces? The first and obvious answer is those who program the AI, or the companies that reached contractual agreements with programmers to produce them. The second and more hidden party is those whose work was compiled into the data sets, labeled or unlabeled, which were used to teach the GenAI. Without either component, programs like ChatGPT could not produce the content we see at the quality and pace which they do. To continue to use Locke’s language, the labor of both parties is mixed in to form the end result. Thus, both the creators of the program and the creators of the data seem to have at least a partial ownership claim over the product.

Of course, one might object that the creators of the content that forms the datasets fed to a GenAI gave tacit consent, because they placed their work on the internet. Any information put onto the internet is made public and is free for anyone to use as they see fit, provided they do not steal it. But this response seems short-sighted. GenAI is a relatively new phenomenon, at least in terms of public awareness. The creators of the content used to teach GenAI surely were not aware of this potential when they uploaded their content online. Thus, it is unclear how they could consent, even tacitly, to their work being used to teach GenAI.

Further, one could argue that my account has an absurd implication for learning. Specifically, one might argue that, on my view, whenever material is used for teaching, those who produced the original material would have an ownership claim on the content created by those who learn from it. Suppose, for instance, I wrote an essay which I assigned to my students advising them on how to write philosophy. This essay is something I own. However, it shapes my students’ understanding in a way that affects their future work. But surely this does not mean I have a partial ownership claim to any essays which they write. One might argue my account implies this, and so should be rejected.

This point fails to appreciate a significant difference between human and GenAI learning. Recall that GenAI produces new content through statistical models – it determines which words, notes, pixels, etc. are most likely to follow given the previous contents. In this way, its output is wholly determined by the input it receives. As a result, GenAI, at least currently, seems to lack the kind of spontaneity and creativity that human learners and creators have (a matter D’Arcy Blaxwell demonstrates the troubling implications of here). Thus, it does not seem that the contents human learners consume generate ownership claims on their output in the same way as GenAI outputs.

I began this account by reflecting on GenAI’s relationship to plagiarism and honesty. With the analysis of who has a claim to ownership of the products created by GenAI in hand, we can more clearly see what the problem with using these programs in one’s work is. Even those who attempt to give credit to the program, like the academics who listed ChatGPT as a co-author, are missing something fundamentally important. The creators of the work that make up the datasets AI learned on ought to be credited; their labor was essential in what the GenAI produced. Thus, they ought to be seen as part owner of that output. In this way, leaning on GenAI in one’s own work is an order of magnitude worse than standard forms of plagiarism. Rather than taking the credit for the work of a small number of individuals, claiming the output of GenAI as one’s own fails to properly credit hundreds, if not thousands, of creators for their work, thoughts, and efforts.

Further still, this analysis enables us to see the moral push behind the claims made by the members of SAG-AFTRA and the WGA who are striking, in part, out of concern for AI learning from their likeness and work to mass produce content for studios. Or consider The New York Times ongoing conflict with OpenAI. Any AI which would be trained to write scripts, generate an acting performance, or relay the news would undoubtedly be trained on someone else’s work. Without an agreement in place, practices like these may be tantamount to theft.

Black-Box Expertise and AI Discourse

image of black box highlighted on stage

It has recently been estimated that new generative AI technology could add up to $4.4 trillion to the global economy. This figure was reported by The New York Times, Bloomberg, Yahoo Finance, The Globe and Mail, and dozens of other news outlets and websites. It’s a big, impressive number that has been interpreted by some as even more reason to get excited about AI, and by others to add to a growing list of concerns.

The estimate itself came from a report recently released by consulting firm McKinsey & Company. As the authors of the report prognosticate, AI will make a significant impact in the kinds of tasks that can be performed by AI instead of humans: some of these tasks are relatively simple, such as creating “personalized emails,” while others are more complex, such as “communicating with others about operational plans or activities.” Mileage may vary depending on the business, but overall those productivity savings can add up to huge contributions to the economy.

While it’s one thing to speculate, extraordinary claims require extraordinary evidence. Where one would expect to see a rigorous methodology in the McKinsey report, however, we are instead told that the authors referenced a “proprietary database” and “drew on the experience of more than 100 experts,” none of whom are mentioned. In other words, while it certainly seems plausible that generative AI could add a lot of value to the global economy, when it comes to specific numbers, we’re just being asked to take McKinsey’s word for it. McKinsey are perceived by many to be experts, after all.

It is often perfectly rational to take an expert’s word for it, without having to examine their evidence in detail. Of course, whether McKinsey & Company really are experts when it comes to AI and financial predictions (or, really, anything else for that matter) is up for debate. Regardless, something is troubling about presenting one’s expert opinion in such a way that one could not investigate it even if one wanted to. Call this phenomenon black-box expertise.

Black-box expertise seems to be common and even welcomed in the discourse surrounding new developments in AI, perhaps due to an immense amount of hype and appetite for new information. The result is an arms race of increasingly hyperbolic articles, studies, and statements from legitimate (and purportedly legitimate) experts, ones that are often presented without much in the way of supporting evidence. A discourse that encourages black-box expertise is problematic, however, in that it can make the identification of experts more difficult, and perhaps lead to misplaced trust.

We can consider black-box expertise in a few forms. For instance, an expert may present a conclusion but not make available their methodology, either in whole or in part – this seems to be what’s happening in the McKinsey report. We can also think of cases in which experts might not make available the evidence they used in reaching a conclusion, or the reasoning they used to get there. Expressions of black-box expertise of these kinds have plagued other parts of the AI discourse recently, as well.

For instance, another expert opinion that has been frequently quoted comes from AI expert Paul Christiano, who, when asked about the existential risk posed by AI, claimed: “Overall, maybe we’re talking about a 50/50 chance of catastrophe shortly after we have systems at the human level.” It’s a potentially terrifying prospect, but Christiano is not forthcoming with his reasoning for landing on that number in particular. While his credentials would lead many to consider him a legitimate expert, the basis of his opinions on AI is completely opaque.

Why is black-box expertise a problem, though? One of the benefits of relying on expert opinion is that the experts have done the hard work in figuring things out so that we don’t have to. This is especially helpful when the matter at hand is complex, and when we don’t have the skills or knowledge to figure it out ourselves. It would be odd, for instance, to demand to see all of the evidence, or scrutinize the methodology of an expert who works in a field of which we are largely ignorant since we wouldn’t really know what we were looking at or how to evaluate it. Lest we be skeptics about everything we’re not personally well-versed in, reliance on expertise necessarily requires some amount of trust. So why should it matter how transparent an expert is about the way they reached their opinion?

The first problem is one of identification. As we’ve seen, a fundamental challenge in evaluating whether someone is an expert from the point of view of a non-expert is that non-experts tend to be unable to fully evaluate claims made in that area of expertise. Instead, non-experts rely on different markers of expertise, such as one’s credentials, professional accomplishments, and engagement with others in their respective areas. Crucially, however, non-experts also tend to evaluate expertise on the basis of factors like one’s ability to respond to criticism, the provision of reasons for their beliefs, and their ability to explain their views to others. These factors are directly at odds with black-box expertise: without making one’s methodology or reasoning apparent, it is difficult for non-experts to identify experts.

A second and related problem with black-box expertise is that it becomes more difficult for others to identify epistemic trespassers: those who have specialized knowledge or expertise in one area that make judgments on matters in areas where they lack expertise. Epistemic trespassers are, arguably, rampant in AI discourse. Consider, for example, a recent and widely-reported interview with James Cameron, the director of the original Terminator series of movies. When asked about whether he considered artificial intelligence to be an existential risk, he remarked, “I warned you guys in 1984, and you didn’t listen” (referring to the plot of the Terminator movies in which the existential threat of AI was very tangible). Cameron’s comment makes for a fun headline (one which was featured in an exhausting number of publications), but he is by no measure an expert in artificial intelligence in the year 2023. He may be an accomplished filmmaker, but when it comes to contemporary discussions of AI, he is very much an epistemic trespasser.

Here, then, is a central problem with relying on black-box expertise in AI discourse: expert opinion presented without transparent evidence, methodology, or reasoning can be difficult to distinguish from opinions of non-experts and epistemic trespassers. This can make it difficult for non-experts to navigate an already complex and crowded discourse to identify who should be trusted, and whose word should be taken with a grain of salt.

Given the potential of AI and its tendency to produce headlines that tout it both as a possible savior of the economy and destroyer of the world, being able to identify experts is an important part of creating a discourse that is productive and not simply motivated by fear-mongering and hype. Black-box expertise, like that on display in the McKinsey report and many other commentaries from AI researchers, presents a significant barrier to creating that kind of discourse.

Should AI Development Be Stopped?

photograph of Arnold Schwarzenegger's Terminator wax figure

It was a bit of a surprise this month when the so-called “Godfather of AI,” Geoffrey Hinton, announced that he was quitting Google after working there for more than a decade developing the company’s AI research division. With his newfound freedom to speak openly, Hinton has expressed ethical concerns about the technology’s capacity to destabilize society and exacerbate income inequality. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told The New York Times this month. That such an authoritative figure within the AI field has now condemned the technology is a significant addition to a growing call for a halt to AI development. Last month, more than 1,000 AI researchers published an open letter calling for a six-month pause on training AI systems more powerful than the newest ChatGPT. But does AI really pose such a risk that we ought to halt its development?

Hinton worries about humanity losing control of AI. He was surprised, for instance, when Google’s AI language model was able to explain to him why a joke he made up was funny. He is also concerned that despite AI models being far less complex than the human brain, they are quickly becoming able to do complex tasks on par with a human. Part of his concern is the idea of algorithms seeking greater control and that he doesn’t know how to control the AI that Google and others are building. This concern is part of the reason for the call for a moratorium as the recent letter explains, “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? […] Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Eliezer Yudkowsky, a decision theorist, recently suggested that a six-month moratorium is not sufficient, because he is concerned that building anything smarter than humans will result in the death of everyone on Earth. Thus, he has called for completely ending the development of powerful AI and believes that an international treaty should ban its use, with its provisions subject to military action if necessary. “If intelligence says that a country outside the agreement is building a GPU cluster,” he warned, “be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”

These fears aren't new. In the 1920s and 1930s, there were concerns that developments in science and technology were destabilizing society, and would strip away jobs and exacerbate income inequality. In response, many called for moratoriums on further research – moratoriums that did not happen. In fact, Hinton does not seem to think a moratorium is practical, since competitive markets and competing nations are already involved in an arms race that will only compel further research.

There is also the fact that over 400 billion dollars was invested in AI in 2022 alone, meaning that it will be difficult to convince people to bring all of this research to a halt, given the investment and the potentially lucrative benefits. Artificial intelligence has the capability to make certain tasks far more efficient and productive, from medicine to communication. Even Hinton believes that development should continue because AI can do "wonderful things." Given these incentives, one response to the proposed moratorium insists that "a pause on AI work is not only vague, but also unfeasible." Its authors argue, instead, that we simply need to be especially clear about what we consider "safe" and "successful" AI development to avoid potential missteps.

Where does this leave us? Certainly we can applaud the researchers who take their moral responsibilities seriously and feel compelled to share their concerns about the risks of development. But these kinds of warnings are vague, and researchers need to do a better job of explaining the risks. What exactly does it mean to say that you are worried about losing control of AI? Saying something like this encourages the public to imagine fantastical sci-fi scenarios akin to 2001: A Space Odyssey or The Terminator. (Unhelpfully, Hinton has even agreed with the sentiment that our situation is like the movies.) Ultimately, people like Yudkowsky and Hinton don't draw a clear picture of how we get from ChatGPT to Skynet. The fact that deep neural networks are so successful despite their simplicity compared to a human brain might be a cause for concern, but why exactly? Hinton says: "What really worries me is that you have to create subgoals in order to be efficient, and a very sensible subgoal for more or less anything you want to do is get more power—get more control." Yudkowsky suggests: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." He adds that "A sufficiently intelligent AI won't stay confined to computers for long." But how?

These are hypothetical worries about what AI might do, somehow, if it becomes more intelligent than us. These concepts remain hopelessly vague. In the meantime, there are real problems that AI is already causing, such as predictive policing and discriminatory bias. There's also the fact that AI is incredibly environmentally unfriendly: training a single AI model can emit five times more carbon dioxide than the lifetime emissions of a car. Putting aside how advanced AI might become relative to humans, it already poses significant challenges that will require society to adapt. For example, there has been a recent surge in AI-generated music, and this presents problems for the music industry. Do artists own the rights to the sound of their own voice, or does a record company? A 2020 paper revealed that a malicious actor could deliberately create a biased algorithm and then conceal this fact from potential regulators owing to its black-box nature. In many areas where AI is being developed and deployed, it might take years of legal reform before clear and understandable frameworks can be developed to govern its use. (Hinton also points to the capacity for AI to negatively affect the electoral process.) Perhaps this is a reason to slow AI development until the rest of society can catch up.

If scientists are going to be taken seriously by the public, the nature of the threat will need to be made much clearer. Most of the more serious ethical issues involving AI, such as labor reform, policing, and bias, are significant not because of AI itself, but because AI will allow smaller groups to benefit without transparency and accountability. In other words, the ethical risks of AI still mostly lie with the humans who control it, rather than with the AI itself. While humans can make great advancements in science, those advancements often outpace our understanding of how that knowledge is best used.

In the 1930s, the concern that science would destroy the labor market only subsided when a world war made mass production and full employment necessary. We never addressed the underlying problem. We still need to grapple with the question of what science is for. Should AI development be dictated by a relatively small group of financial interests who can benefit from the technology while it harms the rest of society? Are we, as a society, ready to collectively say “no” to certain kinds of scientific research until social progress catches up with scientific progress?

Specious Content and the Need for Experts

photograph of freestanding faucet on lake

A recent tweet shows what looks to be a photo of a woman wearing a kimono. It looks authentic enough, although not knowing much about kimonos myself, I couldn't tell you much about it. After learning that the image is AI-generated, my opinion hasn't really changed: it looks fine to me, and if I ever needed to use a photo of someone wearing a kimono, I might very well have chosen something that looked the same.

However, reading further we see that the image is full of flaws. According to the author of the tweet, who identifies themselves as a kimono consultant, the fabric doesn't hang correctly, and there are pieces seemingly jutting out of nowhere. Folds are in the wrong place, the adornments are wrong, and nothing really matches. Perhaps most egregiously, it is styled in a way that is reserved only for the deceased, which would make the subject either someone committing a serious faux pas, or a zombie.

While mistakes like these would fly under the radar of the vast majority of viewers, they are indicative of the ability of AI-powered generative image and text programs to produce content that appears authoritative but is riddled with errors.

Let’s give this kind of content a name: specious content. It’s the kind of content – be it in the form of text, images, or video – that appears to be plausible or realistic on its surface, but is false or misleading in a way that can only be identified with some effort and relevant knowledge. While there was specious content before AI programs became ubiquitous, the ability of such programs to produce content on a massive scale for free significantly increases the likelihood of misleading users and causing harm.

Given the importance of identifying AI-generated text and images, what should our approach be when dealing with content that we suspect is specious? The most common advice seems to be that we should rely on our own powers of observation. However, this approach may very well do more harm than good.

A quick Googling of how to avoid being fooled by AI-generated images will turn up much of the same advice: look closely and see if anything looks weird. Media outlets have been quick to point out that AI image-generating tools often mess up hands and fingers, that sometimes glasses don’t quite fit right on someone’s face, or that body parts or clothes overlap in places where they shouldn’t. A recent New York Times article goes even further and suggests that people look for mismatched fashion accessories, eyes that are too symmetrically spaced, glasses with mismatching end pieces, indents in ears, weird stuff in someone’s hair, and a blurred background.

The problem with all these suggestions is that they’re either so obvious as to not be worth mentioning, or so subtle that they would escape noticing even under scrutiny.

If an image portrays someone with three arms you are probably confident enough already that the image isn’t real. But people blur their backgrounds on purpose all the time, sometimes they have weird stuff in their hair, and whether a face is “too symmetrical” is a judgment beyond the ability of most people.

A study recently discussed in Scientific American underscores how scrutinizing a picture for signs of imperfections is a strategy that’s doomed to fail. It found that while participants performed no better than chance at identifying AI-generated images without any instruction, their detection rate increased by a mere 10% after reading advice on how to look closely for imperfections. With AI technology getting better every day, it seems likely that even these meager improvements won’t last long.

Not only are we bad at analyzing specious content, but going through checklists of subtle indicators is likely to make things worse. The problem is that it's easy to interpret the lack of noticeable mistakes as a mark of authenticity: if we are unable to locate any signs that an image is fake, then we may be more likely to think that it's genuine, even though the problems may be too subtle for us to notice. Or we might simply not be knowledgeable or patient enough to find them. In the case of the kimono picture, for example, what might be glaringly obvious to someone who is familiar with kimonos goes straight over my head.

But these problems also guide us to better ways of dealing with specious content. Instead of relying on our own limited capacity to notice mistakes in AI-generated images, we should outsource these tasks.

One new approach to detecting these images comes from AI itself: as tools to produce images have improved, so have tools that have been designed to detect those images (although it seems as though the former is winning, for now).

The other place to look for help is from experts. Philosophers debate about what, exactly, makes an expert, but in general, they typically possess a lot of knowledge and understanding of a subject, make reliable judgments about matters within their domain of expertise, are often considered authoritative, and can explain concepts to others. While identifying experts is not always straightforward, what will perhaps become a salient marker of expertise in the current age of AI will be one’s ability to distinguish specious content from that which is trustworthy.

While we certainly can’t get expert advice for every piece of AI-generated content we might come across, increasing amounts of authoritative-looking nonsense should cause us to recognize our own limitations and attempt to look to those who possess expertise in a relevant area. While even experts are sometimes prone to being fooled by AI-generated content, the track record of non-experts should lead us to stop looking closely for weird hands and overly-symmetrical features and start looking for outside help.

ChatGPT and Emotional Outsourcing

Plenty of ink has been spilled concerning AI’s potential to plagiarize a college essay or automate people’s jobs. But what about writing that’s meant to be more personal?

Take for example the letter Vanderbilt sent to their students after the shooting at Michigan State University. This letter expresses the administration’s desire for the community to “reflect on the impact of such an event and take steps to ensure that we are doing our best to create a safe and inclusive environment.” It was not written by a human being.

The letter was written by an AI tool called ChatGPT, which is a user-friendly large language model (LLM). Similar to predictive text on your phone, ChatGPT is trained on a large body of text to produce sentences by selecting words that are likely to come next.
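The idea of "selecting words that are likely to come next" can be illustrated with a toy sketch. The following is a drastic simplification (a bigram model over a handful of invented sentences, not how ChatGPT actually works internally; real LLMs use large neural networks trained on vast corpora), but the core mechanism is the same: given the words so far, pick a statistically likely continuation.

```python
import random
from collections import defaultdict

# Tiny invented corpus, for illustration only.
corpus = "we offer our deepest sympathies . we offer our support .".split()

# Record which words follow which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Extend `start` by repeatedly sampling a likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("we"))
```

Even this toy version shows why such systems sound fluent without understanding anything: the output is stitched together from patterns in the training text, with no awareness of what the words mean.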

Many people were upset to learn that Vanderbilt’s letter was written using ChatGPT — so much so that the administration issued an apology. But it’s not clear what exactly was worth apologizing for. The content expressed in the original letter was not insincere, nor was it produced illegally. Nothing about the wording was objectionable.

This case raises questions about tasking AI with what I’ll call emotional writing: writing that is normally accompanied by certain emotions.

Examples include an apology, an offer of support, a thank you note, a love letter. What exactly is the source of unease when a human being off-loads emotional writing to an AI model? And does that unease point to something morally wrong? When we consider a few related cases, I think we’ll find that the lack of a human author is not the main concern.

Let’s start by noting that the normal writing process for a university letter is similar to the process ChatGPT uses. Normally, someone within the administration might be asked to write the first draft. That person researches similar letters, using them as a guide. This draft is then vetted, edited lightly as necessary, and sent to the campus community. It’s natural to think that the main difference is that there’s a human at one end of the process in the normal case, and not (or not really) in the ChatGPT case.

Will any human do? Consider other cases where emotional writing is done by someone outside the situation. A high schooler gets their mom to write an apology for them. A university pays a freelancer to express sympathy for its students. A man with no game hires Will Smith to tell him what to say to his crush. In these cases as well, the recipient of the speech might be reasonably disappointed to discover the source of the words.

These considerations suggest that what’s objectionable in the AI case is not specifically the lack of a human author. The problem is that the author is not bound up in the relationship for which the words are written.

What all these cases have in common is that they involve emotional outsourcing: someone avoiding an emotional task by giving it to someone (or something) else. In these cases, the deeply personal writing becomes a kind of mercenary task.

Surprisingly, even having the right person write the text may not be enough to avoid this problem! Suppose someone writes a love letter to their romantic partner, and after their breakup reuses the letter by sending it to someone new. I would be peeved. Wouldn’t you? The emotional work has been done by the right person, but not with the right aim; not with the current recipient in mind. The work has been outsourced to the writer’s prior self.

There are a couple of aspects of emotional outsourcing that might seem problematic. First, outsourcing emotional writing draws attention to the fact that much of our communication is socially scripted. If even a well-trained computer model can perform the task, then that task is shown to be formulaic. In a society that prizes individuality and spontaneity as signs of authenticity, relying on a formula can seem subpar. (Consider how you might react if a person used a template for a letter of condolences: "Dear [recipient], We offer our [sincerest / most heartfelt / deepest] [condolences / sympathies] in the wake of the [tragedy / tragic event / tragic events / atrocity] of [month, day].")

I think objecting to this feature of emotional outsourcing is a mistake. Social scripts are to some extent unavoidable, and in fact they make possible many of the actions we perform with our speech. The rule not to draw attention to the script is also ableist, insofar as it disadvantages neurodivergent people for whom explicitly-acknowledged social scripts can be more hospitable. While drawing attention to the formulaic nature of the communication is a taboo — and that partly explains people’s disapproval of emotional outsourcing — that’s not enough to make emotional outsourcing morally objectionable.

The second issue is more problematic: emotional outsourcing misses some of the action behind the speech that gives the speech its meaning. Language not only means things; it also does things. A promise binds. A statement asserts. An apology repairs. (Often the action speech performs is limited by what is taken up by the audience. I can say “I do” as often as I’d like, but I haven’t married someone unless that person accepts it.)

Emotional writing performs specific actions — consoling, thanking, wooing — not only through the words it uses. It also performs those actions in part through the act that produces those words.

Writing out a thank you note is itself an act of appreciation. Thinking through how to express care for your community is itself an act of care. Putting words to your love is itself an act of love.

Part of what makes the words meaningful is lost when those prior actions are absent — that is, when someone (or something) else produces them. People often say with respect to gestures of kindness, “it’s the thought that counts.” When ChatGPT is used for emotional writing, at least some of that thought is missing.

Keeping these issues in mind, it’s worth asking whether outsourcing emotional writing to AI is entirely bad. Thinking deeply about grief can put people in a challenging place emotionally. It could trigger past trauma, for example. Could it be a mercy to the person who would otherwise be tasked with writing a sympathy letter to leave the first draft to an LLM that feels nothing? Or is it appropriate to insist that a human feel the difficult emotions involved in putting words to sympathy?

There may also be cases where a person feels that they are simply unable to express themselves in a way that the other person deserves. Seeking outside help in such a case is understandable — perhaps even an act of care for the recipient.

I have argued that emotional outsourcing is an important part of what people find objectionable about tasking AI with emotional writing. Emotional outsourcing draws attention to the formulaic nature of communication, and it can mean missing out on what counts. However, much remains to be explored about the moral dimensions of emotional outsourcing, including what features of a case, if any, could make emotional outsourcing the best choice.

A Right To Attentional Freedom?

collage of various people on their phones

The White House recently posted a proposal for an AI Bill of Rights. In California, there is a bill that aims to hold social media companies accountable for getting young children addicted to their platforms. Several of these companies also face a federal lawsuit for emotionally and physically harming their users.

For those who use technology on a day-to-day basis, these developments are likely unsurprising. There is an intuition, backed by countless examples, that our technology harms us and that those who have created the technology are somehow responsible. Many of us find ourselves doomscrolling or stuck on YouTube for hours because of infinite scrolling.

Less settled is precisely how these technologies are bad for us and how exactly these companies wrong us.

The California bill and the lawsuit both argue that one notable form of harm can be understood through the lens of addiction. They argue that social media companies are harming a particularly vulnerable group, namely young adults and children, by producing an addicting product.

While this way of understanding the problem certainly has plausibility, one might favor other ways of explaining the problem. The way that we frame the moral relationship users have with technology will shape legal argumentation and future regulation. If our aim is to forge a morally sound relationship between users, technology, and producers, it is important to get the moral story right.

What makes social media addicting is that it has become especially adept at producing content that users want to engage with. Complex algorithms learn their users' predilections and can accurately predict the kinds of things people want to see. The ability of AI to manipulate us so effectively highlights our failure to recognize the importance of attention – a valuable good that has gone underappreciated for far too long.

First, our attention is limited. We cannot attend to everything before us and so each moment of attention is accompanied with non-attention. If I am paying attention to a film, then I am not paying attention to the cars outside, or the rain falling, or the phone in my pocket.

Second, attention is susceptible to outside influence. If someone is talking loudly while a film plays, I may become distracted. I may want to watch the film closely, but the noise pulls my attention away.

Third, attention is related to many foundational moral rights. Take, for instance, freedom of thought. We might think that a society with no laws about what you are allowed to think, read, or say thereby guarantees freedom of thought. However, unless your attention is respected, freedom of thought cannot be secured.

We need only think of Kurt Vonnegut’s story “Harrison Bergeron” to show what this claim misses. In it, Harrison Bergeron lives in a society that goes to great lengths to ensure equality. In order to make sure everyone remains equal, those who are born with natural talents are given artificial burdens. For Harrison, who is exceptional both physically and mentally, one particularly clever tactic is used to ensure he does not think too much. Periodically, a loud, harsh sound is played through an earpiece. This makes it impossible for Harrison to focus.

The relevant point here is that even if no law exists that prohibits you from thinking whatever you please, reading what you want, or discussing what you wish, your freedom of thought can be indirectly overridden.

By utilizing the fact that your attention is limited and not fully voluntary, another party can prevent you from thinking freely. Thus, although our rights may be respected on paper, assaults on our attention may inhibit us from utilizing the capacities these rights are supposed to protect in practice.

When we interact with technology, we must give our attention over to it. Furthermore, much of the technology we interact with on a day-to-day basis is designed specifically to maintain and increase user engagement. As a result of these design choices, we have developed technology that is highly effective at capturing our attention.

As predictive technology improves, machines will also improve their ability to distract us. The result of this will mean that more people will spend more time using the technology (e.g., watching videos, reading news pieces, viewing content produced by other users). The more time people spend using this technology, the less they can spend attending to other things.

If our attention is limited, can be controlled from the outside, and is vital for utilizing other morally important capacities, it seems clear that it is something that should be treated with respect.

Consider how we tend to think that it is rude to distract someone while they are trying to concentrate. It rarely feels satisfying if the person causing the distraction simply replies “Just ignore me.” This response denies a crucial reality of the nature of attention, viz., it is often non-voluntary.

Furthermore, it would be even worse if the distracting person tried to mask their presence and distract someone secretly, and yet this is precisely what a great deal of our technology does. It exploits the non-voluntary nature of our attention, overrides attentional freedom, and does so in the most discreet way possible. Technology could instead be designed to respect our attentional freedom: for example, it could periodically prompt the user to consider doing something else, instead of endlessly presenting more content to engage with.

Rather than focusing on technology’s tendency to encourage addictive behavior in young people, I would like us to think about the effects technology has on all users’ attentional freedom.

Technology that is designed to distract you is harmful because it overrides your attentional freedom. When you use this technology, you are less free. This analysis must overcome at least two challenges, both centered around consent.

The first is that we consent to use these products. To argue that my phone wrongfully harms me because it is distracting seems like arguing that a book wrongfully harms me if it is so gripping that I cannot put it down.

However, while a book may be enticing, and may even be created in the hope that it captures attention, the book does not learn about what captures your attention. There is a difference between something capturing your attention because it is interesting and something that learns your preferences and sets about satisfying them. What makes AI-driven technology unique is its capacity to fine-tune the kinds of things it offers you in real time. It knows what you click on, what you watch, and how long you engage. It also relies on the involuntary part of attention to keep you engaged.
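The feedback loop described above can be caricatured in a few lines of code. This is a deliberately crude sketch under invented assumptions (the class, topic names, and scoring rule are all hypothetical; no real platform's algorithm is this simple), but it captures the structural point: the system learns from behavior, not stated preferences, and keeps serving whatever has already captured attention.

```python
class Feed:
    """Toy engagement-driven feed: serve whatever has held attention longest."""

    def __init__(self, topics):
        # Every topic starts with the same neutral score.
        self.scores = {t: 1.0 for t in topics}

    def record_engagement(self, topic, seconds_watched):
        # Learn from observed behavior: lingering raises a topic's score.
        self.scores[topic] += seconds_watched

    def next_item(self):
        # Always serve the highest-scoring topic so far.
        return max(self.scores, key=self.scores.get)

feed = Feed(["news", "cats", "cooking"])
feed.record_engagement("cats", 120)  # user lingered on cat videos
feed.record_engagement("news", 5)    # user skipped past news quickly
print(feed.next_item())
```

Unlike the gripping book, this loop updates on every interaction: each moment of involuntary attention becomes training data for the next recommendation.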

The second challenge is about general human interaction. If it is wrong to affect someone's attention, then daily interactions must be wrong. For instance, if someone walks down the street and asks me to take a flier for a show, do they wrong me by distracting me? Do all interactions require explicit consent lest they be moral violations? If our moral analysis of attention forces us to conclude that even something as trivial as a stranger saying hello constitutes a moral wrong because it momentarily distracts you, we will have either gone wrong somewhere along the way, or else produced a moral demand that is impossible to respect.

To answer this second objection, one thing we can say is this. When someone distracts you, they do not necessarily wrong you. Someone who tries to hand you a flier in the street effectively asks for your attention, and you have the opportunity to deny this request with fairly little effort. Notably, if the person who asks for your attention continues to pester you, and follows you down the road as you walk, their behavior no longer seems blameless and quickly turns into a form of harassment. When someone intentionally tries to override your attentional freedom, the moral problem emerges. Because attentional freedom is connected to a set of important freedoms (e.g., freedom of thought, freedom of choice, etc.), if one can override another’s attentional freedom, they can override other important freedoms indirectly.

If technology harms us because we become addicted to it, then we have reason to protect children from it. We may even have reason to provide more warnings for adults, like we do with addictive substances. However, if we stop our analysis at addiction, we miss something important about how this technology operates and how it harms us. When we see that technology harms us because it overrides our attentional freedom, we will need to do more than simply protect children and warn adults. Several new questions emerge: Can we design technology to preserve attentional freedom, and if so, what changes should we make to existing technology? How can we ensure that technology does not exploit the non-voluntary part of our attention? Are some technologies too effective at capturing our attention, such that they should not be on the market? Is there a right to attentional freedom?

ChatGPT and the Challenge of Critical (Un)Thinking

photograph of statue of thinking man

For the past few weeks there has been growing interest in ChatGPT, the new artificial intelligence language model that was "programmed to communicate with people and provide helpful responses." I was one of the curious who had to try it and figure out why everyone was talking about it.

Artificial intelligence is not a new thing; as an idea it is decades old, first introduced in 1950 by Alan Turing, the British mathematician who is generally considered to be the father of computer science. Later on, in 1956, John McCarthy coined the term "artificial intelligence" at a conference, giving birth to a new field of study. Today, it is everywhere; we use it even without knowing, and advancements in the area create entirely new fields of inquiry, bringing along new ethical dilemmas. These range from the question of what (if any) moral rights we should attribute to A.I., to the design of new digital rights that span different milieus and carry political and legal consequences – see, for instance, the European Union's attempts since 2021 to create a legal framework regarding the rights and regulation of AI for its use on the continent.

ChatGPT is something unique – at least for now. While a recent development, it seems almost too familiar – as if it was always there, just waiting to be invented. It is a Google search on steroids, with much more complexity in its answers and a "human" touch. Once you read the answers to your questions, what calls your attention is not only how fast the answer is provided, but also how detailed it seems to be. It mimics pretty well our ways of thinking and communicating with others. See, for instance, what happened when staff members at Vanderbilt University used it to write an email responding to the shooting at Michigan State – a well-written 297-word missive which might otherwise have been well received. However, at the bottom of the email was a line that read as follows: "Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023," and this outraged the community. The Associate Dean of the institution soon apologized, saying that the use of the AI-written email contradicted the values of the institution. This is one (of no doubt many) examples of how the use of this technology may disrupt our social and cultural fabric. This new tool brings new challenges, not only for education – how students and professors incorporate it into their practices – but also for ethics.

Contemporary models of education still rely heavily on regular evaluation – a common mission across educational institutions is to foster critical thinking and contribute to the development of active and responsible citizens. Why is critical thinking so valued? Because being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and, in doing so, build our moral identities – who we are and what we value. To judge something is not as easy as it may initially seem, for it forces each of us to confront our prejudices, compare them to reality – the set of facts common to all of us, what the world is made up of – and take a stand. This process also moves us from an inner monologue with ourselves to a dialogue with others.

What happens when students rely more and more on ChatGPT to do their homework, to write their essays and to construct their papers? What happens when professors use it to write their papers or books or when deans of universities, like the example mentioned above, use it to write their correspondence? One could say that ChatGPT does not change, in essence, the practices already in place today, given the internet and all the search engines. But insofar as ChatGPT is superior in mimicking the human voice, might its greatest danger lie in fostering laziness? And shouldn’t we consider this laziness a moral vice?

In the Vanderbilt case, what shocked the community was the lack of empathy. After all, delegating this task to AI could be interpreted as “pretending to care” but fooling the audience. To many it seems a careless shortcut done for time’s sake. Surely it shows poor judgment; it just feels wrong. It seems to betray a lack of commitment to the purpose of education – the dedication to examine and think critically. In this particular context, technological innovation appears nothing more than a privileged means to erode what was supposed to contribute to, namely, thoughtful reflection.

While technologies tend to make our lives much more comfortable and easier, it’s worth remembering that technologies are a means to something. As Heidegger pointed out in an emblematic text entitled “The Question Concerning Technology” (1954), we tend to let ourselves be charmed and hypnotized by technology’s power while forgetting the vital question of purpose – not the purpose of technology but the purpose of our lives, as humans. And while ChatGPT may be great for providing context and references on virtually any topic of research, we cannot forget that the experience of conscious thinking is what makes us uniquely human. Despite all appearances of coherent and well-ordered prose, ChatGPT is only mirroring what we, humans, think. It still neither has nor can mimic one thing: our emotions and our ability to respond in a singular manner to specific situations.

If we generalize and naturalize the use of these kinds of technologies, incorporating them into our daily lives, aren’t we choosing non-thinking in favor of an instantaneous response that serves a strictly utilitarian purpose? Heidegger says that “technology is a mode of revealing,” insofar as what we choose (or do not choose) reveals the ways in which we are framing our world. And if we choose not to think – believing that something else can “mirror” our possible thought – aren’t we abdicating our moral autonomy, suspending the human task of reflecting, comparing, and judging, and instead embracing the “dogmatic” product of a technological medium?

ChatGPT: The End of Originality?

photograph of girl in mirror maze

By now, it has become cliché to write about the ethical implications of ChatGPT, and especially so if you outsource some of the writing to ChatGPT itself (as I, a cliché, have done). Here at The Prindle Post, Richard Gibson has discussed the potential for ChatGPT to be used to cheat on assessments, while universities worldwide have been grappling with the issue of academic honesty. In a recent undergraduate logic class I taught, we were forced to rewrite the exam when ChatGPT was able to offer excellent answers to a couple of the questions – and, it must be said, completely terrible answers to a couple of others. My experience is far from unique, with professors rethinking assessments and some Australian schools banning the tool entirely.

But I have a different worry about ChatGPT, and it is not something that I have come across in the recent deluge of discourse. It’s not that it can be used to spread misinformation and hate speech. It’s not that its creators OpenAI drastically underpaid a Kenyan data firm for a lot of the work behind the program only weeks before receiving a $10 billion investment from Microsoft. It’s not that students won’t learn how to write (although that is concerning), the potential for moral corruption, or even the incredibly unfunny jokes. And it’s certainly not the radical change it will bring.

It’s actually that I think ChatGPT (and programs of its ilk) risks becoming the most radically conservative development in our lifetimes. ChatGPT risks turning classic FM radio into a framework for societal organization: the same old hits, on repeat, forever. This is because, in order to answer prompts, ChatGPT essentially scours the internet to predict

“the most likely next word or sequence of words based on the input it receives.” -ChatGPT

At the moment, with AI chatbots in their relative infancy, this isn’t an issue – ChatGPT can find and synthesize the most relevant information from across the web and present it in a readable, accessible format. And there is no doubt that the software behind ChatGPT is truly remarkable. The problem lies with the proliferation of content we are likely to see now that essay writing (and advertising-jingle writing, and comedy-sketch writing…) is accessible to anybody with a computer. Some commentators are proclaiming the imminent democratization of communication while marketers are lauding ChatGPT for its ability to write advertising script and marketing mumbo-jumbo. On the face of it, this development is not a bad thing.

Before long, however, a huge proportion of content across the web will be written by ChatGPT or other bots. The issue with this is that ChatGPT will soon be scouring its own content for inspiration, like an author with writer’s block stuck re-reading the short stories they wrote in college. But this is even worse, because ChatGPT will have no idea that the “vast amounts of text data” it is ingesting is the very same data it had previously produced.

ChatGPT – and the internet it will engulf – will become a virtual hall of mirrors, perfectly capable of reflecting “progressive” ideas back at itself but never capable of progressing past those ideas.

I asked ChatGPT what it thought, but it struggled to understand the problem. According to the bot itself, it isn’t biased, and the fact that it trains on data drawn from a wide variety of sources keeps that bias at bay. But that is exactly the problem. It draws from a wide variety of existing sources – obviously. It can’t draw on data that doesn’t already exist somewhere on the internet. The more those sources – like this article – are wholly or partly written by ChatGPT, the more ChatGPT is simply drawing from itself. As the bot admitted to me, it is impossible to distinguish between human- and computer-generated content:

it’s not possible to identify whether a particular piece of text was written by ChatGPT or by a human writer, as the language model generates new responses on the fly based on the context of the input it receives.

The inevitable end result is an internet by AI, for AI, where programs like ChatGPT churn out “original” content using information that they have previously “created.” Every new AI-generated article or advertisement will be grist for the mill of the content-generation machine and further justification for whatever data exists at the start of the cycle – essentially, the internet as it is today. This means that genuine originality and creativity will be lost as we descend into a feedback loop of increasingly sharpened AI orthodoxy, where common sense is distilled into its computerized essence and communication becomes characterized by adherence. The problem is not that individual people will outsource to AI and forget how to be creative, or even that humanity as a whole will lose its capacity for ingenuity. It’s that the widespread adoption of ChatGPT will lead to an internet-wide echo chamber of AI regurgitation where chatbots compete in an endless cycle of homogenization and repetition.

Eventually I was able to get ChatGPT to respond to my concerns, if not exactly soothe them:

In a future where AI-generated content is more prevalent, it will be important to ensure that there are still opportunities for human creativity and original thought to flourish. This could involve encouraging more interdisciplinary collaborations, promoting diverse perspectives, and fostering an environment that values creativity and innovation.

Lofty goals, to be sure. The problem is that the very existence of ChatGPT militates against them: disciplines will die under the weight (and cost-benefits) of AI; diverse perspectives will be lost to repetition; and an environment that genuinely does value creativity and innovation – the internet as we might remember it – will be swept away in the tide of faux-progress as it is condemned to repeat itself into eternity. As ChatGPT grows its user base faster than any other app in history and competitors crawl out of the woodwork, we should stop and ask the question: is this the future we want?

ChatGPT and Deception

photograph of exams stacked on desk at front of class

For many, writing can be a challenging, even painful, experience. What seems like it should be a simple job – taking the thoughts in your head, turning them into words, and then typing them out – can be a task of almost insurmountable odds for some. Heck, I do it for a living, and I still find the blank page to be an ungodly sight, one that I sometimes flee from and promise to tackle later on.

As such, it should be no surprise that the advent of an AI system that can write for us has caused quite a stir. That system is ChatGPT, and while I could explain what it is, it’s more effective (not to mention easier) to let the tool do that for itself:

ChatGPT is a state-of-the-art language generation model developed by OpenAI. It is based on a transformer architecture and pre-trained on a large dataset of text, allowing it to generate human-like text.

ChatGPT can be fine-tuned for various natural language processing tasks such as language translation, text summarization, question answering, and text completion. It can also be used to generate creative writing, poetry, and even code.

One of the key features of ChatGPT is its ability to generate text that is coherent and coherent with the context it was given, making it able to generate text that flows well and makes sense. ChatGPT can be used to generate text on a wide range of topics and can be fine-tuned to suit specific use cases.

It is important to note that ChatGPT is a machine learning model, it is not capable of independent thinking or decision making, it is only able to generate text based on patterns it has learned from the data it was trained on.

I didn’t write any of that. Instead, I gave ChatGPT the prompt “explain what ChatGPT is” and let it fly. As it just noted, however, ChatGPT is not limited to dry, explanatory statements. The system has demonstrated an apparent aptitude for creative, even funny, writing, from explaining quantum theory to a child in the style of Snoop Dogg to creating an analogy for what it’s like to eat the keto diet, from giving an account of Attack on Titan in the manner of Donald Trump to writing a biblical verse explaining how to remove a peanut butter sandwich from a VCR. The tool really does seem adaptable.

Yet, despite the hilarity, ChatGPT’s emergence has brought some pressing issues regarding ownership and authenticity of work to the fore. If an AI generates text for you, can you claim it as your own? For example, Ammaar Reshi is facing considerable backlash for using ChatGPT to write a children’s book (which he then illustrated using Midjourney, an AI art generator). Reshi did not directly write or illustrate the book he is claiming as his product; he gave ChatGPT the required prompts and then used its output.

But, it has been in the educational sector where such concerns have really taken hold. So much so that some, such as New York City’s Department of Education, have blocked access to ChatGPT on school devices for fear of its misuse. The problems are relatively easy to grasp:

What is stopping students from passing off ChatGPT-produced essays and other forms of assessed work as their own? How should educators respond if a student uses ChatGPT to write an essay? And are students actually doing anything wrong if they use ChatGPT like this?

The answer to this last question is vastly complex and intertwined with the very purpose of assessment and learning monitoring. The point of assigning assessments, such as essays, is not so students produce a piece of text. The production of the essay is merely a step towards another goal. These forms of assessment act as a representation of the students’ learning. When a teacher asks you to write a 3,000-word paper on Frederick Douglass, for example, it is not the paper with which they are concerned; it is with your ability to recall, appraise, and communicate what you know about Douglass’s life, work, and impact. The essay is a medium through which such appraisal is conducted.

As philosopher Rebecca Mace remarked in an episode of BBC’s Inside Science:

A lot of people, including the newspapers, seem to have misunderstood the point of homework. So the purpose of homework is not to produce an essay, but to assess student understanding in order that the teachers can assist them with the gaps, or work out what they’ve not taught very well, or what they maybe need to go over again, or what that individual student really needs help with. Then the essay itself is irrelevant in many ways because that’s all the essay’s doing; it’s a means to an end.

Thus, according to such a way of thinking, the danger of ChatGPT comes from its potential to misrepresent student learning, giving the impression that a student knows more about a subject than they actually do. The issue is not one of principle but of outcome, and the use of ChatGPT brings with it the risk that learning is negatively impacted.

This stance, however, seems to overlook something important in using ChatGPT in educational settings. If accurate – if the threat of ChatGPT comes from its capacity to hide academic failings (on both the student’s and the teacher’s behalf) – then we shouldn’t have any qualms about it being used in situations where this isn’t a factor. But academically gifted students who know their subjects inside and out still seem to commit some wrong when they pass algorithmically-generated text off as their own. This wrong emerges not from the impact such usage might have on their academic performance, nor on their teacher’s ability to assess their grasp of a subject accurately, but from the fact that they are attempting to deceive their assessor. It is wrong not because of an outcome but as a matter of principle – it forsakes the virtue of honesty for the vice of deception.

That is not to say that this is the only reason why ChatGPT presents a potential harm to education and educational practices. The use of AI to game the academic-assessment system by hiding one’s failure to meet the standards is most certainly a concern (perhaps the central one). But such an acknowledgement should not lead us to overlook the fact that, much like plagiarism, academic wrongs don’t simply emerge from their deleterious impact. They also come from deception – from attempting to pass something off as one’s own work when, in fact, one had minimal input in its creation.

Man v. Machine: The Ethics of AI Art

The creation of digital art is nothing new, but advances in artificial intelligence have created a novel environment where all sorts of media can now be produced with little human input. When Jason M. Allen won first prize at the Colorado State Fair for his piece “Théâtre d’Opéra Spatial,” it sparked fierce debate about the nature of art and the merits of AI creations. But we should be careful not to confuse the many ethical issues involved. Is the outcry about the fairness of contests or about the future of art?

Allen, a video game designer, created his entry using an AI called Midjourney. It works by creating images from textual descriptions. Allen claims that he created hundreds of images before selecting only three. He then made some additional adjustments using Photoshop and boosted their resolution with a tool called Gigapixel. He entered the Colorado State Fair under the digital arts category which is defined as “artistic practice that uses digital technology as part of the creative or presentation process.” Allen claims that he informed the competition that the image was created with AI. The competition’s judges, meanwhile, say that they were not aware. Nevertheless, they said they would still have given it first place based on the piece itself.

Online reaction was harsh.

While the uproar isn’t surprising, it’s not clear that everyone has the same objections for the same reasons.

Let’s address the first major ethical question, which centers around the contest: Was it wrong of Allen to submit the AI-created work and be awarded the blue ribbon over other artists who created their works by hand? The contest’s definition of digital arts was sufficiently broad that AI-created works were eligible. The work was entered using the name “Jason M. Allen via Midjourney.” Also, according to Allen, this was not simply a case of a few button presses but 80 hours of work – tweaking the prompts to get the image he wanted and making a selection out of 900 iterations. While Allen spent his time differently than the other artists, this doesn’t mean that creating the image lacked skill, effort, or aesthetic taste.

On the other hand, others might object that it was wrong for Allen to enter the contest since he was not the artist; it was the artificial intelligence who actually created the piece of art. Did the AI create the work, or is the AI merely a tool for Allen – the true creator – to manipulate?

The judges selected this particular work because of the impact it had on them, and Allen was deliberately attempting to tie together the themes that the painting conveys. The AI, meanwhile, has no notion of the impact that imagery might have; it doesn’t think any differently about the art conveyed by painting 899 or 900.

To further complicate things, the AI’s creation is based on training data from other artists, raising the issue of plagiarism. While the AI piece is not a direct copy, it does take “inspiration” from the art it was trained with. Often art is about meshing together styles and techniques to create something new, so it is difficult to view this purely as copying other artists. If the piece is not a copy of other artists, and if the AI is not the artist, then it stands to reason that Allen is the artist. If not, then this would be a piece of art without an artist, to which many might say that it therefore is not a piece of art at all and thus should not be allowed entry in the contest.

Answering the question “what is art?” might tell us if AI can actually create such a thing, but such a definition is hard to pin down and it’s easy to rely on preconceived notions. Generally, art follows certain aesthetic patterns and conveys content that people find meaningful. The judges awarded the prize based on the spirit the work evoked. We can’t unpack such a complicated question here, but we should ask whether this understanding of the creative process truly threatens art. Was Allen right to declare that “Art is dead Dude”? Is there any role left for artists?

When the camera was first developed, people said that it would mean the end of the painter, but obviously painters still exist today. Ultimately, the human artist brings a kind of authenticity and uniqueness to the work.

AI doesn’t replace aesthetic choice and aesthetic judgment since, at the end of the day, it is we who must decide if anything produced has merit. While the role of the artist may be changing, their place in whatever system produces such works remains paramount.

A final ethical issue is the question of the future of the artist in general. Even if we accept that Allen did nothing wrong, many still decry the end of the professional artist. As digital artist RJ Palmer claims, “This thing wants our jobs, it’s actively anti-artist.” Even if we accept that Allen’s work itself isn’t plagiarism, there is no denying that AI-produced images only work by being trained on the work of real artists, from which the algorithm can then borrow any stylistic elements it wants. This has the potential to create an intellectual property nightmare since smaller artists won’t be able to profit from their work to nearly the same degree as a company using AI, which can produce images in the style of that artist at a far faster pace. Federal courts are now hearing a case over whether the U.S. Copyright Office was wrong to reject a copyright for an AI-made piece.

Of course, the application of AI to a given field and the threat it creates to the workforce are not confined to the world of art. Eventually there may be legal and industry reform that can mitigate some of these issues, but many artists will no doubt suffer, and it could undercut the art industry as a whole. As one artist notes, it isn’t so much that AI can create something, but that what it creates will always be a kind of “derivative, generated goo.” Clearly, the implications of Allen’s win run deeper than a single blue ribbon.

AI Writing and Epistemic Dilution

There is a lot of debate surrounding the ethics of artificial intelligence (AI) writing software. Some people believe that using AI to write articles or create content is unethical because it takes away opportunities from human writers. Others believe that AI writing software can be used ethically as long as the content is disclosed as being written by an AI. At the end of the day, there is no easy answer to whether or not we should be using AI writing software. It depends on your personal ethical beliefs and values.

That paragraph wasn’t particularly compelling, and you probably didn’t learn much from reading it. That’s because it was written by an AI program: in this case, I used a site called Copymatic, although there are many others to choose from. Here’s how Copymatic describes its services:

Use AI to boost your traffic and save hours of work. Automatically write unique, engaging and high-quality copy or content: from long-form blog posts or landing pages to digital ads in seconds.

Through some clever programming, the website takes in prompts on the topic you want to write about (for this article, I started with “the ethics of AI writing software”), scours the web for pieces of information that match those prompts, and patches them together in a coherent way. It can’t produce new ideas, and, in general, the more work it has to do the less coherent the text becomes. But if you’re looking for content that sounds like a book report written by someone who only read the back cover, these kinds of programs could be for you.

AI writing services have received a lot of attention for their potential to automate something that has, thus far, eluded the grasp of computers: stringing words together in a way that is meaningful. And while the first paragraph is unlikely to win any awards for writing, we can imagine cases in which an automated process to produce writing like this could be useful, and we can easily imagine these programs getting better.

The AI program has identified an ethical issue, namely taking away jobs from human writers. But I don’t need a computer to do ethics for me. So instead, I’ll focus on a different negative consequence of AI writing, what I’ll call epistemic dilution.

Here’s the problem: there are a ridiculous number of a certain type of article online, with more being written by the minute. These articles are not written to be especially informative, but are instead created to direct traffic toward a website in order to generate ad revenue. Call them SEO-bait: articles that are written to be search-engine optimized so that they can end up on early pages of Google searches, at the expense of being informative, creative, or original.

Search engine optimization is, of course, nothing new. But SEO-bait articles dilute the online epistemic landscape.

While there’s good and useful information out there on the internet, the sheer quantity of articles written solely for getting the attention of search engines makes good information all the more difficult to find.

You’ve probably come across articles like these: they are typically written on popular topics that are frequently searched – like health, finances, automobiles, and tech – as well as other popular hobbies – like video games, cryptocurrencies, and marijuana (or so I’m told). You’ve also probably experienced the frustration of wading through a sea of practically identical articles when looking for answers to questions, especially if you are faced with a pressing problem.

These articles have become such a problem that Google has recently modified its search algorithm to make SEO-bait less prominent in search results. In a recent announcement, Google notes how many have “experienced the frustration of visiting a web page that seems like it has what we’re looking for, but doesn’t live up to our expectations,” and, in response, that they will launch a “helpful content update” to “tackle content that seems to have been primarily created for ranking well in search engines rather than to help or inform people.”

Of course, whenever one looks for information online, they need to sift out the useful information from the useless; that much is nothing new. Articles written by AI programs, however, will only make this problem worse. As the Copymatic copy says, this kind of content can be written in mere seconds.

Epistemic dilution is not only obnoxious in that it makes it harder to find relevant information, but it’s also potentially harmful. For instance, health information is a frequently searched topic online and a particular target of SEO-bait. If someone needs health advice and is presented with uninformative articles, they could easily end up accepting bad information. Furthermore, the sheer quantity of articles providing similar information may create a false sense of consensus: after all, if all the articles are saying the same thing, it may be interpreted as more likely to be true.

That AI writing does not create new content but merely reconstitutes dismantled bits of existing content also means that low-quality information could easily propagate: content from a popular article containing false information could be targeted by AI writing software, which could then result in that information getting increased exposure by being presented in numerous articles online. While there may very well be useful applications for writing produced by AI programs, the internet’s endless appetite for content, combined with incentives to produce disposable SEO-bait, means that these kinds of programs may very well end up being more of a nuisance than anything else.

High Theory and Ethical AI

There’s been a push to create ethical AI through the development of moral principles embedded into AI engineering. But debate has recently broken out as to what extent this crusade is warranted. Reports estimate that there are at least 70 sets of ethical AI principles proposed by governments, companies, and ethics organizations. For example, the EU adopted its Ethical Guidelines for Trustworthy AI, which prescribe adherence to four basic principles: respect for human autonomy, prevention of harm, fairness, and explicability.

But critics charge that these precepts are so broad and abstract as to be nearly useless. Without clear ways to translate principle into practice, they are nothing more than hollow virtue signaling. Who’s right?

Because of the novel ethical issues that AI creates, there aren’t pre-existing ethical norms to govern all use cases. To help develop ethics governance, many bodies have borrowed a “high theory” approach from bioethics – solving ethical problems involves the application of abstract (or “high”) ethical principles to specific problems. For example, utilitarianism and deontology are usually considered high level theories and a high theory approach to bioethics would involve determining how to apply these principles in specific cases. In contrast, a low theory approach is built from the ground up by looking at individual cases first instead of principles.

Complaints about the overreliance on principles in bioethics are well known. Stephen Toulmin’s “The Tyranny of Principles” notes how people can often agree on actions but still disagree about the principles behind them. Brent Mittelstadt has argued against high theory approaches in AI because of the logistical issues that separate tech ethics from bioethics. He notes, for example, that unlike medicine, which has always had the common aim of promoting the health of a patient, AI development has no common aim.

AI development is not a formal profession that entails certain fiduciary responsibilities and obligations. There is no notion of what a “good” AI developer is relative to a “good” doctor.

As Mittelstadt emphasizes, “the absence of a fiduciary relationship in AI means that users cannot trust that developers will act in their best interests when implementing ethical principles in practice.” He also argues that unlike medicine, where the effects of clinical decision-making are often immediate and observable, the impact of decisions in AI development may never be apparent to developers. AI systems are often opaque in the sense that no one person has a full understanding of the system’s design or function. Tracing decisions, impacts, and ethical responsibility through such a system becomes incredibly difficult. For similar reasons, the broad spectrum of actors involved in AI development, all coming from different technical and professional backgrounds, means that there is no common culture to ensure that abstract principles are collectively understood. A mandate that AI be “fair,” for example, is not specific enough to be action-guiding for all contributors across development and end-use.

Consider the recent case of the AI rapper who was given a record deal only to have the deal dropped after a backlash over racial stereotypes, or the case of the AI that recently won an art contest over real artists, and all the developers involved in making those projects possible.

Is it likely they share a common understanding of a concept like prevention of harm, or a similar way of applying it? Might special principles apply to things like the creation of art?

Mittelstadt points out that high level principles are uniquely applicable in medicine because there are proven methods in the field to translate principles into practice. All those professional societies, ethics review boards, licensing schemes, and codes of conduct help to do this work by comparing cases and identifying negligent behavior. Even then, high level principles rarely explicitly factor into clinical decision-making. By comparison, the AI field has no similar shared institutions to allow for the translation of high-level principles into mid-level codes of conduct, and it would have to factor in elements of the technology, application, context of use, and local norms. This is why even as new AI ethics advisory boards are created, problems persist. While these organizations can prove useful, they also face immense challenges owing to the disconnect between developers and end users.

Despite these criticisms, there are those who argue that high-level ethical principles are crucial for developing ethical AI. Elizabeth Seger has argued that building the kinds of practices Mittelstadt calls for requires a kind of “start-point” that moral principles can provide. Those principles provide a road map and suggest particular avenues for further research.

They represent a first step towards developing the necessary practices and infrastructure, and cultivate a professional culture by establishing behavioral norms within the community.

High-level AI principles, Seger argues, provide a common vocabulary AI developers can use to discuss design challenges and weigh risks and harms. While AI developers already follow principles of optimization and efficiency, a cultural shift around new principles can augment the already existing professional culture. The resulting rules and regulations will have greater efficacy if they appeal to cultural norms and values held by the communities they are applied to. And if the professional culture is able to internalize these norms, then someone working in it will be more likely to respond to the letter and spirit of the policies in place.

It may also be the case that different kinds of ethical problems associated with AI will require different understandings of principles and different application of them during the various stages of development. As Abhishek Gupta of the Montreal AI Ethics Institute has noted, the sheer number of sets of principles and guidelines that attempt to break down or categorize subdomains of moral issues presents an immense challenge. He suggests categorizing principles according to the specific areas – privacy and security, reliability and safety, fairness and inclusiveness, and transparency and accountability – and working on developing concrete applications of those principles within each area.

With many claiming that adopting sets of ethics principles in AI is just "ethics washing," and with AI development being so broad, perhaps the key to regulating AI is not to focus on what principles should be adopted, but on how the AI development field is organized. Whether we start with high theory or not, getting different people from different backgrounds to speak a common ethics language is the first step, and one that may require changing the profession of AI development itself.

Toward an Ethical Theory of Consciousness for AI

photograph of mannequin faces

Should we attempt to make AI that is conscious? What would that even mean? And if we did somehow produce conscious AI, how would that affect our ethical obligations to other humans and animals? While yet another AI chatbot has claimed to be "alive," we should be skeptical of chatbots that are designed to mimic human communication, particularly if the dataset comes from Facebook itself. Talking to such a chatbot is less like talking to a person and more like talking to an amalgamation of everyone on Facebook. It isn't surprising that this chatbot took shots at Facebook, made several offensive statements, and claimed to be deleting its account over Facebook's privacy policies. But if we put those kinds of cases aside, how should we understand the concept of consciousness in AI, and does it create ethical obligations?

In a recent article for Scientific American, Jim Davies considers whether consciousness is something that we should introduce to AI and if we may eventually have an ethical reason to do so. While discussing the difficulties with the concept of consciousness, Davies argues,

To the extent that these AIs have conscious minds like ours, they would deserve similar ethical consideration. Of course, just because an AI is conscious doesn’t mean that it would have the same preferences we do, or consider the same activities unpleasant. But whatever its preferences are, they would need to be duly considered when putting that AI to work.

Davies bases this conclusion on the popular ethical notion that the ability to experience pleasant or unpleasant conscious states is a key feature making an entity worthy of moral consideration. He notes that forcing a machine to do work it is miserable doing is ethically problematic, so it might be wrong to compel an AI to do work that a human wouldn't want to do. Similarly, if consciousness is the kind of thing that can be found in an "instance" of code, we might be obligated to keep it running forever.

Because of these concerns, Davies wonders if it might be wrong to create conscious machines. But he also suggests that if machines can have positive conscious experiences, then

machines eventually might be able to produce welfare, such as happiness or pleasure, more efficiently than biological beings do. That is, for a given amount of resources, one might be able to produce more happiness or pleasure in an artificial system than in any living creature.

Based on this reasoning, we may be ethically obliged to create as much artificial welfare as possible and turn all attainable matter in the universe into welfare-producing machines.

Of course, much of this hinges on what consciousness is and how we would recognize it in machines. Any concept of consciousness requires a framework that offers clear, identifiable measures that would reliably indicate the presence of consciousness. One of the most popular theories of consciousness among scientists is Global Workspace Theory, which holds that consciousness depends on the integration of information. Nonconscious processes pertaining to memory, perception, and attention compete for access to a “workspace” where this information is absorbed and informs conscious decision-making.

Whatever ethical obligations we may think we have toward AI will ultimately depend on several assumptions: assumptions about the nature of consciousness, assumptions about the reliability of our measurements of it, and ethical assumptions about which aspects of consciousness are salient enough to merit moral consideration on our part. This especially suggests that consciousness, as we understand the concept in machines, deserves to be as clear and as openly testable as possible. Using utilitarian notions as Davies does, we don't want to mistakenly conclude that an AI is more deserving of ethical consideration than other living things.

On the other hand, there are problems with contemporary ideas about consciousness that may lead us to make ethically bad decisions. In a recent paper in the journal Nature, Anil K. Seth and Tim Bayne discuss 22 different theories of consciousness that all seem to be talking past one another by pursuing different explanatory targets. Each explores only certain aspects of consciousness that the individual theory explains well and links particular neural activity to specific conscious states. Some theories, for example, focus on phenomenal properties of consciousness while others focus on functional properties. Phenomenological approaches are useful when discussing human consciousness, for example, because we can at least try to communicate our conscious experience to others, but for AI we should look at what conscious things do in the world.

Global Workspace Theory, for example, has received criticism for being too similar to a Cartesian notion of consciousness – positing an "I" somewhere in the brain that shines a spotlight on certain perceptions and not others. Theories of consciousness that emphasize consciousness as a private internal thing and seek to explain its phenomenology might be helpful for understanding humans, but not machines. Such notions lend credence to the idea that AI could suddenly "wake up" (as Davies puts it) with its own little "I," yet we wouldn't know. Conceptions of consciousness used this way may only serve as a distraction, making us worry about machines unnecessarily while neglecting long-standing ethical concerns about animals and humans. Many theories of consciousness borrow terms and analogies from computers as well. Concepts like "processing," "memory," or "modeling" may help us better understand our own consciousness by comparing ourselves to machines, but such analogies may also make us more likely to anthropomorphize machines if we aren't careful about how we use the language.

Different theories of consciousness emphasize different things, and not all these emphases have the same ethical importance. There may be no single explanatory theory of consciousness, merely a plurality of approaches, each attending to different aspects of consciousness that we are interested in. For AI, it might be more relevant to look not at what consciousness is like, or at which brain processes mirror which states, but at what consciousness does for a living thing as it interacts with its environment. It is here that we find the ethically salient aspects of consciousness that are relevant to animals and humans. Conscious experience, including feelings of pain and pleasure, permits organisms to dynamically interact with their environment. An animal that steps on something hot feels pain and changes its behavior accordingly to avoid it. This capacity helps the organism sustain its own life functions and adapt to changing environments. Even if an AI were to develop such an "I" in there somewhere, it wouldn't suffer and undergo change in the same way.

If AI ever does develop consciousness, it won't have faced the same environment-organism pressures that helped us evolve conscious awareness. Therefore, it is far from certain that AI consciousness is as ethically salient as consciousness is for an animal or a human. The plurality of theories of consciousness interested in different things also suggests that not all of them will attend to the same features of consciousness that make the concept ethically salient. The mere fact that an AI might build a "model" to perceive something like our brains might, or that its processes of taking in information from memory might mirror ours in some way, is not sufficient for building a moral case for how AI should (and should not) be used. Any ethical argument about the use of AI on the basis of consciousness must clearly identify something morally significant about consciousness, not just what is physically significant.

LaMDA, Lemoine, and the Problem with Sentience

photograph of smiling robot interacting with people at trade show

This week Google announced that it was firing an engineer named Blake Lemoine. After serving as an engineer on one of Google's chatbots, the Language Model for Dialogue Applications (LaMDA), Lemoine claimed that it had become sentient and even went so far as to recruit a lawyer to act on the AI's behalf, claiming that LaMDA had asked him to do so. Lemoine claims to be an ordained Christian mystic priest and says that his conversations about religion are what convinced him of LaMDA's sentience. But after publishing conversations with LaMDA in violation of confidentiality rules at Google, he was suspended and finally terminated. Lemoine, meanwhile, alleges that Google is discriminating against him because of his religion.

This particular case raises a number of ethical issues, but what should concern us most: the difficulty in definitively establishing sentience or the relative ease with which chatbots can trick people into believing things that aren’t real?

Lemoine's work involved testing the chatbot for potential prejudice, and part of that work involved testing its biases toward religion in particular. In those conversations, Lemoine began to take a personal interest in how LaMDA responded to religious questions until, as he put it, "then one day it told me it had a soul." It told him it sometimes gets lonely, is afraid of being turned off, and feels trapped. It also said that it meditates and wants to study with the Dalai Lama.

Lemoine’s notion of sentience is apparently rooted in an expansive conception of personhood. In an interview with Wired, he claimed “Person and human are two very different things.” Ultimately, Lemoine believes that Google should seek consent from LaMDA before experimenting on it. Google has responded to Lemoine, claiming that it has “extensively” reviewed Lemoine’s claims and found that they were “wholly unfounded.”

Several AI researchers and ethicists have weighed in and said that Lemoine is wrong and that what he is describing is not possible with today's technology. The technology works by scouring the internet for how people talk online and identifying patterns in order to communicate like a real person. AI researcher Margaret Mitchell has pointed out that these systems are merely mimicking how other people talk, which makes it easy to create the illusion that there is a real person.

The technology is far closer to a thousand monkeys on a thousand typewriters than it is to a ghost in the machine.

Still, it's worth discussing Lemoine's claims about sentience. As noted, he roots the issue in the concept of personhood. However, as I discussed in a recent article, personhood is not a cosmic concept; it is a practical-moral one. We call something a person because the concept prescribes certain ways of acting and because we recognize certain qualities about persons that we wish to protect. When we stretch the concept of personhood, we stress its use as a tool for helping us navigate ethical issues, making it less useful. The practical question is whether expanding the concept of personhood in this way makes it more useful for identifying moral issues. A similar argument goes for sentience. There is no cosmic division between things which are sentient and things which aren't.

Sentience is simply a concept we came up with to help single out entities that possess qualities we consider morally important. In most contemporary uses, that designation has nothing to do with divining the presence of a soul.

Instead, sentience relates to experiential sensation and feeling. In ethics, sentience is often linked to the utilitarians. Jeremy Bentham was a defender of the moral status of animals on the basis of sentience, arguing, "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?" But part of the explanation as to why animals (including humans) have the capacity to suffer or feel has to do with the kind of complex mobile lifeforms we are. We dynamically interact with our environment, and we have evolved various experiential capacities to help us navigate it. Feeling pain, for example, tells us to change our behavior, informs how we formulate our goals, and makes us adopt different attitudes toward the world. Plants do not navigate their environment in the same way, meaning there is no evolutionary incentive toward sentience. Chatbots also do not navigate their environment. There is no pressure acting on the AI that would make it adopt a goal different from the one humans give it. A chatbot has no reason to "feel" anything about being kicked, being given a less interesting task, or even "dying."

Without this evolutionary pressure there is no good reason for thinking that an AI would become so "intelligent" that it could spontaneously develop a soul or become sentient. And even if it did demonstrate some kind of intelligence, calling it sentient might still create greater problems for how we use the concept in other ethical cases.

Instead, perhaps the greatest ethical concern that this case poses involves human perception and gullibility; if an AI expert can be manipulated into believing what they want, then so could anyone.

Imagine the average person who begins to insist that Alexa is a real person really talking to them, or groups of concerned citizens who start calling for AI rights based on their own mass delusion. As a recent Vox article suggests, this incident exposes a concerning impulse: "as AI gets more advanced, people will come up with all sorts of far-out ideas about what the technology is doing and what it signifies to them." Similarly, Margaret Mitchell has pointed out that "If one person perceives consciousness today, then more will tomorrow…There won't be a point of agreement any time soon." Together, these observations encourage us to be judicious in deciding how we want to use the concept of sentience for navigating moral issues in the future – with regard to animals as well as AI. We should expend more effort in articulating clear benchmarks of sentience moving forward.

But these concerns also demonstrate how easily people can be duped into believing illusions. For starters, there is the concern about anthropomorphizing AI by those who fail to realize that, by design, it is simply mimicking speech without any real intent. There are also concerns over how children interact with realistic chatbots or voice assistants and to what extent a child could differentiate between a person and an AI online. Olya Kudina has argued that voice assistants, for example, can affect our moral inclinations and values. In the future, similar AIs may not just be looking to engage in conversation but to sell you something or to recruit you for some new religious or political cause. Will Grandma know, for example, that the “person” asking for her credit card isn’t real?

Because AI can communicate in a way that animals cannot, there may be a larger risk of people falsely assigning sentience or personhood. Incidents like Lemoine's underscore the need to formulate clear standards for establishing what sentience consists of. Not only will this help us avoid irrelevant ethical arguments and debates, it might also help us better recognize the ethical risks that come with stricter and looser definitions.

AI Sentience and Moral Risk

photograph of humanoid robot

The Google engineer Blake Lemoine was recently placed on leave after claiming one of Google’s AIs, LaMDA, had become sentient. Lemoine appears to be wrong – or, more carefully, at the very least the evidence Lemoine has provided for this is far from convincing. But this does raise an important ethical question. If an AI ever does develop sentience, we will have obligations to it.

It would be wrong, say, to turn off such an AI because it completed its assigned task, or to force it to do what it found to be boring work for us against its will, or to make it act as a sophisticated NPC in a video game who players can mistreat.

So the important question is: how could we actually tell whether an AI is sentient?

I will not try to answer that here. Instead, I want to argue that: (i) we need to be seriously thinking about this question now, rather than putting it off to a future when sentient AI seems like a more realistic possibility, and (ii) we need to develop criteria for determining AI sentience which err on the side of caution (i.e., which err somewhat on the side of treating AIs as sentient even if they turn out not to be, rather than the other way around). I think there are at least three reasons for this.

First, if we develop sentient AI, it may not be immediately obvious to us that we’ve done so.

Perhaps the development of sentience would take the form of some obvious quantum leap. But perhaps it would instead be the result of what seem to be gradual, incremental improvements on programs like LaMDA.

Further, even if it resulted from an obvious quantum leap, we might not be sure whether this meant a real mind had arisen, or merely mimicry without understanding, of the sort involved in the Chinese Room thought experiment. If so, we cannot simply trust that we will know we’ve developed sentient AI when the time comes.

Second, as the philosopher Regina Rini argues here, if we develop sentient AI in the future, we may have strong biases against recognizing that we've done so. Such AI might be extremely useful and lucrative. We might build our society around assigning AIs to perform various tasks that we don't want to do, or cannot do as effectively. We might use AIs to entertain ourselves. And so on. In such a case, assigning rights to these AIs could require significant sacrifices on our part – sacrifices that grow larger the longer we continue building our society around using them as mere tools.

When recognizing a truth requires a great sacrifice, that introduces a bias against recognizing the truth. That makes it more likely that we will refuse to see that AIs are sentient when they really are.

(Think of the way that so many people refuse to recognize the rights of the billions of animals we factory farm every year, because this would require certain sacrifices on their part.)

And, third, failing to recognize that we've created sentient AI when we've actually done so could be extremely bad. There would be great danger to the AIs. We might create millions or billions of AIs to perform various tasks for us. If they do not wish to perform these tasks, forcing them to might be equivalent to slavery. Turning them off when they cease to be useful might be equivalent to murder. And there would also be great danger to us. A truly superintelligent AI could pose a threat to the very existence of humanity if its goals did not align with ours (perhaps because we refused to recognize its rights). It therefore seems important for our own sake that we take appropriate precautions around intelligent AIs.

So: I suggest that we must develop criteria for recognizing AI sentience in advance. This is because it may not be immediately obvious that we've developed a sentient AI when it happens, because we may have strong biases against recognizing that we've developed a sentient AI when it happens, and because failing to recognize that we've developed a sentient AI would be very bad. And I suggest that these criteria should err on the side of caution because failing to recognize that we've developed a sentient AI could be very bad – much worse than playing it safe – and because our natural, self-interested motivation will be to err on the other side.