
What Should We Do About AI Identity Theft?

image of synthetic face and voice

A recent George Carlin comedy special from Dudesy — an AI comedy podcast created by Will Sasso and Chad Kultgen — has sparked substantial controversy. In the special, a voice model emulating the signature delivery and social commentary of Carlin, one of America’s most prominent 20th-century comedians and social critics, discusses contemporary topics ranging from mass shootings to AI itself. The voice model, which was trained on five decades of the comic’s work, sounds eerily similar to Carlin, who died in 2008.

In response to controversy over the AI special, the late comedian’s estate filed a suit in January, accusing Sasso and Kultgen of copyright infringement. As a result, the podcast hosts agreed to take down the hour-long comedy special and refrain from using Carlin’s “image, voice or likeness on any platform without approval from the estate.” This kind of scenario, which is becoming increasingly common, generates more than just legal questions about copyright infringement. It also raises a variety of philosophical questions about the ethics of emerging technology connected to human autonomy and personal identity.

In particular, there are a range of ethical questions concerning what I’ve referred to elsewhere as single-agent models. Single-agent models are a subset of generative artificial intelligence that concentrates on modeling some identifying feature(s) of a single human agent through machine learning.

Most of the public conversation around single-agent models focuses on their impact on individuals’ privacy and property rights. These privacy and property rights violations generally occur because single-agent modeling outputs neither credit nor compensate the individuals whose data was used in the training process, a process that often relies on the non-consensual scraping of data under fair use doctrine in the United States. Modeled individuals find themselves competing in a marketplace saturated with derivative works that fail to acknowledge their role in supplying the training data, all while being deprived of monetary compensation. Although this is a significant concern that jeopardizes the sustainability of creative careers in a capitalist economy, it is not the only one.

One particularly worrisome feature of single-agent models is their unique capacity to generate outputs practically indistinguishable from those of the individuals whose intellectual and creative abilities or likeness are being modeled. When an audience with an average level of familiarity with an individual’s creative output cannot tell whether the digital media they engage with is authentic or synthetic, numerous concerns arise. Perhaps most obviously, the ability of single-agent models to generate indistinguishable outputs raises concerns about which works and depictions of behavior become associated with a modeled individual’s reputation. If the average person can’t discern whether an output came from an AI or from the modeled individual themselves, unwanted associations between the modeled individual and AI outputs may form.

Although these unwanted associations are most likely to cause harm when the person generating the outputs does so in a deliberate effort to tarnish the modeled individual’s reputation (e.g., defamation), one need not have this sort of intent for harm to occur. One might instead use the modeled individual’s likeness to deceive others by spreading disinformation, especially if that individual is perceived as epistemically credible. Recently, scammers have begun incorporating single-agent models in the form of voice cloning, calling families in a loved one’s voice to defraud them into transferring money. On a broader scale, a bad actor might flood social media with an emulation of the President of the United States relaying false information about the election. In both cases, the audience is deceived into adopting and acting on false beliefs.

Moreover, some philosophers, such as Regina Rini, have pointed to the disturbing implications of single-agent modeling for our ability to treat digital media and testimony as veridical. If one can never be sure whether the digital media they engage with is authentic, how might this undermine our ability to treat digital media as a reliable means of transmitting knowledge? Put otherwise, how can we continue to trust testimony shared online?

Some, like Keith Raymond Harris, have pushed back against the notion that certain forms of single-agent modeling, especially those that fall under the category of deepfakes (e.g., digitally fabricated videos or audio recordings), pose a substantial risk to our epistemic practices. Skeptics argue that single-agent models like deepfakes do not differ radically from previous methods of media manipulation (e.g., Photoshop, CGI). Furthermore, they contend that the evidential worth of digital media also stems from its source. In other words, audiences should exercise discretion in evaluating the source of digital media rather than relying solely on the media itself when assessing its credibility.

These attempts to allay concerns about the harms of single-agent modeling overlook several critical differences between earlier methods of media manipulation and single-agent modeling. Earlier methods were often costly, time-consuming, and, in many cases, distinguishable from their authentic counterparts. Single-agent modeling, by contrast, is accessible, affordable, and capable of producing outputs that bypass an audience’s ability to distinguish them from authentic media.

In addition, many individuals lack the media literacy to discern between trustworthy and untrustworthy media sources in the way Harris suggests. Moreover, individuals who primarily receive news from social media platforms tend to engage with the stories and perspectives that reach their feeds rather than with content outside their digitally curated information stream. These concerns are exacerbated by social media algorithms that prioritize engagement, silo users into polarized informational communities, and reward stimulating content by placing it at the top of users’ feeds, irrespective of its truth value. Social science research demonstrates that the more an individual is exposed to false information, the more willing they are to believe it due to familiarity (i.e., the illusory truth effect). Thus, single-agent models appear to pose genuinely novel challenges that require new solutions.

Given the increasing accessibility, affordability, and indistinguishability of AI modeling, how might we begin to confront its potential for harm? Some have proposed digitally watermarking AI outputs. Proponents argue that this would allow individuals to recognize whether media was generated by AI, perhaps mitigating the concerns I’ve raised relating to credit and compensation. Such safeguards could also reduce reputational harm by diminishing the potential for unwanted associations. One version of this approach would integrate blockchain — the same technology used by cryptocurrency — allowing the public to access a shared digital trail of AI outputs. Unfortunately, as of now, this kind of cross-platform AI metadata technology has yet to see widespread implementation. Even with cross-platform AI metadata, we remain reliant on the goodwill of big tech to implement it. Moreover, it does not address concerns about the non-consensual sourcing of training data under fair use doctrine.
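To make the proposal a little more concrete, here is a minimal sketch of what a shared provenance record for AI outputs might look like, assuming nothing more than a hash-based, append-only log. The generator names, model labels, and file contents are placeholders, not anything a real platform has implemented.

```python
# A minimal sketch (not any platform's actual implementation) of a shared
# provenance record for AI outputs: each generator appends a hashed entry to an
# append-only log, and anyone can later check a file against that log.
import hashlib
import json
import time

PROVENANCE_LOG = []  # stands in for a shared, tamper-evident ledger


def register_ai_output(media_bytes, generator, model):
    """Record that a piece of media was AI-generated."""
    entry = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,   # e.g., the company or tool that made it
        "model": model,           # e.g., the voice or image model used
        "timestamp": time.time(),
    }
    PROVENANCE_LOG.append(entry)
    return entry


def check_provenance(media_bytes):
    """Return the matching log entry if this media was registered as AI-made."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return next((e for e in PROVENANCE_LOG if e["sha256"] == digest), None)


# Example: a tool registers a synthetic audio clip, and a listener later checks it.
clip = b"...synthetic audio bytes..."
register_ai_output(clip, generator="ExampleVoiceTool", model="voice-clone-v1")
print(json.dumps(check_provenance(clip), indent=2))
```

The obvious limitation is that a plain hash no longer matches once the file is re-encoded or cropped, which is why watermarking proposals also embed a signal in the media itself rather than relying on a ledger alone.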

Given the potential harms of single-agent modeling, it is imperative that we critically examine and reformulate our epistemic and legal frameworks to accommodate these novel technologies.

Deepfake Porn and the Pervert’s Dilemma

blurred image of woman on bed

This past week, Representative Alexandria Ocasio-Cortez spoke of an incident in which she encountered a realistic, computer-generated depiction of herself engaged in a sexual act. She recounted the harm and difficulty of being depicted in this manner. The age of AI-generated pornography is upon us, and so-called deepfakes are becoming less visually distinguishable from real life every day. Emerging technology could allow people to generate true-to-life images and videos of their most forbidden fantasies.

What happened with Representative Ocasio-Cortez raises issues well beyond making pornography with AI, of course. Deepfake pornographic images are not just used for personal satisfaction; they are used to bully, harass, and demean. Clearly, these uses are problematic, but what about the actual creation of the customized pornography itself? Is that unethical?

To think this through, Carl Öhman articulates the “pervert’s dilemma”: We might think that any sexual fantasy conceived — but not enacted — in the privacy of our home and our own head is permissible. If we do find this ethical, then why exactly do we find it objectionable when a computer generates those images, also in the privacy of one’s home? (For the record, Öhman believes there is a way out of this dilemma.)

The underlying case for letting a thousand AI-generated pornographic flowers bloom is rooted in John Stuart Mill’s famous Harm Principle. His thought was that in a society that values individual liberty, behaviors should generally not be restricted unless they cause harm to others. It follows that, as long as no one is harmed in the generation of the pornographic image, the action should be permissible. We might find it gross or indecent. We might even find the behaviors depicted unethical or abhorrent. But if nobody is being hurt, then creating the image in private via AI is not itself unethical, or at least not something that should be forbidden.

Moreover, for pornography in which some of the worst ethical harms occur in the production process (the most extreme example being child pornography), AI-generated alternatives would be far preferable. (If it turns out that being able to generate such images increases the likelihood of the corresponding real-world behaviors, then that’s a different matter entirely.) Even where no sexual abuse is involved in the production of pornography, there have long been worries about working conditions within the adult entertainment industry that AI-generated content could alleviate. Alternatively, just as in other industries, we may worry that AI-generated pornography undermines jobs in adult entertainment, depressing wages and replacing actors and editors with computers.

None of this is to deny that AI-generated pornography can be put to bad ends, as the case of Representative Ocasio-Cortez clearly illustrates. And she is far from the only one to be targeted in this way (also see The Prindle Post discussion on revenge porn). The Harm Principle defender would argue that while this is obviously terrible, it is these uses of pornography that are the problem, not simply the existence of customizable AI-generated pornography. From this perspective, society should target the use of deepfakes as a form of bullying or harassment, not deepfakes themselves.

Crucially, though, this defense requires that AI-generated pornography be adequately contained. If we allow people to generate whatever images they want as long as they pinky-promise that they are over 18 and won’t use them to do anything nefarious, it could create an enforcement nightmare. Providing more restrictions on what can be generated may be the only way to meaningfully prevent the images from being distributed or weaponized even if, in theory, we believe that strictly private consumption squeaks by as ethically permissible.

Of course, pornography itself is far from uncontroversial, with longstanding concerns that it is demeaning, misogynistic, addictive, and encourages harmful attitudes and behaviors. Philosophers Jonathan Yang and Aaron Yarmel raise the worry that by handing additional creative control to the consumer, AI turns these problematic features of pornography up to 11. Assessing this argument, whether aimed at AI-generated pornography or pornography generally, depends on a data-driven understanding of the actual behavioral and societal effects of pornography — something that has so far eluded a decisive answer. While the Harm Principle is quite permissive about harm to oneself, as a society we may also find that the individual harms of endless customizable pornographic content are too much to bear even if there is no broader systemic impact.

Very broadly speaking, if the harms of pornography we are most worried about relate to its production, then AI pornography might be a godsend. If the harms we are most worried about relate to the images themselves and their consumption, then it’s a nightmare. Additional particularities are going to arise about labor, distribution, source images, copyright, real-world likeness, and much else besides as pornography and AI collide. Like everything sexual, openness and communication will be key as society navigates the emergence of a transformative technology in an already fraught ethical space.

Questions About Authenticity from Androids in the Age of ChatGPT

photograph of Blade Runner movie scene

One of the great philosophical fiction writers of the 20th century, Philip K. Dick, was deeply concerned about the problem of discerning the authentic from the inauthentic. Many of his books and stories deal with this theme. One novel in particular, Do Androids Dream of Electric Sheep?, includes a version of the Turing Test, which Dick called the Voight-Kampff test, to distinguish humans from androids. Most people are probably more familiar with the use of the test in the novel’s cinematic adaptation, Blade Runner. In the film, the use of the test is never meaningfully questioned. In the novel, however, it is suggested throughout (and especially in chapters 4 and 5) that there are significant moral, epistemic, and ontological issues with the test.

The Voight-Kampff test is meant to distinguish humans from androids by measuring involuntary physical responses connected to empathetic reactions to hypothetical scenarios. The test works on the supposition that humans would have empathic reactions to the misuse and destruction of other conscious entities while androids would not. It is suggested to the reader, as well as to the characters, that the test will generate false positives by occasionally identifying humans with psychological disorders as androids. The main character, Rick Deckard, knows about the possibility of false positives when he tests another character, Rachael Rosen, and determines she is an android.

The possibility that the test is faulty allows Deckard to be manipulated. The CEO of the company that makes the androids, Eldon Rosen, claims Rachael is a human. His explanation is that for most of her life Rachael had little human contact and thus did not develop empathy sufficient to pass the test. If the test is returning false positives, then enforcement agencies using it “may have retired [killed], very probably have retired, authentic humans with underdeveloped empathic ability.” Further muddying the philosophical waters, it is unclear whether advances in android technology have made it so that the test produces false negatives, thereby allowing dangerous androids to move freely through society. As Rachael points out, “If you have no test you can administer, then there is no way you can identify an android. And if there’s no way you can identify an android there’s no way you can collect your bounty.” In other words, without a reliable test to distinguish the authentic from the inauthentic, the real from the fake, Rick Deckard and the organizations he represents are paralyzed.

But Rachael is an android. Her backstory is a lie. Nevertheless, Deckard is constantly questioning what he knows, what is real, and whether his behavior as a bounty hunter is morally licit.

Dick was worried about our inability to distinguish the real from the fake in the 1960s and 1970s. In 2023, with the creation of technologies such as ChatGPT, voice cloning, and deepfake imagery, along with public figures willing to tell outright lies with seeming impunity, we already live in Dick’s dystopia.

Consider just a few recent examples.

The Writers Guild of America (WGA) has raised worries about AI-generated content as part of its ongoing strike. One proposal made by the WGA to the Alliance of Motion Picture and Television Producers (AMPTP) states that the AMPTP should agree to “Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.”

The Washington Post reports that an instructor at Texas A&M, Jared Mumm, sent an email to his students stating that due to concerns about the use of ChatGPT in his course, all students were going to receive an incomplete for the course until he could determine who did and who did not use the software.

The U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law recently heard testimony from Sam Altman, CEO of OpenAI – the creator of ChatGPT – about the dangers of such technology and possible solutions.

Each of these ongoing events centers on a concern about the authenticity of ideas: whether the consumer of those ideas can reliably believe that their claimed or implied origin is veridical. To make this point, Senator Richard Blumenthal (D-CT) opened the subcommittee hearing with an audio recording of an AI-generated clone of his voice reading the following ChatGPT-generated script:

Too often, we have seen what happens when technology outpaces regulation: the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.

As the real Senator Blumenthal points out, the AI’s “apparent reasoning is pretty impressive,” suggesting the senator endorses something akin to ideas that are, nevertheless, not his own. The fear is that someone hearing this, or any other such creation, would be unable to know whether the ideas are authentic and are authentically being stated by the claimed author. For another example, The Daily Show recently created a fake re-election ad for President Biden using similar technology. The fake ad includes profanity and morally questionable attacks on the character of many public figures — see here. Comments about the ad discuss how effective it might be for the President’s future campaign, with one commenter, Lezopi5914, stating that the fake ad is “Full of truth. Best and most honest ad ever.”

If we can’t distinguish the real from the simulated, then we are cut off from the objective world. If we are cut off from the objective world, then we are cut off from our ability to ensure that sentences correspond to reality and thus from our ability to assign the value “true” to our sentences (at least on a correspondence theory of truth). Without being able to assign the value “true” to sentences such as our beliefs, our decisional capacity is threatened. On one reading of Kantian forms of autonomy, the agent has to have full, accurate information to make rational, moral choices. The possibility of an ad with an AI-generated script, a cloned voice, and a deepfaked face undermines a voter’s ability to make rational, moral choices about who deserves their vote in the next election, as well as more mundane decisions.

To make matters even worse, if I know fake information is out there, but I can’t distinguish the authentic from the inauthentic, then how do I form any beliefs whatsoever, let alone act on them?

One does not need to adopt William Clifford’s strong principle that “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence” to become paralyzed. In the current situation, I don’t merely lack sufficient evidence for belief adoption, I have no grounds for a belief.

We can see how such thinking may have influenced Jared Mumm to give every student in his Texas A&M course an incomplete. As he put it on one student’s paper, “I have to gauge what you are learning not a computer,” but at the moment he lacks a reliable test or procedure to identify whether a text was authentically written by a student or generated by an AI chatbot. Result: everyone gets an incomplete until we can sort this out. As educators, do we have to suspend all grading until a solution is found? If no one can trust the information they obtain about politicians, policy, legislation, the courts, or what happened on January 6, 2021, do we even bother participating in self-governance through voting and other forms of political participation?

Rick Deckard suggests that those who create such technology have a moral responsibility to provide solutions to the problems they are creating. It is hinted that the only reason the android technology is not banned outright in the fictional world is that tests exist to distinguish the authentic from the inauthentic. He tells Rachael and Eldon Rosen that “[i]f you have no confidence in the Voight-Kampff scale … possibly your organization should have researched an alternate test. It can be argued the responsibility rests partly on you.” In other words, in order to avoid outright bans, detection technology needs to be available and trustworthy.

Help (?) is on the way. OpenAI is developing tools that can identify text as AI-generated. Other companies are creating software to identify voice cloning and deepfakes. Is this all we need? A test, a procedure, to determine what is real and what is fake, and then policies that use these tests to determine who is to suffer punitive measures for using these tools for deceptive purposes? OpenAI admits that its detection tool generates false positives nine percent of the time, and there is no word on how often it produces false negatives. Other detection software is also not one hundred percent accurate. Thus, we seem to lack reliable tests. What about enforceable policies? Some news outlets report that Mr. Altman asked Congress to regulate AI, thereby saying it is a governmental responsibility, not OpenAI’s, to develop and enforce these rules. Other outlets suggest that Mr. Altman is trying to shape legislation to the advantage of OpenAI, not to solve the problems of authenticity the technology creates. Who do you believe? Who can you believe?
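A rough back-of-the-envelope calculation shows why even a nine percent false positive rate is troubling in a classroom. The class size, the share of students actually using AI, and the detector’s true positive rate below are assumptions chosen purely for illustration; only the nine percent figure comes from OpenAI’s own admission.

```python
# Illustrative numbers only: class size, actual AI use, and the detector's true
# positive rate are assumptions; the 9% false positive rate is the admitted figure.
class_size = 100
actual_ai_users = 10                       # assumed
honest_students = class_size - actual_ai_users

false_positive_rate = 0.09                 # reported for the detection tool
true_positive_rate = 0.70                  # assumed; no official figure given

falsely_flagged = honest_students * false_positive_rate    # about 8 innocent students
correctly_flagged = actual_ai_users * true_positive_rate   # about 7 actual users

share_innocent = falsely_flagged / (falsely_flagged + correctly_flagged)
print(f"Share of flagged students who are innocent: {share_innocent:.0%}")  # roughly half
```

Under these assumptions, roughly half of the students the tool flags would be innocent, which is why relying on such a test to assign punitive incompletes is so fraught.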

And so here we are, in a world filled with technology that can deceive us, with no reliable way to distinguish the authentic from the inauthentic and no enforceable policies to discourage nefarious use of these technologies. While I believe that academics have many tools at their disposal to keep students from cheating, the larger societal problems of this technology are more concerning.

I am not the only person to notice the connection between Philip K. Dick and these philosophical problems — see here and here. But the seemingly unanswerable question is, “Did Jim write this article and then discover similar essays that provide further support for his concerns or are the explicit references to cover his tracks so no one notices that he plagiarized, possibly using ChatGPT?” How can you know?

Virtual Influencers: Harmless Advertising or Dystopian Deception?

photograph of mannequin in sunglasses and wig

As social media sites become more and more ubiquitous, the influence of internet marketers and celebrities has increased exponentially. “Influencer” has evolved into a serious job title. Seventeen-year-old Charli D’Amelio, for example, started posting short, simple dance videos on TikTok in 2019 and has since accrued over 133 million followers, ending 2021 with earnings of more than $17.5 million. With so much consumer attention to be won, an entire industry has emerged to support virtual influencers – brand ambassadors designed using AI and CGI technologies as a substitute for human influencers. Unlike other automated social media presences – such as “twitter bots” – virtual social media influencers have an animated, life-like appearance coupled with a robust, fabricated persona – taking brand humanization to another level.

Take Miquela (also known as Lil Miquela), who was created in 2016 by Los Angeles-based digital marketing company Brud. On her various social media platforms, Miquela claims to be a 19-year-old AI robot with a passion for social justice, fashion, music, and friendship. Currently, Miquela, who regularly features in luxury brand advertising and fashion magazines, has over 190,000 monthly listeners on Spotify and gives “live” interviews at major events like Coachella. It is estimated that in 2020, Lil Miquela (with 2.8 million followers across her social media accounts) made $8,500 per sponsored post and contributed $11.7 million to her company of origin.

The key advantages of virtual influencers like Miquela revolve around their adaptability, manipulability, economic efficiency, and persistence.

Virtual brand ambassadors are the perfect faces for advertising campaigns because their appearances and personalities can be sculpted to fit a company’s exact specifications.

Virtual influencers are also cheaper and more reliable than human labor in the long run. Non-human internet celebrities can “work” around the clock in multiple locations at once and cannot age or die unless instructed to by their programmers. In the case of Chinese virtual influencer Ling, her primary appeal to advertisers is her predictable and controllable nature, which provides a reassurance that human brand ambassadors cannot. Human influencers have the frustrating tendency to say or do things the public finds objectionable, tarnishing the reputation of the brands to which they are linked. Just as automation in factory labor reduces the risks that come with human workers, the use of digital social media personalities mitigates the possibility of human error.

One concern, of course, is the deliberate deception at work. When Miquela first emerged onto the social media scene, her human-ness was hotly debated in internet circles. Before her creators revealed her artificial nature to the public, many of her followers believed that she was a real, slightly over-edited teenage model.

The human-like appearance and mannerisms of Miquela and other virtual influencers offer a reason to worry about what the future of social media might look like, especially as these computer-generated accounts continue to grow in number.

It’s possible that in the future algorithms will create virtual influencers, produce social media accounts for them, and post without any human intervention. One can imagine a dystopian, Blade Runner-esque future in which it is practically impossible to distinguish between real people and replicants on the internet. Much like deepfakes, the rise of virtual influencers highlights our inability to distinguish reality from fabrications. Many warn of the serious ramifications coming if we can no longer trust any of the information we consume.

One day, the prevalence of fake, human-like social media presences may completely eradicate our sense of reality in the virtual realm. This possibility suggests that the use of virtual influencers undermines the very purpose of these social media platforms. Sites such as Facebook and Twitter were created with the intention of connecting people by facilitating the sharing of news, photos, art, memories – the human experience. Unfortunately, these platforms have been repurposed as powerful tools for advertising and monetization. Although it’s true that human brand ambassadors have contributed to the impersonal and curated aspects of social media, virtual influencers make the internet more asocial than ever before. Instead of being sold a product or a lifestyle by another human, we are being marketed to by artificially intelligent beings with no morals, human constraints, or ability to connect with others.

Moreover, the lifestyle that virtual influencers showcase raises additional concerns. Human social media influencers already perpetuate unrealistic notions of how we should live, work, and look. The posts of these creators are curated to convey a sense of perfection and success that appeal to the aspirations of their followers. Human influencers generally project an image of having an enviable lifestyle that’s ultimately fake. Virtual influencers are even more guilty of this given that nothing about the lives they promote is real.

As a result, human consumers of artificially-created social media content (especially younger audiences) are comparing themselves to completely unreal standards that no human can ever hope to achieve.

The normalization of virtual influencers only adds additional pressure to be young, beautiful, and wealthy, and may inhibit our ability to live life well.

Virtual influencer companies further blur the line between reality and fantasy by sexualizing their artificial employees. For example, Blawko (another virtual influencer created by Brud), who self-describes as a “young robot sex symbol,” has garnered attention in part for its tumultuous fake relationship with another virtual influencer named Bermuda. Another unsettling example of forced sexuality occurs in a Calvin Klein ad. In the video, Lil Miquela emerges from off screen to meet human supermodel Bella Hadid, the two models kiss, and the screen goes black. Is complete, uninhibited control over the sexual depiction of virtual influencers a power we want their creators to have? The hyper-sexualization of women in advertising is already a pervasive issue. Now, with virtual influencers, companies can compel the talent to do or say whatever they wish. Even though these influencers are not real people with real bodily autonomy, why does it feel wrong for their creators to insert them into sexual narratives for public consumption? While this practice may not entail any direct harm, in a broader societal context the commodification of virtual sexuality remains problematic.

Given the widespread use and appeal of virtual influencers, we should be more cognizant of the moral implications of this evolving technology. Virtual influencers and their developers threaten to undercut whatever value social media possesses, limit the transparency of social networking sites, cement unrealistic societal standards, and exploit digital sexuality for the sake of fame and continued economic success.

Resurrection Through Chatbot?

cartoon image of an occult seance

There is nothing that causes more grief than the death of a loved one; it can inflict an open wound that never fully heals, even if we can temporarily forget that it’s there. We are social beings and our identities aren’t contained within our own human-shaped space. Who we are is a matter of the roles we take on, the people we care for, and the relationships that allow us to practice and feel love. The people we love are part of who we are and when one of them dies, it can feel like part of us dies as well. For many of us, the idea that we will never interact with our loved one again is unbearable.

Some entrepreneurs see any desire as an opportunity, even the existential impulses and longings that come along with death. In response to the need to have loved ones back in our lives, tech companies have found a new use for their deepfake technology. Typically used to simulate the behavior of celebrities and politicians, the technology is now being applied by some startups to chatbots programmed to behave like dead loved ones. The companies that create these bots harvest data from the deceased person’s social media accounts. Artificial intelligence is then used to predict what the person in question would say in a wide range of circumstances. A bereaved friend or family member can then chat with the resulting intelligence and, if things go well, it will be indistinguishable from the person who passed away.
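These companies do not publish their pipelines, but the general approach is roughly this: fine-tune a language model on the person’s own messages so that it imitates their style. Here is a minimal sketch of that idea, assuming the Hugging Face transformers and datasets libraries; the file name, base model, and hyperparameters are placeholders rather than anything a real service has disclosed.

```python
# A minimal sketch of fine-tuning a small causal language model on a person's
# message history so it imitates their style. Not any company's actual pipeline;
# the corpus file, base model, and hyperparameters are placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical corpus: the person's posts and messages, one per line.
with open("loved_one_messages.txt") as f:
    texts = [line.strip() for line in f if line.strip()]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_dict({"text": texts}).map(
    tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="persona-bot", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()

# After training, the model completes prompts in the learned style.
prompt = tokenizer("How was your day?", return_tensors="pt")
reply = model.generate(**prompt, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(reply[0], skip_special_tokens=True))
```

Whether the output is truly “indistinguishable from the person” is exactly the ethical question; technically, the bot is only predicting which words are statistically likely to follow, given what the person wrote while alive.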

Some people are concerned that this is just another way for corporations to exploit grieving people. Producers of the chatbots aren’t interested in the well-being of their clients, they’re only concerned with making money. It may be the case that this is an inherently manipulative practice, and in the worst of ways. How could it possibly be acceptable to profit from people experiencing the lowest points in their lives?

That said, the death industry is thriving, even without the addition of chatbots. Companies sell survivors of the deceased burial plots, coffins, flowers, cosmetic services, and all sorts of other products. Customers can decide for themselves which goods and services they’d like to pay for. The same is true of a chatbot: no one is forced to strike up a conversation with a simulated loved one; they do so only if they have decided for themselves that it is a good idea.

In addition to the set of objections related to coercion, there are objections concerning the autonomy of the people being simulated. If it’s possible to harm the dead, then in some cases that may be what’s going on here. We don’t know what the chatbot is going to say, and it may be difficult for the person interacting with the bot to maintain the distinction between the bot and the real person they’ve lost. The bot may take on commitments or express values that the living person never had. The same principle is at play when it comes to using artificial intelligence to create versions of actors to play roles. The real person may never have consented to say or do the things that the manufactured version of them says or does. Presumably, the deceased person, while living, had a set of desires related to their legacy and the ways in which they wanted other people to think of them. We can’t control what’s in the heads of others, but perhaps our memories should not be tarnished nor our posthumous desires frustrated by people looking to resurrect our psychologies for some quick cash.

In response, some might argue that dead people can’t be harmed. As Epicurus said, “When we exist, death is not; and when death exists, we are not. All sensation and consciousness ends with death and therefore in death there is neither pleasure nor pain.” There may be some living people who are disturbed by what the bot is doing, but that harm doesn’t befall the dead person — the dead person no longer exists. It’s important to respect autonomy, but such respect is only possible for people who are capable of exercising it, and dead people can’t.

Another criticism of the use of chatbots is that they make it more difficult for people to arrive at some form of closure; instead, users prolong the experience of having the deceased with them indefinitely. Feeling grief in a healthy way involves the recognition that the loved one in question is really gone.

In response, some might argue that everyone feels grief differently and that there is no single healthy way to experience it. For some people, it might help to use a chatbot to say goodbye, to express love to a realistic copy of their loved one, or to unburden themselves by sharing some other sentiment that they always needed to let out but never got the chance.

Other worries about chatbot technology are not unique to bots that simulate the responses of people who have passed on. Instead, the concern is about the role that technology, and artificial intelligence in particular, should play in human lives. Some people will, no doubt, opt to continue to engage in a relationship with the chatbot. This motivates the question: can we flourish as human beings if we trade our interpersonal relationships with other sentient beings for relationships with realistic, but nevertheless non-sentient, artificial intelligence? Human beings help one another achieve the virtues that come along with friendship, the parent-child relationship, mentorship, and romantic love (to name just a few). It may be that developing interpersonal virtues requires responding to the autonomy and vulnerability of creatures with thoughts and feelings who can share in the familiar sentiments that make it beautiful to be alive.

Care ethicists offer the insight that when we enter into relationships, we take on role-based obligations that require care. Care can only take place when the parties to the relationship are capable of caring. In recent years we have experimented with robotic health care providers, robotic sex workers, and robotic priests. Critics of this kind of technological encroachment wonder whether such roles ought to be filled by uncaring robots. Living a human life requires give and take, expressing and responding to need, a dynamic that is not fully present when these roles are filled by robots.

Some may respond that we have yet to imagine the range of possibilities that relationships with artificial intelligence may provide. In an ideal world, everyone has loving, caring companions and people help one another live healthy, flourishing lives. In the world in which we live, however, some people are desperately lonely. Such people benefit from affectionate behavior, even if the affection is not coming from a sentient creature. For them, it would be better to have lengthy conversations with a realistic chatbot than to have no conversations at all.

What’s more, our response to affection between human beings and artificial intelligence may say more about our biases against the unfamiliar than about the permissibility of these kinds of interactions. Our experiences with the world up to this point have motivated reflection on the kinds of experiences that are virtuous, valuable, and meaningful. Doing so has required rejecting certain myopic ways of viewing the boundaries of meaningful experience. We may be at the start of a riveting new chapter on the forms of possible engagement between carbon and silicon. For all we know, these interactions may be great additions to the narrative.

Will the Real Anthony Bourdain Please Stand Up?

headshot of Anthony Bourdain

Released earlier this month, Roadrunner: A Film About Anthony Bourdain (hereafter Roadrunner) documents the life of the globetrotting gastronome and author. Rocketing to fame in the 2000s thanks to his memoir Kitchen Confidential: Adventures in the Culinary Underbelly and subsequent appearances on series such as Top Chef and No Reservations, Bourdain was (in)famous for his raw, personable, and darkly funny outlook. Through his remarkable show Anthony Bourdain: Parts Unknown, the chef did more than introduce viewers to fascinating, delicious, and occasionally stomach-churning meals from around the globe. He used his gastronomic knowledge to connect with others, reminding viewers of our common humanity through genuine engagement, curiosity, and passion for the people he met and the cultures in which he fully immersed himself. Bourdain tragically died in 2018 while filming Parts Unknown’s twelfth season. Nevertheless, he still garners admiration for his brutal honesty, inquisitiveness regarding the culinary arts, and eagerness to know people, cultures, and himself better.

To craft Roadrunner’s narrative, director Morgan Neville draws from thousands of hours of video and audio footage of Bourdain. As a result, Bourdain’s distinctive accent and stylistic lashings of profanity can be heard throughout the movie as both dialogue and voice-over. It is the latter, and specifically three voice-over lines amounting to roughly 45 seconds, that is of particular interest, because the audio for these three lines is not drawn from pre-existing footage. An AI-generated version of Bourdain’s voice speaks them. In other words, Bourdain never uttered these lines. Instead, he is being mimicked via artificial means.

It’s unclear which three lines these are, although Neville has confirmed that one of them, regarding Bourdain’s reflections on success, appears in the film’s trailer. What is clear is that Neville’s use of deepfakes to give Bourdain’s written words life should give us pause for multiple reasons, three of which we’ll touch on here.

Firstly, one cannot escape the feeling of unease regarding the replication and animation of the likeness of individuals who have died, especially when that likeness is so realistic as to be passable. Whether that is using Audrey Hepburn’s image to sell chocolate, generating a hologram of Tupac Shakur to perform onstage, or indeed, having a Bourdain sound-alike read his emails, the idea that we have less control over our likeness, our speech, and actions in death than we did in life feels ghoulish. It’s common to think that the dead should be left in peace, and it could be argued that this use of technology to replicate the deceased’s voice, face, body, or all of the above somehow disturbs that peace in an unseemly and unethical manner.

However, while such a stance may seem intuitive, we don’t often think in these sorts of terms for other artefacts. We typically have no qualms about giving voice to texts written by people who died hundreds or even thousands of years ago. After all, the vast majority of biographies and biographical movies feature dead people, and there is very little concern about representing those persons on screen or on the page simply because they are dead. We may have concerns about how they are being represented or whether that representation is faithful (more on these in a bit). But the mere fact that they are no longer with us is typically not a barrier to their likeness being imitated by others.

Thus, while we may feel uneasy about Bourdain’s voice being a synthetic replication, it is not clear why we should have such a feeling merely because he’s deceased. Does his passing really alter the ethics of AI-facilitated vocal recreation, or are we simply injecting our squeamishness about death into a discussion where it doesn’t belong?

Secondly, even if we find no issue with the representation of the dead through AI-assisted means, we may have concerns about the honesty of such work. Or, to put it another way, the potential for deepfake-facilitated deception.

The problem of computer-generated images and their impact on social and political systems is well known. However, the use of deepfake techniques in Roadrunner represents something much more personal. The film does not attempt to destabilize governments or promote conspiracy theories. Rather, it tries to tell the story of a unique individual in his own voice. But how this is achieved feels underhanded.

Neville doesn’t make clear in the film which parts of the audio are genuine and which are deepfaked. As a result, our faith in the trustworthiness of the entire project is potentially undermined – if the audio’s authenticity is uncertain, can we safely assume the rest of the film is trustworthy?

Indeed, the fact that the use of this technique was concealed, or at least obfuscated, until Neville was challenged about it during an interview only reinforces such skepticism. That’s not to say that the rest of the film must be called into doubt. However, the nature of the product, especially as it is a documentary, requires a contract between viewer and filmmaker built upon honesty. We expect, rightly or wrongly, documentaries to be faithful representations of the things they document, and there’s a question of whether an AI-generated version of Bourdain’s voice is faithful or not.

Thirdly, even if we accept that the recreation of the voices of the dead is acceptable, and even if we accept that a lack of clarity about when vocal recreations are being used isn’t an issue, we may still want to ask whether what’s being conveyed is an accurate representation of Bourdain’s views and personality. In essence, would Bourdain have said these things in this way?

You may think this isn’t a particular issue for Roadrunner as the AI-generated voice-over isn’t speaking sentences written by Neville. It speaks text which Bourdain himself wrote. For example, the line regarding success featured in the film’s trailer was taken from emails written by Bourdain. Thus, you may think that this isn’t too much of an issue because Neville simply gives a voice to Bourdain’s unspoken words.

However, such a stance overlooks how much information – how much meaning – derives not from the specific words we use but from how we say them. We may have the words Bourdain wrote on the page, but we have no idea how he would have delivered them. The AI voice in Roadrunner may be passable, and the technology will likely continue to develop to the point where distinguishing between ‘real’ voices and synthetic ones becomes all but impossible. But even such a faithful re-creation would do little to tell us how Bourdain would have delivered these lines.

Bourdain may have asked his friend the question about happiness in a tone that was playful, angry, melancholic, disgusted, or any of a myriad of other possibilities. We simply have no way of knowing, nor does Neville. By using the AI deepfake to voice Bourdain, Neville imbues the chef’s words with meaning – a meaning derived from Neville’s interpretation and the black box of AI-algorithmic functioning.

Roadrunner is a poignant example of an increasingly ubiquitous problem – how can we trust the world around us given technology’s increasingly convincing fabrications? If we cannot be sure that the words within a documentary, words that sound like they’re being said by one of the most famous chefs of the past twenty years, are genuine, then what else are we justified in doubting? If we can’t trust our own eyes and ears, what can we trust?

Ethical Considerations of Deepfakes

computer image of two identical face scans

In a recent interview for MIT Technology Review, art activist Barnaby Francis, creator of deepfake Instagram account @bill_posters_uk, mused that deepfake is “the perfect art form for these kinds of absurdist, almost surrealist times that we’re experiencing.” Francis’ use of deepfakes to mimic celebrities and political leaders on Instagram is aimed at raising awareness about the danger of deepfakes and the fact that “there’s a lot of people getting onto the bandwagon who are not really ethically or morally bothered about who their clients are, where this may appear, and in what form.” While deepfake technology has received alarmist media attention in the past few years, Francis is correct in his assertion that there are many researchers, businesses, and academics who are pining for the development of more realistic deepfakes.

Is deepfake technology ethical? If not, what makes it wrong? And who holds the responsibility to prevent the potential harms generated by deepfakes: developers or regulators?

Deepfakes are not new. The first mention of deepfakes came from a Reddit user in 2017, who began using the technology to create pornographic videos. The technology soon expanded to video games as a way to create images of people within a virtual universe. Before long, however, the deepfake trend turned toward more global agendas, with fake images and videos of public figures and political leaders being distributed en masse. One altered video of Joe Biden was so convincing that even President Trump fell for it. Last year, there was a deepfake video of Mark Zuckerberg talking about how happy he was to have thousands of people’s data. At the time, Facebook maintained that deepfake videos would stay up, as they did not violate its terms of agreement. Deepfakes have only increased since then. In fact, there exists an entire YouTube playlist of deepfake videos dedicated to President Trump.

In 2020, those contributing to deepfake technology are not only individuals in the far corners of the internet. Researchers at the University of Washington have also developed deepfakes using algorithms in order to combat their spread. Deepfake technology has been used to bring art to life, recreate the voices of historical figures, and lend celebrities’ likenesses to powerful public health messages. While the dangers of deepfakes have been described by some as dystopian, the methods behind their creation have been relatively transparent and accessible.

One problem with deepfakes is that they mimic a person’s likeness without their permission. The original deepfakes, which mixed photos or videos of a person with pornography, used a person’s likeness for sexual gratification. Such use of a person’s likeness might never personally affect them, but it could still be considered wrong, since they are being used as a source of pleasure and entertainment without consent. These examples might seem far-fetched, but in 2019 a now-defunct app called DeepNude sought to do exactly that. Even worse than using someone’s likeness without their knowledge is using it in a way intended to reach them and others in order to humiliate them or damage their reputation. One could see the possibility of a type of deepfake revenge porn, in which scorned partners attempt to humiliate their exes by creating deepfake pornography. This issue is incredibly pressing and might be more prevalent than the other potential harms of deepfakes. One study, for example, found that 96% of existing deepfakes take the form of pornography.

Despite this current reality, much of the moral concern over deepfakes is grounded in their potential to easily spread misinformation. Criticism of deepfakes in recent years has mainly concerned their potential for manipulating the public to achieve political ends. It is becoming increasingly easy to spread a fake video depicting a politician as clearly incompetent or spreading a questionable message, which might erode their base of support. On a more local level, deepfakes could be used to discredit individuals. One could imagine a world in which deepfakes are used to frame someone in order to damage their reputation, or even to suggest they have committed a crime. Video and photo evidence is commonly used in our civil and criminal justice systems, and the ability to manipulate videos or images of a person, undetected, arguably poses a grave danger to a justice system that relies on our sense of sight and observation to establish objective fact. Perhaps even worse than framing the innocent would be failing to convict the guilty. In fact, a recent study in the journal Crime Science found that deepfakes pose a serious crime threat when it comes to audio and video impersonation and blackmail. What if a deepfake is used to replace a bad actor with a person who does not exist? Or gives plausible deniability to someone who claims that a video or image of them has been altered?

Deepfakes are also inherently dishonest. Two of the most popular social media networks, Instagram and TikTok, rely on visual media that could be subject to alteration by self-imposed deepfakes. Even if a person’s likeness is being manipulated with their consent, and even if the manipulation could have positive consequences, it still might be considered wrong due to the dishonest nature of its content. Instagram in particular has been increasingly flooded with photoshopped images, and an entire app market exists solely for editing photos of oneself, usually to appear more attractive. The morality of editing one’s photos has been hotly contested among users and feminists. Deepfakes only stand to increase the amount of self-edited media and the moral debates that come along with putting altered media of oneself on the internet.

Proponents of deepfakes argue that their positive potential far outweighs the negative. Deepfake technology has been used to spark engagement with the arts and culture, and even to bring historical figures back to life, both for educational and entertainment purposes. Deepfakes also hold the potential to integrate AI into our lives in a more humanizing and personal manner. Others, while aware of the possible negative consequences of deepfakes, argue that the development and research of this technology should not be impeded, as advancing the technology also contributes to methods for spotting it. And there is some evidence backing up this argument: as deepfake development progresses, so do the methods for detecting it. On this view, it is not the moral responsibility of those researching deepfake technology to stop, but rather the role of policymakers to ensure that the types of harmful consequences mentioned above do not wreak havoc on the public. At the same time, proponents such as David Greene of the Electronic Frontier Foundation argue that overly stringent limits on deepfake research and technology would “implicate the First Amendment.”

Perhaps, then, it is neither the government nor deepfake creators who are responsible for their harmful consequences, but rather the platforms that make those consequences possible. Proponents might argue that the power of deepfakes comes not from their ability to deceive one individual, but from the media platforms on which they are allowed to spread. In an interview with Digital Trends, the creator of Ctrl Shift Face (a popular deepfake YouTube channel) contended that “If there ever will be a harmful deepfake, Facebook is the place where it will spread.” While this shift in responsibility might be appealing, detractors might ask how practical it truly is. Even websites that have tried to regulate deepfakes are having trouble doing so. The popular pornography website Pornhub has banned deepfake videos but still cannot fully regulate them. In 2019, a deepfake video of Ariana Grande was watched 9 million times before it was taken down.

In December, the first federal regulation pertaining to deepfakes passed the House and the Senate and was signed into law by President Trump. While increased government intervention to prevent the negative consequences of deepfakes will be celebrated by some, researchers and creators will undoubtedly push back on these efforts. Deepfakes are certainly not going anywhere for now, but it remains to be seen whether the potentially responsible actors will work to ensure their consequences remain net-positive.

California’s “Deepfake” Ban

computer image of a 3D face scan

In 2018, actor and filmmaker Jordan Peele partnered with Buzzfeed to create a warning video. The video appears to feature President Barack Obama advising viewers not to trust everything that they see on the Internet. After the President says some things that are out of character for him, Peele reveals that the speaker is not actually President Obama, but is, instead, Peele himself. The video was a “deepfake.” Peele’s face had been altered using digital technology to look and move just like the face of the president.

Deepfake technology is often used for innocuous and even humorous purposes. One popular example is a video that features Jennifer Lawrence discussing her favorite desperate housewife during a press conference at the Golden Globes. The face of actor Steve Buscemi is projected, seamlessly, onto Lawrence’s face. In a more troubling case, Rudy Giuliani tweeted an altered video of Nancy Pelosi in which she appears to be impaired, stuttering and slurring her speech. The appearance of this kind of altered video highlights the dangers that deepfakes can pose to both individual reputations and to our democracy more generally.

In response to this concern, California passed legislation this month that makes it a crime to distribute audio or video that presents a false impression about a candidate standing for an election occurring within sixty days. There are exceptions to the legislation. News media are exempt (clearing the way for them to report on this phenomenon), and it does not apply to deepfakes made for the purposes of satire or parody. The law sunsets in 2023.

This legislation caused controversy. Supporters of the law argue that the harmful effects of deepfake technology can destroy lives. Contemporary “cancel culture,” under which masses of people determine that a public figure is not deserving of time and attention and is even deserving of disdain and social stigma, could potentially amplify the harms. The mere perception of a misstep is often enough to permanently damage a person’s career and reputation. Videos featuring deepfakes have the potential to spread quickly, while the true nature of the video may spread much more slowly, if at all. By the time the truth comes out, it may be too late. People make up their minds quickly and are often reluctant to change their perspectives, even in the face of compelling evidence. Humans are prone to confirmation bias—the tendency to consider only the evidence that supports what the believer was already inclined to believe anyway. Deepfakes deliver fodder for confirmation bias, wrapped in very attractive packaging, to viewers. When deepfakes meet cancel culture in a climate of poor information literacy, the result is a social and political powder keg.

Supporters of the law argue further that deepfake technology threatens to seriously damage our democratic institutions. Citizens regularly rely on videos they see on the Internet to inform them about the temperament, behavioral profile, and political beliefs of candidates. It is likely that deepfakes would present a significant obstacle to becoming a well-informed voter. They would inevitably contribute to the sense that some voters currently have that we exist in a post-truth world—if you find a video in which Elizabeth Warren says one thing, just wait long enough and you’ll see a video of her saying the exact opposite. Who’s to say which is the deepfake? The results of such a worldview would be devastating.

Opponents of the law are concerned that it violates the first amendment. They argue that the legislation invites the government to consider the content of the messages being expressed and to allow or disallow those messages based on that content. This is a dangerous precedent to set—it is exactly the type of thing that the first amendment is supposed to prevent.

What’s more, the legislation has the potential to stifle artistic expression. The law contains exemptions for the use of deepfakes that are made for the purposes of parody and satire. There are countless other kinds of statements that people might use deepfakes to make. In fact, in his warning video, artist Jordan Peele used a deepfake to great effect, arguably making his point far more powerfully than he could have using a different method. Peele’s deepfake might have resulted in more cautious and conscientious viewers. Opponents of the legislation argue that this is precisely why the first amendment is so important. It protects the kind of speech and artistic expression that gets people thinking about how their behavior ought to change in light of what they viewed.

In response, supporters of the legislation might argue that when the first amendment was originally drafted, we didn’t have the technology that we have today. It may well be the case that if the constitution were written today, it would be a very different document. Free speech is important, but technology can cause harm now in an utterly unprecedented way. Perhaps we need to balance the value of free speech against the potential harms differently now that those harms have such an extended scope.

A lingering, related question has to do with the role that social media companies play in all of this. False information spreads like wildfire on sites like Facebook and Twitter. Many people use these platforms as their source for news. The policies of these exceptionally powerful platforms are more important for the proper functioning of our democracy than anyone ever could have imagined. Facebook has taken some steps to prevent the spread of fake news, but many are concerned that it has not gone far enough.

In a tremendously short period of time, technology has transformed our perception of what’s possible. In light of this, we have an obligation to future generations to help them learn to navigate the very challenging information literacy circumstances that we’ve created for them. With good reason, people believe that they can trust their senses. Our academic curriculum must change to make future generations more discerning.