
Questions About Authenticity from Androids in the Age of ChatGPT

photograph of Blade Runner movie scene

One of the great philosophical fiction writers of the 20th century, Philip K. Dick, was deeply concerned with the problem of discerning the authentic from the inauthentic. Many of his books and stories deal with this theme. One novel in particular, Do Androids Dream of Electric Sheep?, includes a version of the Turing Test, which Dick called the Voight-Kampff test, to distinguish humans from androids. Most people are probably more familiar with the test’s use in the novel’s cinematic adaptation, Blade Runner. In the film, the use of the test is never meaningfully questioned. In the novel, however, it is suggested throughout (and especially in chapters 4 and 5) that there are significant moral, epistemic, and ontological issues with the test. The Voight-Kampff test is meant to distinguish humans from androids by measuring involuntary physical responses connected to empathic reactions to hypothetical scenarios. The test works on the supposition that humans would have empathic reactions to the misuse and destruction of other conscious entities while androids would not. It is suggested to the reader, as well as to the characters, that the test will generate false positives by occasionally identifying humans with psychological disorders as androids. The main character, Rick Deckard, knows about the possibility of false positives when he tests another character, Rachael Rosen, and determines she is an android.

The possibility that the test is faulty allows Deckard to be manipulated. The CEO of the company that makes the androids, Eldon Rosen, claims Rachael is a human. His explanation is that for most of her life Rachael had little human contact and thus did not develop empathy sufficient to pass the test. If the test is returning false positives, then enforcement agencies using the test “may have retired [killed], very probably have retired, authentic humans with underdeveloped empathic ability.” Further muddying the philosophical waters, it is unclear whether advances in android technology have caused the test to return false negatives, thereby allowing dangerous androids to move freely through society. As Rachael points out, “If you have no test you can administer, then there is no way you can identify an android. And if there’s no way you can identify an android there’s no way you can collect your bounty.” In other words, without a reliable test to distinguish the authentic from the inauthentic, the real from the fake, Rick Deckard and the organizations he represents are paralyzed.

But Rachael is an android. Her backstory is a lie. Nevertheless, Deckard is constantly questioning what he knows, what is real, and whether his behavior as a bounty hunter is morally licit.

Dick worried about the inability to distinguish the real from the fake in the 1960s and 1970s. In 2023, with the emergence of technologies such as ChatGPT, voice cloning, and deep-fake imagery, and with public figures willing to tell outright lies with seeming impunity, we already live in Dick’s dystopia.

Consider just a few recent examples.

The Writers Guild of America (WGA) has worries about AI-generated content as part of the ongoing strike. One proposal made by the WGA to the Alliance of Motion Picture and Television Producers (AMPTP) states the AMPTP should agree to “Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.”

The Washington Post reports that an instructor at Texas A&M, Jared Mumm, sent an email to his students stating that due to concerns about the use of ChatGPT in his course, all students were going to receive an incomplete for the course until he could determine who did and who did not use the software.

The U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law recently heard testimony from Sam Altman, CEO of OpenAI – the creators of ChatGPT – about the dangers of such technology and potential solutions.

Each of these ongoing events centers on a concern about the authenticity of ideas: whether the consumer of those ideas can reliably believe that the claimed or implied origin of an idea is veridical. To make this point, Senator Richard Blumenthal (D-CT) opened the subcommittee hearing with an audio recording of an AI-generated clone of his voice reading the following ChatGPT-generated script:

Too often, we have seen what happens when technology outpaces regulation: the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.

As the real Senator Blumenthal points out, the AI’s “apparent reasoning is pretty impressive,” suggesting that the senator endorses ideas that are, nevertheless, not his own. The fear is that someone hearing this, or any similar creation, would be unable to know whether the ideas are authentic and authentically stated by the claimed author. For another example, The Daily Show recently created a fake re-election ad for President Biden using similar technology. The fake ad includes profanity and morally questionable attacks on the character of many public figures — see here. Comments about the ad discuss how effective it might be for the President’s future campaign, with one commenter, Lezopi5914, stating that the fake ad is “Full of truth. Best and most honest ad ever.”

If we can’t distinguish the real from the simulated, then we are cut off from the objective world. If we are cut off from the objective world, then we are cut off from our ability to ensure that sentences correspond to reality, and thus from our ability to assign the value “true” to our sentences (at least on a correspondence theory of truth). Without being able to assign the value “true” to sentences such as our beliefs, our decisional capacity is threatened. On one reading of Kantian forms of autonomy, an agent must have full, accurate information to make rational, moral choices. The possibility of an ad with an AI-generated script, a cloned voice, and a deep-faked face undermines a voter’s ability to make rational, moral choices about who deserves their vote in the next election, as well as more mundane decisions.

To make matters even worse, if I know fake information is out there, but I can’t distinguish the authentic from the inauthentic, then how do I form any beliefs whatsoever, let alone act on them?

One does not need to adopt William Clifford’s strong principle that “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence” to become paralyzed. In the current situation, I don’t merely lack sufficient evidence for belief adoption, I have no grounds for a belief.

We can see how such thinking may have influenced Jared Mumm to give every student in his Texas A&M course an incomplete. As he put it on one student’s paper, “I have to gauge what you are learning not a computer,” but at the moment he lacks a reliable test or procedure to identify whether a text was authentically written by a student or generated by an AI chatbot. Result: everyone gets an incomplete until we can sort this out. As educators, do we have to suspend all grading until a solution is found? If no one can trust the information they obtain about politicians, policy, legislation, the courts, or what happened on January 6, 2021, do we even bother participating in self-governance through voting and other forms of political participation?

Rick Deckard suggests that those who create such technology have a moral responsibility to provide solutions to the problems they are creating. It is hinted that the only reason the android technology is not banned outright in the fictional world is because there exist tests to distinguish the authentic from the inauthentic. He tells Rachael and Eldon Rosen that “[i]f you have no confidence in the Voight-Kampff scale … possibly your organization should have researched an alternate test. It can be argued the responsibility rests partly on you.” In other words, in order to avoid outright bans, detection technology needs to be available and trustworthy.

Help (?) is on the way. OpenAI is developing tools that can identify text as AI-generated. Other companies are creating software to identify voice cloning and deep fakes. Is this all we need? A test, a procedure, to determine what is real and what is fake, and then policies that use these tests to determine who is to suffer punitive measures for using these tools for deceptive purposes? OpenAI admits that its detection tool generates false positives nine percent of the time, and there is no word on how often it produces false negatives. Other detection software is also not one hundred percent accurate. Thus, we seem to lack reliable tests. What about enforceable policies? Some news outlets report that Mr. Altman asked Congress to regulate AI, thereby saying it is a governmental responsibility, not OpenAI’s, to develop and enforce these rules. Other outlets suggest that Mr. Altman is trying to shape legislation to the advantage of OpenAI, not to solve the problems of authenticity the technology creates. Who do you believe? Who can you believe?

And so here we are, in a world filled with technology that can deceive us, with no reliable way to distinguish the authentic from the inauthentic and no enforceable policies to discourage nefarious use of these technologies. While I believe that academics have many tools at their disposal to keep students from cheating, the larger societal problems posed by this technology are more concerning.

I am not the only person to notice the connection between Philip K. Dick and these philosophical problems — see here and here. But the seemingly unanswerable question is, “Did Jim write this article and then discover similar essays that provide further support for his concerns or are the explicit references to cover his tracks so no one notices that he plagiarized, possibly using ChatGPT?” How can you know?

The Curious Case of Evie Toombes: Alternative Realities and Non-Identity

photograph of elongated shadow of person on paved road

Evie Toombes just won a lawsuit against her mother’s doctor. She was born with spina bifida, a birth defect affecting her spine, which requires continual medical care. Taking folic acid before and during pregnancy can help reduce the risk of spina bifida, but Toombes says that the doctor told her mother that folic acid supplements weren’t necessary. The judge ruled that, had the doctor advised Toombes’ mother “about the relationship between folic acid supplementation and the prevention of spina bifida/neural tube defects,” she would have “delayed attempts to conceive” until she was sure her folic acid levels were adequate, and that “in the circumstances, there would have been a later conception, which would have resulted in a normal healthy child.” The judge therefore ruled that the doctor was liable for damages because of Toombes’ condition.

Let’s assume that Toombes is right about the facts. If so, the case may seem straightforward. But it actually raises an incredibly difficult philosophical conundrum noted by the philosopher Derek Parfit. Initially, it seems Toombes was harmed by the doctor’s failure to advise her mother about folic acid. But the suggestion is that, if he’d done so, her mother would have “delayed attempts to conceive,” resulting in the “later conception” of a “normal healthy child.” And, presumably, that child would not have been Evie Toombes. Had her mother waited, a different sperm would have fertilized a different egg, producing a different child. So had the doctor advised her mother to take folic acid and delay pregnancy, it’s not as though Toombes would have been born, just without spina bifida. A different child without spina bifida would have been born, and Toombes would not have existed at all.

It may be that some lives are so bad that non-existence would be better. And if your life is worse than non-existence, then it’s easy to see why you’d have a complaint against someone who’s responsible for your life. But Toombes’ life doesn’t seem to be like this: she is a successful equestrian. And anyway, she didn’t make that claim as part of her argument, and the court didn’t rely on it. However, if Toombes’ life is worth living, and if the doctor’s actions are responsible for her existing at all, it might seem puzzling how the doctor’s actions could have wronged her.

The non-identity problem arises in cases like this, where we can affect how well-off future people are, but only by also changing which future people come to exist. It’s a problem because causing future people to be less well-off seems wrong, but it’s also hard to see who is wronged in these cases, provided the people who come to exist have lives worth living. For example, it seems that the doctor should have told Toombes’ mother about folic acid, but, assuming her life is worth living, it’s also hard to see how Toombes is wronged by his not doing so, since that failure is why she exists.

The non-identity problem also has implications for many other real-world questions. For instance, if we enact sustainable environmental policies, perhaps future generations will be better-off. But these generations will also consist of different people: the butterfly effect of different policies means that different people will get married, will conceive at different times, etc. Provided the (different) people in the resource-depleted future have lives worth living, it may be hard to see why living unsustainably would be wrong.

(It might be plausible that the doctor wronged Toombes’ mother, whose existence doesn’t depend on his actions. But wrongs against currently-existing people may not be able to explain the wrong of the unsustainable environmental policy, provided the bad effects won’t show up for a long time. Some unsustainable policies might only help current people, by allowing them to live more comfortably. And anyway, the court thought Toombes was also wronged: she’s getting the damages.)

Because it is relevant to important questions like this, it would be very handy to know what the solution to the non-identity problem is. Unfortunately, all solutions have drawbacks.

An obvious possibility is to say that we should make the world as good as possible. Since well-being is good, then, all else equal, we would be obligated to make sure that better-off people exist in the future rather than worse-off ones. But the decision of the court was that the doctor wronged Toombes herself, not just that he failed to make the world as good as possible: if that were the problem, he should have been ordered to pay money to some charity that makes the world as good as possible, rather than paying money to Toombes. And anyway, it isn’t obvious that we’re obligated to make sure future generations contain as much well-being as possible. One way to do that is by having happy children. But most people don’t think we’re obligated to have children, even if, in some case, that would add the most happiness to the world on balance.

Another possibility is to say that we can wrong people without harming them. Perhaps telling comforting lies is like this: here, lying prevents a harm, but can still be wrong if the person has a right to know the painful truth. Perhaps individuals have a right against being caused to exist under certain sorts of difficult conditions. But notice that we can usually waive rights like this. If I have a right to the painful truth, I can waive this right and ask you not to tell me. People who haven’t been born yet can’t waive rights (or do anything else). But when people are not in a position to waive a right, we can permissibly act based on whether we think they would or should waive the right, or something like that. You have a right to refuse having your legs amputated. But if paramedics find you unconscious and must amputate your legs to save your life, they’ll probably do it, since they figure you would consent if you could. Why not think that, similarly, future people whose lives are worth living generally would or should consent to the only course of action that can bring them into being, even if their lives are difficult in some ways?

A third solution says that Toombes’ doctor didn’t act wrongly after all, and neither would we act wrongly by being environmentally unsustainable, etc. But that’s very hard to believe. It’s even harder to believe in other cases. Here’s a case inspired by the philosopher Gregory Kavka. Suppose my spouse and I sign a contract to sell our (not yet conceived) first child into slavery. Because of the deal, we conceive a child under slightly different circumstances than we otherwise would have, resulting in a different child. (Maybe the slaver gives us a special hotel room.) There’s no way to break the contract and keep our child from slavery. Suppose the child’s life is, though difficult, (barely) worth living. This solution appears to suggest that signing the slave contract is permissible: after all, the child has a life worth living and wouldn’t exist otherwise. But that doesn’t seem right!

I wrote more about this in chapter eight of this book. There are other possible moves, but they have problems, too. So the non-identity problem is a real head-scratcher. Maybe someone reading this can make some progress on it.

The Ethics of Telling All: What’s at Stake in Memoir Writing?

Photograph of author Karl Ove Knausgard standing, holding a microphone, and reading from a book where the title "My Struggle" is visible

When Norwegian author Karl Ove Knausgaard published the first volume of his My Struggle series in 2009, it was a startling commercial success, but also a personal disaster. Knausgaard’s infamous six-part series of autobiographical novels (titled Min Kamp in Norwegian) recounts the “banalities and humiliations” of his private life. While My Struggle is classified as a “novel”, it is described by Pacific Standard as a “barely-veiled but finely-rendered memoir”. After his first two novels, Out of This World (1998) and A Time for Everything (2004), received critical acclaim in Norway, Knausgaard found that he was “sick of fiction” and set out to write exhaustively about his own life. Consequently, My Struggle reveals his father’s fatal spiral into alcoholism, the failures of his first marriage, the boredom of fatherhood, the manic depression of his second wife, and much more. “Autofiction” has become an increasingly mainstream mode of contemporary writing, but how authors should navigate the ethical dilemma of exposing the private lives of their friends and family remains unclear.

The first book of the My Struggle series, titled A Death in the Family, meticulously chronicles the slow, pitiful demise of Knausgaard’s alcoholic father. When Knausgaard first shared the manuscripts of his work with relatives, his father’s side of the family called it “verbal rape” and attempted a lawsuit to stop publication. Under the weight of bitter family opposition and legal action, Knausgaard was forced to change the names in My Struggle and refers to the villainous alcoholic of the novel only as “father”. For Knausgaard, the suppression of true names weakened the goal of his novel: “to depict reality as it was.”

The issue with ‘reality’, however, is that everyone seems to have their own version. Among the legal actions against My Struggle were defamation claims disputing the circumstances surrounding the death of Knausgaard’s father. In another dispute over reality, Knausgaard’s first ex-wife recorded a radio documentary, titled Tonje’s Version, in which she details the trauma of having her personal life publicly exposed. What’s striking about the documentary is Tonje’s point that her own memories came second to Knausgaard’s art. For Knausgaard, depicting reality meant his own reality. But if memory is colored by our own perspective, how much claim can he have on what’s ‘true’ and what’s not? Hari Kunzru writes in an article for The Guardian, “But he [Knausgaard] is, inevitably, an unreliable narrator. How could he not be? We live a life of many dinners, many haircuts, many nappy changes. You can’t narrate them all. You pick and choose. You (in the unlovely vernacular of our time) curate.”

Even when people accept the ‘truth’ presented by a memoir, it can damage and destroy personal relationships. Knausgaard was married to his second wife, Linda, while writing My Struggle. After Linda read Knausgaard’s frank account of their marriage in his manuscript, she called him and said their relationship could never be romantic again. The media storm generated by the first few books of the series led to Linda having a nervous breakdown and divorcing Knausgaard. In an interview, Knausgaard admits to striking a Faustian deal with the publication of My Struggle, saying, “I have actually sold my soul to the devil. That’s the way it feels. Because . . . I get such a huge reward,” while “the people I wrote about get the hurt.” My Struggle is now an international bestseller, revered as one of the greatest literary accomplishments of the 21st century, yet on its final page Knausgaard admits, “I will never forgive myself”. Critical acclaim and popular fame could not justify the damage done to Knausgaard and his family, but can anything positive emerge from the pain of writing such an unforgiving memoir?

Ashley Barnell, a contributor to The Conversation, writes in an essay, “By representing the conflicts and silences that families live with writers can introduce more diverse and honest accounts of family life into public culture.” From Instagram photos to popular humor, people work hard to hide what hurts and feign happiness. As a collective unit, families are no exception. Norway found My Struggle particularly scandalous because of its violation of family privacy, which an article by The Guardian says was “profoundly shocking to the Lutheran sensibilities of a country that is less comfortable with public confessions than the Oprah-soaked anglophone world”. Knausgaard’s reckless exposition does not simply leave behind the outward-facing mask individuals and families show the rest of the world; it shatters it altogether and instead exposes, deliberately albeit painfully, the reality of one’s life.

Thematically speaking, shame is a core aspect of My Struggle. “Concealing what is shameful to you,” Knausgaard reflects, “will never lead to anything of value.” In a piece of literary criticism, Odile Heynders writes that shame in My Struggle “. . . is connected to questions of humanness, humanity and humility. The capacity for shame makes the protagonist fragile, as it constitutes an acute state of sensitivity”. Advocates of literary fiction often cite its ability to increase one’s capacity for empathy. The shame and sensitivity of My Struggle, mixed with a self-deprecating humor, similarly accomplishes this feat by bringing readers to consider their own openness about pain they have both felt and dealt. Barnell’s essay also points out that “The memoirist’s candid account of family struggles can destigmatize taboo topics – such as divorce, sexuality, and suicide.” In My Struggle, tough subjects like alcoholism, manic depression, existential dread, and broken relationships are not constructed neatly within the pages of a fictional novel, but laid bare in their honest existence.

My Struggle, which has sold over half a million copies in Norway alone, may be helpful in encouraging more candid discussions of emotional pain. Yet those whose private lives are thrust into the spotlight through nonfiction writing can find those lives deeply disrupted. I think Knausgaard would argue that, to move past pain, it must be addressed in its most raw, authentic form. However, not everyone may be looking for such a public reconciliation. Authors working in the powerful mode of the tell-all memoir should consider first the wellbeing of those immediately affected by publication and then the work’s potential benefit to the rest of the world.

The Real and the Rented

Image of a Tokyo cityscape

The New Yorker recently published a fascinating article about the “rent a family” phenomenon in Japan. Elif Batuman reports that businesses with names like “Family Romance” sell the services of actors who play various roles. One man missed his dead wife and estranged daughter, so he hired actors to come to his house and take their place at the dinner table. The relationship continued for some time until he was ready to get back in touch with his real daughter.


On Lying When There is No Truth

A photo of a Pinocchio doll.

One of St. Augustine’s enduring gifts to ethics has been Just War Theory. “Thou shalt not kill” comes with an asterisk and a long explanatory footnote. Augustine did not leave us a Just Lie Theory. “Thou shalt not bear false witness” is almost absolute.

Augustine wrote about lying because, of course, everyone does it. And not just about little things. Even Augustine’s co-religionists were saying anything they could to win converts to their side. This was bad. Lying about faith and salvation degraded and debased Truth, the foundation of Augustine’s spiritual values. Augustine worried that a person converted by a lie had never accepted the Truth, and so might not really be saved.
