
Questions About Authenticity from Androids in the Age of ChatGPT

By James M. Okapal
2 Jun 2023

One of the great philosophical fiction writers of the 20th century, Philip K. Dick, was deeply concerned about the problem of discerning the authentic from the inauthentic. Many of his books and stories deal with this theme. One novel in particular, Do Androids Dream of Electric Sheep?, includes a version of the Turing Test, which Dick called the Voight-Kampff test, used to distinguish humans from androids. Most people are probably more familiar with the test from the novel’s cinematic adaptation, Blade Runner. In the film, the use of the test is never meaningfully questioned. In the novel, however, it is suggested throughout (and especially in chapters 4 and 5) that there are significant moral, epistemic, and ontological issues with the test. The Voight-Kampff test is meant to distinguish humans from androids by measuring involuntary physical responses connected to empathic reactions to hypothetical scenarios. The test works on the supposition that humans would have empathic reactions to the misuse and destruction of other conscious entities while androids would not. It is suggested to the reader, as well as to the characters, that the test will generate false positives, occasionally identifying humans with psychological disorders as androids. The main character, Rick Deckard, knows about the possibility of false positives when he tests another character, Rachael Rosen, and determines she is an android.

The possibility that the test is faulty allows Deckard to be manipulated. The CEO of the company that makes the androids, Eldon Rosen, claims Rachael is a human. His explanation is that for most of her life Rachael had little human contact and thus did not develop empathy sufficient to pass the test. If the test is returning false positives, then enforcement agencies using it “may have retired [killed], very probably have retired, authentic humans with underdeveloped empathic ability.” Further muddying the philosophical waters, it is unclear whether advances in android technology have caused the test to return false negatives, thereby allowing dangerous androids to move freely through society. As Rachael points out, “If you have no test you can administer, then there is no way you can identify an android. And if there’s no way you can identify an android there’s no way you can collect your bounty.” In other words, without a reliable test to distinguish the authentic from the inauthentic, the real from the fake, Rick Deckard and the organizations he represents are paralyzed.

But Rachael is an android. Her backstory is a lie. Nevertheless, Deckard is constantly questioning what he knows, what is real, and whether his behavior as a bounty hunter is morally licit.

Dick worried about our inability to distinguish the real from the fake in the 1960s and 1970s. In 2023, with the advent of technologies such as ChatGPT, voice cloning, and deepfake imagery, along with public figures willing to tell outright lies with seeming impunity, we already live in Dick’s dystopia.

Consider just a few recent examples.

The Writers Guild of America (WGA) has raised worries about AI-generated content as part of its ongoing strike. One proposal made by the WGA to the Alliance of Motion Picture and Television Producers (AMPTP) states that the AMPTP should agree to “Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.”

The Washington Post reports that an instructor at Texas A&M, Jared Mumm, sent an email to his students stating that due to concerns about the use of ChatGPT in his course, all students were going to receive an incomplete for the course until he could determine who did and who did not use the software.

The U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law recently heard testimony from Sam Altman, CEO of OpenAI – the creators of ChatGPT – about the dangers of such technology and possible solutions.

Each of these ongoing events centers on a concern about the authenticity of ideas, about whether the consumer of these ideas can reliably believe that the claimed or implied origin of an idea is veridical. To make this point, Senator Richard Blumenthal (D-CT) opened the subcommittee hearing with an audio recording of an AI-generated clone of his voice reading the following ChatGPT-generated script:

Too often, we have seen what happens when technology outpaces regulation: the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.

As the real Senator Blumenthal points out, the AI’s “apparent reasoning is pretty impressive,” suggesting the senator actually endorses ideas that are, nevertheless, not his own. The fear is that someone hearing this recording, or any other such creation, would be unable to know whether the ideas are authentic and are being authentically stated by the claimed author. For another example, The Daily Show recently created a fake re-election ad for President Biden using similar technology. The fake ad includes profanity and morally questionable attacks on the character of many public figures — see here. Comments about the ad discuss how effective it might be for the President’s future campaign, with one commenter, Lezopi5914, stating that the fake ad is “Full of truth. Best and most honest ad ever.”

If we can’t distinguish the real from the simulated, then we are cut off from the objective world. If we are cut off from the objective world, then we are cut off from our ability to ensure that our sentences correspond to reality, and thus from our ability to assign the value “true” to those sentences (at least on a correspondence theory of truth). Without being able to assign the value “true” to the sentences that express our beliefs, our decisional capacity is threatened. On one reading of Kantian autonomy, the agent must have full, accurate information to make rational, moral choices. The possibility of an ad with an AI-generated script, a cloned voice, and a deep-faked face undermines a voter’s ability to make rational, moral choices about who deserves our vote in the next election, as well as more mundane decisions.

To make matters even worse, if I know fake information is out there, but I can’t distinguish the authentic from the inauthentic, then how do I form any beliefs whatsoever, let alone act on them?

One does not need to adopt William Clifford’s strong principle that “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence” to become paralyzed. In the current situation, I don’t merely lack sufficient evidence for belief adoption, I have no grounds for a belief.

We can see how such thinking may have influenced Jared Mumm to give every student in his Texas A&M course an incomplete. As he put it on one student’s paper, “I have to gauge what you are learning not a computer,” but at the moment he lacks a reliable test or procedure to identify whether a text was authentically written by a student or generated by an AI chatbot. The result: everyone gets an incomplete until the matter is sorted out. As educators, do we have to suspend all grading until a solution is found? If no one can trust the information they obtain about politicians, policy, legislation, the courts, or what happened on January 6, 2021, do we even bother participating in self-governance through voting and other forms of political participation?

Rick Deckard suggests that those who create such technology have a moral responsibility to provide solutions to the problems they are creating. It is hinted that the only reason android technology is not banned outright in the fictional world is that tests exist to distinguish the authentic from the inauthentic. He tells Rachael and Eldon Rosen that “[i]f you have no confidence in the Voight-Kampff scale … possibly your organization should have researched an alternate test. It can be argued the responsibility rests partly on you.” In other words, to avoid outright bans, detection technology needs to be available and trustworthy.

Help (?) is on the way. OpenAI is developing tools that can identify text as AI-generated. Other companies are creating software to identify voice cloning and deep fakes. Is this all we need? A test, a procedure, to determine what is real and what is fake, and then policies that use these tests to determine who is to suffer punitive measures for using these tools for deceptive purposes? OpenAI admits that its detection tool generates false positives nine percent of the time, and there is no word on how often it produces false negatives. Other detection software is also not one hundred percent accurate. Thus, we seem to lack reliable tests. What about enforceable policies? Some news outlets report that Mr. Altman asked Congress to regulate AI, thereby saying it is a governmental responsibility, not OpenAI’s, to develop and enforce these rules. Other outlets suggest that Mr. Altman is trying to shape legislation to OpenAI’s advantage rather than to solve the problems of authenticity the technology creates. Who do you believe? Who can you believe?
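
To see why a nine percent false-positive rate is so corrosive, consider a rough back-of-the-envelope sketch. Apart from that nine percent figure, every number below — the class size, the share of students who actually used a chatbot, and the detector’s sensitivity — is an assumption chosen for illustration, not a figure reported by OpenAI or by anyone at Texas A&M:

```python
# Hypothetical illustration: how a 9% false-positive rate plays out in a classroom.
# Only the false-positive rate comes from OpenAI's own admission; the rest are
# assumed values for the sake of the example.

class_size = 250            # assumed number of submitted essays
actual_ai_share = 0.10      # assume 10% of students really used a chatbot
false_positive_rate = 0.09  # rate OpenAI has acknowledged for its detector
true_positive_rate = 0.80   # assumed sensitivity; no published figure exists

honest = class_size * (1 - actual_ai_share)       # 225 honest essays
cheaters = class_size * actual_ai_share           # 25 AI-assisted essays

wrongly_flagged = honest * false_positive_rate    # honest students accused
rightly_flagged = cheaters * true_positive_rate   # cheaters actually caught

share_innocent = wrongly_flagged / (wrongly_flagged + rightly_flagged)
print(f"Honest students flagged: {wrongly_flagged:.0f}")
print(f"Share of flagged essays that are honest work: {share_innocent:.0%}")
```

Under these assumptions, roughly half of the essays the detector flags would belong to students who did nothing wrong. A test like that cannot, by itself, ground the punitive policies an instructor like Mumm would need it to support.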

And so here we are, in a world filled with technology that can deceive us, with no reliable way to distinguish the authentic from the inauthentic and no enforceable policies to discourage nefarious use of these technologies. While I believe that academics have many tools at their disposal to keep students from cheating, the larger societal problems posed by this technology are more concerning.

I am not the only person to notice the connection between Philip K. Dick and these philosophical problems — see here and here. But the seemingly unanswerable question is, “Did Jim write this article and then discover similar essays that provide further support for his concerns, or are the explicit references there to cover his tracks so no one notices that he plagiarized, possibly using ChatGPT?” How can you know?

James M. Okapal is Professor of Philosophy at Missouri Western State University. His research focuses on the intersection of ethics and literature with special interest in theories of moral status and friendship. He is the editor for McFarland’s Ethics and Culture Series.