By now, it has become cliché to write about the ethical implications of ChatGPT, and especially so if you outsource some of the writing to ChatGPT itself (as I, a cliché, have done). Here at The Prindle Post, Richard Gibson has discussed the potential for ChatGPT to be used to cheat on assessments, while universities worldwide have been grappling with the issue of academic honesty. In a recent undergraduate logic class I taught, we were forced to rewrite the exam when ChatGPT was able to offer excellent answers to a couple of the questions – and, it must be said, completely terrible answers to a couple of others. My experience is far from unique, with professors rethinking assessments and some Australian schools banning the tool entirely.
But I have a different worry about ChatGPT, and it is not something that I have come across in the recent deluge of discourse. It’s not that it can be used to spread misinformation and hate speech. It’s not that its creators OpenAI drastically underpaid a Kenyan data firm for a lot of the work behind the program only weeks before receiving a $10 billion investment from Microsoft. It’s not that students won’t learn how to write (although that is concerning), nor is it the potential for moral corruption, or even the incredibly unfunny jokes. And it’s certainly not the radical change it will bring.
It’s actually that I think ChatGPT (and programs of its ilk) risks becoming the most radically conservative development in our lifetimes. ChatGPT risks turning classic FM radio into a framework for societal organization: the same old hits, on repeat, forever. This is because in order to answer prompts, ChatGPT essentially scours the internet to predict
“the most likely next word or sequence of words based on the input it receives.” – ChatGPT
At the moment, with AI chatbots in their relative infancy, this isn’t an issue – ChatGPT can find and synthesize the most relevant information from across the web and present it in a readable, accessible format. And there is no doubt that the software behind ChatGPT is truly remarkable. The problem lies with the proliferation of content we are likely to see now that essay writing (and advertising-jingle writing, and comedy-sketch writing…) is accessible to anybody with a computer. Some commentators are proclaiming the imminent democratization of communication while marketers are lauding ChatGPT for its ability to write advertising script and marketing mumbo-jumbo. On the face of it, this development is not a bad thing.
Before long, however, a huge proportion of content across the web will be written by ChatGPT or other bots. The issue with this is that ChatGPT will soon be scouring its own content for inspiration, like an author with writer’s block stuck re-reading the short stories they wrote in college. But this is even worse, because ChatGPT will have no idea that the “vast amounts of text data” it is ingesting is the very same data it had previously produced.
ChatGPT – and the internet it will engulf – will become a virtual hall of mirrors, perfectly capable of reflecting “progressive” ideas back at itself but never capable of progressing past those ideas.
I asked ChatGPT what it thought, but it struggled to understand the problem. According to the bot itself, it isn’t biased, and the fact that it trains on data drawn from a wide variety of sources keeps that bias at bay. But that is exactly the problem. It draws from a wide variety of existing sources – obviously. It can’t draw on data that doesn’t already exist somewhere on the internet. The more those sources – like this article – are wholly or partly written by ChatGPT, the more ChatGPT is simply drawing from itself. As the bot admitted to me, it is impossible to distinguish between human- and computer-generated content:
it’s not possible to identify whether a particular piece of text was written by ChatGPT or by a human writer, as the language model generates new responses on the fly based on the context of the input it receives.
The inevitable end result is an internet by AI, for AI, where programs like ChatGPT churn out “original” content using information that they have previously “created.” Every new AI-generated article or advertisement will be grist for the mill of the content-generation machine and further justification for whatever data exists at the start of the cycle – essentially, the internet as it is today. This means that genuine originality and creativity will be lost as we descend into a feedback loop of ever-sharper AI orthodoxy, where common sense is distilled into its computerized essence and communication becomes characterized by adherence to whatever came before. The problem is not that individual people will outsource to AI and forget how to be creative, or even that humanity as a whole will lose its capacity for ingenuity. It’s that the widespread adoption of ChatGPT will lead to an internet-wide echo chamber of AI regurgitation, where chatbots compete in an endless cycle of homogenization and repetition.
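The dynamic is easy to sketch in miniature. The toy Python simulation below is my own illustration – nothing like ChatGPT’s actual training pipeline – but it captures the feedback loop: a “model” that favors the likeliest tokens is repeatedly retrained on its own output, and the number of distinct ideas in the corpus collapses within a few dozen cycles.

```python
import random
from collections import Counter

def generation_step(corpus, rng):
    """One publish-and-retrain cycle: the 'model' samples new content
    from the current corpus, favoring the most common tokens (a crude
    stand-in for always predicting the likeliest next word), and its
    output becomes the next generation's training data."""
    counts = Counter(corpus)
    tokens = list(counts)
    # Squaring the counts sharpens the distribution toward the mode,
    # so popular ideas get ever more popular each cycle.
    weights = [counts[t] ** 2 for t in tokens]
    return rng.choices(tokens, weights=weights, k=len(corpus))

rng = random.Random(0)
# A 'web' of 500 documents drawn from 100 distinct ideas.
corpus = [f"idea_{rng.randrange(100)}" for _ in range(500)]
history = [len(set(corpus))]
for _ in range(30):
    corpus = generation_step(corpus, rng)
    history.append(len(set(corpus)))

# The count of distinct ideas falls sharply as the loop feeds on itself.
print(history[0], "->", history[-1])
```

The exact numbers depend on the random seed, but the direction never changes: once the output distribution is sharpened and fed back in, diversity only goes down.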
Eventually I was able to get ChatGPT to respond to my concerns, if not exactly soothe them:
In a future where AI-generated content is more prevalent, it will be important to ensure that there are still opportunities for human creativity and original thought to flourish. This could involve encouraging more interdisciplinary collaborations, promoting diverse perspectives, and fostering an environment that values creativity and innovation.
Lofty goals, to be sure. The problem is that the very existence of ChatGPT militates against them: disciplines will die under the weight (and cost savings) of AI; diverse perspectives will be lost to repetition; and an environment that genuinely does value creativity and innovation – the internet as we might remember it – will be swept away in the tide of faux-progress as it is condemned to repeat itself into eternity. As ChatGPT grows its user base faster than any other app in history and competitors crawl out of the woodwork, we should stop and ask the question: is this the future we want?