
ChatGPT and Deception

By Richard Gibson
24 Jan 2023
[Photograph of exams stacked on a desk at the front of a class]

For many, writing can be a challenging, even painful, experience. What seems like it should be a simple job – taking the thoughts in your head, turning them into words, and then typing them out – can feel, for some, like an almost insurmountable task. Heck, I do it for a living, and I still find the blank page to be an ungodly sight, one that I sometimes flee from and promise to tackle later on.

As such, it should be no surprise that the advent of an AI system that can write for us has caused quite a stir. That system is ChatGPT, and while I could explain what it is, it’s more effective (not to mention easier) to let the tool do that for itself:

ChatGPT is a state-of-the-art language generation model developed by OpenAI. It is based on a transformer architecture and pre-trained on a large dataset of text, allowing it to generate human-like text.

ChatGPT can be fine-tuned for various natural language processing tasks such as language translation, text summarization, question answering, and text completion. It can also be used to generate creative writing, poetry, and even code.

One of the key features of ChatGPT is its ability to generate text that is coherent and coherent with the context it was given, making it able to generate text that flows well and makes sense. ChatGPT can be used to generate text on a wide range of topics and can be fine-tuned to suit specific use cases.

It is important to note that ChatGPT is a machine learning model, it is not capable of independent thinking or decision making, it is only able to generate text based on patterns it has learned from the data it was trained on.

I didn’t write any of that. Instead, I gave ChatGPT the prompt “explain what ChatGPT is” and let it fly. As it just noted, however, ChatGPT is not limited to dry, explanatory statements. The system has demonstrated an apparent aptitude for creative, even funny, writing, from explaining quantum theory to a child in the style of Snoop Dogg to creating an analogy for what it’s like to eat the keto diet, from giving an account of Attack on Titan in the manner of Donald Trump to writing a biblical verse explaining how to remove a peanut butter sandwich from a VCR. The tool really does seem adaptable.
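As a brief aside for curious readers, the web interface is not the only way to issue such a prompt. Below is a minimal sketch of doing the same thing programmatically, assuming the OpenAI Python library (v1.x) and a configured API key; the model name is an illustrative assumption, not necessarily the model behind the quoted output above.

```python
# Minimal sketch (assumptions: openai-python v1.x installed and an
# OPENAI_API_KEY environment variable set). Illustrative only -- the
# article's quoted text came from the ChatGPT web interface, not from
# this API call.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model name is an assumption for illustration
    messages=[{"role": "user", "content": "Explain what ChatGPT is."}],
)

print(response.choices[0].message.content)
```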

Yet, despite the hilarity, ChatGPT’s emergence has brought some pressing issues regarding the ownership and authenticity of work to the fore. If an AI generates text for you, can you claim it as your own? For example, Ammaar Reshi is facing considerable backlash for using ChatGPT to write a children’s book (which he then illustrated using Midjourney, an AI art generator). Reshi did not directly write or illustrate the book he is claiming as his product; he gave ChatGPT the required prompts and then used its output.

But, it has been in the educational sector where such concerns have really taken hold. So much so that some, such as New York City’s Department of Education, have blocked access to ChatGPT on school devices for fear of its misuse. The problems are relatively easy to grasp:

What is stopping students from passing off ChatGPT-produced essays and other forms of assessed work as their own? How should educators respond if a student uses ChatGPT to write an essay? And are students actually doing anything wrong if they use ChatGPT like this?

The answer to this last question is vastly complex and intertwined with the very purpose of assessment and the monitoring of learning. The point of assigning assessments, such as essays, is not simply to have students produce a piece of text. The production of the essay is merely a step towards another goal. These forms of assessment act as a representation of the students’ learning. When a teacher asks you to write a 3,000-word paper on Frederick Douglass, for example, it is not the paper with which they are concerned; it is with your ability to recall, appraise, and communicate what you know about Douglass’ life, work, and impact. The essay is a medium through which such appraisal is conducted.

As philosopher Rebecca Mace remarked in an episode of BBC’s Inside Science:

A lot of people, including the newspapers, seem to have misunderstood the point of homework. So the purpose of homework is not to produce an essay, but to assess student understanding in order that the teachers can assist them with the gaps, or work out what they’ve not taught very well, or what they maybe need to go over again, or what that individual student really needs help with. Then the essay itself is irrelevant in many ways because that’s all the essay’s doing; it’s a means to an end.

Thus, according to such a way of thinking, the danger of ChatGPT comes from its potential to misrepresent student learning, giving the impression that a student knows more about a subject than they actually do. The issue is not one of principle but of outcome, and the use of ChatGPT brings with it the risk that learning is negatively impacted.

This stance, however, seems to overlook something important about the use of ChatGPT in educational settings. If it is accurate – if the threat of ChatGPT comes from its capacity to hide academic failings (on both the student’s and the teacher’s part) – then we should have no qualms about its use in situations where this isn’t a factor. But, academically gifted students who know their subjects inside and out still seem to commit some wrong when they pass algorithmically generated text off as their own. This wrong emerges not from the impact such usage might have on their academic performance, nor on their teacher’s ability to assess their grasp of a subject accurately, but from the fact that they are attempting to deceive their assessor. It is wrong not because of an outcome but as a matter of principle – the virtue of honesty and the vice of deception.

That is not to say that this is the only reason why ChatGPT presents a potential harm to education and educational practices. The use of AI to game the academic-assessment system by hiding one’s failure to meet the required standards is most certainly a concern (perhaps the central one). But, such an acknowledgement should not lead us to overlook the fact that, much like plagiarism, academic wrongs don’t emerge simply from their deleterious impact. They also come from deception – from attempting to pass something off as one’s own work when, in fact, one had minimal input in its creation.

Richard B. Gibson received his PhD in Bioethics & Medical Jurisprudence from the University of Manchester. His research interests lie at the intersection of philosophy and biology, the philosophy of law, nihilism, and normative ethics. Richard’s currently working on a series of papers examining the social, legal, and ethical implications of cryopreservation.