
Academic Work and Justice for AIs

By Daniel Burkett
3 May 2023
[Image: drawing of a robot in a library, created using DALL-E]

As the U.S. academic year draws to a close, the specter of AI-generated essays and exam answers looms large for teachers. The increased use of AI “chatbots” has forced a rapid and fundamental shift in the way that many schools are conducting assessments, exacerbated by the fact that – in a number of cases – they have been able to pass all kinds of academic assessments. Some colleges are now going so far as to offer amnesty for students who confess to cheating with the assistance of AI.

The use of AI as a novel plagiarism tool has all kinds of ethical implications. Here at The Prindle Post, Richard Gibson previously discussed how this practice creates deception and negatively impacts education, while D’Arcy Blaxell instead looked at the repetitive and homogeneous nature of the content chatbots will produce. I want to focus on a different question, however – one that, so far, has been largely neglected in discussions of the ethics of AI:

Does justice demand that AIs receive credit for the academic work they create?

The concept of “justice” is a tricky one, though at its simplest we might understand it as nothing more than fairness. Many of us already have an intuitive sense of what this looks like. Suppose, for example, that I am grading a pile of my students’ essays. One of my students, Alejandro, submits a fantastic essay showing a masterful understanding of the course content. I remember, however, that Alejandro has a penchant for wearing yellow t-shirts – a color I abhor. For this reason (and this reason alone) I decide to give him an “F.” Another student of mine, Fiona, writes a dismal essay that shows no understanding whatsoever of anything she’s been taught. I, however, am friends with Fiona’s father, and decide to give her an “A” on this basis.

There’s something terribly unfair – or unjust – about this outcome. The grade a student receives should depend solely on the quality of their work, not the color of their t-shirt or whether their parent is a friend of their teacher. Alejandro receives an F when he deserves an A, while Fiona receives an A when she deserves an F.

Consider, now, the case where a student uses an AI chatbot to write their essay. Clearly, it would be unjust for this student to receive a passing grade – they do not deserve to receive credit for work that is not their own. But, then, who should receive credit? If the essay is pass-worthy, then might justice demand that we award this grade to the AI itself? And if that AI passes enough assessments to be awarded a degree, then should it receive this very qualification?

It might seem a preposterous suggestion. But it turns out to be difficult to explain why justice would not demand as much.

One response might be to say that the concept of justice doesn’t apply to AIs because AIs aren’t human. But this relies on the very controversial assumption that justice applies only to Homo sapiens – and this is a difficult claim to make. There is, for example, a growing recognition of the interests of non-human animals. These interests make it appropriate to apply certain principles of justice to those animals – allowing us to argue, for example, that it is unjust for an animal to suffer for the mere amusement of a human audience. Restricting our discussions of justice to humans would preclude us from making claims like this.

Perhaps, then, we might expand our considerations of justice to all beings that are sentient – that is, those that are able to feel pain and pleasure. This is precisely the basis of Peter Singer’s utilitarian approach to the ethical treatment of animals. According to Singer, if an animal can experience pleasure, then it has an interest in pursuing pleasure. Likewise, if something can experience pain, then it has an interest in avoiding pain. These interests then ground claims about how it is just or unjust to treat not just humans, but non-human animals too. AIs are not sentient (at least, not yet) – they can experience neither pain nor pleasure. This, then, might be an apt basis on which to exclude them from our discussions of justice. But here’s the thing: we don’t want to make sentience a prerequisite for justice. Why not? Because there are many humans who also lack this feature. Consider, for example, a comatose patient or someone with congenital pain insensitivity. Despite the inability of these individuals to experience pain, it would seem unjust to, say, deprive them of medical treatment. Given this, sentience cannot be necessary for the application of justice.

Consider, then, a final alternative: We might argue that justice claims are inapplicable to AIs not because they aren’t human or sentient, but because they fail to understand what they write. This is a perennial problem for AIs, and is often explained in terms of the distinction between the syntax (structure) and semantics (meaning) of what we say. Computer programs – by their very nature – run on input/output algorithms. When, for example, a chatbot receives the input “who is your favorite band?” it is programmed to respond with an appropriate output such as “my favorite band is Rage Against the Machine.” Yet, while the structure (i.e., syntax) of this response is correct, there’s no meaning (i.e., semantics) behind the words. The chatbot doesn’t understand what a “band” or a “favorite” is. And when it answers with “Rage Against the Machine,” it is not doing so on the basis of its love for the anarchistic lyrics of Zack de la Rocha or the surreal sonifications of guitarist Tom Morello. Instead, “Rage Against the Machine” is merely a string of words that it knows to be an appropriate output when given the input “who is your favorite band?” This is fundamentally different from what happens when a human answers the very same question.

But here’s the thing: There are many cases where a student’s understanding of a concept is precisely the same as an AI’s understanding of Rage Against the Machine.

When asked what ethical theory Thomas Hobbes was famous for, many students can (correctly) answer “Contractarianism” without any understanding of what that term means. They have merely learned that this is an appropriate output for the given input. What an AI does when answering an essay or exam question, then, might not be so different from what many students have done for as long as educational institutions have existed.

If a human would deserve to receive a passing grade for a particular piece of academic work, then it remains unclear why justice would not require us to award the same grade to an AI for the very same work. We cannot exclude AIs from our considerations of justice merely on the basis that they lack humanity or sentience, as this would also require the (unacceptable) exclusion of many other beings such as animals and coma patients. Similarly, excluding AIs on the basis that they do not understand what they are writing would create a standard that even many students would fall short of. If we wish to deny AIs credit for their work, we need to look elsewhere for a justification.

Daniel Burkett received his PhD in Philosophy from Rice University, and is now a lecturer in the Philosophy Department at Binghamton University. His primary research interests are in ethics and political philosophy – particularly issues surrounding punishment and climate change.