As the new school year approaches, educators across the country are once more redesigning their classes in light of the brave new world of generative AI. Many teachers are embracing the technology – encouraging their students to make use of this powerful new tool. Some are even going so far as to use AI to assist in their course design. Others, like me, are banning any use of generative AI in their classes. But why?
Perhaps I’m a Luddite. Perhaps I’m no better than Socrates fearing that writing would be the death knell for education. Nevertheless, I think there are (at least) three strong moral arguments against students using AI in a philosophy class – and perhaps in education more generally.
Argument 1: AI Harms Creators
Generative AIs like ChatGPT are built on large language models (LLMs). Put simply, they’re trained on vast quantities of data – usually scraped from whatever is freely available on the internet. The problem is that this data usually belongs to other people. More problematically, generative AIs make no effort to credit the sources that shape their outputs. So, when I use ChatGPT to generate a fluid structure for my paper, or a killer opening paragraph for my opinion piece, there’s no way I can properly credit the sources of those generated outputs. In using them, I necessarily pass off someone else’s ideas as my own – the very definition of plagiarism.
As our own Tim Sommers notes, a common counter to this argument is that the operation of an LLM isn’t all that different from how our own minds already work: absorbing vast amounts of data, and using that data to produce novel creations. Anyone who’s ever created anything will know the fear that one of your darling creations – a plot point, a song lyric, or a visual design element – is merely parroting another creation once seen, but long forgotten.
Like Sommers, I admit that I lack the expertise to say how closely the operation of LLMs resembles the workings of our own minds. But I think there is at least one morally important difference: while our own creations might be subconsciously informed by data we’ve absorbed, there is (excepting cases of intentional plagiarism) no intention on our part to hold out the work of another as our own. The same isn’t true when we use ChatGPT. We know how LLMs operate, and we know that any product of a generative AI makes vast (and unattributed) use of the works of others. This knowledge is, I think, enough to make our actions morally problematic.
Argument 2: AI Harms the Environment
But AI doesn’t just harm creators – it’s also devastating for the environment. Generative AI requires huge amounts of processing power, and that power requires a lot of energy. While precise quantifications are hard to come by, ChatGPT’s power usage is estimated to be roughly equivalent to that of 33,000 standard homes. And it’s not just electricity, either. Generative AIs need vast amounts of water to cool their processors – a concerning prospect, given that we are at imminent risk of a global water crisis.
We are in the throes of a global climate catastrophe – a catastrophe that, according to some estimates, might become irreversible in less than four years if we don’t make drastic changes to our way of living. Among those necessary changes are massive reductions in our energy consumption. Given this, an explosion in the popularity of generative AI is the last thing we need.
Of course, the fact that there is an environmental argument against AI usage doesn’t give us an all-things-considered reason to stop. There are many harmful practices that we might need to continue in order to ensure human safety and flourishing. But using AI just doesn’t seem to be among them. Much of our AI usage is entirely frivolous – with 38% of people using AI to plan travel itineraries, and another 25% using it to draft social media posts. And when it comes to non-frivolous functions – like using it to craft an email (as 31% of people have) or prepare for a job interview (as 30% of people have) – there are far less environmentally harmful ways of doing the very same thing. Having a question answered by AI can produce almost fifty times the carbon emissions of resolving the same query with a simpler system – like a search engine.
Argument 3: AI Harms the User
Even if we’re not motivated to care about creators or the environment, one further fact remains true: AI harms the user. I begin each of my classes by describing philosophy as the discipline that encourages us to think carefully about the reasoning behind our beliefs. This is a challenging – and sometimes terrifying – endeavour, since the discovery of bad reasoning can often force us to abandon some of our most dearly held beliefs. The subjects I teach require my students to consider some hard questions: Does the climate crisis mean we should have fewer children? Should we permit physician-assisted suicide? Would a federal ban on TikTok violate our right to freedom of expression? I believe it’s vitally important that each of us formulates our own answers to such questions. If we farm this work out to an algorithm, we miss the whole point of philosophy (and of education more generally). As Marta Nunes da Costa puts it:
“being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and in doing so, build our moral identities – who we are and what we value.”
As I’ve argued before, failing to think critically not only risks making us bad thinkers, but also bad humans. I believe that fact – coupled with the clear harms to creators and the environment – is more than sufficient to explain why my students shouldn’t use AI.