Why My Students Shouldn’t Use AI Either

Every semester since ChatGPT arrived on the public stage, I have spent considerable time thinking about how to handle AI use with my students, and each time I have changed my answer. This year, for the first time, I am going to ask that my students unequivocally avoid using it for any reason. Fortunately, I am not alone in this approach. Fellow Prindle author Daniel Burkett has offered three moral reasons why students should not use AI: it harms creators, the environment, and the students themselves. I would like to offer a few more reasons (though not all explicitly moral) to consider.
Argument 4: AI Erodes Responsibility
As AI systems infiltrate our decision-making processes and social order more deeply, they contribute to the erosion of accountability. To be sure, AI evangelists will be quick to point out that it is on the human user to verify the legitimacy of AI outputs and use them responsibly. However, I am skeptical that this solution can overcome my accountability concerns.
Consider a personal anecdote. Last year, another driver hit my partner while she was driving our car, and our insurance premium increased. When we called the insurance company, we wanted an explanation of the new amount. We were not objecting to paying more (though it does feel unjust to pay more for an accident you are not at fault for). We simply wanted to know why the increase was $23 as opposed to $15 or $20. The response we received was ultimately “I don’t know, that’s just what the system is telling me.” When we asked whom we could contact for more details, they said there was no one who could help us.
This example points to a larger issue with the integration of AI systems into social structures. We often think of accountability in cases where things go wrong, but conceptually accountability is about tracking responsibility for outcomes, whatever they may be. When we include AI in more of our activities, we lose the thread of accountability. The reason why something happened will increasingly stop with the answer “AI.” What makes AI unique is that it can behave like an agent in ways previous technologies could not, which makes it well suited to enter the stream of accountability and muddy the waters.
Furthermore, as these systems are more deeply integrated into our technologies and daily life, they will be treated as more trustworthy (regardless of whether they actually are). When people use a technology that everyone is using, in the way that everyone is using it, they can reasonably ask for clemency when things go awry, since they were just following standard practice.
In my classrooms, we study ideas and arguments about serious topics: medical ethics, justice, propaganda, and technology. I want students to learn how to formulate ideas, explore their contours, and ultimately form well-founded beliefs that they can claim some form of ownership over. Given AI’s propensity to obscure the trail of accountability, I will prohibit its use because I want students to retain accountability for the ideas they produce in my classrooms.
Argument 5: AI Undermines Growth
One of the promises of AI is that it will take over some tasks for us, freeing our minds and time for more important things. We have also been promised that it will stimulate the creation of new, undiscovered roles in society. So far, many of these prophesied positions relate to the management of AI itself: we now need AI policy experts, AI oversight experts, AI alignment specialists, and AI testers, to name just a few.
While we have yet to see an influx of new and exciting career paths beyond those related to managing AI, we do have reason to think that as AI takes over activities for us, we will no longer be able to do those things as well ourselves. A preliminary study suggests that doctors who begin using AI in the workplace and then stop become worse at making diagnoses than they were before they adopted it. This should not surprise us. When we stop practicing skills, we lose our edge.
Echoing Burkett’s piece: in the realm of philosophy, there is virtually no good reason for my students to use AI, because every use case seems to undermine the very skills I want them to learn. When I ask my students how they use it, they typically tell me that they draft their own work and then feed it to AI to make it more professional. However, my philosophy courses are not about producing something that sounds convincing or looks professional (though it is nice when this happens); they are about learning how to think well. When students write an argument defending a position and then feed it to AI to make it more professional, they miss out on practicing one of the crucial skills I am trying to teach them. Editing a paper for logical coherence, careful word choice, and conceptual analysis is part of the skill-building process, and AI impedes it.
Argument 6: AI Is Ideological
AI is currently (and likely always will be) infused with ideology. Nicholas Kreuder has written about the dangers that come from the power that the owners of AI hold over us, which reveals the ideological nature of these systems and the risks we take when we rely on them.
If AI is given guardrails, those guardrails will be built from the political, moral, and likely economic principles that its creators deem appropriate. Even a radical AI enthusiast who believes AI should be absolutely “free” would be instantiating an ideology within the system by choosing to avoid guardrails at all. The choice of what data to train the system on and what to exclude is likewise an ideological one. And insofar as these systems need to generate profit, they will always feel the ideological pull of economic interest.
This problem is not unique to AI, of course. The fact that “to google” is synonymous with searching the internet reveals the informational monopoly that one company wields over a huge portion of the world. And the way that Google organizes search results is far from ideology-free.
AI’s ideology is an issue not because it is ideological per se, since most technologies cannot avoid being infused with some kind of ideology, but because AI is especially good at projecting confidence and expertise. From the perspective of many who use it, AI writes convincingly (many PhDs have criticized AI’s performance as laughable, even childish, but their experience is not representative of most users’).
The problem with AI, then, is not just that it presents information confidently, but that when you ask it about controversial political and ethical issues, it appears to give balanced and unbiased answers. You can even instruct the AI to be unbiased, and it will tell you that it will. But in reality it cannot. (Notably, if you ask it “can you be unbiased?” it can also correctly tell you that this is not really possible.)
While my ideological complaint also applies to pre-AI technologies like the Google search, the television, the radio, or the book, I think that conversing with AI poses a special problem. The confident, conversational, and apparently unbiased delivery of information occludes the ideological bent of these systems.
Argument 7: A Refuge From AI
Many of us feel compelled to use AI, whether we like it or not, out of a fear of being left behind (FOMO is a real tactic in the tech marketing world). I suspect many of my students will use AI because they feel they must for “educational” purposes. I also know that outside the university, students will be required to use AI in their jobs and forced to use it when interacting with the sociotechnical infrastructure around them.
The final, simple reason I will prohibit AI in my classroom this semester is to give my students a place of refuge from it. My hope this fall is to give students the room to slow down, make mistakes, and think for themselves without the pressure to be perfect. Although it promises to make our lives easier, AI is ultimately a tool that entices us to work harder. It promises to help us make things better, do things faster, and make us stronger. But this is machine logic, and we are human after all. So, this fall I will say no to AI.