
A Reasonable Standard for Self-Defense

In Canada this past week something happened that would not likely be very controversial in parts of the United States. A man named Jeremy David McDonald was at home in the morning when Michael Kyle Breen, who was already wanted for prior break-and-enter offenses, broke into McDonald’s house. McDonald defended himself with a knife, and Breen ended up with life-threatening injuries and had to be taken to hospital. In the U.S., we might be more surprised that Breen wasn’t shot, but McDonald would be protected in most states by stand-your-ground statutes. In Canada, however, McDonald was arrested and is facing charges of aggravated assault and assault with a weapon. The decision has prompted public outrage, fueled by increasing concerns about crime and the inability of the justice system to adequately respond. Legal experts, however, believe the public is getting things wrong. Is this simply a misunderstanding of the law or is there a larger issue here?

Before we proceed, there are two details worth noting. The first is that the public does not yet have all the details in this case. Second, people are legally entitled under Canadian law to defend themselves if they are attacked, so long as the defense is “reasonable to the circumstances.” In other words, if someone pushes you, you can’t beat them with a tire iron and claim you acted in self-defense. Nevertheless, this is a case where McDonald’s house had been invaded by someone with a lengthy criminal record, and McDonald may have had reasonable concerns about his own safety and that of his family and his property. Ontario Premier Doug Ford, for example, claimed that the charges laid against McDonald demonstrate “something is broken in the system,” adding, “I know if someone breaks into my house or someone else’s, you’re going to fight for your life … This guy has a weapon … you’re going to use any force you can to protect your family.”

Much of the controversy stems from the difficulty in defining exactly what a “reasonable” response looks like in such situations. Self-defense laws are not usually that controversial in Canada, and few disagree with the abstract principle of self-defense being proportional to the threat. According to the police, it is unreasonable to continue to strike an attacker once they have been subdued. We don’t want people seeking vengeance or engaging in torture in the name of self-defense. Threat assessment is perhaps best left to professionals. Unfortunately, emergency services don’t always arrive when you need them. In 2024, a man in Ontario had an armed group break into his house and 911 put him on hold three times. In these minutes, anything could happen, and it might seem reasonable to arm oneself with a knife and to be prepared to use it in order to subdue an invader.

The ambiguities around mounting a “reasonable” defense can leave Canadians facing criminal charges. In 2010, Ian Thompson, a former firearms instructor, was charged with careless use of a firearm after firing warning shots into the air with a revolver in order to ward off arsonists attacking his home with Molotov cocktails. Some of the charges against Thompson were later dropped, and the then-Justice Minister expressed support for the idea of firing warning shots as a reasonable response. Meanwhile, the Liberal and NDP opposition, as well as police associations, worried that this rhetoric would encourage vigilantism and produce more harm than good.

Ironically, over ten years later, the effect of this mindset may be the exact opposite. The risk-averse approach has produced a chilling effect on self-defense itself. The Canadian press is now reminding people that Canadians can, in fact, defend themselves if threatened, and warning of “misinformation” telling them otherwise.

Unfortunately, much of the legal establishment seems to be under the impression that it is the public’s responsibility to understand all of the nuances of what “reasonableness” entails. While laws do exist, much of the application is determined by a complex web of common law decisions. The result has been a standard that doesn’t seem so reasonable after all. This forces Canadians to measure their survival instincts against a post hoc legal fiction. Saying that it’s perfectly reasonable to cut someone who is attacking you in the heat of the moment, but that it is unreasonable to do so in an “aggravated” way is a rule that offers little practical guidance.

We may not know all the details in the McDonald case, but the conflicting responses from our political officials do suggest a larger problem. No one is advocating for mob justice, but the whole point of a “reasonable” standard is that it is action-guiding for average reasonable people. Laws in a democratic society are not static, but adapt to lived experience. They must be tested against the needs of real people if they are going to be a useful tool for coordinating social life. If there is widespread opposition to the way the law applies that standard, perhaps the problem is the isolation of law from the lived realities of average people, not people’s misunderstanding of legal nuance.

AI’s Videogamification of Art

Criticisms of AI-generated art are now familiar, ranging from the unauthorized use of artists’ works to train models, to the aping of art styles that border on plagiarism, to claims that AI enthusiasts fail to understand that creating art requires intention and purpose and is antithetical to being produced automatically by a program. While these criticisms remain as important as ever, AI programs continue to evolve, and with new capabilities come new issues.

Case in point: Google’s new Genie 3, which allows users to create “interactive, playable environments” on the basis of prompts and images. To demonstrate the technology, a Google researcher showed how one could “walk around” famous paintings, such as The Death of Socrates by Jacques-Louis David and Nighthawks by Edward Hopper. The AI program generates a 3D world, allowing users to see characters and objects in the original painting from different angles, essentially creating a kind of videogame, albeit one in which there’s not much to do (at least for now).

I think there is good reason to be critical of AI that makes rudimentary videogames out of works of art. Here I’ll consider three such criticisms. The first two can be commonly found in the comments on social media: first, that using AI to digitally manipulate art is disrespectful to the artist or artwork itself; and second, that choosing to interact with the videogamified artwork represents a failure of imagination on the part of the user. I’ll also consider a new version of an old criticism from the philosophy of art: that AI-generated creations lack the original artwork’s aura.

There is a sense in which manipulating art in this way isn’t new. After all, so-called “immersive experiences” have been popular for a while now, such as Immersive Van Gogh, where visitors can walk among projections of some of Van Gogh’s most recognizable artworks. These experiences are sometimes criticized for being tacky tourist traps, but few would consider them egregious crimes against art. It’s also long been accepted by all but the stodgiest scholars that videogames are capable of being aesthetically valuable, so it’s not as though we should think that only oil paintings in ornate frames hanging in galleries are worthy of our aesthetic appreciation.

So what’s wrong with using AI to create a virtual world out of a painting? First off, we might worry that using these programs disrespects the original artist, who likely did not intend their work to be a virtual environment to be walked around in. Part of the problem is that AI programs struggle with generating environments that are coherent, producing artifacts and noise that detract from the original composition of the work of art. In the world created out of Hopper’s Nighthawks, for example, AI-generated faces and words became garbled messes, with the end product feeling akin to vandalism.

This first criticism is an aesthetic one: AI programs that videogamify art ruin the artist’s vision, taking something beautiful and making it grotesque. We might also be tempted to criticize the person who chooses to engage with an artwork via its AI-generated videogame form. Commenters on social media are particularly liable to sling this kind of mud, accusing AI art fans of exhibiting a wide range of personal failings. While social media tends not to feature the most careful debates, one criticism that is worth singling out is that engaging with AI-manipulated versions of artworks represents a failure of imagination.

Why think this? Part of what’s involved in appreciating an artwork is to engage with it on its own terms, which requires interpreting what the artist has put in front of you and what they have left out. We might argue that getting an AI program to fill in the blanks by creating a navigable 3D environment is like taking a shortcut, where you are getting a program to do the work required to appreciate a work of art for you.

We’ve seen this kind of criticism when it comes to people using chatbots to write for them: writing is meaningful when it is intentional and effortful, and it loses that meaning when we offload our cognitive functions to programs. In the same way, using an AI program to generate a world out of a painting offloads your imagination and prevents you from being able to meaningfully appreciate a work of art.

So, the first criticism of AI videogamified art pertains to how a person treats an artist or artwork, and the second is a criticism of the person who uses such programs. The last argument I’ll consider is a bit different: that turning an artwork into a 3D virtual environment provides a subpar aesthetic experience because it fails to capture the original artwork’s aura.

This argument (or at least a form of it) comes from the philosopher Walter Benjamin, who wrote on art and aesthetics in the first half of the 20th century. Benjamin was concerned with a practice that was becoming more and more frequent at the time: that artworks were being reproduced, sometimes on a massive scale. An original painting, Benjamin argued, is unique, and when experienced in a certain place and time, has a presence about it, or what he calls an “aura.” It is a concept perhaps better experienced than described: there is some feeling that you get when encountering an artwork in a gallery as opposed to seeing a picture of it online, or as a postcard in a gift shop.

Benjamin’s worry was that copies of artworks fail to capture something that can only be possessed by the original. He did not, of course, have a conception of modern AI tools, or virtual 3D environments, or videogames. But Benjamin’s complaint still feels apt when experiencing new AI creations today: you’re no longer interacting with the original, but instead something that has been manipulated, and in doing so you fail to have the same kind of aesthetic experience. This criticism is not the charge that you’re necessarily lacking in imagination by engaging with the AI-generated version of a painting instead of the original; it’s just that it’s a shame that you’re missing out on having a more meaningful aesthetic experience.

How serious these criticisms are is up for debate, and many online have argued that new ways for AI programs to create and manipulate artworks amount to little more than cool new technology. Regardless, something of value does seem to be lost when interacting with the videogamified version of artworks instead of engaging with them on their own terms. When it comes to having a meaningful aesthetic experience, AI continues to feel like little more than a novelty.

Does Art Make Assertions? Drake’s “Not Like Us” Lawsuit

What many have called “the Rap Battle of the Century,” between rappers Drake and Kendrick Lamar, has led us into uncharted waters. Public opinion has Lamar winning handily, and he appeared to take a victory lap that included a successful album release, five Grammys, a halftime Super Bowl performance, and a worldwide tour alongside R&B singer SZA. However, following his loss, Drake sued for defamation over “Not Like Us.” Specifically, the Canadian rapper sued not Kendrick Lamar, but the record label UMG, which represents both rappers, for promoting the defamatory message contained in “Not Like Us” – i.e., the song’s not-so-subtly implied accusations of pedophilia against Drake. In other words: Drake and his team take “Not Like Us” to be defamatory, and are arguing that UMG knowingly spread the song in spite of its defamatory content.

Much ink has been spilled over the lawsuit (including Lamar’s own satire of the potential trial resulting from it), its impact on Drake’s public image as a rapper, and its significance for rap music as a genre – with the worry that, were the suit to succeed, record labels would demand much more control over rappers’ songs to avoid similar fallout. However, the ink I wish to spill today is aimed at the premise of the lawsuit: i.e., whether a rap song, even a scathing one like “Not Like Us,” can be considered defamatory. If we consider rap lyrics to be part of an art form, can they be taken to be assertions of the truth? Should we take not just rap lyrics, but pieces of art in general, to be as good as statements of fact – such that those uttering them can be held liable?

This question of artistic culpability is far from new. The 1985 Senate hearings at which Frank Zappa, John Denver, and Twisted Sister’s Dee Snider discussed the (supposed) moral objectionability of rock lyrics are probably the most notorious such case. Since the late ’80s, rap lyrics have very often been used as evidence against the very artists who deliver them, with the most recent prominent example being the RICO charges against Jeffrey Lamar “Young Thug” Williams. This (mis)use of song lyrics is partly motivated by the frequent use of the first-person perspective in rap songs that deal with violence or drug use and sale. First-person storytelling gives the impression of authenticity. This explains why rap lyrics have been used frequently in trials, a practice often deemed problematic for a number of reasons, including the constraint of First Amendment rights and the risk of strengthening various forms of prejudice.

There is, however, a different kind of problem regarding the use of rap lyrics as assertions of truth – one that concerns all forms of artistic expression. This problem is best highlighted in René Magritte’s painting known as The Treachery of Images, representing a realistic-looking pipe accompanied by the caption Ceci n’est pas une pipe (“This is not a pipe”). When looking at the painting, you may see a pipe, but it is not a pipe. It is merely a representation of a pipe. Even in art pieces committed to being as realistic as possible, the piece of art is intrinsically something other than what it means to represent. Likewise, song lyrics, especially those of diss tracks, should not be taken as genuine representations of facts – and are definitely not comparable to admissions of guilt.

Still, lyrical exchanges have led to severe consequences: most famously, the rivalry between East Coast and West Coast rappers – specifically the Bad Boy Records and Death Row Records labels – involved a variety of provocations and threats, both in and out of songs, and may very well have led to the deaths of rap icons Tupac Shakur and the Notorious B.I.G. Even more poignantly, in his feud with Gucci Mane, rapper Young Jeezy released a track where he placed a $10 million bounty on Mane’s head, potentially leading to an attempt on Mane’s life and Mane killing a man in self-defense. In such cases, the fact that the song stands ontologically separate, as a piece of art, from the reality it represents may not necessarily excuse it. This would apply even to other kinds of art pieces: just think of German comedian Jan Böhmermann, whose satirical (albeit quite offensive) poem dedicated to Turkish President Recep Tayyip Erdoğan almost caused a diplomatic incident.

So, there is a sense in which art pieces contain some kind of truth value. However, that truthfulness is not so much a statement of fact as a disclosure about the artist’s state of mind and view of the world. These remarks are closer to speech acts such as wishes, expressions of feelings, or made-up stories. At the same time, we recognize those kinds of utterances because they are an integral part of our sociality: we can recognize the differences between storytelling, wishing, ordering, praying, and making assertions of fact because we can recognize each of them as specific practices with their own logic, expectations, and rules. Making art is, itself, an instance of such practices. These practices have their own histories, and artists are aware of, and participants in, them, along with their norms, values, and expectations. Rap music is no different in that regard: rap feuds, which deliberately involve the exchange of negative and outright defamatory statements between rival rappers, are part of the practice of rap as an art form.

If one accepts that art pieces, including songs, are a different kind of utterance than assertions of truth, then they should not be treated as evidence in court. Because of rap music’s nature as an art form, its lyrics should not be utilized as a purported admission of guilt. This would be even more the case for Drake’s lawsuit against UMG, as the defamatory remarks in “Not Like Us” did not come out of the blue, but were part of an exchange of hostile and defamatory remarks between him and Kendrick Lamar. These statements were part of a rap feud, a practice that has characterized the art form for more than 50 years, and a history which Drake’s lawyers have tried to downplay as much as possible. Hopefully, the competitive and hyperbolic character of rap music will be recognized as essential to the art form, and Drake’s case will be dismissed.

Why My Students Shouldn’t Use AI Either

Every semester since ChatGPT arrived on the public stage, I have spent considerable time thinking about how to handle AI use with my students, and I have changed my answer each semester. This year, for the first time, I am going to ask that my students unequivocally avoid using it for any reason. Fortunately, I am not alone in this approach. Fellow Prindle author Daniel Burkett has offered three moral reasons why students should not use AI: it harms creators, the environment, and the students themselves. I would like to offer a few more reasons (though not all explicitly moral) to consider.

Argument 4: AI Erodes Responsibility

As AI systems infiltrate our human decision-making processes and social order more deeply, they are contributing to the erosion of accountability. To be sure, many AI evangelists who tout the benefits of AI will be quick to point out that it is on the human user to verify the legitimacy of AI outputs and use them responsibly. However, I am skeptical that this solution can overcome the accountability concerns I have.

Consider one personal anecdote. Last year, another driver hit my partner while she was driving our car and our insurance increased. When we called the insurance company, we wanted an explanation of why we would be paying the new amount. We were not objecting to having to pay more (though, it does feel unjust to have to pay more for an accident you are not at fault for). We simply wanted to know why the increase was $23 as opposed to $15 or $20. When we asked, the response we received was ultimately “I don’t know, that’s just what the system is telling me.” When we asked who we could contact to ask for more details, they said there was no one that could help us.

This example points out a larger issue with the integration of AI systems in social structures. We often think of accountability in cases where things go wrong, but conceptually accountability is about tracking responsibility for outcomes, whatever they may be. When we include AI in more of our life activities, we lose the thread of accountability. The reason for why something happened will increasingly stop with the answer “AI.” What makes AI unique is that it can behave like an agent in ways previous technologies have been unable to, which will make it well suited to enter into the stream of accountability and muddy the waters.

Furthermore, as these systems are more deeply integrated into our technologies and daily life, they will be treated as more trustworthy (regardless of whether they actually are). When people use technology that everyone is using, in the way that everyone is using it, it can be reasonable to ask for clemency when things go awry because they were just doing what was considered standard practice.

In my classrooms, we study ideas and arguments about serious topics: medical ethics, justice, propaganda, and technology. I want students to learn how to formulate ideas, explore their contours, and ultimately form well-founded beliefs that they can claim some form of ownership over. Given the propensity of AI systems to obscure the trail of accountability, I will be prohibiting its use because I want students to retain accountability for the ideas they produce in my classrooms.

Argument 5: AI Undermines Growth

One of the promises of AI is that it will take over some tasks for us, in order to free our minds and time up for more important things. We have also been promised that it will stimulate the creation of new, undiscovered roles in society. So far, many of these prophesied positions relate to the management of AI itself: we now need AI policy experts, AI oversight experts, AI alignment specialists, and AI testers, to name just a few.

While we have yet to see an influx of new and exciting career paths beyond those related to managing AI, we do have reason to think that as AI takes over activities for us we will no longer be able to do those things as well. A preliminary study suggests that if doctors go from not using AI to using AI and then back to not using AI, they become worse at making diagnoses than they were before they started using AI in the workplace. This should not surprise us. When we stop practicing skills, we lose our edge.

Echoing Burkett’s piece, in the realm of philosophy there is virtually no good reason for my students to use AI because every use case seems to undermine the very skills I want them to learn. When I ask my students how they use it, they typically tell me that they draft their own work and then feed it to AI to make it more professional. However, my philosophy courses are not about producing something that sounds convincing or looks professional (though it is nice when this happens). They are about learning how to think well. When students write an argument defending a position, and then feed it to AI to help make it more professional, they are missing out on practicing one of the crucial skills I am trying to teach them. Editing a paper for logical coherence, careful word choice, and conceptual analysis is part of the skill-building process, and AI impedes this.

Argument 6: AI Is Ideological

AI is currently (and will likely always be) infused with ideology. Nicholas Kreuder has written about the dangers that come from the power that the owners of AI have over us, which reveals the ideological nature of these systems and the risks we face when we rely on them.

If AI is given guardrails, those guardrails will be built according to the political, moral, and, likely, economic principles that the creators deem appropriate. Even a radical AI enthusiast who believes AI needs to be absolutely “free” would be instantiating an ideology within the AI system if they chose to avoid any guardrails at all. The choice of what data to train the system on and what to exclude will also be rooted in some ideological choice. And, insofar as these systems need to generate profit, they will always feel the ideological pull of economic interest.

This problem is not unique to AI, of course. The fact that the phrase “to google” is synonymous with the action of searching for something on the internet reveals the informational monopoly that one company wields over a huge portion of the world. And, the way that Google organizes search results is also far from ideology free.

AI’s ideology is an issue not because it is ideological per se; most technologies cannot avoid being infused with some kind of ideology. The issue is that AI is especially good at projecting confidence and expertise. AI writes convincingly from the perspective of many who use it (while many PhDs have criticized AI’s performance as laughable, even childish, this is not representative of the experience that many have while using it).

The problem with AI, then, is not just that it presents information confidently, but that when you ask it questions about controversial political and ethical issues, it appears to give balanced and unbiased answers. You can even instruct the AI to be unbiased and it will tell you that it will do that. But, in reality it cannot. (Notably, if you ask it “can you be unbiased?” it can also correctly tell you that this is not really possible).

While my ideological complaint also applies to pre-AI technologies like the Google search, the television, the radio, or the book, I think that conversing with AI poses a special problem. The confident, conversational, and apparently unbiased delivery of information occludes the ideological bent that AI systems have.

Argument 7: A Refuge From AI

Many of us feel compelled to use AI whether we like it or not out of a fear of being left behind (FOMO is a real tactic in the tech marketing world). I suspect that AI will be used by many of my students because they feel that they must for “educational” purposes. I also know that outside of the university context, students will be required to use AI for their jobs and are forced to use it when interacting with the sociotechnical infrastructure around them.

The final, simple reason I will prohibit AI in my classroom this semester is to give my students a place of refuge from it. My hope this fall is to give students the room to slow down, make mistakes, and think for themselves without the pressure to be perfect. Although it promises to make our lives easier, AI is ultimately a tool that entices us to work harder. It promises to help us make things better, do things faster, and make us stronger. But this is machine logic, and we are human after all. So, this fall I will say no to AI.

From Pet to Predator’s Plate

Many of us have pets. More importantly, many of us love our pets. Be they cats, dogs, hamsters, lizards, or any other kind of critter, pets provide their owners with endless love, fun, and no small amount of irritation. So when the inevitable happens and a pet dies, the grief can be devastating. The Royal Society for the Prevention of Cruelty to Animals even dedicates a section of its website to pet bereavement, and services like Paws to Listen and Friends at the End exist solely to support people facing the loss of a companion animal. All this is to say that, for many of us, pets are not simply creatures that we own; they are members of the family.

Perhaps it’s this deep attachment that explains the public backlash to a recent appeal from Denmark’s Aalborg Zoo. The zoo asked people to donate unwanted pets so they could be used to feed its predators. The call, which the zoo put out on Instagram, reads:

Did you know that you can donate smaller pets to Aalborg Zoo?

Chickens, rabbits and guinea pigs form an important part of the diet of our predators – especially the European lynx, which needs whole prey that resembles what it would naturally hunt in the wild 🐾

In zoos, we have a responsibility to imitate the animals’ natural food chain – for the sake of both animal welfare and professional integrity 🤝

If you have a healthy animal that needs to be removed for various reasons, you are welcome to donate it to us. The animals are gently euthanized by trained staff and then used as food. That way, nothing goes to waste – and we ensure natural behavior, nutrition and well-being of our predators 💕

Now, it should be noted here that the zoo does not accept cats or dogs as donations (although one might reasonably ask why). But, in addition to the small animals listed above, it does accept horses. And despite the recent attention the program has received, it is not new; the zoo has been accepting unwanted animals for years. It is simply that this most recent call has caught the media’s attention.

The response was swift. The zoo had to shut down the comments on its Facebook call due to the volume of angry and even abusive replies. Yet the reason the zoo is seeking these pets is not inherently or even obviously malicious. As the post itself explained, the program is not designed to be cruel. Predators in captivity thrive when their environment, including their diet, mimics what they would experience in the wild. In nature, the predators wouldn’t be eating neatly butchered, boneless meat; they would be consuming whole prey, bones, skin, fur, and organs included. The zoo’s aim, it says, is to provide this more natural diet for the benefit of the animals in its care, not to harm the animals that are donated.

And, for clarity, the donated animals are euthanised before being given to predators. They are not released alive into enclosures, so concerns about them suffering while hunted don’t apply in this case.

Even so, many people find the idea of feeding former pets to zoo animals unsettling. The question is “Why?” Is our discomfort just an instinctive reaction that we should learn to set aside in the name of animal welfare? Or is there a deeper, ethically significant reason for our unease?

On one hand, the case for Aalborg Zoo’s policy is straightforward: it prevents waste. If an animal is going to die anyway, its body could either be cremated or left to rot — or it could feed the zoo’s predators. This isn’t a choice between life and death for the animal; it’s a choice between disposal and purposeful use. Framed in such practical terms, the answer seems obvious: if you care about animals, you might choose the option that benefits another living creature, even if it means accepting that a lynx ate your beloved guinea pig.

On the other hand, there’s an argument rooted not in utility but in symbolism and empathy. By turning pets into food, critics say, we devalue their existence and our relationship with them. As Clifford Warwick, a UK-based consultant biologist and medical scientist, told The Guardian, “It further devalues the lives of pets … It’s a horrendous devaluation of animal life.” He follows this up with a succinct illustration: “Are you really happy saying: ‘OK, well Rex or Bruno, the time has come, there’s a hungry lion at the local zoo. Bye, off you go.’” In short, if pets are more to us in life than mere sacks of meat, they should mean more to us in death as well.

I find this objection difficult to fully accept. We already know the pet is going to die; what happens to the body afterward shouldn’t be more upsetting than the death itself. To focus on the fate of the remains rather than the loss of life seems misplaced.

There’s also the ecological reality: animals eat other animals. It’s not a moral failing, but how many species survive. Whether a zoo predator eats a former pet or a cow bred for slaughter, the act itself is the same. If the pet was going to die regardless, why object to its body sustaining another life?

What’s more, I’d be curious to know how many critics of this policy are vegan or vegetarian. If you continue to eat meat, thereby directly contributing to the death of animals, it seems inconsistent — if not outright hypocritical — to condemn the zoo for doing something that arguably reduces waste. In Denmark, according to the Vegan Society, vegetarians make up only about 3% of the population. For the rest, dietary choices already cause far greater harm to animals than Aalborg Zoo’s policy. And while vegetarian and vegan rates vary around the world, I would feel confident in saying there are far more meat eaters than those who subsist on a purely plant-based diet.

Ultimately, I think the outrage may have less to do with harm and more to do with cultural categories. As Haidt, Koller, and Dias noted back in 1993, we sort animals into moral boxes: pet (not for eating), livestock (fine for eating), vermin (kill on sight). These labels are arbitrary, yet they strongly shape our reactions. A rat and a rabbit are not inherently different in moral worth, but the label we assign determines whether we nurture or exterminate them. Blindly accepting these categories without reflection doesn’t protect animals — it just reinforces our own biases.

So the real question is this: if your pet had to be put down, would you let its death be the end, or would you allow it to help another animal live?

Is Artificial Intelligence Sustainable?

A recent advertisement for Google’s “Gemini” artificial intelligence (AI) model shows users engaged in frivolous, long-form conversations with their AI personal assistant. “We can have a conversation about anything you like,” Gemini cheerfully informs one user, who is unsure of how to approach this new technology. Another user asks Gemini, “how do you tell if something is spicy without tasting it?” to which Gemini responds (without any hint of the stating-the-obvious sarcasm with which a human might be expected to reply to such an inane question) “have you tried smelling it?” What is clear from this advert, and other similar adverts produced by companies such as Meta, is that the companies designing and selling AI intend for its adoption to be ubiquitous. The hope of “big tech” is that AI will be used liberally, for “anything” as the advert says, becoming part of the background technological hum of society in just the same way as the internet.

Awkwardly for these companies, this push for the pervasive adoption of AI into all realms of life is coinciding with a climate and ecological crisis that said technologies threaten to worsen. “Data centers,” the physical infrastructure upon which AI systems depend, are predicted by the IEA to double in their energy consumption from 2022 levels by 2026, consuming around 4.5% of total electricity generated globally by 2030 – which would rank them fifth in the list of electricity usage by country, just behind Russia and ahead of Japan. This of course comes with a significant carbon footprint, driving up global energy demand at precisely the moment that frugality is required if countries are to meet their net-zero goals. Such a significant increase in electricity usage is likely to extend our dependency on fossil fuels as efforts to decarbonize supply can’t keep up with demand.

Beyond electricity usage, data centers also require both vast amounts of water for cooling and rare-earth minerals to produce the hardware components out of which they are built. Google’s data centers consumed (that is, evaporated) approximately 31 billion liters of water in 2024 alone. This at a time when water scarcity is already a serious problem throughout much of the world, with two-thirds of the global population experiencing severe water scarcity during at least one month of the year. Similarly, the mining of rare-earth minerals such as antimony, gallium, indium, silicon, and tellurium is another aspect of the AI supply chain known to wreak both ecological and social havoc. China, by far the world’s largest processor of rare-earth minerals, having realized the heavy environmental toll of rare-earth mines, has now mostly outsourced mining to countries such as Myanmar, where the mining process has poisoned waterways and destroyed communities.

Given the vast resources required to build, train, and maintain AI models, it is fair to question the wisdom of asking them “anything.” Do we really need power-hungry state-of-the-art algorithms to tell us that we can smell an ingredient to check whether it’s spicy?

In response to such sustainability concerns, Google has pointed out that alongside the more mundane uses of AI displayed in its advertisement, the implementation of AI throughout industry promises a raft of efficiency savings that could result in a net reduction in global emissions overall. In its 2025 environmental report, Google describes what it calls an “optimal scenario” based on IEA research stating that the widespread adoption of existing AI applications could lead to emissions reductions that are “far larger than emissions from data centers.” However, some of the IEA’s claims are based on the somewhat spurious assumption that efficiency savings will be converted into reduced emissions rather than simply lowering prices and increasing consumption (for example, some of the emissions reductions predicted by the IEA’s report come from the application of AI to the oil and gas sector itself, including helping to “assess where oil and gas may be present in sufficiently large accumulations”).

Even granting a level of skepticism here, the potential of AI to produce positive outcomes for both the environment and humanity shouldn’t be overlooked. Initiatives such as “AI for Good,” which seeks to use AI to measure and advance the UN’s Sustainable Development Goals, and “AI for the Planet,” an alliance that explores the potential of AI “as a tool in the fight against climate change,” illustrate the optimism around AI as a tool for building a more sustainable future. In fact, a 2022 report produced by “AI for the Planet” claims the technology could be implemented in three key areas in the fight against climate change: mitigation, through measuring and reducing emissions; adaptation, through predicting extreme weather and sea-level rise; and finally, research and education.

There is also potential to use AI as a tool for biodiversity conservation. Research carried out by the University of Cambridge identified several applications for AI in conservation science, including: using visual and audio recognition to monitor population sizes and identify new species; monitoring the online wildlife trade; using digital twins to model ecosystems; and predicting and mitigating human–wildlife conflicts. However, the authors also point to the significant risk of eroding support and funding for smaller-scale participatory research in favor of the larger and wealthier institutions able to carry out AI-based research. Additionally, they highlight the risk of the creation of a colonial system whereby data is extracted from lower-income countries to train models in data centers in North America and Europe, resulting in the export of AI-driven mandates for the use of resources and land back to those lower-income countries.

Such risks indicate the need to consider an important distinction that has been made in the field of AI ethics. Philosophers such as Aimee van Wynsberghe and Henrik Skaug Sætra have argued for the need to move from an “isolationist” to a “structural” analysis of the sustainability of AI technologies. Instead of thinking of AI models as “isolated entities to be optimized by technical professionals,” they must be considered “as a part of a socio-technical system consisting of various structures and economic and political systems.” This means that the sustainability of AI doesn’t come down to a simple cost-benefit analysis of energy and resources used versus those saved through greater efficiency and sustainability applications. In order to fully understand the indirect and systemic effects of AI on environmental sustainability, these philosophers argue, we need to consider AI models in their social and political context.

A structural analysis must begin by pointing out that we live in a system characterized by immense inequalities of both wealth and power. As it stands, most AI models are owned and operated by tech companies whose billionaire CEOs have been described as oligarchs. These companies are the principal beneficiaries of a political system driven by economic growth and fueled through resource extraction. We should expect the AI models they produce to propagate this system, further concentrating power and capital to serve the narrow set of interests represented by these companies and their owners. A purely “isolationist” focus suits these interests as AI’s positive applications can be emphasized, while any negative effects, such as vast levels of resource usage, can be presented as technical problems to be ironed out, rather than systemic issues requiring political reform.

To take some examples already touched upon in this article, an isolationist approach can highlight the efficiency savings that are made possible by using AI models to streamline industry, while a structural approach will point out the economic reality that efficiency-savings tend to be harnessed only to ramp up production, lowering prices and leading to increased consumption, and therefore, higher profits. An isolationist approach can view the dependence of AI on large quantities of rare-earth minerals as a technical problem to be solved through more efficient design, whereas the structural approach will point to the need to address the immense injustices that are intrinsic to the rare-earth supply chain. An isolationist approach will tout the potential for AI models to guide ecological restoration in lower-income countries, while a structural approach will point out how this echoes the colonial history of conservation science.

Once we start to consider AI within its political and socio-economic context rather than as an isolated technological artefact, we can look beyond its direct applications for sustainability so that its many troubling indirect and systemic implications come into sharper focus. It becomes apparent that, rather than promoting sustainability, there is a far greater propensity for AI to enable further resource extraction, evade environmental regulations, and manipulate public debate and opinion on environmental issues.

A striking example of this is the way that AI is being used to undermine public trust in climate science. A report authored by the Stockholm Resilience Centre argues that the ability to generate synthetic text, images, and video at scale could fuel a “perfect storm” of climate misinformation, whereby AI models produce vast amounts of climate denial content that is then disseminated through social media algorithms already geared towards bolstering controversial and polarizing content. Consider this faux-academic paper recently written by Elon Musk’s Grok 3 model that casts doubt on the science of anthropogenic global warming. The paper was widely circulated on social media as an example of the first “peer-reviewed” research led by AI. Of course, claims of “peer-review” are unfounded. Neither the publisher nor the journal is part of the Committee on Publication Ethics, and the paper was submitted and published within just twelve days, with no indication of whether it underwent open, single, or double-blind review. It should come as no surprise that one of the co-authors, astrophysicist Willie Soon, is a climate denier known to have received millions in funding from the fossil fuel industry, and whose contested research was referenced by the AI-generated paper. Despite such an obvious conflict of interest, a blog post by the COVID-19 conspiracy theorist Robert Malone, claiming that the use of AI meant the paper was free from the biases of what he describes as “the debacle of man-made climate change,” gathered more than a million views.

From a “structural” perspective, then, ensuring that AI models are sustainable is not merely a technical issue but a political issue of confronting the systems and power structures within which AI technologies are built and utilized. One step in the right direction is to democratize AI governance such that ultimate control over AI’s direction and implementation is wrested from the hands of Silicon Valley oligarchs and given to democratically elected governments so that regulation can be imposed to promote AI’s sustainability, both in terms of its physical infrastructure and its applications. However, so long as AI remains enmeshed within the power structures responsible for creating the environmental crisis, it will never truly be a force for advancing sustainability.

The US’s Action Plan to “Prevent Woke AI”

For a few years now, “digital” or “technological” sovereignty has been a prominent topic within AI Ethics and regulatory policies. The challenge is this: how can government actors properly rule in the interest of their citizens when both governments and citizens must rely on technologies developed by a handful of companies over which they have no clear control? Many efforts to address this challenge have consisted either of regulations, such as the EU’s AI Act, or of various forms of agreement between (supra)national actors and tech companies.

Unfortunately, the White House’s “America’s AI Action Plan” and the three Executive Orders published on the same day ignore this thorny issue entirely. Instead, these policy proposals aim at deregulating AI development by American Tech companies “to achieve global dominance in artificial intelligence.” The general thrust is clear: deregulate AI development, promote its deployment across society, and export widely so as to strengthen the U.S.’s global standing.

In advancing these interests, one keyword sticks out like a sore thumb: “Woke AI.” As a millennial, it feels surreal to see a term that I have primarily experienced as Internet lingo make its way into a Presidential Executive Order. While this is far from the first time that the term “woke” has been utilized by the president to pejoratively address the values of the opposition, it’s far from clear what precise danger such language is meant to evoke. What kind of threat does “Woke AI” represent?

The July 23rd Executive Order “Preventing Woke AI in the Federal Government” does not attempt to define the term. Instead, it states that AI systems should provide reliable outputs, free from ideological biases or social agendas that might undermine their reliability. In particular, the Order identifies “diversity, identity, and inclusion” (DEI) as a “destructive ideology” that manipulates information regarding race or sex, and incorporates “concepts like critical race theory, transgenderism, unconscious bias, and systemic racism.” The Order then identifies “Unbiased AI Principles” that will guide development going forward. Chief among these is the command that AI must be truth-seeking and ideologically neutral – “not manipulat[ing] responses in favor of ideological dogmas such as DEI” – to ensure that AI systems are trustworthy.

To many AI ethicists (including myself), the Order reads like a series of non-sequiturs. It demands that tech companies reject any notion related to DEI in their AI development guidelines, yet it is quite unspecific regarding what such rejection would entail in practice. Let us set aside the countless examples of AI systems being unlawfully biased on the basis of race, gender, economic status, and disability in a variety of domains. Let us also set aside the practical impossibility for AI systems to be “unbiased” given that they are technologies literally designed to identify potentially meaningful patterns and sort accordingly. And, finally, let us set aside the irony of the clear ideological grounds motivating the Order’s intention to generate non-partisan results. What little remains when all these difficulties have been accounted for doesn’t amount to much. And it’s worth asking why the focus on “anti-woke AI” represents such a large part of the White House’s overall AI strategy.

The answer to that question becomes much clearer when looking at how – and where – “woke AI” crops up. From the beginning, responsible AI policy is described as integral to the goal of protecting free speech and American values. Ultimately, AI outputs must “objectively reflect truth rather than social engineering agendas.” For that reason, references to “misinformation,” regarding things like DEI and climate change, must be removed. But this kind of censorship seems odd considering the stated desire to promote freedom of speech, especially because the Plan is explicitly stating what not to talk about – censoring tech companies from treating those topics as relevant concerns.

Ultimately, it often feels like the concern over “Woke AI” is merely a pretext for removing safeguards in order to accelerate AI development. This intent is made explicit at several points of the Plan. At its very introduction (and in reference to the Vice President’s remarks at the AI Action Summit last February), the Plan asserts that any “onerous” regulation of AI development would paralyze the technology’s potential – a reason why the current administration rescinded Biden’s “dangerous” Executive Order on AI. (Interestingly enough, many saw that regulation as quite lenient, all things considered, especially compared to the EU’s AI Act.) Any mention of regulation in the Plan that does not originate from the current White House is dismissed as “onerous,” “burdensome,” or in some other way an unreasonable brake on AI development.

Even more poignantly, the Plan is quite clear in its intention to counter Chinese influence: it refers to the governance frameworks proposed by international organizations such as the UN, the G7, and the G20 as “vague ‘codes of conduct’ that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies attempting to shape standards for facial recognition and surveillance.” Safeguards meant to protect individual rights and privacy are written off as the calculated design of the U.S.’s largest geopolitical competitor.

But the Plan is not simply a rhetorical tool to signal dominance within the U.S.’s political discourse. Rather, it is a means of vilifying any obstacle to the “move fast and break things” approach as “woke.” This language is not only meant to clearly separate the current White House’s position from their predecessor’s, but to pave the way for deregulation. The fear is that this attitudinal shift cedes far too much power to unaccountable tech companies. Without stronger guardrails in place, we may all get run over.

On Medical Freedom

In recent decades, American healthcare has occupied a central place in public discourse. The cornerstone piece of legislation in President Barack Obama’s time in office, the Affordable Care Act, has dominated discussions of access to healthcare and health insurance since its passage into law. President Donald Trump pushed for legislation that repealed portions of the Act but never articulated an alternative vision, infamously noting during a 2024 presidential debate that he had “concepts of a plan” for a replacement. Current Secretary of Health and Human Services, Robert F. Kennedy Jr., launched a failed presidential campaign focusing not on insurance or healthcare access, but instead championing a concept we will call “medical freedom.”

At the heart of medical freedom is the idea that people ought to be allowed to make choices about healthcare free from external interference. This view seems to consist of (at least) two components.

First, the negative component: that people should be free from government-mandated treatment. Specifically, proponents of medical freedom argue against preventative treatments such as vaccinations. Although energized by vaccine policy in the wake of COVID-19, this idea has its roots in pushback against smallpox inoculation in the late 18th century and anti-vaccination movements in the 19th and 20th centuries. As with the historical cases, members of the current medical freedom movement view “vaccine mandates” as infringing upon their fundamental liberties, be they religious or personal, and instead seek alternative remedies or treatments for infectious disease.

(It is worth noting, though, that the government cannot mandate vaccines. Competent adults generally have the right to refuse medical treatment. However, access to some goods and organizations may require vaccination; generally students in public school must be vaccinated against common illnesses [although a vast majority of states allow for religious and/or personal exemptions], members of the military must be vaccinated against many infectious diseases, etc. So perhaps we should better understand this component as objecting to the government incentivizing any treatment.)

The idea of alternative remedies links to the positive component: that people ought to have the right to access their preferred treatments without government intervention. This is a relatively new idea, especially compared to the negative component. The federal government only began regulating drugs in the 20th century with the passage of the Pure Food and Drug Act. The law primarily required accurate labeling of products and disclosure of addictive or dangerous ingredients. In 1938, Congress passed the Federal Food, Drug, and Cosmetic Act, which granted the Food and Drug Administration greater regulatory authority, including the ability to ban substances and require pre-market testing of drugs.

Both components of this freedom – the positive and the negative – seem to rely on the moral principle of autonomy, the idea that individuals have the right to make informed choices regarding themselves and their body. This right suggests it is wrong to interfere with someone if they make such a choice when sufficiently informed. Others, specifically the government, should seek only to provide information.

But there seems to be an incoherence within the autonomy-grounded medical freedom movement when placed in the context of other policies favored by the movement, at least as enacted by Kennedy thus far as HHS secretary. This incoherence gives rise to a dilemma: Either the freedoms are (nearly) absolute, in which case the movement should focus regulations primarily on providing consumers with information about products, or the freedoms are not (nearly) absolute and incursion upon them may be justified. The trouble for the proponent of medical freedom is that, once we begin condoning incursions, we may end up justifying the very policies members of the movement rail against.

What would regulation look like if we treated these freedoms as unlimited? It seems that regulations would primarily serve to ensure that citizens are fully informed about drugs and other products they are consuming, insofar as they relate to health. Regulation would simply serve to facilitate autonomous decision-making. Give the citizenry as much information about the products on the market as you can, then let them choose for themselves.

However, members of the medical freedom movement often favor policies that work against consumer information. Organizations such as the National Health Federation stress that consumers have the unrestricted right to access dietary supplements. But in the U.S., dietary supplements occupy a different legal category than pharmaceuticals. Makers of dietary supplements must accurately label the contents of their product. However, unlike pharmaceuticals, makers of dietary supplements do not have to provide substantial evidence about purported benefits of their products. Further, manufacturers are not required to provide dosing information. Yet even common nutrients may produce adverse outcomes in high doses. Earlier this year, children in Texas were treated for Vitamin A toxicity when admitted for measles. Kennedy had previously instructed the CDC to change measles guidance to mention vitamin A as a treatment.

Thus, at least in the current legal framework, the medical freedom movement’s emphasis on dietary supplements actually inhibits, rather than supports, autonomous decision-making. Consumers have less access to information about these products than about prescription drugs, thus limiting their ability to make fully informed decisions.

Further, other policies favored by members of the medical freedom movement restrict choices that consumers can make. For instance, in April the FDA announced that it is working with industry to phase out synthetic food dyes, including Allura Red AC, more commonly known as Red 40. However, the Environmental Protection Agency lists Red 40 as “verified to be of low concern,” the World Health Organization finds that Red 40 “does not present a health concern,” and the European Food Advisory Committee concludes that there is limited evidence for a link between Red 40 and behavioral issues in children, a claim commonly cited in arguments against the substance.

Still, working to remove products like synthetic dyes seems like good policy. Even if there is little evidence suggesting that synthetic dyes such as Red 40 lead to adverse outcomes, precaution would justify eliminating them; we stand to gain very little by using them, so avoiding the risk, however remote, seems worth it.

The issue, though, is that this policy is in tension with autonomy. By working to remove synthetic dyes, the regulatory system forces a precautionary choice onto consumers who might not otherwise make it. Even if it is sensible policy, it directly interferes with free consumer choice.

So, the medical freedom movement is in internal tension with the principle of autonomy in two ways. First, by stressing the freedom to access supplements, it inhibits consumers’ ability to make informed choices. Second, by adopting precautionary policies, it limits the choices consumers can make.

Of course, one might defend the movement by arguing that precautionary policies are advisable. One might simply concede that, although we are not maximizing freedom, the benefits to public health justify restricting at least some options.

This is an excellent response, but it cannot give the advocate of medical freedom everything. While it may successfully defend more restrictive regulation, as in the case of synthetic dyes, it may also require abandoning the commitment to supplement access; a precautionary approach would demand greater scrutiny there, rather than encouraging consumers to pursue products about which little is known. And once we start justifying restrictions of individual freedom on the basis of public health, we may end up justifying some of the very policies that the medical freedom movement criticizes.

Consider the ongoing measles outbreak in Texas alluded to above. In an op-ed, Kennedy emphasized that vaccination against measles is a personal choice, choosing instead to stress nutrition as the best way to combat the disease. But the data suggest that encouraging vaccination is a better policy than letting measles spread even among healthy people. In a longitudinal study of 276,327 doses of the MMR (measles, mumps, rubella) vaccine given to adults and adolescents from January 2010 to December 2018, there were fewer than 6 serious outcomes requiring hospitalization per 100,000 doses during the risk window following vaccination, some of which were attributable to patients’ prior health conditions. For comparison, measles cases in the United States from 1987–2000 led to hospitalizations at a rate of 19,200 per 100,000 cases and 300 deaths per 100,000 cases. If precaution can prevail over freedom, then this seems like as obvious a case as any; the movement ought to emphasize the importance of vaccines rather than individual choice.

But here Kennedy and members of the medical freedom movement opt for choice over public health, so it is unclear exactly what value motivates them. Members of the movement should take a step back and ask themselves: What are we most committed to? If autonomy is their foundational commitment, then they should promote measures that increase the choices available to the public while ensuring ample access to information about the products people consume. But if public health is their primary concern, then that would suggest policies that limit individual choice in at least some respects; and the policies that most effectively promote public health may be ones they are inclined to reject, such as the promotion of mass vaccination.

From outside the world of decision-making, it is easy to think you can have it all. But once forced to develop policy, one often finds that not all values can be maximized simultaneously. In that case, one must choose which value takes priority. By insisting on the importance of both public health and individual choice, the medical freedom movement fails to achieve either.

Into Houses or Off the Street?: The Right Response to Homelessness

On July 24th, the U.S. administration announced a new approach to addressing America’s ongoing homelessness crisis in an executive order: “Ending Crime and Disorder on America’s Streets.” Its main assertion is that “shifting homeless individuals into long-term institutional settings for humane treatment through the appropriate use of civil commitment will restore public order.” It also encourages maximum enforcement of laws against vagrancy and the elimination of encampments.

What is ethically notable about the executive order is its framing of the problem. As the title of the order makes clear, the concern is not that hundreds of thousands of Americans do not have housing, but rather that they are on the streets. It might be objected that this is a distinction without a difference, merely semantics. But it is not: how we understand the problem shapes the solutions we consider.

Let us say the policy challenge we are interested in is, “Homeless individuals do not have adequate homes.” The aim then becomes to secure housing, and potential solutions could include avenues such as directly providing housing, seeking to lower rent, or ensuring support services such as mental health care exist so people do not lose housing. By contrast, if the stated problem is, as the title of the executive order indicates, “crime and disorder on America’s streets,” then solutions such as institutionalization (whether in mental health facilities or prisons) are more natural. The specific solutions we consider follow directly from our particular understanding of the problem.

In this case, one might worry there is something wrong with treating the unhoused merely as an inconvenience for other people. Immanuel Kant, a German philosopher of the 1700s, famously argued that people should always be treated as ends and never merely as means. In other words, we should regard people as valuable in their own right, not simply for their usefulness to others.

However, while the inadequately housed are themselves the primary sufferers, homelessness is a social problem that affects other people, from human waste and burdens on public services to, somewhat more complicatedly, crime. Consideration of the public impact of homelessness is not, in itself, inherently dehumanizing. Rather, Kant’s ethical concern arises when we fail to recognize the homeless as people and begin to treat them merely as a public nuisance to be resolved. To be clear, the executive order is hardly unique in taking this stance; it is an enduring approach in the U.S.

The executive order also represents a changed stance on alleged causes. It asserts, “The Federal Government and the States have spent tens of billions of dollars on failed programs that address homelessness but not its root causes, leaving other citizens vulnerable to public safety threats.” Per the order, the root causes are mental health problems and substance abuse. But experts generally dispute this contention: while mental health and substance abuse are major problems in homeless populations, it is rent and housing prices that appear fundamental. There have, admittedly, been longstanding worries about deinstitutionalization – the shift of individuals with serious mental illness away from mental health institutions and into community care. (It is worth noting that overall funding for mental health care is actually being decreased by current legislation.)

There are also concerns about the executive order’s abandonment of “housing first” approaches, which prioritize getting homeless people into housing and then following up with other support services, and which are strongly backed by evidence. Unsurprisingly, homelessness is a major stressor and a cause of drug abuse and mental health problems, so housing first programs make intuitive sense. Homelessness often places individuals in survival mode, where other problems become especially challenging to treat.

Most of these concerns relate to the policy infeasibility of the executive order rather than to ethics. But an ethical issue lies behind any analysis of the causes of homelessness. Big picture: is homelessness something wrong with individuals, who need to be treated, or with society, which needs to be changed? These are of course not mutually exclusive, but there is a question of emphasis and of whom we see as morally responsible for homelessness. If we primarily blame individuals for their own homelessness, e.g., through irresponsible drug use, we may be less inclined to support compassionate solutions. The recent executive order leans toward an individual responsibility tack rather than highlighting background economic or social problems.

A final controversial reprioritization is the dismissal of “harm reduction” practices. The intent of these practices, such as supervised injection facilities and sterile needle exchanges for injection drug users, is to minimize the harms of a dangerous behavior. Some of the major ethical questions around harm reduction have been analyzed in The Prindle Post by Nicholas Kreuder. As he points out, while one may be uncomfortable with the government enabling drug use, from a lives-saved perspective the evidence is very favorable for harm reduction approaches.

Again, focusing on the characterization of the problem can provide some clarity. If the problem to solve is “drug use,” then simply reducing harms is not responsive. Alternatively, if what we care about addressing are “harms associated with drug use,” such as overdose and disease transmission, then a strategy of harm reduction is more logical. An analogous case to consider is abstinence-only sex education, versus sex education that discusses condoms and other contraceptives and prophylactics.

As this executive order illustrates, specifying the problem we want to solve – housing or vagrancy, drug use or drug harms – is a major ethical decision in its own right. Thinking of homelessness primarily as a problem for the non-homeless raises further unsettling implications. Just how much harm and indignity for homeless people, from anti-homeless architecture to the criminalization of sleeping in public spaces to incarceration, is allowable to prevent the impositions they place on others?

Why My Students Shouldn’t Use AI

As the new school year approaches, educators across the country are once more redesigning their classes in light of the brave new world of generative AI. Many teachers are embracing the technology – encouraging their students to make use of this powerful new tool. Some are even going so far as to use AI to assist in their course design. Others, like myself, are banning any use of generative AI in their classes. But why?

Perhaps I’m a Luddite. Perhaps I’m no better than Socrates, fearing that writing would be the death knell for education. Nevertheless, I think there are (at least) three strong moral arguments against students using AI in a philosophy class – and perhaps in education more generally.

Argument 1: AI Harms Creators

Generative AI tools like ChatGPT are built on large language models (LLMs). Put simply, they’re trained on vast quantities of data – usually scraped from what is freely available on the internet. The problem is that this data usually belongs to other people. More problematically, generative AIs make no effort to credit the data that shape their outputs. So, when I use ChatGPT to generate a fluid structure for my paper, or a killer opening paragraph for my opinion piece, there’s no way I can properly credit the sources of those generated outputs. In doing so, I necessarily pass off someone else’s ideas as my own – the very definition of plagiarism.

As our own Tim Sommers notes, a common counter to this argument is that the operation of an LLM isn’t all that different from how our own minds already work: absorbing vast amounts of data, and using that data to produce novel creations. Anyone who’s ever created anything will know the fear that one of your darling creations – a plot point, a song lyric, or a visual design element – is merely parroting another creation once seen, but long forgotten.

Like Sommers, I admit that I lack the expertise to discern how different the operation of LLMs is from how our own minds function. But I think that there is at least one morally important point of difference: While our own creations might be subconsciously informed by data we’ve absorbed, there is (excepting cases of intentional plagiarism) no intention on our part to consciously hold out the work of another as our own. The same isn’t true when we use ChatGPT. We know how LLMs operate, and we know that any product of a generative AI has made vast (unattributed) use of the works of others. This knowledge is, I think, enough to make our actions morally problematic.

Argument 2: AI Harms the Environment

But AI doesn’t just harm creators – it’s also devastating for the environment. Generative AI requires huge amounts of processing power, and that power requires a lot of energy. While precise quantifications are hard to come by, ChatGPT’s power usage is estimated to be roughly equivalent to that of 33,000 standard homes. And it’s not just electricity, either. Generative AIs need vast amounts of water to cool their processors – a concerning prospect, given that we are at imminent risk of a global water crisis.

We are in the throes of a global climate catastrophe – a catastrophe that, according to some estimates, might become irreversible in less than four years if we don’t make drastic changes to our way of living. Among those necessary changes are massive reductions in our energy consumption. Given this, an explosion in the popularity of generative AI is the last thing we need.

Of course, the fact that there is an environmental argument against AI usage doesn’t give us an all-things-considered reason to stop. There are many harmful practices that we might need to continue in order to ensure human safety and flourishing. But using AI just doesn’t seem to be among them. Much of our AI usage is entirely frivolous – with 38% of people using AI to plan travel itineraries, and another 25% using it to draft social media posts. And when it comes to non-frivolous functions – like using it to craft an email (as 31% of people have) or prepare for a job interview (as 30% of people have) – there are far less environmentally harmful ways of doing the very same thing. Having a question answered by AI can produce almost fifty times the carbon emissions of using a simpler system – like a search engine – to resolve the same query.

Argument 3: AI Harms the User

Even if we’re not motivated to care about creators or the environment, one further fact remains true: AI harms the user. I begin each of my classes by describing philosophy as the discipline that encourages us to think carefully about the reasoning behind our beliefs. This is a challenging – and sometimes terrifying – endeavour, since the discovery of bad reasoning can often force us to abandon some of our most dearly-held beliefs. The subjects I teach require my students to consider some hard questions: Does the climate crisis mean we should have fewer children? Should we permit physician-assisted suicide? Would a Federal ban on TikTok violate our right to freedom of expression? I believe that it’s vitally important that each of us formulate our own answers to such questions. If we farm this out to an algorithm, we’re sort of missing the whole point of philosophy (and education more generally). As Marta Nunes da Costa puts it:

“being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and in doing so, build our moral identities – who we are and what we value.”

As I’ve argued before, failing to think critically not only risks making us bad thinkers, but also bad humans. I believe that fact – coupled with the clear harms to creators and the environment – is more than sufficient to explain why my students shouldn’t use AI.