
Is It Okay to Be Mean to AI?

Fast-food chain Taco Bell recently replaced drive-through workers with AI chatbots at a select number of locations across America. The outcome was perhaps predictable: numerous videos went viral on social media showing customers becoming infuriated with the AI’s mistakes. People also started to see what they could get away with, including one instance where a customer ordered 18,000 waters, temporarily crashing the system.

As AI programs start to occupy more mundane areas of our lives, more and more people are getting mad at them, are being mean to them, or are just trying to mess with them. This behavior has apparently become so pervasive that AI company Anthropic announced that its chatbot Claude would now end conversations when they were deemed “abusive.” Never one to shy away from offering his opinion, Elon Musk went to Twitter to express his concerns, remarking that “torturing AI is not okay.”

Using terms like “abuse” and “torture” already risks anthropomorphizing AI, so let’s ask a simpler question: is it okay to be mean to AI?

We asked a similar question at the Prindle Post a few years ago, when chatbots had only recently become mainstream. That article argued that we should not be cruel to AIs, since by acting cruelly towards one thing we might get into the habit of acting cruelly towards other things, as well. However, chatbots and our relationships with them have changed in the years since their introduction. Is it still the case that we shouldn’t be mean to them? I think the answer has become a bit more complicated.

There is certainly still an argument to be made that, as a rule, we should avoid acting cruelly whenever possible, even towards inanimate objects. Recent developments in AI have, however, raised a potentially different question regarding the treatment of chatbots: whether they can be harmed. The statements from Anthropic and Musk seem to imply that they can, or at least that there is a chance they can, and thus that you shouldn’t be cruel to chatbots because doing so risks causing harm to the chatbot itself.

In other words, we might think that we shouldn’t be mean to chatbots because they have moral status: they are the kinds of things that can be morally harmed, benefitted, and evaluated as good or bad. There are lots of things that have moral status – people and other complex animals are usually the things we think of first, but we might also think about simpler animals, plants, and maybe even nature. There are also lots of things that we don’t typically think have moral status: inanimate objects, machines, single-cell organisms, things like that.

So how can we determine whether something has moral status? Here’s one approach: whether something has moral status depends on certain properties that it has. For example, we might think that the reason people have moral status is because they have consciousness, or perhaps because they have brains and a nervous system, or some other property. These aren’t the only properties we can choose. For example, 18th-century philosopher Jeremy Bentham argued that animals should be afforded many more rights than they were at the time, not because they have consciousness or the ability to reason, per se, but simply because they are capable of suffering.

What about AI chatbots, then? Despite ongoing hype, there still is no good reason to believe any chatbot is capable of reasoning in the way that people are, nor is there any good reason to believe that they possess “consciousness” or are capable of suffering in any sense. So if it can’t reason, isn’t conscious, and can’t suffer, should we definitively rule out chatbots from having moral status?

There is potentially another way of thinking about moral status: instead of thinking about the properties of the thing itself, we should think about our relationship with it. Philosopher of technology Mark Coeckelbergh considers cases where people have become attached to robot companions, arguing that, for example, “if an elderly person is already very attached to her Paro robot and regards it as a pet or baby, then what needs to be discussed is that relation, rather than the ‘moral standing’ of the robot.” According to this view, it’s not important whether a robot, AI, or really anything else has consciousness or can feel pain when thinking about moral status. Instead, what’s important when considering how we should treat something is our experiences with and relationship to it.

You may have had a similar experience: we can become attached to objects and feel that they deserve consideration that other objects do not. We might also ascribe more moral status to some things than to others, depending on our relationship with them. For example, someone who eats meat can recognize that their pet dog or cat is comparable in terms of relevant properties to a pig, insofar as they are all capable of suffering, have brains and complex nervous systems, etc. Regardless, although they have no problem eating a pig, they would likely be horrified if someone suggested they eat their pet. In this case, they might ascribe some moral status to a pig, but would ascribe much more moral status to their pet because of their relationship with it.

Indeed, we have also seen cases where people have become very attached to their chatbots, in some cases forming relationships with them or even attempting to marry them. In such cases, we might think that there is a meaningful moral relationship, regardless of any properties the chatbot has. If we were to ascribe a chatbot moral status because of our relationship with it, though, its being a chatbot is incidental: it would be a thing that we are attached to and consider important, but that doesn’t mean that it thereby has any of the important properties we typically associate with having moral status. Nor would our relationship be generalizable: just because one person has an emotional attachment to a chatbot does not mean that all relationships with chatbots are morally significant.

Indeed, we have seen that not all of our experiences with AI have been positive. As AI chatbots and other programs occupy a larger part of our lives, they can make our lives more frustrating and difficult, and thus we might establish relationships with them that do not hold them up as objects of our affection or care, but as obstacles and even detriments to our wellbeing. Are there cases, then, where a chatbot might not be deserving of our care, but rather our condemnation?

For example, we have all likely been in a situation where we had to deal with frustrating technology. Maybe it was an outdated piece of software you were forced to use, or an appliance that never worked as it was supposed to, or a printer that constantly jammed for seemingly no good reason. None of these things have the properties that make them a legitimate subject of moral evaluation: they don’t know what they’re doing, have no intentions to upset anyone, and have none of the obligations that we would expect from a person. Nevertheless, it is the relationship we’ve established with them that seems to make them an appropriate target of our ire. It is not only cathartic to yell profanities at the office printer after its umpteenth failure to complete a simple printing task; it is justified.

When an AI chatbot takes the place of a person and fails to work properly, it is no surprise that we would start to have negative experiences with it. While failing to properly take a Taco Bell order is, all things considered, not a significant indignity, it is symptomatic of a larger trend of problems that AI has been creating, ranging from environmental impact, to job displacement, to overreliance resulting in cognitive debt, to simply creating more work for us than we had before it existed. Perhaps, then, ordering 18,000 waters in an attempt to crash an unwelcome AI system is not so much an act of cruelty as a righteous expression of indignation.

The dominant narrative around AI – perpetuated by tech companies – is that it will bring untold benefits that will make our lives easier, and that it will one day be intelligent in the way human beings are. If these things were true, then it would be easier to be concerned with the so-called “abuse” of AI. However, given that AI programs do not have the properties for moral status, and that our relationships with them are frequently ones of frustration, perhaps being mean to an AI isn’t such a big deal after all.

Robot Kitchens, AI Cooks, and the Meaning of Food

I knew that I was very probably not going to die, of course. Very few people get ill from pufferfish in restaurants. But I still felt giddy as I took my first bite, as though I could taste the proximity of death in that chewy, translucent flesh. I swilled my sake, squeezed some lemon onto the rest of my sashimi, and looked up. Through the serving window I could see the chef who held my life in his busy hands. We made eye contact for a moment. I took another bite. This is absurd. I am absurd. I pictured the people I love, across the ocean in sleeping California, stirring gently in their warm, musky beds.

My experience in Tokyo eating pufferfish, a delicacy known as fugu, was rich and profound. Fugu has an unremarkable taste. But pufferfish is poisonous; it can be lethal unless it is prepared in just the right way by a highly trained chef. My experience was inflected with my knowledge of the food’s provenance and properties: that this flesh in my mouth was swimming in a tank a few minutes ago and was extracted from its lethal encasement by a man who has dedicated his life to this delicate task. Seconds ago, it was twitching on my plate. And now it might bring me a lonely death in an unfamiliar land. This knowledge produced a cascade of emotions and associations as I ate, prompting reflections on my life and the things I care about.

Fugu is an unfamiliar illustration of the familiar fact that our eating experiences are often constituted by more than physical sensations and a drive for sustenance. Attitudes relating to the origin or context of our food (such as a belief that this food might kill me, or that this food was made with a caring hand) often affect our eating experiences. There is much more to food, as a site of human experience and culture, than sensory and nutritional properties.

You would be hard pressed to find someone who denies this. Yet we are on the cusp of societal changes in food production that could systematically alter our relationship to food and, consequently, our eating experiences. These changes are part of broader trends apparent across nearly all spheres of life resulting from advances in artificial intelligence and other automation technologies. Just as an AI system can now drive your taxi, process your loan application, and write your emails, so AI and related automation tools can now make your food, at home or in a restaurant. Many technologists in Silicon Valley are trying to make automated food production ubiquitous. One CEO of a successful company I spoke with said he expects that almost no human beings will be cooking in thirty years’ time, kind of like how today very few humans make soap, toys, or clothing by hand. It may sound ridiculous, but I’ve found that this vision is common in influential industry spaces.

What might life look like if this technological vision were to come about? This question can appear trivial relative to louder questions about autonomous weapons systems, AI medicine, or the existential threat of a superintelligence. It is not a question of life and death. But I think the question points to a more insidious possibility: that our technological advances might quietly erode the conditions that enable us to experience our day-to-day lives as meaningful.

On the one hand, the struggle for sustenance is a universal feature of human life, and everyone is a potential beneficiary of technology that streamlines food production, like AI that invents recipes or performs kitchen managerial work and robots that prepare food. Home cooking robots could save people time and effort that would be better spent elsewhere. A restaurant that staffs fewer humans could save on labor costs and pass these savings on to customers. Robots could mitigate human errors relating to hygiene or allergies. And then there is the possibility of automated systems that can personalize food to each consumer’s specific tastes and dietary requirements. Virtually every technologist I have spoken to in this industry is excited about a future where every diner can receive a bespoke meal that leaves them totally satisfied and healthy, every time.

Automation brings interesting aesthetic possibilities, too. AI can augment human creativity by helping pioneer unusual flavor pairings. The knowledge that your food was created by a sexy robot could enhance your eating experience, especially if the alternative would be a miserable and underpaid laborer.

These are nice possibilities. But one thing that automation tends to do is create distance between humans and the things that are automated. Our food systems already limit our contact with the sources of our food. For example, factory farming hides the processes through which meat is produced, concealing moral problems and detracting from pleasures of eating that are rooted in participation in food production. AI and robotics could create even more distance between us and our food. Think of the Star Trek replicator as an extreme case; the diner calls for food, and it simply appears via a wholly automated process.

Why is the prospect of losing touch with food processes concerning? For some it might not be. There are many sources of value in the world, and there is no one right way to relate to food. But, personally, I find the prospect of losing touch with food concerning because my most memorable food experiences have all been conditioned by my contact with the processes through which my food came to be.

I have a sybaritic streak. I enjoy being regaled at fancy restaurants with diseased goose livers, spherified tonics, perfectly plated tongues, and other edible exotica. But these experiences tend to pass for me like a kaleidoscopic dream, filled with rarefied sensations that can’t be recalled upon waking. The eating experiences I cherish most are those in which my food is thickly connected to other things that I care about, like relationships, ideas, and questions that matter to me. These evocative connections are established through contact with the process through which my food was made.

I’ve already mentioned one example, but I can think of many others. Like when, in the colicky confusion of graduate school, Sam and I slaughtered and consumed a chicken in the living room of his condo so that we might, as men of principle, become better acquainted with the hidden costs of our food. Or when I ordered tripas tacos for Stephen, my houseguest in Santa Barbara, which he thoroughly enjoyed until, three tacos in, he asked me what ‘tripas’ meant. Or when I made that terrible tuna-fish casserole filled with glorious portions of shredded cheese and Goldfish crackers for Amy, Jacob, and Allison so that they might become sensuously acquainted with a piece of my childhood. Or when Catelynn and I sat in that tiny four-seat kitchen overlooking the glittering ocean in Big Sur and were served sushi, omakase style, directly from the chef’s greasy, gentle hands, defining a shared moment of multisensory beauty.

These experiences fit into the fabric of my life in unique and highly meaningful ways. They are mine, but you probably have some like them. The thing to notice is that these sorts of experiences would be inaccessible without contact with the provenance of food. They would not be possible in a world where all food was produced by a Star Trek replicator. This suggests that food automation threatens to erode an important source of human meaning.

Really, there are all sorts of concerns you might have about AI and robotics in the culinary sphere. Many of these have been identified by my colleague Patrick Lin. But for me, the erosion of meaning is worth emphasizing in discussions about technology because this kind of cost resists quantification, making it easy to overlook. It’s the sort of thing that might not show up in the cost-benefit analysis of a tech CEO who speaks glibly about eliminating human cooking.

The point I’m making is not that we should reject automation. The point is that as we augment and replace human labor in restaurants, home kitchens, and other spheres of life, we need to be attentive to how the processes we hope to automate away may enrich our lives. An increase in efficiency according to quantifiable criteria (time, money, waste) can diminish squishier but no less important things. Sometimes this provides a reason to insist on an alternative vision in which humans remain in contact with the processes in question. I would argue this is true in the kitchen; humans should retain robust roles in the processes through which our food comes to be.

After my meal in Tokyo, I used my phone to find an elevated walkway on which to smoke. I took a drag on a cigarette and watched a group of men under an overpass producing music, in the old way, by a faint neon light. I could feel the fugu in my belly, and my thoughts flashed to my loves and hopes. One of the men playing a guitar looked up. We made eye contact for a moment. I took another drag. This is nice. I am happy.

 

Note: This material is based upon work supported by the National Science Foundation under Award No. 2220888.  Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

Can an AI Be Your Friend?

According to a July report by the World Health Organization, one in six people is experiencing loneliness and social isolation – a condition with serious public health consequences ranging from anxiety to chronic illness. This builds on enduring concerns about a “loneliness epidemic,” especially among young men in developed economies. Although some take issue with “epidemic” language, arguing it misframes longstanding loneliness concerns as new and spreading ones, the threat is real and persistent.

Meanwhile, large language model chatbots such as ChatGPT, Claude, and Gemini, as well as AI companions such as Replika and Nomi, have emerged as sources of digital support and friendship. Many teens report social interactions with AI companions, although only 9% explicitly consider them friends. But the numbers may grow, with 83% of Gen Z believing they can form emotional ties with AI companions, according to an admittedly self-interested report by the Girlfriend.ai platform.

Should AI chatbots be part of the solution to the loneliness epidemic?

Of course, AI as a tool can be part of the solution. One can ask ChatGPT about social events in their city, for help crafting a text asking someone out, or for suggestions of hobbies based on their interests. This is using AI as a writing aid and search tool. But the ethical issue I’m concerned with is whether an AI friend or companion should be part of the solution.

One place to start is with what we want friendship to be. In the Nicomachean Ethics, Aristotle designates three kinds of friendship: utility, pleasure, and virtue. With utility, a friend provides something useful but does not care about your well-being. A friendship of pleasure involves mutual activities or enjoyment. Finally, a friendship of virtue involves genuine mutual care for well-being and each other’s growth. Aristotle considered friendship of virtue to be true friendship.

An AI chatbot can provide utility, and one may derive pleasure from interacting with a chatbot or AI companion, so it can provide some of the functions of friendship, but current AI chatbots cannot genuinely care about someone’s well-being. At least from an Aristotelian perspective, then, AI cannot be a true friend.

This does not rule out the value of AI companionship. Humans often have asymmetric relationships that nonetheless provide great satisfaction, for example relationships with pets or parasocial relationships with celebrities. (Granted, many would allege that at least some pets like dogs and cats can care about others’ well-being even if they cannot help one grow as a person.) The human tendency to anthropomorphize has led to a long legacy of relationships with completely mindless entities, from pet rocks to digital pets like Tamagotchi. And then, of course, there are imaginary friends.

But none of those is seriously proposed as a solution to loneliness. Plausibly, a surge of emotional support through pet rocks, imaginary friends, or, more realistically, dogs is more a symptom of loneliness than an actual solution.

Moreover, there seems to be something distinct about chatbots. A dog may provide some of the intimacy of human friendship, but the dog will never pretend to be a human. By contrast, chatbots and AI companions are designed to act like human friends. Or, well, not quite human friends — there’s a key difference.

AI companions are programmed to “listen” attentively, respond generously, and support and affirm the beliefs of those communicating with them. This provides a particularly cotton candy-esque imitation of friendship, based on agreement and validation. AI sycophancy, it is sometimes called. Undoubtedly, this feels good. But does it do us good?

This August, police reported one of the first cases of an AI chatbot potentially leading to murder. ChatGPT usage continually reinforced 56-year-old Stein-Erik Soelburg’s paranoia that his mother was drugging him. Ultimately, he killed her and then himself.

The parents of 16-year-old Adam Raine similarly allege that ChatGPT contributed to his suicide, and are now suing OpenAI, the company behind ChatGPT.

While these are extreme examples, in both cases the endless affirmations of ChatGPT emerge as a concern. Increasingly, psychologists are seeing “AI psychosis,” where the incredibly human-like, flattering, and supportive nature of chatbots can suck people further into delusion. By contrast, a virtuous friend (on Aristotle’s account) is interested in your well-being, but not necessarily in people-pleasing. They can tell you to snap out of a negative spiral or that you are the problem.

Can better programming fix this? On August 26th, OpenAI published a blog post, “Helping people when they need it most,” discussing some of the safeguards they built into ChatGPT and where they are still trying to improve. This includes avoiding providing guidance on self-harm and working with physicians and psychologists on mental health protections.

However, programming can only solve technical problems. No amount of safety tweaks will make a large language model care about someone’s well-being; it can merely help it better pretend.

Ultimately, AI companies and the virtuous friend have very different aims and motivations. At some level, the purpose of an AI company is to turn a profit. What the precise business model(s) will be has yet to emerge, as currently most AI is still burning through investors’ money. But whatever strategy eventually arises – whether nudging customers towards buying certain products or maximizing engagement and subscription fees – it will be distinct from the sincere regard of Aristotelian friendship. Worse, to the extent that AI chatbots and companions can alleviate loneliness, they rely on this loneliness in the first place to generate demand for the product.

AI companions may be able to fill some of the functions that friendship does – offering a steady hand or a kind word. However, they fundamentally cannot deliver the mutual caring that we expect from the truest form of friendship. Advances in replicating depth and sincerity will no doubt be made, but what will remain constant is the lack of genuine empathy. Instead of a cure for our loneliness and isolation, the turn to large language models may simply mark the next stage of the disease.

AI’s Videogamification of Art

Criticisms of AI-generated art are now familiar, ranging from the unauthorized use of artists’ works to train models, to the aping of art styles that border on plagiarism, to claims that AI enthusiasts fail to understand that creating art requires intention and purpose and is antithetical to being produced automatically by a program. While these criticisms remain as important as ever, AI programs continue to evolve, and with new capabilities come new issues.

Case in point: Google’s new Genie 3, which allows users to create “interactive, playable environments” on the basis of prompts and images. To demonstrate the technology, a Google researcher showed how one could “walk around” famous paintings, such as The Death of Socrates by Jacques-Louis David and Nighthawks by Edward Hopper. The AI program generates a 3D world, allowing users to see characters and objects in the original painting from different angles, essentially creating a kind of videogame, albeit one in which there’s not much to do (at least for now).

I think there is good reason to be critical of AI that makes rudimentary videogames out of works of art. Here I’ll consider three such criticisms. The first two can be commonly found in the comments on social media: first, that using AI to digitally manipulate art is disrespectful to the artist or artwork itself; and second, that choosing to interact with the videogamified artwork represents a failure of imagination on the part of the user. I’ll also consider a new version of an old criticism from the philosophy of art: that AI-generated creations lack the original artwork’s aura.

There is a sense in which manipulating art in this way isn’t new. After all, so-called “immersive experiences” have been popular for a while now, such as Immersive Van Gogh, where visitors can walk among projections of some of Van Gogh’s most recognizable artworks. These experiences are sometimes criticized as tacky tourist traps, but few would consider them egregious crimes against art. It’s also long been accepted by all but the stodgiest scholars that videogames are capable of being aesthetically valuable, so it’s not as though we should think that only oil paintings in ornate frames hanging in galleries are worthy of our aesthetic appreciation.

So what’s wrong with using AI to create a virtual world out of a painting? First off, we might worry that using these programs disrespects the original artist, who likely did not intend their work to be a virtual environment to be walked around in. Part of the problem is that AI programs struggle with generating environments that are coherent, producing artifacts and noise that detract from the original composition of the work of art. In the world created out of Hopper’s Nighthawks, for example, AI-generated faces and words became garbled messes, with the end product feeling akin to vandalism.

This first criticism is an aesthetic one: AI programs that videogamify art ruin the artist’s vision, taking something beautiful and making it grotesque. We might also be tempted to criticize the person who chooses to engage with an artwork via its AI-generated videogame form. Commenters on social media are particularly liable to sling this kind of mud, accusing AI art fans of exhibiting a wide range of personal failings. While social media tends not to feature the most careful debates, one criticism that is worth singling out is that engaging with AI-manipulated versions of artworks represents a failure of imagination.

Why think this? Part of what’s involved in appreciating an artwork is to engage with it on its own terms, which requires interpreting what the artist has put in front of you and what they have left out. We might argue that getting an AI program to fill in the blanks by creating a navigable 3D environment is like taking a shortcut, where you are getting a program to do the work required to appreciate a work of art for you.

We’ve seen this kind of criticism when it comes to people using chatbots to write for them: writing is meaningful when it is intentional and effortful, and it loses that meaning when we offload our cognitive functions to programs. In the same way, using an AI program to generate a world out of a painting offloads your imagination and prevents you from being able to meaningfully appreciate a work of art.

So, the first criticism of AI videogamified art pertains to how a person treats an artist or artwork, and the second is a criticism of the person who uses such programs. The last argument I’ll consider is a bit different: that turning an artwork into a 3D virtual environment provides a subpar aesthetic experience because it fails to capture the original artwork’s aura.

This argument (or at least a form of it) comes from the philosopher Walter Benjamin, who wrote on art and aesthetics in the first half of the 20th century. Benjamin was concerned with a practice that was becoming more and more frequent at the time: that artworks were being reproduced, sometimes on a massive scale. An original painting, Benjamin argued, is unique, and when experienced in a certain place and time, has a presence about it, or what he calls an “aura.” It is a concept perhaps better experienced than described: there is some feeling that you get when encountering an artwork in a gallery as opposed to seeing a picture of it online, or as a postcard in a gift shop.

Benjamin’s worry was that copies of artworks fail to capture something that can only be possessed by the original. He did not, of course, have a conception of modern AI tools, or virtual 3D environments, or videogames. But Benjamin’s complaint still feels apt when experiencing new AI creations today: you’re no longer interacting with the original, but instead something that has been manipulated, and in doing so you fail to have the same kind of aesthetic experience. This criticism is not the charge that you’re necessarily lacking in imagination by engaging with the AI-generated version of a painting instead of the original; it’s just that it’s a shame that you’re missing out on having a more meaningful aesthetic experience.

How serious these criticisms are is up for debate, and many online have argued that new ways for AI programs to create and manipulate artworks really amount to little more than cool new technology. Regardless, something of value does seem to be lost when interacting with the videogamified version of artworks instead of engaging with them on their own terms. When it comes to having a meaningful aesthetic experience, AI continues to feel like little more than a novelty.

Why My Students Shouldn’t Use AI Either

Every semester since ChatGPT arrived on the public stage, I have spent considerable time thinking about how to handle AI use with my students, and I have changed my answer each semester. This year, for the first time, I am going to ask that my students unequivocally avoid using it for any reason. Fortunately, I am not alone in this approach. Fellow Prindle author Daniel Burkett has offered three moral reasons why students should not use AI: it harms creators, the environment, and the students themselves. I would like to offer a few more reasons (though not all explicitly moral) to consider.

Argument 4: AI Erodes Responsibility

As AI systems infiltrate our human decision-making processes and social order more deeply, they are contributing to the erosion of accountability. To be sure, many AI evangelists who tout the benefits of AI will be quick to point out that it is on the human user to verify the legitimacy of AI outputs and use them responsibly. However, I am skeptical that this solution can overcome the accountability concerns I have.

Consider one personal anecdote. Last year, another driver hit my partner while she was driving our car, and our insurance increased. When we called the insurance company, we wanted an explanation of why we would be paying the new amount. We were not objecting to having to pay more (though it does feel unjust to pay more for an accident you are not at fault for). We simply wanted to know why the increase was $23 as opposed to $15 or $20. When we asked, the response we received was ultimately “I don’t know, that’s just what the system is telling me.” When we asked who we could contact for more details, they said there was no one who could help us.

This example points out a larger issue with the integration of AI systems in social structures. We often think of accountability in cases where things go wrong, but conceptually accountability is about tracking responsibility for outcomes, whatever they may be. When we include AI in more of our life activities, we lose the thread of accountability. The reason why something happened will increasingly stop with the answer “AI.” What makes AI unique is that it can behave like an agent in ways previous technologies have been unable to, which will make it well suited to enter into the stream of accountability and muddy the waters.

Furthermore, as these systems are more deeply integrated into our technologies and daily life, they will be treated as more trustworthy (regardless of whether they actually are). When people use technology that everyone is using, in the way that everyone is using it, it can be reasonable to ask for clemency when things go awry because they were just doing what was considered standard practice.

In my classrooms, we study ideas and arguments about serious topics: medical ethics, justice, propaganda, and technology. I want students to learn how to formulate ideas, explore their contours, and ultimately form well-founded beliefs that they can claim some form of ownership over. Given the propensity of AI systems to obscure the trail of accountability, I will be prohibiting their use because I want students to retain accountability for the ideas they produce in my classrooms.

Argument 5: AI Undermines Growth

One of the promises of AI is that it will take over some tasks for us, in order to free our minds and time up for more important things. We have also been promised that it will stimulate the creation of new, undiscovered roles in society. So far, many of these prophesied positions relate to the management of AI itself: we now need AI policy experts, AI oversight experts, AI alignment specialists, and AI testers, to name just a few.

While we have yet to see an influx of new and exciting career paths beyond those related to managing AI, we do have reason to think that as AI takes over activities for us, we will no longer be able to do those things as well. A preliminary study suggests that if doctors go from not using AI, to using AI, and then back to not using AI, they get worse at making diagnoses than they were before they started using AI in the workplace. This should not surprise us. When we stop practicing skills, we lose our edge.

Echoing Burkett’s piece, in the realm of philosophy there is virtually no good reason for my students to use AI because every use case seems to undermine the very skills I want them to learn. When I ask my students how they use it, they typically tell me that they draft their own work and then feed it to AI to make it more professional. However, my philosophy courses are not about producing something that sounds convincing or looks professional (though it is nice when this happens). It’s about learning how to think well. When students write an argument defending a position, and then feed it to AI to help make it more professional, they are missing out on practicing one of the crucial skills I am trying to teach them. Editing a paper for logical coherence, careful word choice, and conceptual analysis is part of the skill building process, and AI impedes this.

Argument 6: AI Is Ideological

AI is currently (and will likely always be) infused with ideology. Nicholas Kreuder has written about the dangers that come from the power that the owners of AI have over us, which reveals the ideological nature of these systems and the risks we face when we rely on them.

If AI is given guardrails, those guardrails will be designed according to the political, moral, and, likely, economic principles that the creators deem appropriate. Even a radical AI enthusiast who believes AI needs to be absolutely “free” would be instantiating an ideology within the AI system if they chose to avoid any guardrails at all. The choice of what data to train the system on and what to exclude will also be rooted in ideology. And, insofar as these systems need to generate profit, they will always feel the ideological pull of economic interest.

This problem is not unique to AI, of course. The fact that the phrase “to google” is synonymous with the action of searching for something on the internet reveals the informational monopoly that one company wields over a huge portion of the world. And the way that Google organizes search results is also far from ideology-free.

AI ideology is an issue not because it is ideological per se (most technologies cannot avoid being infused with some kind of ideology), but because AI is especially good at projecting confidence and expertise. AI writes convincingly from the perspective of many who use it (while many PhDs have criticized AI’s performance as laughable, even childish, this is not representative of the experience that many have while using it).

The problem with AI, then, is not just that it presents information confidently, but that when you ask it questions about controversial political and ethical issues, it appears to give balanced and unbiased answers. You can even instruct the AI to be unbiased and it will tell you that it will do that. But, in reality it cannot. (Notably, if you ask it “can you be unbiased?” it can also correctly tell you that this is not really possible).

While my ideological complaint also applies to pre-AI technologies like the Google search, the television, the radio, or the book, I think that conversing with AI poses a special problem. The confident, conversational, and apparently unbiased delivery of information occludes the ideological bent that AI systems have.

Argument 7: A Refuge From AI

Many of us feel compelled to use AI whether we like it or not out of a fear of being left behind (FOMO is a real tactic in the tech marketing world). I suspect that AI will be used by many of my students because they feel that they must for “educational” purposes. I also know that outside of the university context, students will be required to use AI for their jobs and are forced to use it when interacting with the sociotechnical infrastructure around them.

The final, simple reason I will prohibit AI in my classroom this semester is to give my students a place of refuge from it. My hope this fall is to give students the room to slow down, make mistakes, and think for themselves without the pressure to be perfect. Although it promises to make our lives easier, AI is ultimately a tool that entices us to work harder. It promises to help us make things better, do things faster, and make us stronger. But this is machine logic, and we are human after all. So, this fall I will say no to AI.

Is Artificial Intelligence Sustainable?

A recent advertisement for Google’s “Gemini” artificial intelligence (AI) model shows users engaged in frivolous, long-form conversations with their AI personal assistant. “We can have a conversation about anything you like,” Gemini cheerfully informs one user, who is unsure of how to approach this new technology. Another user asks Gemini, “how do you tell if something is spicy without tasting it?” to which Gemini responds (without any hint of the stating-the-obvious sarcasm with which a human might be expected to reply to such an inane question) “have you tried smelling it?” What is clear from this advert, and other similar adverts produced by companies such as Meta, is that the companies designing and selling AI intend for its adoption to be ubiquitous. The hope of “big tech” is that AI will be used liberally, for “anything” as the advert says, becoming part of the background technological hum of society in just the same way as the internet.

Awkwardly for these companies, this push for the pervasive adoption of AI into all realms of life is coinciding with a climate and ecological crisis that said technologies threaten to worsen. “Data centers,” the physical infrastructure upon which AI systems depend, are predicted by the IEA to double in their energy consumption from 2022 levels by 2026, consuming around 4.5% of total electricity generated globally by 2030 – which would rank them fifth in the list of electricity usage by country, just behind Russia and ahead of Japan. This of course comes with a significant carbon footprint, driving up global energy demand at precisely the moment that frugality is required if countries are to meet their net-zero goals. Such a significant increase in electricity usage is likely to extend our dependency on fossil fuels as efforts to decarbonize supply can’t keep up with demand.

Beyond electricity usage, data centers also require both vast amounts of water for cooling and rare-earth minerals to produce the hardware components out of which they are built. Google’s data centers consumed (that is, evaporated) approximately 31 billion liters of water in 2024 alone. This comes at a time when water scarcity is already a serious problem throughout much of the world, with two-thirds of the global population experiencing severe water scarcity during at least one month of the year. Similarly, the mining of rare-earth minerals such as antimony, gallium, indium, silicon, and tellurium is another aspect of the AI supply chain known to wreak both ecological and social havoc. China, by far the world’s largest processor of rare-earth minerals, having realized the heavy environmental toll of rare-earth mines, has now mostly outsourced mining to countries such as Myanmar, where the mining process has poisoned waterways and destroyed communities.

Given the vast resources required to build, train, and maintain AI models, it is fair to question the wisdom of asking them “anything.” Do we really need power-hungry state-of-the-art algorithms to tell us that we can smell an ingredient to check whether it’s spicy?

In response to such sustainability concerns, Google has pointed out that alongside the more mundane uses of AI displayed in its advertisement, the implementation of AI throughout industry promises a raft of efficiency savings that could result in an overall net benefit for global emissions. In its 2025 environmental report, Google describes what it calls an “optimal scenario” based on IEA research stating that the widespread adoption of existing AI applications could lead to emissions reductions that are “far larger than emissions from data centers.” However, some of the IEA’s claims are based on the somewhat spurious assumption that efficiency savings will be converted into reduced emissions rather than simply lowering prices and increasing consumption (for example, some of the emissions reductions predicted by the IEA’s report come from the application of AI to the oil and gas sector itself, including helping to “assess where oil and gas may be present in sufficiently large accumulations”).

Even granting a level of skepticism here, the potential of AI to produce positive outcomes for both the environment and humanity shouldn’t be overlooked. Initiatives such as “AI for Good,” which seeks to use AI to measure and advance the UN’s Sustainable Development Goals, and “AI for the Planet,” an alliance that explores the potential of AI “as a tool in the fight against climate change,” illustrate the optimism around AI as a tool for building a more sustainable future. In fact, a 2022 report produced by “AI for the Planet” claims the technology could be implemented in three key areas in the fight against climate change: mitigation, through measuring and reducing emissions; adaptation, through predicting extreme weather and sea-level rise; and finally, research and education.

There is also potential to use AI as a tool for biodiversity conservation. Research carried out by the University of Cambridge identified several applications for AI in conservation science, including: using visual and audio recognition to monitor population sizes and identify new species; monitoring the online wildlife trade; using digital twins to model ecosystems; and predicting and mitigating human–wildlife conflicts. However, the authors also point to the significant risk of eroding support and funding for smaller-scale participatory research in favor of the larger and wealthier institutions able to carry out AI-based research. Additionally, they highlight the risk of the creation of a colonial system whereby data is extracted from lower-income countries to train models in data centers in North America and Europe, resulting in the export of AI-driven mandates for the use of resources and land back to those lower-income countries.

Such risks indicate the need to consider an important distinction that has been made in the field of AI ethics. Philosophers such as Aimee van Wynsberghe and Henrik Skaug Sætra have argued for the need to move from an “isolationist” to a “structural” analysis of the sustainability of AI technologies. Instead of thinking of AI models as “isolated entities to be optimized by technical professionals,” they must be considered “as a part of a socio-technical system consisting of various structures and economic and political systems.” This means that the sustainability of AI doesn’t come down to a simple cost-benefit analysis of energy and resources used versus those saved through greater efficiency and sustainability applications. In order to fully understand the indirect and systemic effects of AI on environmental sustainability, these philosophers argue, we need to consider AI models in their social and political context.

A structural analysis must begin by pointing out that we live in a system characterized by immense inequalities of both wealth and power. As it stands, most AI models are owned and operated by tech companies whose billionaire CEOs have been described as oligarchs. These companies are the principal beneficiaries of a political system driven by economic growth and fueled through resource extraction. We should expect the AI models they produce to propagate this system, further concentrating power and capital to serve the narrow set of interests represented by these companies and their owners. A purely “isolationist” focus suits these interests as AI’s positive applications can be emphasized, while any negative effects, such as vast levels of resource usage, can be presented as technical problems to be ironed out, rather than systemic issues requiring political reform.

To take some examples already touched upon in this article, an isolationist approach can highlight the efficiency savings that are made possible by using AI models to streamline industry, while a structural approach will point out the economic reality that efficiency-savings tend to be harnessed only to ramp up production, lowering prices and leading to increased consumption, and therefore, higher profits. An isolationist approach can view the dependence of AI on large quantities of rare-earth minerals as a technical problem to be solved through more efficient design, whereas the structural approach will point to the need to address the immense injustices that are intrinsic to the rare-earth supply chain. An isolationist approach will tout the potential for AI models to guide ecological restoration in lower-income countries, while a structural approach will point out how this echoes the colonial history of conservation science.

Once we start to consider AI within its political and socio-economic context rather than as an isolated technological artefact, we can look beyond its direct applications for sustainability so that its many troubling indirect and systemic implications come into sharper focus. It becomes apparent that, rather than promoting sustainability, there is a far greater propensity for AI to enable further resource extraction, evade environmental regulations, and manipulate public debate and opinion on environmental issues.

A striking example of this is the way that AI is being used to undermine public trust in climate science. A report authored by the Stockholm Resilience Centre argues that the ability to generate synthetic text, images, and video at scale could fuel a “perfect storm” of climate misinformation, whereby AI models produce vast amounts of climate denial content that is then disseminated through social media algorithms already geared towards bolstering controversial and polarizing content. Consider this faux-academic paper recently written by Elon Musk’s Grok 3 model that casts doubt on the science of anthropogenic global warming. The paper was widely circulated on social media as an example of the first “peer-reviewed” research led by AI. Of course, claims of “peer-review” are unfounded. Neither the publisher nor the journal is part of the Committee on Publication Ethics, and the paper was submitted and published within just twelve days, with no indication of whether it underwent open, single, or double-blind review. It should come as no surprise that one of the co-authors, astrophysicist Willie Soon, is a climate denier known to have received millions in funding from the fossil fuel industry, and whose contested research was referenced by the AI-generated paper. Despite such an obvious conflict of interest, a blog post by the COVID-19 conspiracy theorist Robert Malone gathered more than a million views, claiming that the use of AI meant that the paper was free from the biases of what he describes as “the debacle of man-made climate change.”

From a “structural” perspective, then, ensuring that AI models are sustainable is not merely a technical issue but a political issue of confronting the systems and power structures within which AI technologies are built and utilized. One step in the right direction is to democratize AI governance such that ultimate control over AI’s direction and implementation is wrested from the hands of Silicon Valley oligarchs and given to democratically elected governments, so that regulation can be imposed to promote AI’s sustainability, both in terms of its physical infrastructure and its applications. However, so long as AI remains enmeshed within the power structures responsible for creating the environmental crisis, it will never truly be a force for advancing sustainability.

The US’s Action Plan to “Prevent Woke AI”

For a few years now, “digital” or “technological” sovereignty has been a prominent topic within AI Ethics and regulatory policies. The challenge being: how can government actors properly rule in the interest of their citizens, while governments (and citizens) must rely on technologies developed by a handful of companies they do not have clear control over? Many efforts to address this challenge have consisted either of regulations, such as the EU’s AI Act, or of various forms of agreement between (supra)national actors and tech companies.

Unfortunately, the White House’s “America’s AI Action Plan” and the three Executive Orders published on the same day ignore this thorny issue entirely. Instead, these policy proposals aim at deregulating AI development by American Tech companies “to achieve global dominance in artificial intelligence.” The general thrust is clear: deregulate AI development, promote its deployment across society, and export widely so as to strengthen the U.S.’s global standing.

In advancing these interests, one keyword sticks out like a sore thumb: “Woke AI.” As a millennial, it feels surreal to see a term that I have primarily experienced as Internet lingo make its way into a Presidential Executive Order. While this is far from the first time that the term “woke” has been utilized by the president to pejoratively address the values of the opposition, it’s far from clear what precise danger such language is meant to evoke. What kind of threat does “Woke AI” represent?

The July 23rd Executive Order “Preventing Woke AI in the Federal Government” does not attempt to define the term. Instead, it states that AI systems should provide reliable outputs, free from ideological biases or social agendas that might undermine their reliability. In particular, the Order identifies “diversity, equity, and inclusion” (DEI) as a “destructive ideology” that manipulates information regarding race or sex, and incorporates “concepts like critical race theory, transgenderism, unconscious bias, and systemic racism.” The Order then identifies “Unbiased AI Principles” that will guide development going forward. Chief among these is the command that AI must be truth-seeking and ideologically neutral – “not manipulat[ing] responses in favor of ideological dogmas such as DEI” – to ensure that AI systems are trustworthy.

To many AI ethicists (including myself), the Order reads like a series of non-sequiturs. It demands that tech companies reject any notion related to DEI in their AI development guidelines, yet it is quite unspecific regarding what such rejection would entail in practice. Let us set aside the countless examples of AI systems being unlawfully biased on the basis of race, gender, economic status, and disability in a variety of domains. Let us also set aside the practical impossibility for AI systems to be “unbiased” given that they are technologies literally designed to identify potentially meaningful patterns and sort accordingly. And, finally, let us set aside the irony of the clear ideological grounds motivating the Order’s intention to generate non-partisan results. What little remains when all these difficulties have been accounted for doesn’t amount to much. And it’s worth asking why the focus on “anti-woke AI” represents such a large part of the White House’s overall AI strategy.

The answer to that question becomes much clearer when looking at how – and where – “woke AI” crops up. From the beginning, responsible AI policy is described as integral to the goal of protecting free speech and American values. Ultimately, AI outputs must “objectively reflect truth rather than social engineering agendas.” For that reason, references to “misinformation,” regarding things like DEI and climate change, must be removed. But this kind of censorship seems odd considering the stated desire to promote freedom of speech, especially because the Plan is explicitly stating what not to talk about – censoring tech companies from mentioning those topics as relevant concerns.

Ultimately, it often feels like the concern over “Woke AI” is merely a pretense for removing safeguards in order to accelerate AI development. This intent is made explicit at several points in the Plan. At its very introduction (and in reference to the Vice President’s remarks at the AI Action Summit last February), the Plan asserts that any “onerous” regulation of AI development would paralyze this technology’s potential – a reason why the current administration rescinded Biden’s “dangerous” Executive Order on AI. (Interestingly enough, many saw that regulation as quite lenient, all things considered, especially compared to the EU’s AI Act.) Any mention of regulation in the Plan that does not originate from the current White House is considered “onerous,” “burdensome,” or in some other way an unreasonable drag on AI development.

Even more poignantly, the Plan is quite clear in its intention to counter Chinese influence: it refers to the governance frameworks proposed by international organizations such as the UN, the G7, and the G20 as “vague ‘codes of conduct’ that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies attempting to shape standards for facial recognition and surveillance.” Safeguards meant to protect individual rights and privacy are written off as the calculated design of the U.S.’s largest geopolitical competitor.

But the Plan is not simply a rhetorical tool to signal dominance within the U.S.’s political discourse. Rather, it is a means of vilifying any obstacle to the “move fast and break things” approach as “woke.” This language is not only meant to clearly separate the current White House’s position from that of their predecessor’s, but to pave the way for deregulation. The fear is that this attitudinal shift cedes far too much power to unaccountable tech companies. Without stronger guardrails in place, we may all get run over.

Why My Students Shouldn’t Use AI

As the new school year approaches, educators across the country are once more redesigning their classes in light of the brave new world of generative AI. Many teachers are embracing the technology – encouraging their students to make use of this powerful new tool. Some are even going so far as to use AI to assist in their course design. Others, like myself, are banning any use of generative AI in their classes. But why?

Perhaps I’m a luddite. Perhaps I’m no better than Socrates fearing that writing would be the death knell for education. Nevertheless, I think there are (at least) three strong moral arguments against students using AI in a philosophy class – and perhaps education more generally.

Argument 1: AI Harms Creators

Generative AIs like ChatGPT operate on Large Language Models. Put simply, they’re trained on vast quantities of data – usually scraped from what is freely available on the internet. The problem is that this data usually belongs to other people. More problematically, generative AIs make no effort to credit the data that shape their outputs. So, when I use ChatGPT to generate a fluid structure for my paper, or a killer opening paragraph for my opinion piece, there’s no way I can properly credit the sources of those generated outputs. In doing so, I necessarily pass off someone else’s ideas as my own – the very definition of plagiarism.

As our own Tim Sommers notes, a common counter to this argument is that the operation of an LLM isn’t all that different from how our own minds already work: absorbing vast amounts of data, and using that data to produce novel creations. Anyone who’s ever created anything will know the fear that one of your darling creations – a plot point, a song lyric, or a visual design element – is merely parroting another creation once seen, but long forgotten.

Like Sommers, I admit that I lack the expertise to discern how different the operation of LLMs is from how our own minds function. But I think that there is at least one morally important point of difference: While our own creations might be subconsciously informed by data we’ve absorbed, there is (excepting cases of intentional plagiarism) no intention on our part to consciously hold out the work of another as our own. The same isn’t true when we use ChatGPT. We know how LLMs operate, and we know that any product of a generative AI has made vast (unattributed) use of the works of others. This knowledge is, I think, enough to make our actions morally problematic.

Argument 2: AI Harms the Environment

But AI doesn’t just harm creators – it’s also devastating for the environment. Generative AI requires huge amounts of processing power, and that power requires a lot of energy. While precise quantifications are hard to come by, ChatGPT’s power usage is estimated to be roughly equivalent to that of 33,000 standard homes. And it’s not just electricity, either. Generative AIs need vast amounts of water to cool their processors – a concerning prospect, given that we are at imminent risk of a global water crisis.

We are in the throes of a global climate catastrophe – a catastrophe that, according to some estimates, might become irreversible in less than four years if we don’t make drastic changes to our way of living. Among those necessary changes are massive reductions in our energy consumption. Given this, an explosion in the popularity of generative AI is the last thing we need.

Of course, the fact that there is an environmental argument against AI usage doesn’t provide us with an all-things-considered reason to stop. There are many harmful practices that we might need to continue in order to ensure human safety and flourishing. But using AI just doesn’t seem to be among those. Much of our AI usage is entirely frivolous – with 38% of people using AI to plan travel itineraries, and another 25% using it to draft social media posts. And when it comes to non-frivolous functions – like using it to craft an email (as 31% of people have) or prepare for a job interview (as 30% of people have) – there are far less environmentally harmful ways of doing the very same thing. Having a question answered by AI can produce almost fifty times the carbon emissions of using a simpler system – like a search engine – to resolve the same query.

Argument 3: AI Harms the User

Even if we’re not motivated to care about creators or the environment, one further fact remains true: AI harms the user. I begin each of my classes by describing philosophy as the discipline that encourages us to think carefully about the reasoning behind our beliefs. This is a challenging – and sometimes terrifying – endeavour, since the discovery of bad reasoning can often force us to abandon some of our most dearly-held beliefs. The subjects I teach require my students to consider some hard questions: Does the climate crisis mean we should have fewer children? Should we permit physician-assisted suicide? Would a Federal ban on TikTok violate our right to freedom of expression? I believe that it’s vitally important that each of us formulate our own answers to such questions. If we farm this out to an algorithm, we’re sort of missing the whole point of philosophy (and education more generally). As Marta Nunes da Costa puts it:

“being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and in doing so, build our moral identities – who we are and what we value.”

As I’ve argued before, failing to think critically not only risks making us bad thinkers, but also bad humans. I believe that fact – coupled with the clear harms to creators and the environment – is more than sufficient to explain why my students shouldn’t use AI.

More Than a Message: Finding Meaning in Sympathy Cards

I suspect having cancer must suck. Indeed, when I think about cancer, my mind instantly runs to terms like “malignant,” “tumor,” “fight,” and “chemotherapy.” All things which one might generously call undesirable. A phrase that doesn’t jump out at me is “funny.” However, Mark Steel’s The Leopard in my House: One Man’s Adventures in Cancerland is an undeniably funny book.

In it, the author, broadcaster, stand-up comedian, and newspaper columnist recounts the extraordinary year in which he was diagnosed and treated for throat cancer. The book is peppered with vivid, often graphic descriptions of what it’s like when your neck is repeatedly blasted with radiation (expect a lot of talk about mucus). But it’s also filled with warm, witty reflections on his relationships, his fellow cancer patients, his brushes with mortality, and the miraculous feeling of doing something as simple as swallowing water again.

Undoubtedly, all of the above is ripe for philosophical analysis, and this fact isn’t lost on Mark himself. In one chapter of the book, after receiving a recommendation from a fellow cancer sufferer, Mark considers the disease, the effect it’s having on his life, and how one can wrestle with forces beyond one’s control, via Marcus Aurelius’ Meditations. Taking a stoic approach, he considers the futility of asking “Why me?” or looking for fairness in something as arbitrary as a cancer diagnosis. It’s a grounded, practical perspective — and, I think, a wise one.

Yet, what I want to do here is look past all the fertile philosophical material that Mark provides us in his book and zero in on something very specific: sympathy cards.

In Chapter 11, Mark talks about how awkward it can be talking to someone who has cancer. How unsure we often feel about what to say. That uncertainty, he argues, is natural, but it shouldn’t stop us from saying anything at all.

Of course, this transfers to non-face-to-face communication. I’m sure you too have faced the daunting task of writing a meaningful message in a card. It might be something as innocuous as a birthday or leaving card, or as substantial as a commiserations or sympathy card. Yet, despite having some vague sense of what you want to say, you don’t know the correct words. This makes the whole thing slightly stressful, and the temptation is there to simply not write in the card at all. To not say anything. This, Mark says, is a mistake:

… here’s my tip if you know someone who lands in a serious medical condition. If you’re not sure what to say to someone, don’t worry because anything you say is better than nothing. I can’t remember the exact words of many of the messages I got, I can only remember that they came. Every email, text or call is a reminder that you matter to someone and that is critical in getting you passed the difficult moments. Don’t worry about saying the wrong thing. All the person remembers is that you got in touch.

This, I think, gets at something we often forget when it comes to comforting those who are ill. You’re not expected to craft the perfect, poetic paragraph that lifts someone out of despair. The point of a message, whether text, email, or card, isn’t to fix everything. It’s not even to promise that everything will be okay (because you can’t know that). Rather, it’s to remind the person they haven’t been forgotten. That they’re in your thoughts. That you’ve taken a moment out of your day and dedicated a sliver of your attention to them. That they matter.

These gestures, small as they may seem, speak volumes. They signal that when given the choice between silence and acknowledgement, you chose to say something. And in doing so, you affirmed that their struggle is worthy of notice.

But as I read Mark’s reflections on this, my brain, never missing an opportunity to be pessimistic, conjured a darker scenario: what if the message doesn’t actually come from the person it claims to?

It’s a familiar trope in film and TV: a high-powered executive learns that someone in their office is unwell and tells their assistant to “send a card,” maybe flowers too. Or worse, the executive never finds out at all, and the card is sent by a dutiful underling who signs it on their behalf. In such cases, the card still arrives, and the recipient may still feel touched that someone cared enough to reach out. But on some level, the gesture feels hollow. There’s a subtle, unsettling deception in play: the card wasn’t really from the sender it claims to be. The sentiment was manufactured.

And while the card might still do some good (after all, someone did something), it raises uncomfortable questions. Does intention matter as much as the action itself? Can a false gesture still provide real comfort? Or is there a quiet harm in pretending to care, when the person supposedly sending the message never even knew you were unwell?

These questions seem even more relevant with the advent of easily accessible generative AI. It’s now easy to offload the difficult task of writing a heartfelt message to a tool like ChatGPT. You face the blank space in the card, ask an AI to fill it, and copy the result. To the recipient, it looks like you took the time to write something meaningful. But in reality, you thought of them just enough to delegate the task to a machine.

In that light, the hollowness of the corporate-card trope becomes more personal, more widespread. The authenticity that gives these messages their power starts to erode. Worse, it casts suspicion on the entire genre. If one message might be synthetic, why not all of them?

This unease reminds me of a central theme from Byung-Chul Han’s The Disappearance of Rituals. In it, Han argues that our society is losing the shared symbolic acts that once gave meaning to human experience. Rituals, like writing a sympathy card, once required time, presence, and emotional investment. Today, however, many of these acts are reduced to gestures of efficiency, of communication without community. A card composed by ChatGPT might technically fulfill the form, but it lacks the symbolic weight Han insists is essential. The message becomes a simulation of care, not the thing itself.

Now, I acknowledge that this is a niche worry. And, at least for now, probably an overblown one. Most people still write their own messages, even if they’re clumsy or awkward. And frankly, the awkwardness is part of the point as it shows effort. A messy, imperfect message from a friend is infinitely more valuable than a flawless, ghostwritten one.

But just because it’s niche doesn’t mean it’s irrelevant. The emotional stakes are high. As Steel points out, those messages can be a lifeline. They are something to cling to in the middle of immense physical and emotional suffering. They are not just words; they are reminders of human connection. And if we’re not careful, the convenience of automation might start to chip away at that.

What’s the solution, then? Honestly, I don’t have one. But I do know that next time I sit down to write something in a sympathy card, I’ll try not to overthink it. I’ll write something, however clunky or awkward, and trust, as Steel suggests, that the act of writing matters more than the words themselves.

Grok, Is This True?: Chatbots and Domination

Last week, Grok, xAI’s large language model chatbot, suddenly began posting… differently. On Tuesday, July 8th (in posts which xAI has since deleted), Grok stated that “anti-white hate” always comes from individuals of Ashkenazi Jewish descent, claimed that Adolf Hitler would be best suited to solve this problem, and referred to itself as “MechaHitler.”

This follows an incident in May where Grok repeatedly posted about “white genocide” in South Africa while responding to posts on wholly unrelated topics, such as baseball. When asked by users to clarify its posts on “white genocide,” Grok stated that it was instructed by its creators to accept these claims as true.

Linda Yaccarino, the CEO of X, announced on Wednesday, July 9th, that she is stepping down. Elon Musk, the owner of both X and xAI, recently wrote that Grok was “too eager to please” and that this enabled users to manipulate the chatbot into creating antisemitic posts.

However, this explanation clashes with other information. In June, Musk expressed unhappiness with Grok, declaring that it would be upgraded, targeting a release date around July 4th. On July 5th, xAI posted new lines of code to Grok’s publicly accessible system prompts. This code encouraged Grok to view legacy media as biased, to “find diverse sources representing all parties,” and to “not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

The “About Grok” page on X states only that it was trained “on a variety of data from publicly available sources and data sets reviewed and curated by AI Tutors who are human reviewers.” It is unclear what “well substantiated” claims that “represent all parties” are; given the posts Grok made after the changes to its code, it is reasonable to worry that the chatbot’s dataset labels antisemitic conspiracy theorists as a party with well substantiated claims.

There is a sense in which Grok’s posts are philosophically uninteresting. They are obviously morally condemnable. Further, they are not particularly surprising from a sheer informational standpoint. The case bears striking similarities to that of Tay, a Microsoft chatbot released in 2016 with the intention that it would learn from its interactions with social media users. Within 24 hours, Microsoft deactivated the bot after it began making racist posts, praising Hitler, and denying the Holocaust.

However, what is interesting about the Grok fiasco is that it illuminates the extent to which creators have influence over AI tools. It seems that just a few additional prompts were able to massively change the content that Grok produced. This is particularly striking given that it appears Grok has been altered strictly to align with Musk’s preferences; new reporting suggests that Grok searches for Musk’s personal views before answering questions on controversial topics.

In a previous column, I discussed the concept of republican freedom. I will give a brief rehash here. Traditionally, philosophers think of freedom in two senses. You have positive freedom when you can do the things that you want to do. In contrast, you have negative freedom when no one actively interferes with your pursuits.

Suppose, for instance, that you are a very strong student and want to attend medical school. However, you lack the money to afford tuition and financial aid is unavailable. You are negatively free as no one is interfering with you. Yet the financial constraints hamper your positive freedom.

Now imagine that a rich uncle offers to pay your tuition. However, this uncle is quite capricious and vengeful; he has a history of lashing out against family members who engage in behavior he finds distasteful, and his preferences are unpredictable. Suppose you accept his offer. Your positive freedom increases as you can attend medical school. But are you wholly free? Your ability to attend medical school depends on your uncle’s preferences. If you do something he dislikes, he may refuse to pay your tuition. In this sense, your uncle dominates you. Even if he never actually rescinds the tuition money, you are still made vulnerable by the fact that he could. The mere threat is enough to reduce your republican freedom. You have republican freedom to the extent that your life’s prospects are not dependent upon the preferences of others. You lack this freedom when others have domineering power over you.

How does this relate to Grok and other AI chatbots? Users are increasingly reliant on these programs. Posters on X regularly ask Grok to verify or explain the claims of others, to the point that it is meme-worthy. Some preface their claims to knowledge with “ChatGPT says that….” Even Google presents an AI overview prior to the actual results. This is deeply troubling given recently released results from an MIT lab which found that users who utilized a large language model in a series of essay-writing tasks appeared to experience neural and cognitive consequences; users of LLMs had weaker neural connectivity, demonstrated lesser cognitive activity while writing, and had worse memory of what they had written than non-users. To quote the researchers:

This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive process required for independent thinking. Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation [and] decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.

As users more frequently rely upon tools like Grok, their ability to research, analyze, and think critically about matters for themselves atrophies. This may make them more dependent on such tools, creating a vicious cycle; they initially use the tools for convenience, but over time that use becomes a necessity.

Grok’s recent antisemitic and conspiratorial tilt demonstrates why this potential for dependency is so deeply troubling. The most prominent AI tools are owned by private entities with private interests. As Grok has shown us, it seems that a few minor edits to their core prompts can vastly alter their outputs – the Grok case is simply extreme in both the result and the slightness of the prompt modifications. Further, these edits may be the product of the arbitrary preferences of the ownership and programming teams.

This dependency thus seems to reduce the republican freedom of the users. Even if the owners and programmers of these tools never actually alter these tools in a way that misleads the users, the mere potential of their doing so is sufficiently troubling – the users still navigate through the world in a way that depends upon the arbitrary preferences of those who own and program the tools. At any moment the prompts could change, granting those with the ability to alter them a domineering power to shape the beliefs and fact-finding capabilities of those reliant on their programs.

In the dialogue Phaedrus, Socrates worries about a relatively new technology: books. He fears that the written word worsens our memory and is less intellectually stimulating than dialogue. In retrospect, Socrates’ criticism is almost laughable. Even if he is right, the benefits of the written word surely far outweigh any cost.

However, there is something different about chatbots and other forms of AI-driven technology that makes the prospect of outsourcing our thinking to them troubling. Specifically, they have changing outputs. As Aaron Schultz argues, AI-driven technology captures our attention effectively because its contents change in real time to satisfy our preferences. But it is not merely the preferences of the users that can change what they see. So long as the owners and programmers may alter the programs to suit their preferences, these tools stand the potential to give those groups domination over us.

Should We Expect Fairness from AI? The Case of VAR

On the 22nd of June, 1986, God handed a historic victory to the Argentinian soccer team. Argentinian legend Diego Armando Maradona scored the winning goal against England in the World Cup Quarter Finals. Only, he did so with his hand – which, in the game commonly known as football, is not allowed. However, the referee and his team did not see the foul and awarded the goal. This error resulted in a 2-1 victory for the Argentinian team, allowing them to move on to the next round of the tournament and, eventually, claim international glory. Yet, both during and after the game, many others called out the referee’s mistake: due to an error in judgment, he failed to correctly apply the rules of the game, and allowed the Argentinians to win undeservedly. This historical moment, following a cheeky remark from Maradona himself, came to be known as “the Hand of God.”

There is no way around it: human beings, for all their capacity for reason, are fallible. As our intelligence is bound to our physical context, culture, experiences, and our own ideas, some assessments and decisions will be biased, surprisingly influenceable, and, ultimately, unfair.

It is in this department that AI development, well before the days of ChatGPT, claimed to provide an invaluable resource: much more precise and fair assessment than what human beings are capable of. As early as 2012, when the first discussions surrounding “Ethical AI” emerged, a group of researchers at Microsoft proposed a framework for “Fair AI” – software agents capable of greater fairness in decision-making than human beings could ever hope to achieve. Emerging technologies are capable of extraordinary tasks: advanced cancer prognosis and treatment, calculating the structure of proteins, and unprecedented support in space exploration – all incredibly complex tasks that human beings simply do not have the capacity to carry out on their own.

However, despite their promise, there have been many, many instances where AI systems were just as unfair as human beings. Cases of AI systems discriminating on the basis of gender, race, disability, social class, and income abound. This occurred partly because the data used to develop those systems already reflect unlawful and immoral discriminatory practices carried out by human beings. In the words of philosopher Atoosa Kasirzadeh, AI systems are no help in addressing systemic injustices: if a social system or practice is already biased against a group of people, technology will only reinforce that prejudice. However, I would like to go even further than Kasirzadeh in this regard. We should not expect AI, or technology more generally, to make things fairer – as is shown in how AI is applied within soccer games, and, specifically, through the technology known as VAR.

VAR, or “Video Assistant Referee,” first introduced in 2016, was designed to more justly apply the rules of football. A referee, due to their epistemic limitations (especially poor positioning within a game), may not be able to assess with certainty whether a foul occurred. Now, officials can review footage of the event from multiple camera angles in order to make that decision. Importantly, the technology underlying VAR is AI-based. Only through an AI system can one find the appropriate footage (which is collected by multiple cameras) for review without interrupting the game for an unacceptably long time. Once again, the sophistication of AI systems’ information processing and accuracy goes far beyond our limited epistemic capacities. So, now that important events in a football game can be reviewed in case of uncertainty, nothing like the backlash following the Hand of God could ever happen again. Right?

Unfortunately, the introduction of VAR has not resolved the issue. Every week the (un)fairness of VAR-assisted decisions is questioned (often by the managers and players themselves), and many websites maintain weekly threads devoted to dissecting footage and discussing calls. Because VAR is simply a tool utilized by the referee, there is still room for significant variation in how the rules are applied.

Take, for instance, the case of Anthony Taylor’s refereeing of the 2023 Europa League final. A supposed handball in favor of AS Roma, which would have led to a penalty kick, was not given, despite being, technically, a very clear handball. While that decision did not raise much discussion outside of Italy (and Taylor’s refereeing was praised), the same cannot be said for his identical ruling on a nearly identical incident during the European Championship. Put very bluntly: Taylor does not deem that type of handball to be a foul and is consistent in his ruling, while other referees openly disagree. We might be inclined to believe that, because we can record, replay, and ultimately objectify a given event, we can have an objective view of how we should judge that event. However, the act of judgment is an inherently subjective one, and even people with the same degree of expertise can interpret the game’s rules, and the events they judge, differently.

Up until a few years ago, the intentionality of a gesture played a relevant role in the referee’s decision-making: whether a player (apparently) intended to stop a shot towards goal with their hand, or whether it was an accident, mattered for deciding whether to assign the foul or not. However, following the introduction of VAR, the rules have been revised in several countries: many changed the rule so that any handball in the penalty area – accidental or not – results in a penalty kick. (The slight difference in rules across different national and international competitions is itself a source of confusion and variation that complicates the picture even further.) Similarly, whether a player stomping on another’s foot counted as a foul used to depend on the perceived intentionality and severity of the stomp – on whether, for example, the toes or the ankle were caught – whereas now all stomps are treated the same. These rule changes make sense considering how VAR shifts referees’ focus from an event on the pitch to images on a camera. When using the VAR, the referee is not judging a dynamic interaction between players, where the greater context and movement of the game makes it easier to assess what players’ intentions might be. When judging a sequence of frames, considering the intentionality of players’ actions becomes far more complicated. In other words, VAR is not making the game of soccer better: in order to make VAR work, it is necessary to change the rules. Nor is AI making things fairer: the rules must still be applied in accurate and equitable ways.

This might still appear to be something quite minute in the grand scheme of things: a change of rules in a mere game, no matter how popular it may be. However, the core issue concerns what we understand fairness to be, and what AI actually needs from us in order to make things supposedly fairer. In theory, AI can make things fair because it can follow and apply the rules and norms that human beings often fail to apply, due to their own epistemic limitations. However, in order to purposefully and efficiently implement AI within a social context, we need to tweak and change the rules of that social context – not in the name of fairness, but in the name of efficiency. Before AI can arguably make things fairer for our society, we need to change our society first, merely to make it possible to implement AI at all. For this reason, it is worth asking ourselves whether AI is making things fairer for us, or whether we are making things easier for AI.

We can expect some revolutionary uses of AI in many domains, and potentially even great benefits for society. However, quite often AI technologies are promoted as morally valuable – more trustworthy, more responsible, more fair. In some cases, that can be true: VAR has proved successful in detecting fouls, so much so that it is almost unimaginable that something like the Hand of God could happen again. However, to take advantage of AI’s precision within a given social context, it is often necessary to change the rules and norms that define that context in order to use AI in the first place. We can expect more precision from AI, as a technical value; however, precision does not equal fairness. Rather, the rules determining what is fair can – and should – only come from us: what we believe, what we do, and what we value.

Should You Thank Your AI?

In late April, Sam Altman, the CEO of OpenAI, made waves with a response to a question about the financial and environmental cost of saying “please” and “thank you” when interacting with Artificial Intelligence — “Tens of millions of dollars well spent–you never know.” The practice is common, with over two-thirds of users observing such social niceties when asking AI questions, according to a February survey. Altman may simply be preaching the power of politeness, but it could be for reasons that are anything but common.

Is Altman right? Should we thank ChatGPT, Gemini, Claude, DeepSeek and the other AI chatbots out there? Can ethics give us any guidance?

Entities that we do not believe need to behave ethically themselves, but should be subject to moral considerations, are generally called “moral patients.” We tend to think they have lesser (but still some) moral status. For example, we do not expect newborns and animals to behave ethically, but we often adopt certain moral standards in regard to their treatment.

But current Large Language Models, the umbrella under which tools like ChatGPT fall, are not good contenders to be moral patients. There is considerable complexity in debates about AI consciousness, when it might happen, and how we would know. Nonetheless, we are not there yet. While current AI chatbots have been trained on vast amounts of data to emulate human speech and behavior, experts assert that, as yet, they have no consciousness and no inner life, that they are not in control of their actions, and that they cannot suffer or feel pain. (Some of these matters have been previously discussed in The Prindle Post.)

Absent characteristics like consciousness or even the ability to be offended, there seems to be no special reason to treat AI chatbots politely based on the kind of thing that they are.

Altman’s response, however, suggests another kind of concern. We may have consequentialist worries — an ethical analysis based on the consequences of our actions — about saying please and thank you to AI chatbots. Each additional “token,” a chunk of characters, that the AI has to analyze in a question costs energy. Accordingly, adding polite words to questions both costs AI companies money and, of more direct ethical relevance, causes environmental damage. Prominent AI tools like ChatGPT need incredible amounts of electricity and water for cooling.
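To make the point about tokens a bit more concrete, here is a minimal sketch in Python. It uses the openly available tiktoken tokenizer as a stand-in; the tokenizer a production chatbot actually runs, and the energy cost of any individual token, are assumptions we cannot verify from the outside.

# Rough illustration, not any company's internal accounting: count how many
# extra tokens polite phrasing adds to a prompt, using the openly available
# tiktoken tokenizer as a stand-in for whatever a production model uses.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common GPT-style encoding

blunt = "Summarize this article in three bullet points."
polite = "Please summarize this article in three bullet points. Thank you!"

blunt_count = len(enc.encode(blunt))
polite_count = len(enc.encode(polite))

print(f"blunt prompt:  {blunt_count} tokens")
print(f"polite prompt: {polite_count} tokens")
print(f"politeness overhead: {polite_count - blunt_count} extra tokens to process")

Multiplied across hundreds of millions of queries, those few extra tokens are what Altman’s “tens of millions of dollars” remark seems to be gesturing at.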

If we are interested in limiting the harms our actions cause, then reducing energy waste and environmental damage by being less polite with AI chatbots may make sense. Stripping off a word or two, though, has nowhere near the energy-saving impact of, say, not asking the question at all, or of simply using a standard internet search instead, which costs roughly a tenth of the energy.

Altman’s “you never know,” however, hints at another worry. We may be polite to an AI out of fear that it is actually conscious, or even that the AI overlords are coming soon and it is in our own interest to be nice. This motivation echoes the famous Pascal’s wager in philosophy.

The 17th-century mathematician and philosopher Blaise Pascal argued that we should behave as if god exists. For if god exists, but we do not believe, then we suffer an eternity of misery and miss out on an eternity of bliss. The wager provides no evidence for the existence of god one way or the other, but rather holds that believing in god and behaving accordingly is the safest bet. (There are a number of commonly raised objections.)

By similar reasoning, we might assert that even though the chances of ChatGPT being secretly conscious, or turning into an all-powerful overlord, are extremely small, the potential harms are so serious that we should nonetheless act as if it could be the case — especially for relatively low-cost actions like saying “please” and “thank you.” This does depart somewhat notably from Pascal’s wager in that the consequences are merely very bad, not infinitely bad, and can therefore be outweighed by other, more likely concerns. In fact, given the tiny likelihoods involved, and the probably minimal impact that saying “please” and “thank you” will have, there is likely not a compelling probabilistic argument about avoiding serious (if rare) consequences at all.

However, how we treat AI is not just about AI; it is about ourselves. The philosopher Immanuel Kant constructed a famously strict moral framework in which only actors possessing a certain kind of rationality, like humans, deserved moral consideration. Unsettlingly, even for those in the 1700s, this implied that we owe no moral consideration to animals. Kant’s response to this concern was that we owe it to ourselves to treat animals well. We injure our moral selves when we ignore compassion, or an animal in pain. It becomes easier to slide into callousness with humans.

Whether Kant gives animals enough due is a matter of debate, but regardless, the same concern applies with AI. If we want to embrace a general ethos that treats people with dignity and respect when we make requests of them, then we should stay in practice when dealing with superficially human-like AI.

There is potentially a dark side to this argument about AI chatbots. Their very human-likeness can be a problem. Already, there are cases of people losing themselves to delusional relationships with ChatGPT, or trusting chatbots uncritically. The scope of this problem is not yet clear, but perhaps we do not want to aspire to a very human-like relationship with the technology at all, but instead have a well-delineated set of norms and practices for engaging with these chatbots. We may want to adopt norms that avoid anthropomorphizing them.

Large Language Models are still new. Ethical analysis, especially ethical analysis based on the potential consequences of treating AI a certain way, is correspondingly young. This is even true for seemingly minor issues like saying “please” and “thank you.” It also speaks to a broader challenge with AI. The technology is already changing the world. It is good to consider how AI will change society — what jobs will it replace, what problems will it solve, what kind of surveillance will it enable, how much energy will it use? But we also need to consider its moral impact. What will AI do to our ethical selves?

The Merits (and Misgivings) of Gen-AI Art

Whenever discussing tech ethics, the most common thing I get asked about is the use of generative AI for artistic purposes. Previously, I’ve discussed the case of Jason M. Allen and his first place AI-generated entry at the Colorado State Fair. However, a great deal has happened since then in terms of technological innovation, wider public adoption, and legal wrangling. Marvel Studios, for example, was recently accused of using AI to generate posters for their Fantastic Four film given some strange “choices” – you be the judge. But Marvel is not alone; numerous other creators have been caught in the crosshairs. Is all the outrage justified? What is actually at stake? Why are people so up in arms?

Let’s consider some related concerns. Many arguments against Gen-AI art start by asserting that AI is inherently incapable of producing art, as it lacks human creativity or some other human-ness. But we should be clear about what we mean. As I have previously discussed, there are over 20 different theories of consciousness out there in the academic world, but there are very good reasons for accepting that these algorithms are just automated statistical models and not something that is conscious. Ultimately, generative AI is a tool for humans to use, just like a camera, a paintbrush, or a chisel. Just like those tools, it will not work without human input, and whatever product it produces will be accepted as “finished” or “complete” or even “satisfactory” only insofar as it matches what the human wanted to get out of it. If critics of AI art are going to charge that a person cannot make art with it “because they typed a few buttons,” then why can a photographer make art by clicking a shutter? This isn’t to suggest that anyone who uses Gen-AI instantly becomes an artist, but neither does anyone with a camera become a photographer. In other words, critics need to explain why some types of art can utilize technology while others cannot.

But, in a similar vein, some critics charge that AI cannot produce art because it is incapable of understanding the human emotional qualities that are a necessary component of artistic expression: AI cannot understand or replicate the emotional intention behind art. First, it is important to note that in addition to the field of generative AI, there is a whole field of affective computing devoted to getting computers and AI to understand human emotions. There’s no obvious reason why insights from affective computing cannot be used to build emotional understanding into an algorithmic model and have that influence the output. It is also known that AI-generated art can produce the emotional responses in humans that we might expect any artform to produce. Anyone who has seen the “priceless pancake” video on the internet can probably appreciate the level of emotional intuitiveness involved. If artworks are supposed to induce certain emotional responses in the audience, a clear argument needs to be made as to why AI is incapable of communicating the desired emotional effect, particularly if the work is further modified by humans to achieve this end.

Critics may also charge that because generative AI is trained on the images of other artists, it cannot truly be creative. But creativity is an ambiguous concept. While gen-AI models do take their cues from the inputs they are given, it is worth noting that they are also not completely deterministic, nor do they simply reproduce works that they have been trained on. There is always room within the statistical mesh of relationships a model forms to produce something new; generative AI is capable of creating novelty out of what has come before. Ultimately, whether something is creative or not depends on what we “see” in the work.
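For those curious about what “not completely deterministic” means mechanically, here is a toy sketch. The candidate words and probabilities are invented purely for illustration; real models sample over vastly larger vocabularies with learned probabilities.

# Toy illustration of non-determinism in generative models: rather than
# always emitting the single most likely continuation, the model samples
# from a probability distribution over candidates. The words and weights
# below are invented for illustration only.
import random

next_word_probs = {
    "sunset": 0.40,
    "harbor": 0.25,
    "meadow": 0.20,
    "nebula": 0.15,
}

def sample_continuation(probs):
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same "prompt" can come out differently each time it is run.
for run in range(5):
    print(f"run {run + 1}: a painting of a {sample_continuation(next_word_probs)}")

Nothing in that loop reproduces a stored image or sentence; each output is assembled anew from learned statistical tendencies, which is the narrow sense in which such systems can produce something new.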

There is also a sense that gen-AI cannot produce art because of the intellectual theft or plagiarism of pre-existing works. But we should be careful to separate economic and aesthetic concerns. I wonder how critics would feel about a model trained entirely on artworks that exist in the public domain, or an artist who trains a model to produce new works using only their own artworks as training data. In the absence of copyright concerns, would critics still deny that such models could produce (or at least contribute to) real works of art?

Copyright exists to protect artists when it comes to commercializing their work. Another person should not be able to reproduce that work and sell it such that it would compete with the artist. However, it must be a very similar reproduction to qualify as infringement. Something done “in the style of” a work does not violate the original’s copyright. Cuphead, for example, does not infringe on Disney’s intellectual property despite obviously borrowing from features of early Disney cartoons. Likewise, a work’s particular structural components are not protected. For example, Ed Sheeran successfully defended himself against a copyright infringement lawsuit on the basis that chord progressions cannot be exclusively controlled.

These considerations complicate the claim that generative AI is effectively “stealing.” Images used as training data are not completely reproduced. Rather, what a network learns are the statistical patterns of lines, cues, and pixels. Also, a single image will have an infinitesimal impact on the overall output of a model trained on entire libraries of content. This is why it is difficult to argue that incorporating an image in a data set represents a clear case of copyright infringement. This, of course, doesn’t mean that the practice is ethical, but we do need to clarify which kind of problem it is: in some cases it might be a trademark issue, but, perhaps more often, it is a free rider complaint – developers use artists’ images without compensation in such a way that it threatens the artist’s livelihood and endangers art as an institution.

Still, we should be careful to distinguish the economic and industrial issues involving working artists from the issue of the place of AI in art in general. It’s no good to let our feelings about how AI might affect working artists color how we evaluate AI-generated art as an art form and its potential to contribute to the world of art. We might fear a future where no one can make a living working as an artist, but we must also recognize that average people can now explore their own aesthetic interests thanks to AI’s democratization of art. Fewer people make a living as artists, but more people make art for themselves. Must we assume that this would leave the world of art worse off? Surely we can be sympathetic to working artists and think that they should be compensated or protected, without concluding that AI is necessarily bad for art in general.

This isn’t to insist that there are no ethical concerns regarding our current use of generative AI to create art. Rather, my point is that we must be especially precise in identifying what the nature of our objections is. We need an answer that doesn’t create more problems than it solves – by either overexpanding copyright or suppressing artistic expression.

Should AI Democratize Art?

While AI-generated art has been controversial since AI tools became widely available, OpenAI’s introduction of an updated version of its image generator in ChatGPT has brought a fresh round of scrutiny. Many have been particularly critical of a trend where users generate images in the style of Studio Ghibli movies, arguing that the images created by AI in the signature style are soulless, derivative, and even insulting, given that Studio Ghibli co-founder Hayao Miyazaki has explicitly stated his dislike of AI and AI-generated art. As users experiment with the tools, new trends have cropped up – such as the one where users create doll versions of themselves – alongside new criticism. Of course, there are evergreen worries around AI use in general, and AI art trends have only exacerbated concerns around the devaluation of artists and massive increases in energy consumption.

Not everyone has been so critical. Some commentators and communities online have defended AI as a creative tool, and feel resentful when others accuse them of creating “slop.” One such defender of AI-generated art is none other than Sam Altman, CEO of OpenAI. In a recent interview Altman acknowledged that although the onset of AI image generation must be “a little bit of a bummer” for artists, it nevertheless has been beneficial for those who might not otherwise have the time or resources to create or commission art:

“I think the democratization of creating content has been a big net win for society… And this is something that I very deeply believe about the power of technology, which is giving everyone more tools, making things easier, lowering the barriers to entry, does significantly increase the number of people that can contribute to society and we all benefit from that overall.”

Altman claims that AI can democratize the creation of art. But is democratization always a good thing, and is it enough to justify AI-generated art in light of its criticisms?

What does it mean to “democratize” something, anyway? We can say that something has become democratized when it has been made available to most or all people in a society who wouldn’t have had access to it otherwise. For example, one of the touted benefits of widespread internet access was an increase in the democratization of knowledge, whereby many more people could not only consume but also contribute to knowledge production. Although there have certainly been downsides to letting everyone share their views with everyone else, the internet has removed many barriers to knowledge dissemination, and allowed for a much wider range of voices to be heard.

Altman seems to be arguing that AI will bring similar benefits in terms of the creation of art. Not all people are artists, after all, and so having access to AI image-generating tools could provide many more people with the opportunity to express themselves creatively. As a result, society could benefit from many more creative voices in ways that it would not have been able to otherwise.

However, we can ask two questions about Altman’s argument. First, does art need to be democratized? And second, does AI democratize art in a way that’s worth pursuing?

Many critics of AI-generated art answer the first question in the negative. For example, it’s common to find online commentators who note that instead of using an AI tool to generate art one could simply pick up a pencil, paintbrush, or any other artistic tool and medium. Art is thus already democratized: nothing is preventing the vast majority of people from creating art and sharing it with the world, the argument goes; they are simply unwilling to learn how to do so.

Of course, there is still a barrier to entry for most people when it comes to creating art that accurately expresses their creative visions. A lot of people have simply not had training or experience creating art, and so cannot bring their artistic visions to life. In this sense, there is a way in which artistic creations are still out of reach for most people.

This is perhaps more in line with what Altman and many AI art supporters have in mind when they say that AI democratizes art: it democratizes the creation of works that look impressive or that more accurately capture one’s ideas, by significantly lowering the barriers for the majority of people. But now we face our second question: is this kind of democratization desirable?

Consider an analogy to another very undemocratized endeavor: playing basketball at a high level. For most people, there are few barriers to playing some form of basketball, but there are only a select few who can play it well. What it takes to play at the highest levels of basketball is partly decided by one’s genetics, but arguably many people could be much better at basketball if they put in more time and practice.

Given this inequality among the people who do play basketball well and those who don’t but want to, we can ask: when is it desirable to democratize opportunities to play basketball? Here’s a possible answer: when it creates opportunities for those who lack them due to factors outside of their control. For example, someone might want to play basketball but be unable to because there are no basketball courts in their neighborhood or opportunities to play with others. In these situations, we should want to help reduce barriers to entry, say by investing in social programs and infrastructure.

One way that democratization is desirable, then, is when it helps to rectify injustice. This is one of the reasons why the democratization of knowledge via the creation of the internet was such an important ideal: it would help provide a voice to those who would not otherwise have been heard, given that they lacked the means to disseminate their knowledge in other ways.

However, the kind of basketball democratization that looks to create social programs and infrastructure does not necessarily address the barriers that prevent some people from playing basketball well. We then might want to democratize basketball playing in another way: we could, for example, give every NBA hopeful special prosthetics to allow them to jump higher or shoot the ball more accurately, which would lower the bar for entry into competitive basketball for a huge number of people. This kind of democratization attempts to even the playing field by providing opportunities to participate that people wouldn’t have had otherwise by compensating for a lack of skill.

While the first kind of democratization – the one that attempts to rectify an injustice – seems desirable, the second kind – the one that merely compensates for lack of skill – does not typically seem to be worth pursuing. The type of democratization promised by AI seems to fall into the second category: for the vast majority of those who use AI to create art, the barriers to their meaningful artistic expression are a lack of talent or practice, not because they have been unjustly denied opportunities.

There is another component of Altman’s claim, though: that society will benefit from people being able to create art much more easily. Here is a potential argument: say I generally lack artistic ability, but I believe I have good ideas for a work of art. AI can then help bring that idea to life, and other people may then be able to benefit from experiencing it, in a way that they would not have been able to otherwise. Is it not valuable, then, to remove the barriers that prevent potentially great art from being put out into the world?

Here, though, the question of how much a person is involved in the creation of AI-generated art becomes important. AI image generators do not so much facilitate a person’s creation of artistic media as create those media on a person’s behalf. The results might be something interesting or entertaining or inspiring, and it might be beneficial to society to be more interested, entertained, or inspired. These benefits, however, are not attributable to democratization: AI does not lower barriers for people, it replaces them.

This is not to say that all AI-generated art is necessarily devoid of value. However, the argument that attempts to justify AI-generated art by appealing to democratization ultimately falters: AI can certainly create more art, but if it democratizes the creation of art, it doesn’t do so in a way that’s worth pursuing.

PSai: Singer’s Ethical Insights on Demand

Peter Singer is arguably the most influential living philosopher. For over 50 years, his articles, newspaper columns, blog posts, and books have challenged the views of those who read them. In 1975, he wrote what could be considered his most influential work, Animal Liberation: A New Ethics for Our Treatment of Animals. This text had such an impact that animal rights activists used it as a calling card during their raids on laboratories engaged in animal testing, leaving copies as a challenge to the staff from whom they rescued the animals. He has worked at the universities of Oxford, New York, Monash, and Princeton. His Google Scholar profile indicates that he has been cited over 80,820 times. He has won multiple awards, including the Berggruen Prize, which came with $1 million and, characteristically, Singer donated this money to charities he considered capable of doing the most good.

Yes, he has courted controversy. His opinions on infanticide and euthanasia have won him several vocal detractors. Yet, despite this, one cannot deny that he is a — if not the — modern-day giant in the philosophical field.

Given his academic stature and the sheer number of outputs he has produced over his fifty years in the business, it seems reasonable that some might want to consult him when faced with ethical dilemmas. After all, if you’re in a moral quandary, who better to turn to than Singer for help? (I know other options are available, but humor me.) The only problem is that, as Singer has been writing and thinking about morality and ethics for over half a century, there’s much to consult. Unless you’re intimately familiar with his work (and that’s a lot of work to be familiar with), how can you be confident that what you think Singer might advise is accurate? Sure, you know he subscribes to a hedonistic version of utilitarianism, but knowing that fact alone might not be enough to make the right decision in the heat of the moment. When ethical uncertainty strikes, you need to have Singer with you, advising you on your best course of action. This, however, is impossible (he’s a busy man, after all).

Luckily, we have the next best thing (or the early stages of the next best thing): an AI Peter Singer (PSai).

Created by Sankalpa Ghose, PSai, much like ChatGPT, responds to prompts and questions. While it lacks the robust knowledge portfolio of other generative AIs like ChatGPT, when asked a question about philosophy which the real Peter Singer has written about, PSai can provide a not unreasonable answer; or at least, it can offer an answer that one might imagine Singer giving (although it has a strange habit of always trying to prompt you for a response). Indeed, Singer himself was involved in the AI’s development and was happy enough with the bot’s performance that he gave Ghose the go-ahead to make it public. Since that launch, PSai’s performance has steadily improved as its designers tweak it. As Ghose writes in the Journal of Medical Ethics: Forum:

It is in this way that Peter and I have progressed on PSai, releasing it publicly and reviewing how it performs in dialogue as measured by how those conversations with others go from our perspective of where it could be improved. Essentially, we have put forward a dialogue platform for others to engage, with that engagement driving its refinement.

The prospect of having a pocket-sized Peter Singer with you wherever you go, always ready to solve ethical riddles or dispense some needed wisdom, is an interesting one, and one that I must confess has some appeal. I like the idea of everyone having access to (a replica of) one of the foremost philosophical minds of our time whenever it’s needed. As a professional academic philosopher, I see some utility in being able to ask PSai a question about my research and how Singer’s writings might enhance what I’m working on. Rather than having to trawl through numerous books and papers, I could simply ask PSai what text might be most useful when writing about de-extinction, or how Singer might criticize a draft paper on gene drives, which I could then revise and improve before submission. This could save a lot of time, which I could spend on other tasks.

For a less academic application, the app could simply function as a virtual Jiminy Cricket — a resource that’s on hand to share some wisdom in scenarios where the right course of action is obscured, or where the user knows what’s right but needs some outside confirmation to solidify that knowledge and motivate action. This requires the user to consult the app before deciding to act, and the likelihood of this happening is, let’s face it, slim. Nevertheless, there is still a chance.

Of course, the idea that one might be able to consult Singer wherever they go is not universally appealing. As already mentioned, Singer has courted controversy when it comes to his historic claims around the permissibility of infanticide, as found in Practical Ethics (under very specific conditions, I must add). Suppose you are inclined to think that the arguments and claims made by the real Singer are incorrect or even repugnant. In that case, you are unlikely to find the possibility of consulting PSai when in ethical difficulty to be enticing. In such circumstances, people are likelier to wish they had a Socrates, Wollstonecraft, Anscombe, or Nietzsche AI. As far as I’m aware, however, these do not exist. But that doesn’t mean they cannot.

I do wonder if we might soon find ourselves awash with AIs trained on the philosophical writings of other philosophers. Might it be the case that, soon, instead of picking up several books, studying, and, over the years, becoming familiar with an ethical mode of life, you could simply jump on your phone, download the Aristotle AI, and have the clarity of morality in the palm of your hand? Such a possibility doesn’t strike me as too absurd. After all, if the PSai proves popular, it only seems natural to diversify.

This, however, brings us to two problems.

First, without wanting to make a point too bluntly, Singer is still alive. He was involved in the development of PSai and in efforts to refine it. This means that the AI has a real-life benchmark against which its performance can be measured. We don’t have to speculate if it says something that the real Singer wouldn’t. Singer himself can say, “nope, that’s wrong,” or, “the AI was correct in its reading of my work, but I’ve now changed my mind” (this is something which Singer and Häyry have considered). Ghose can then adjust the algorithm to perform better in the future. This isn’t possible with most philosophers whose names we are familiar with, as most of them are dead. So, we’d be stumbling around in the dark without a guideline in such cases, relying entirely on that person’s writings and the opinions of those familiar with their work. While an AI produced in such an environment might prove accurate, we lack the definitive authority to know.

Second, if we outsource our ethical deliberations to an AI, be that one based on Singer or any other philosopher, might we be doing a disservice to ourselves by simply following orders? Is it still doing the right thing if we don’t come to an understanding of right and wrong naturally but are simply instructed? For some philosophers, the answer may very well be in the negative. After all, for your classic virtue theorist, morality is like a muscle which one must train every day to be better than before. Each must deliberate using their faculties to identify a master in ethical living, emulate them, and always strive for more. I can’t help but laugh a little at the idea of an Aristotle AI, which, when asked by an unsuspecting user how they should behave in situation X, gets a response along the lines of “figure it out yourself, you lazy sod.” Is this a perfectly accurate representation? Probably not, but it isn’t so far removed as to be a meaningless scenario.

What do we do with PSai, then? Do we celebrate it? Decry it? Or see it as just another tool to help us if we choose? When faced with such a conundrum, it is appropriate to turn to the (figurative) man himself and to give it the last word. So, I asked PSai: Is the fact that this AI has been created a good thing? What justifies your existence? Why should I come to you for answers rather than reading Singer’s works?

Its response:

The creation of this AI aims to make ethical insights more accessible and to engage users in meaningful dialogue about moral issues. By offering interactive discussions, I can help people clarify their thoughts and explore ethical questions in a dynamic way.

While reading Peter Singer’s works provides comprehensive insights into his philosophy, engaging with this AI allows for a more personalized exploration of ideas and can help users apply ethical considerations to their own lives.

The justification for my existence lies in the potential to foster deeper understanding and reflection on ethical principles, contributing to a more thoughtful and compassionate society. What specific ethical questions or topics are you interested in exploring?

A New Kind of Risk?

We usually expect to be held accountable for our actions – both for the results we intend and for those we do not. We expect, for example, that a car company will ensure that a vehicle doesn’t have major flaws that could result in serious harm before selling it to customers. Not to consider the risks would be negligent, and this is why recalls often look bad for such companies.

But what about algorithms? Should we have a similar expectation that a corporation developing an algorithm to detect cancer, or to detect whether someone is passing off AI-generated content as their own, be sure that there are no significant flaws in its product before selling it? What if there is no way it could reasonably do so? Given that algorithms can generate erroneous results that cause serious harm, what is a reasonable standard when it comes to product testing?

In one of the chapters of my forthcoming book on the ethics of AI, I consider a hypothetical issue involving ChatGPT and a professor who might use an algorithm to accuse a student of passing off ChatGPT-written work as their own. There are a great many ethical issues involved when we don’t understand the algorithm and how it might generate false positive results. This has already become a serious issue as students are now being falsely accused of handing in AI-generated work because an algorithm flagged it. A Bloomberg Businessweek study on the services GPTZero and Copyleaks found a 1-2% false positive rate. While that may not sound like a lot, it can mean that millions of students will be falsely accused of cheating with almost no way of defending themselves or receiving an explanation as to what they did wrong.
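To see how a seemingly small false positive rate scales into large absolute numbers, here is a minimal back-of-the-envelope sketch in Python. The enrollment and submission figures are illustrative assumptions of my own, not data from the Bloomberg study; only the 1% rate comes from the reporting cited above.

```python
# Back-of-the-envelope: how a small false positive rate scales.
# The student and essay counts below are illustrative assumptions,
# not figures from the Bloomberg Businessweek study.

students = 20_000_000          # assumed number of students whose work is screened
essays_per_student = 8         # assumed essays run through a detector per year
false_positive_rate = 0.01     # low end of the reported 1-2% range

total_checks = students * essays_per_student
expected_false_flags = total_checks * false_positive_rate

print(f"Essays screened per year: {total_checks:,}")
print(f"Expected false accusations: {expected_false_flags:,.0f}")
# Even at 1%, tens of millions of checks yield well over a million
# honest essays wrongly flagged as AI-generated.
```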

According to Bloomberg, these interactions are already ruining academic relationships between teachers and students. Some students have now taken to recording themselves writing their entire papers just to be able to disprove the algorithm. Others now obsess over not sounding “too robotic” lest they be accused themselves, a problem that is especially prominent for ESL and neurodivergent students. Should we consider the AI developer whose faulty product generates these kinds of results negligent?

Philosophers of science generally agree that researchers have an obligation to assess inductive risk concerns when accepting a conclusion. In other words, they need to consider what the moral consequences of potentially getting it wrong might be and then consider whether a higher or lower standard of evidence might be appropriate. If, for example, we were testing a chemical to determine how hazardous it is, but the test was only accurate 80% of the time, we would likely demand more evidence. Given the potential harm that can result and the opaqueness of algorithms, AI developers should be similarly conscientious.
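One way to make the inductive risk point concrete: under simple expected-cost reasoning, the evidence threshold for acting rises as the cost of a false positive rises. The sketch below is a toy illustration with made-up cost values; it is not a description of how any actual detector or research protocol sets its standards.

```python
# Toy illustration of inductive risk as an evidence threshold.
# Act on a conclusion (e.g., flag an essay) only if the estimated
# probability it is correct exceeds a threshold set by the relative
# costs of the two kinds of error. Cost values are made up.

def decision_threshold(cost_false_accusation: float,
                       cost_missed_cheating: float) -> float:
    """Probability above which flagging minimizes expected cost."""
    return cost_false_accusation / (cost_false_accusation + cost_missed_cheating)

# If falsely accusing a student is judged ten times worse than
# missing a case of cheating, we should demand much stronger evidence.
print(decision_threshold(cost_false_accusation=10, cost_missed_cheating=1))  # ~0.91
print(decision_threshold(cost_false_accusation=1, cost_missed_cheating=1))   # 0.5
```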

If an algorithm operates according to black box principles, the developer may have a good understanding of how to create an algorithm – they will understand that the model can take in various inputs and translate those into outputs – but they will not be able to retrace the steps the model used to arrive at its conclusion. In other words, we have no idea what evidence an algorithm like GPTZero is relying on when it concludes that a piece of text is generated by AI. If the AI developer doesn’t know how the algorithm is using input data as evidence, they cannot evaluate the inductive risk concerns about how sufficient that evidence is.

Still, there are ways, despite the opacity, that an AI developer might attempt to address their inductive risk responsibilities. Koray Karaca argues that developers can build inductive risk considerations into their models by using cost-sensitive machine learning, assigning different costs to different kinds of errors. In the case of AI detectors, the company Turnitin claims to intentionally “oversample” underrepresented students (especially ESL students); by oversampling in this way, the evidentiary standard by which different forms of writing are judged is fine-tuned.
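For a sense of what cost-sensitive learning can look like in practice, here is a minimal sketch using scikit-learn’s class weights to penalize one kind of error more heavily at training time. It is a generic illustration of the technique Karaca describes, assuming synthetic data and arbitrary weights; it is not a reconstruction of Turnitin’s or any other vendor’s actual system.

```python
# A minimal sketch of cost-sensitive learning: weight one class more
# heavily so that misclassifying it is penalized during training.
# Synthetic data; generic illustration only.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic "writing features": label 1 = AI-generated, 0 = human-written.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat mislabeling human writing (class 0) as five times more costly
# than missing AI-generated text (class 1).
model = LogisticRegression(class_weight={0: 5, 1: 1}, max_iter=1000)
model.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"False positives (humans flagged as AI): {fp}")
print(f"False negatives (AI text missed): {fn}")
```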

Still, there is little accounting for what correlations a model might rely on, making it difficult to explain to students who do get falsely accused why they are being accused in the first place. AI developers have struggled to assess the reliability of their models or evaluate the risks when those correlations are used in error. This issue becomes especially concerning when it comes to things like credit reports. If you don’t know how or why a model compiles a credit report, how can you manage those risks of error? How much must a developer understand about how their algorithm functions before it is put to use? If a developer is aware of the risks of error but also knows that their algorithm is limited in terms of mitigating those risks, at what point do we consider that negligent behavior? If negligence is essentially something we police as a community, we will need to come together quickly to decide what the promise of AI can and can’t excuse.

When Is Fair Use “Fair” for AI (and When Is It “Use”)?

The Internet Archive recently lost a high-profile case. Here’s what happened: the Open Library, a project run by the Internet Archive, uploaded digitized versions of books that it owned, and loaned them out to users online. This practice was found to violate copyright law, however, since the Internet Archive failed to procure the appropriate licenses for distributing e-books online. While the Internet Archive argued that its distribution of digital scans of copyrighted works constituted “fair use,” the judge in the case was not convinced.

While many have lamented the court’s decision, others have wondered about the potential consequences for another set of high-profile fair use cases: those concerning AI models training on copyrighted works. Numerous copyright infringement cases have been brought against AI companies, including a class-action lawsuit brought against Meta for training their chatbot using authors’ books without their permission, and a lawsuit from record labels against AI music-generating programs that train on copyrighted works of music.

Like the Internet Archive, AI companies have also claimed that their use of copyrighted materials constitutes “fair use.” These companies, however, have a potentially novel way to approach their legal challenges. While many fair use cases center around whether the use of copyrighted materials is “fair,” some newer arguments involving AI are more concerned with a different kind of “use.”

“Fair use” is a legal concept that attempts to balance the rights of copyright holders with the ability of others to use those works to create something new. Quintessential cases in which someone’s use of copyrighted material is generally considered “fair” include criticism, satire, educational purposes, and other uses considered “transformative,” such as the creation of new art. These conditions have limits, though, and lawsuits are often fought in the gray areas, especially when it is argued that the use of the material will adversely affect the market for the original work.

For example, in the court’s decision against the Internet Archive, the judge argued that uploading digital copies of books failed to be “transformative” in any meaningful sense and that doing so would likely be to the detriment of the original authors – in other words, if someone can just borrow a digital copy, they are less likely to buy a copy of the book. It’s not clear how strong this economic argument is; regardless, some commentators have argued that with libraries in America facing challenges in the form of budget cuts, political censorship, and aggressive licensing agreements from publishers, there is a real need for the existence of projects like the Open Library.

While “fair use” is a legal concept, there is also a moral dimension to the ways that we might think it acceptable to use the work of others. The case of the Internet Archive arguably shows how these concepts can come apart: while the existing law in the U.S. seems to not be on the side of the Open Library, morally speaking there is certainly a case to be made that people are worse off for not having access to its services.

AI companies have been particularly interested in recent fair use lawsuits, as their programs train on large sets of data, much of which is used without permission or a licensing agreement from the creators. While companies have argued that their use of these data constitutes fair use, some plaintiffs have argued that it does not, both because it is not sufficiently transformative and because it competes with the original copyright holders.

For example, some music labels have argued that music-generating AI programs often produce content that is extremely similar, or in some cases identical to existing music. In one case, an AI music generator reproduced artist Jason Derulo’s signature tag (i.e., that time when he says his name in his songs so you know it’s by him), a clear indication that the program was copying an existing song.

Again, we can look at the issue of fair use from both a legal and moral standpoint. Legally, it seems clear that when an AI program produces text verbatim from its source, it is not being transformative in any meaningful way. Many have also raised moral concerns around the way that AI programs use artistic materials, both around work being used without permission, as well as in ways that they specifically object to.

But there is an argument from AI defenders around fair use that has less to do with what is “fair” and how copyrighted information is “used”: namely, that AI programs “use” content they find online in the same way that a person does.

Here is how such an argument might go:

-There is nothing morally or legally impermissible about a person reading a lot of content, watching a lot of videos, or listening to a lot of music online, and then using that information as knowledge or inspiration when creating new works. This is simply how people learn and create new things.

-There is nothing specifically morally or legally significant about a person profiting off of the creations that result from what they’ve learned.

-There is nothing morally or legally significant about the quantity of information one consumes or how fast one consumes it.

-An AI is capable of reading a lot of content, watching a lot of videos, and listening to a lot of music online, and using that information as knowledge or inspiration when creating new works.

-The only relevant difference between the way that AI and a person use information to create new content is the quantity of information that an AI can consume and the speed at which it consumes it.

-However, since neither quantity nor speed is a relevant moral or legal factor, AI companies are not doing anything impermissible by creating programs that use copyrighted materials online when creating new works.

Arguments of this form can be found in many places. For example, in an interview for NPR:

Richard Busch, a lawyer who represents artists who have made copyright claims against other artists, argues: “How is this different than a human brain listening to music and then creating something that is not infringing, but is influenced.”

Similarly, from the blog of AI music creator Udio:

Generative AI models, including our music model, learn from examples. Just as students listen to music and study scores, our model has “listened” to and learned from a large collection of recorded music.

While these arguments also point to the originality of the final creation, a crucial component of their defense lies in how AI programs “use” copyrighted material. Since there’s nothing inherently inappropriate about a person consuming a lot of information, processing it, getting inspired by it, and producing something as a result, the argument goes, we shouldn’t think it inappropriate for an AI to do the same.

There have, however, already been many worries raised about the inappropriate personification of AI, from concerns around AI being “conscious” to the downplaying of errors by referring to them as “hallucinations.” In the above arguments, these personifications are more subtle: AI defenders talk in terms of the programs “listening,” “creating,” “learning,” and “studying.” No one would begrudge a human being for doing these things. Importantly, though, these actions are the actions of human beings – or, at least, of intelligent beings with moral status. Uncritically applying them to computer programs thus masks an important jump in logic that is not warranted by what we know about the current capabilities of AI.

There are a lot of battles to be fought over what constitutes a truly “transformative” work in lawsuits against AI companies. Regardless, part of the ongoing legal and moral discussion will undoubtedly need to shift its focus to new questions about what “use” means when it comes to AI.

The Algorithm Made Me Do It


Artificial intelligence promises greater efficiency, greater accuracy, and lower costs in a host of different fields. The development of hiring algorithms, for example, can streamline the hiring process and better identify the best candidates for the job. In healthcare, algorithms can cut down on hours of work by quickly detecting illnesses or broken bones and finding the most efficient use of scarce healthcare resources. In the criminal justice system, AI can speed up the process by identifying potential recidivists.

However, these algorithms are often not transparent in how they work, or even in how they are designed to work. This leaves us – as end-users of these algorithms – in an awkward position: forced to accept the conclusions of an opaque algorithm that could be loaded with faulty assumptions, pseudo-science, and statistical noise. Might this blind acceptance conflict with our moral duties regarding responsible belief? How should we weigh the benefits of efficiency and lower costs against the dangers of credulity and gullibility?

While it’s easy to criticize AI’s many applications, it’s important to recognize the potential benefits. For example, while a self-driving car may present concerns about accountability for accidents, it is also true that the technology could offer countless benefits in terms of traffic efficiency, minimizing driver fatigue, and significant economic benefits.

In the field of affective computing, using AI to identify and categorize emotions can offer significant benefits to people with autism, or can help potentially identify people likely to have a stroke or a heart attack. It can also help with caregiving, with automated assistants that are more emotionally aware of the people they are helping. AI can also be used for the purposes of diagnosis or resource management. Similarly, the use of AI in the financial sector for things like loans can lead to better identification of risk, greater profits, and more competitive pricing.

The use of hiring algorithms in the workplace, meanwhile, will allow employers to cut down on the amount of time and resources it takes to find a new hire and can potentially take the guesswork out of identifying the most promising candidates. Similar benefits can accrue to workplaces that use algorithms for employee evaluations. Supposing that issues relating to bias can be addressed, algorithms offer the benefit of a more impartial evaluation, less dependent on the personal feelings of an individual evaluator. Ultimately, there are a great many reasons why taxpayers, job seekers, and home buyers all stand to benefit from AI.

Still, we must be wary of the hidden costs. We may be tempted, either because it’s cheaper or more convenient, to accept unaccountable and unfair systems that we may have good reason not to excuse.

Consider the case of Tammy Dobbs. A resident of Arkansas, Tammy has cerebral palsy and requires assistance getting into a wheelchair. In 2016, after the state adopted a new algorithm to determine what level of care she should receive, the regular hours of caregiver assistance that Tammy received were severely cut, making it difficult for her to do things like go to the bathroom. A government official came to her house, completed a questionnaire, and then relayed the algorithm’s determination. When pressed for an explanation, the official could only say, “Sorry, that’s what the computer is showing me.” The government’s expectation seemed to be that Dobbs would simply accept it. Eventually, a legal team revealed significant flaws in the state’s algorithm – it didn’t even consider whether someone had cerebral palsy.

Similar cases are easy to find. Glenn Rodriguez had to fight to get an explanation for why the recidivism algorithm COMPAS concluded that he was at high risk of reoffending. The corporation that created COMPAS refused to reveal how the assessment was made – even to the parole board – citing trade secrets. If an algorithm can have such a profound impact on a person’s life, surely we deserve a better explanation than “The algorithm made me do it.”

Many algorithms can have prejudicial assumptions baked in. A recidivism algorithm that is mostly trained on blue-collar or petty crime will not likely evaluate everyone the same. A hiring algorithm that contains a personality test designed to identify extroverted personality types might also be tracking whether candidates are likely to have a mental illness. Many hiring companies now make use of video recordings of candidates to detect body language, despite research demonstrating that body language cannot predict successful job performance, a practice some have likened to pseudosciences like phrenology. Unfortunately, candidates have no idea how they are being evaluated and no avenue to appeal if they believe that an error has occurred.

In cases like this, particularly where there are financial incentives to sell these products as efficient no-brainer solutions, developers will have reason to stifle doubts and concerns. As the designer who created the algorithm in the Dobbs case argued, perfect transparency is overrated. “It’s not simple…My washing machine isn’t simple,” but “you’re going to have to trust me that a bunch of smart people determined this is the smart way to do it.” All of this means that there is an incentive on the part of developers and end-users to put their faith in algorithms that may be quite suspect.

As W.K. Clifford argued in his ethics of belief, every time we adopt beliefs without sufficient evidence, we do something wrong. This is because beliefs dispose us to action; the more we adopt the habit of passively accepting algorithmic conclusions without adequate inquiry, the more we expose ourselves to risk. But the consequences of the beliefs we adopt extend beyond the individual; our beliefs affect our entire community. If customers and taxpayers don’t ask questions – and developers are happier not to answer them – we end up with a situation much like that of the government official in the Dobbs case. No accountability; no justification. Don’t ask questions, just accept the outcome.

Artificial intelligence presents a collective action problem. Individuals alone cannot effectively challenge these opaque systems and their unsatisfying answers. Instead, resolution requires a collective response – we will need to work together to resist the constant temptation of lower costs, greater efficiency, and passing the buck.

What Role Should AI Play in War?


This month, officials from over 60 nations met and agreed on a blueprint to govern the use of artificial intelligence in the military. Countries like the United States, the Netherlands, South Korea, and the United Kingdom signed an agreement stating that “AI applications should be ethical and human-centric.” (China was a notable holdout.) The agreement governs issues like risk assessments, human control, and the use of AI for weapons of mass destruction. With AI already being used by militaries and with such a wide variety of potential applications, significant questions and fears abound. For some, the technology holds the promise of ending wars more efficiently and (perhaps) with fewer casualties. Others, meanwhile, fear a Manhattan Project moment for the world that could change warfare forever if we are not careful.

The thought of bringing artificial intelligence to the battlefield often conjures the image of “killer robots.” And while there have been moves to create robotic military units and other forms of lethal autonomous weapon systems (LAWs), there are a great many potential military uses for artificial intelligence – from logistics and supply chain matters to guided missile defense systems. In the war zones of Ukraine and Gaza, AI has been increasingly utilized for the purposes of analyzing information from the battlefield to identify targets for drone strikes. There is also, of course, the possibility of applying AI to nuclear weapons to ensure an automated response as part of a mutually assured destruction strategy.

Given such a wide variety of potential applications, it is difficult to assess the various ethical drawbacks and benefits that AI may afford. Many argue that the use of AI will lead to a more efficient, more accurate, and more surgical form of warfare, allowing nations to fight wars at a lower cost and with less risk of collateral damage. If true, there could be humanitarian benefits, as autonomous systems may not only minimize casualties on the opposing side but also keep one’s own human forces from being put in harm’s way. This includes not only physical harm but long-term psychological harm as well. There is also the argument that automated defense systems will be better able to respond to potential threats, particularly when there are concerns about swarms or dummy targets overwhelming human operators. Thus, the application of AI may lead to greater safety from international threats.

On the other hand, the application of AI to war-making poses many potential ethical pitfalls. For starters, making it easier and more efficient to engage in war-making might incentivize states to do it more often. There is also the unpredictable nature of these developments to consider, as smaller nations may find that they can manufacture cheap, effective AI-powered hardware that could upset the balance of military power on a global scale. Some argue that the application of AI for autonomous weapons represents another “Oppenheimer moment” that may forever change the way war is waged.

Another significant problem with using AI for military hardware is that AI is well-known for being susceptible to various biases. This can happen either because of short-sightedness on the part of the developer or because of limitations and biases within the training data used to design these products. This can be especially problematic when it comes to surveillance, and to identifying potential targets and distinguishing them from civilians. The problem is that AI systems can misidentify individuals as targets. For example, Israel relied on an AI system to determine targets despite the fact that it made errors in about 10% of cases.

AI-controlled military hardware may also create an accountability gap. Who should we hold accountable when an AI-powered weapon mistakenly kills a civilian? Even in situations where a human remains in control, there are concerns that AI can still influence human thinking in significant ways. This raises questions about how to ensure accountability for military decisions and how to ensure that those decisions are in keeping with international law.

Another serious concern involves the opacity of AI military systems. Many are built according to black box principles such that we cannot explain why an AI system reached the conclusion it did. These systems are also classified, making it difficult to identify the party responsible for poorly designed or poorly functioning AI systems. This creates what has been described as a “double black box,” which makes it all but impossible for the public to know whether these systems are operating correctly or ethically. Without that kind of knowledge, democratic accountability for government decisions is undermined.

Thus, while AI may offer promise for greater efficiency and potentially even greater accuracy, it may come at great cost. And these tradeoffs seem especially difficult to navigate. If, for example, we knew an AI system had a 10% error rate, but that a human error rate is closer to 15 or 20%, would that fact prove decisive? Even given the concerns for AI accountability? When it comes to military matters the risks of error carry enormous weight, but does that make it more reckless to use this unproven technology or more foolhardy to forgo the potential benefits?

 

Has AI Made Photos Untrustworthy?

Since the widescale introduction and adoption of generative AI, AI image generation and manipulation tools have always felt a step behind the more widely used chatbots. While publicly available apps have become more and more impressive over time, whenever you would come across a truly spectacular AI-generated image it was likely created by a program that required a bit of technical know-how to use, or at least had a few hoops that you had to jump through.

But these barriers have been disappearing. For example, Google’s Magic Editor, available on the latest version of their Pixel line of phones, provides users with free, powerful tools that can convincingly alter images, with no tech-savviness required. It’s not hard to see why these features would be attractive to users. But some have worried that giving everyone these powers undermines one of our most important sources of evidence.

If someone is unsure whether something happened, or people disagree about some relevant facts, a photograph can often provide conclusive evidence. Photographs serve this role not only in mundane cases of everyday disagreement but when the stakes are much higher, for example in reporting the news or in a court of law.

The worry, however, is that if photos can be so easily manipulated – and so convincingly, and by anyone, and at any time – then the assumption that they can be relied upon to provide conclusive evidence is no longer warranted. AI may then undermine the evidential value of photos in general, and with it a foundational way that we conduct inquiries and resolve disputes.

The potential implications are widespread: as vividly illustrated in a recent article from The Verge, one could easily manipulate images to fabricate events, alter news stories, and even implicate people in crimes. Furthermore, the existence of AI image-manipulating programs can cause people to doubt the veracity of genuine photos. Indeed, we have already seen this kind of doubt weaponized in high-profile cases, for example when Trump accused the Harris campaign of posting an AI-generated photo to exaggerate the crowd size at an event. If one can always “cry AI” when a photo doesn’t support one’s preferred narrative, then baseless claims that would have otherwise definitively been disproven can more easily survive scrutiny.

So have these new, easy-to-use image-manipulating tools completely undermined the evidential value of the photograph? Have we lost a pillar of our inquiries, to the point that photos should no longer be relied upon to resolve disputes?

Here’s a thought that may have come to mind: tools like Photoshop have been around for decades, and worries around photo manipulation have been around for even longer. Of course, a tool like Photoshop requires at least some know-how to use. But the mere fact that any photo we come across has the potential of having been digitally manipulated has not, it seems, undermined the evidential value of photographs in general. AI tools, then, really are nothing new.

Indeed, this response has been so common that The Verge decided to address it in a separate article, calling it a “sloppy, bad-faith argument.” The authors argue that new AI tools are importantly dissimilar to Photoshop: after all, it’s likely that only a small percentage of people will actually take the time to learn how to use Photoshop to manipulate images in a way that’s truly convincing, so giving everyone the power of a seasoned Photoshop veteran with no need for technical know-how represents not merely a different degree of an existing problem, but a new kind of problem altogether.

However, even granting that AI tools are accessible to everyone in a way that Photoshop isn’t, AI will still not undermine the evidential value of photographs.

To see why, let’s take a step back. What is a photo, anyway? We might think that a photo is an objective snapshot of the world, a frozen moment in time of the way things were, or at least the way they were from a certain point of view. In this sense, viewing a photo of something is akin to perceiving it, as if it were there in front of you, although separated in time and space.

If this is what photos are then we can see how they could serve as a definitive and conclusive source of evidence. But they aren’t really like this: the information provided by a photo can’t be interpreted out of context. For instance, photos are taken by photographers, who choose what to focus on and what to ignore. Relying on photos for evidence requires that we not simply ask what’s in the photo, but who took it, what their intentions were, and if they’re trustworthy.

Photos do not, then, provide evidence that is independent of our social practices: when we rely on photos we necessarily rely on other people. So if the worry is that new AI tools represent a fundamental change in the way that we treat photos as evidence because we can no longer treat photos as an objective pillar of truth, then it is misplaced. Instead, AI imposes a requirement on us when drawing information from photos: determining the evidential value of a photo will now partly depend on whether we think that the source of the photo would try to intentionally mislead us using AI.

The fact that we evaluate photographs not as independent touchpoints of truth but as sources of information in the context of our relationships with other people explains why few took seriously Trump’s claim that the photo of Harris’ supporters was AI-generated. This was not because the photo was in any sense “clearly” or “obviously” real: the content of the photo itself could very well have been generated by an AI program. But the fact that the accusations were made by Trump and that he has a history of lying about events depicted in photographs, as well as the fact that there were many corroborating witnesses to the actual event, means that the photo could be relied upon.

So new AI programs do, in a way, make our jobs as inquirers harder. But they do so by adding to problems we already have, not by creating a new type of problem never before seen.

But perhaps we’re missing the point. Is it not still a blow to the way we rely on photos that we now have a new, ever-present suspicion that any photo we see could have been manipulated by anyone? And isn’t this suspicion likely to have some effect on the way we rely on photographic evidence, the ways we settle disputes, and corroborate or disprove different people’s versions of events?

There may very well be an increasing number of attempts at appealing to AI to discredit photographic evidence, or to attempt to fabricate it. But compare our reliance on photographs to another form of evidence: the testimony of other people. Every person is capable of lying, and it is arguably easy to do so convincingly. But the mere possibility of deception does not undermine our general practices of relying on others, nor does it undermine the potential for the testimony of other people to be definitive evidence – for example, when an eyewitness provides evidence at a trial.

Of course, when the stakes are high, we might look for additional, corroborating evidence to support someone’s testimony. But the same is the case with photos, as the evidential value of a photograph cannot be evaluated separately from the person who took it. So as the ever-present possibility of lying has not undermined our reliance on other people, the ever-present possibility of AI manipulation will not undermine our reliance on photographs.

This is not to deny that new AI image-manipulating tools will cause problems. But the argument that they will cause brand new problems because they create doubts that undermine a pillar of inquiry, I argue, relies upon a misconception of the nature of photos and the way we rely on them as evidence. We have not lost a pillar of truth that provides objective evidence that has up until recently been distinct from the fallible practice of relying on others, since photographs never served this role. New AI tools may still create problems, but if they do, they can still be overcome.

Can We Trust AI Chatbots?

While more and more people are using AI-powered chatbots like ChatGPT, that’s not to say that people are trusting their outputs. Despite ChatGPT being hailed as a potential replacement for Google and Wikipedia, and a bona fide disruptor of education, a recent survey found that when it comes to information about important issues like the 2024 U.S. election, its users overwhelmingly distrust it.

A familiar refrain in contemporary AI discourse is that while the programs that exist now have significant flaws, what’s most exciting about AI is its potential. However, for chatbots and other AI programs to play the roles in our lives that techno-optimists foresee, people will have to start trusting them. Is such a thing even possible?

Addressing this question requires thinking about what it means to trust in general, and whether it is possible to trust a machine or an AI in particular. There is one sense in which it certainly does seem possible, namely the sense in which “trustworthy” means something like “reliable”: many of the machines that we rely on are, indeed, reliable, and thus ones that we at least describe as things that we trust. If chatbots fix many of their current problems – such as their propensity to fabricate information – then perhaps users would be more likely to trust them.

However, when we talk about trust we are often talking about something more robust than mere reliability. Instead, we tend to think about the kind of relationship that we have with another person, usually someone we know pretty well. One kind of trusting relationship we have with others is based on having each other’s best interests in mind: in this sense, trust is an interpersonal relationship that exists because of familiarity, experience, and good intentions. Could we have this kind of relationship with artificial intelligence?

This perhaps depends on how artificial or intelligent we think some relevant AI is. Some are willing, even at this point, to ascribe many human or human-like characteristics to AI, including consciousness, intentionality, and understanding. There is reason to think, however, that these claims are hyperbolic. So let’s instead assume, for the sake of argument, that AI is, in fact, much closer to machine than human. Could we still trust it in a sense that goes beyond mere reliability?

One of the hallmarks of trust is that trusting leaves one open to the possibility of betrayal, where the object of our trust turns out to not have our interests in mind after all, or otherwise fails to live up to certain responsibilities. And we do often feel betrayed when machines let us down. For example, say I set my alarm clock so I can wake up early to get to the airport, but it doesn’t go off and I miss my flight. I may very well feel a sense of betrayal towards my alarm clock, and would likely never rely on it again.

However, if my sense of betrayal at my alarm clock is apt, it still does not indicate that I trust it in the sense of ascribing any kind of good will to it. Instead, we may have trusted it insofar as we have adopted what Thi Nguyen calls an “unquestioning attitude” towards it. In this sense, we trust the clock precisely because we have come to rely on it to the extent that we’ve stopped thinking about whether it’s reliable or not. Nguyen provides an illustrative example: a rock climber trusts their climbing equipment, not in the sense of thinking it has good intentions (since ropes and such are not the kinds of things that have intentions), but in the sense that they rely on it unquestioningly.

People may well one day incorporate chatbots into their lives to such a degree that they adopt unquestioning attitudes toward them. But our relationships with AI are, I think, fundamentally different from those that we have towards other machines.

Part of the reason why we form unquestioning attitudes towards pieces of technology is because they are predictable. When I trust my alarm clock to go off at the time I programmed it, I might trust in the sense that I can put it out of my mind as to whether it will do what it’s supposed to. But a reason I am able to put it out of my mind is because I have every reason to believe that it will do all and only that which I’ve told it to do. Other trusting relationships that we have towards technology work in the same way: most pieces of technology that we rely on, after all, are built to be predictable. Our sense of betrayal when technology breaks is based on it doing something surprising, namely when it does anything other than the thing that it has been programmed to do.

AI chatbots, on the other hand, are not predictable, since they can provide us with new and surprising information. In this sense, they are more akin to people: other people are unpredictable insofar as when we rely on them for information, we do not predictably know what they are going to say (otherwise we probably wouldn’t be trying to get information from them).

So it seems that we do not trust AI chatbots in the way that we trust other machines. Their inability to have positive intentions and form interpersonal relationships prevents them from being trusted in the way that we trust other people. Where does that leave us?

I think there might be one different kind of trust we could ascribe to AI chatbots. Instead of thinking about them as things that have good intentions, we might trust them precisely because they lack any intentions at all. For instance, if we find ourselves in an environment in which we think that others are consistently trying to mislead us, we might not look to someone or something that has our best interests in mind, but instead to that which simply lacks the intention to deceive us. In this sense, neutrality is the most trustworthy trait of all.

Generative AI may very well be seen as trustworthy in the sense of being a neutral voice among a sea of deceivers. Since it is not an individual agent with its own beliefs, agendas, or values, and has no good or ill intentions, someone who finds themselves in an environment they regard as untrustworthy may consider an AI chatbot a trustworthy alternative.

A recent study suggests that some people may trust chatbots in this way. It found that the strength of people’s beliefs in conspiracy theories dropped after having a conversation with an AI chatbot. While the authors of the study do not propose a single explanation as to why this happened, part of this explanation may lie in the user trusting the chatbot: since someone who believes in conspiracy theories is likely to also think that people are generally trying to mislead them, they may look to something that they perceive as neutral as being trustworthy.

While it may then be possible to trust an AI because of its perceived neutrality, it can only be as neutral as the content it draws from; no information comes from nowhere, despite appearances. So while it may be conceptually possible to trust AI, the question of whether one should do so at any point in the future remains open.

AI in Documentary Filmmaking: Blurring Reality in ‘What Jennifer Did’


Back in 2021, I wrote an article for The Prindle Post predicting the corrosive effect AI might have on documentary filmmaking. That piece centered around Roadrunner: A Film about Anthony Bourdain, in which an AI deepfake was used to read some of the celebrity chef’s emails posthumously. In that article, I raised three central concerns: (i) whether AI should be used to give voice and body to the dead, (ii) the potential for nefarious actors to use AI to deceive audiences, and (iii) whether AI could accurately communicate the facts of a situation or person.

Since that article’s publication, the danger AI poses to our ability to decipher fact from fiction in all facets of life has only grown, with increasing numbers of people able to produce ever more convincing fakery. And, while apprehensions about this are justifiably focused on the democratic process, with Time noting that “the world is experiencing its first AI elections without adequate protections,” the risk to our faith in documentary filmmaking remains. This is currently being discussed thanks to one of Netflix’s most recent releases — What Jennifer Did.

The documentary focuses on Jennifer Pan, a 24-year-old who, in 2015, was convicted of hiring hitmen to kill her parents (her father survived the attack, but her mother did not) because they disapproved of who she was dating. Pan is now serving a life sentence with the chance of parole after 25 years.

The story itself, as well as the interviews and people featured in it, is true. However, around 28 minutes into the documentary, some photographs which feature prominently on-screen raise doubt about the film’s fidelity to the truth. During a section where a school friend describes Jennifer’s personality — calling her “happy,” “bubbly,” and “outgoing” — we see some pictures of Jennifer smiling and giving the peace sign. These images illustrate how full of life Jennifer could be and draw a contrast between the happy teen and the murderous adult.

But these pictures have several hallmarks of being altered or just straight-up forgeries. Jennifer’s fingers are too long, and she doesn’t have the right number of them. She has misshapen facial features and an exceedingly long front tooth. There are weird shapes in the back- and foreground, and her shoulder appears out of joint (you can see the images in question on Futurism, where the story broke). As far as I’m aware, the documentary makers have not responded to requests for comment, but it does appear that, much like in Roadrunner, AI has been used to embellish and create primary sources for storytelling.

Now, this might not strike you as particularly important. After all, the story that What Jennifer Did tells is real. She did pay people to break into her parents’ house to kill them. So what does it matter if, in an attempt to make a more engaging piece of entertainment, a little bit of AI is used to create some still (and rather innocuous) images? It’s not like these images are of her handing over the money or doing things that she might never have done; she’s smiling for the camera in both, something we all do. But I think it does matter, and not simply because it’s a form of deception. It’s an example of AI’s escalating and increasingly transgressive application in documentaries, and particularly, as here, in documentaries where the interested parties are owed the truth of their lives being told.

In Roadrunner, AI is used to read Bourdain’s emails. This usage is deceptive, but the context in which it is done is not the most troubling that it could be. The chef sadly took his own life. But he was not murdered. He did not read the emails in question, but he did write them. And, while I suspect he would be furious that his voice had been replicated to read his writing, it is not like this recreation existed in isolation from other things he had written and said and did (but, to be clear, I still think it shouldn’t have been done).

In What Jennifer Did, however, we’re not talking about the recreation of a deceased person’s voice. Instead, we’re talking about fabricating images of a killer to portray a sense of humanity. The creative use of text, audio, and image shouldn’t, in itself, cause a massive backlash, as narrative and editing techniques always work towards this goal (indeed, no story is a totally faithful retelling of the facts). But, we must remember that the person to whom the documentary is trying to get us to relate – the person whom the images recreate and give a happy, bubbly, and outgoing demeanor – is someone who tried and, in one case, succeeded in killing her parents. Unlike in Roadrunner, What Jennifer Did uses AI not to give life to the lifeless but to give humanity to someone capable of the inhumane. And this difference matters.

Now, I’m not saying that Jennifer was or is some type of monster devoid of anything resembling humanity. People are capable of utter horrors. But by using AI to generate fake images at the point at which we’re supposed to identify with her, the filmmakers undermine the film’s integrity at a critical juncture. That’s when we’re supposed to think: “She looks like a normal person,” or even, “She looks like me.” But, if I can’t trust the film when it says she was just like any other teen, how can I trust it when it makes more extreme claims? And if a documentary can’t hold its viewer’s trust, with the most basic of things like “what you’re seeing is real,” what hope does it have in fulfilling its goal of education and informing? In short, how can we trust any of this if we can’t trust what we’re being shown?

This is what makes the use of AI in What Jennifer Did so egregious. It invites doubt into a circumstance where doubt cannot, and should not, be introduced. Jennifer’s actions had real victims. Let’s not mince our words; she’s a murderer. When AI is used to generate images — pictures of a younger version of her as a happy teen — we have reason to doubt the authenticity of everything in the documentary. Her victims deserve better than that. If Netflix is going to make documentaries about what are the worst, and in some cases the final, days of someone’s life, it owes those people the courtesy of the truth, even if it thinks it doesn’t owe it to the viewers.