
Control and the Dark Side of Technological Progress

Iran, with technological support from Chinese companies, has assembled a powerful system of digital censorship and surveillance over the past 15 years. That infrastructure was recently employed – using face recognition, internet blackouts, and AI – to brutally crush protests, resulting in at least 7,000 deaths. On the other side of the world, two senior AI researchers, Zoë Hitzig at OpenAI and Mrinank Sharma at Anthropic, resigned, citing concerns about the AI business model and AI safety, respectively. Underlying these seemingly dissimilar events is a shared worry about the dangers of technology and who controls it.

Our stories of innovation and technological progress tend to focus on the broader public. We will have access to new mind-bending entertainment sources, life-changing medical technologies, and a vast array of time-saving devices, or so the narrative goes. Yet the most important impact of a technology may lie not in how it is enjoyed by the everyday consumer, but in how it is wielded by powerful entities such as governments or large corporations – Iran has over 90 million people; OpenAI’s ChatGPT has over 700 million weekly users.

For many of us, especially in advanced economies, our lives are completely infused with technology. Communication with our friends, the news we read, our access to government services, the tools on which we work, recommendations for doctors and restaurants, our political engagement and activism, are all facilitated by either government- or corporate-controlled digital infrastructure. Often we are exchanging our personal details — birth date, favorite websites, anxieties, etc. — for access. Off our computers, we can be monitored by our phone’s GPS, or watched by our Ring cameras.

Increasingly, the tendency has been towards centralization and top-down control. The largest technology companies, such as Alphabet (Google), Microsoft, and Apple, have all embraced a platform approach, where they provide digital real estate and tools, which can then be “rented” by others. Likewise, the companies behind major large language models, such as ChatGPT, charge users and product developers for access to their models. This has led to a digital landscape with very few owners and many borrowers. Even most e-books are simply licensed, rather than owned the way a paper copy is.

At the same time, countries are increasingly asserting digital sovereignty and their right to control digital infrastructure within their territorial domains. China’s Great Firewall is the most famous example, but nations such as Russia and Iran have also developed sophisticated ways to block and shut off internet access. Even the EU has come to embrace digital sovereignty, although its current concern is minimizing dependence on US tech companies.

This wraparound technological infrastructure – and the data it harvests – represents a great deal of potential control over our lives. This has its advantages. Powerful actors can secure data, fight cybercrime, and provide valuable tools and products. Digital surveillance can be used to fight terrorism. Advertising and data collection allow companies to provide their services at discounted rates.

However, these same powers greatly amplify a tendency already present in 20th-century politics: the ability of governments and corporations to translate power and knowledge into impact and influence. Their capacity to track, monitor, and influence is historically unrivaled.

Given this reality, it is valuable to consider what protects us from the undue exercise of power.

At the most extreme is the nonexistence of that power. One way to prevent large corporations from wielding such awesome power, for example, is to simply break them up. Similarly, a weakened government can be limited in its capacity to oppress (at the cost of being limited in its capacity to help).

Less extreme are various restraints or counterweights to the exercise of power. For corporations, this includes regulations, supervisory bodies, robust consumer and worker protection laws, and competitive alternatives. For governments, this includes free and fair elections, an independent judiciary, and the separation of powers. A well-functioning government that is responsive to the interests of the people is, of course, better positioned to impose meaningful regulation on corporations than a government that is weak, corrupt, or malfeasant.

Finally, there is mere discretion. Here it is simply a matter of internal restraint whether corporations or governments exercise certain powers. As governments and, in a sense, corporations build up their data-gathering and surveillance architectures, we increasingly rely on trust to maintain data integrity and prevent abuse. This is especially the case for countries like the US with relatively lean regulations, consumer protections, and workers’ rights. On the topic of AI, the US administration asserted in a December executive order that “AI companies must be free to innovate without cumbersome regulation.” Given the known role such technology can play in deepfakes, data gathering, face recognition, and even cybercrime, this puts a lot of trust in these companies.

Some philosophers emphasize what is called non-domination or republican freedom. The key feature here is that the arbitrary exercise of power is not possible (or is, at least, prohibited), rather than merely being voluntarily withheld. They emphasize that a slave with a permissive master is still not free.

By the same token, domination represents a particular risk for a world with extraordinarily powerful governmental and corporate actors. We need not just worry about what they do, but what they could do. Good governance may help take the edge off, but can it eliminate the risk entirely? Not every country is blessed with good governance.

We will have to think deeply if we want a world that contains such powerful actors yet prevents potential abuses. Do the potential benefits they can provide through incredible resources and economies of scale outweigh the risk that they will abuse their power? Is it too late to go back?

The accumulation of digital power and the weaponization of technology raise a more general point about the complexity of technological progress. Technological improvement and societal improvement need not walk in lockstep. Certainly, some innovations and new technologies are nearly uncontroversial good things: antibiotics, seatbelts, sanitation, braille.

Still, technological growth is not without its costs and risks. We cannot always see the full effect of new products and innovations – there are always unforeseen dangers and unanticipated applications. It’s also good to remember that the effects of technologies may not refract across a society evenly. AI-fueled innovations that are good for landlords are not necessarily good for renters; those good for companies are not necessarily good for their workers. Technology can exacerbate existing power differentials in society. Nor can we see the combined effect of many different technologies and the often disorienting changes they can bring to a society. How will, for example, large language model chatbots like ChatGPT impact how we learn, think, and socialize? It is worth considering what we lose, not just what we gain, in the pursuit of progress.

AI and the Water Wasting Machine

An argument that’s frequently made against the use of AI (specifically, popular chatbots based on LLMs, like ChatGPT, Gemini, Copilot, etc.) is that AI is harmful to the environment. The hardware required to create, train, and run chatbots consumes a significant amount of energy, which in turn requires the use of natural resources, specifically water. Many articles have been written recently that focus on AI’s water usage, with some saying that AI is “accelerating the loss of our scarcest natural resource,” “draining water from areas that need it most,” and that by 2030 it will “match the annual drinking water needs of the United States.”

While the fact that AI needs some amount of water to operate isn’t disputed, there is much more debate around how much it uses. Some AI defenders, for instance, have argued that the amount of water AI uses is either negligible or at least comparatively negligible given the amount of water that other things use and that we are seemingly okay with. For instance, Sam Altman postulated that a single ChatGPT query uses “roughly one fifteenth of a teaspoon” of water, while others have written that the AI industry as a whole in the US uses about as much water as its golf courses.

Indeed, “you’ve been thinking about AI’s water use wrong” content is a plentiful resource in 2026. For example, an article from Wired, fittingly titled “You’re Thinking About AI and Water All Wrong,” notes that many early estimates of AI water use are likely off by a not-insignificant amount. And a popular Substack article, less subtly titled “The AI water issue is fake,” lists the following items alongside the approximate amount of water they take to produce when compared to a prompt for a chatbot:

Leather Shoes – 4,000,000 prompts’ worth of water
Smartphone – 6,400,000 prompts
Jeans – 5,400,000 prompts
T-shirt – 1,300,000 prompts
A single piece of paper – 2,550 prompts
A 400-page book – 1,000,000 prompts

These kinds of comparisons are intended not only to challenge the narrative that AI uses a lot of water, but also to defuse the moral argument against AI use on the basis of water consumption. As the Substack author notes: “If you want to send 2500 ChatGPT prompts and feel bad about it, you can simply not buy a single additional piece of paper. If you want to save a lifetime supply’s worth of chatbot prompts, just don’t buy a single additional pair of jeans.” In other words, we risk being moral hypocrites if we criticize AI for its water usage while still reading books and wearing comfortable clothes.
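
To get a rough sense of the volumes these comparisons imply, here is a minimal back-of-the-envelope sketch. It assumes, purely for illustration, Altman’s figure of roughly one fifteenth of a teaspoon (about 0.33 millilitres) of water per prompt; the Substack article may rely on a different per-prompt estimate, so the litre figures are indicative only.

```python
# Back-of-the-envelope conversion, not a figure from either cited article.
# Assumes ~0.33 mL of water per prompt (Altman's "one fifteenth of a teaspoon").
ML_PER_PROMPT = 0.33  # assumed millilitres of water per chatbot prompt

items_in_prompts = {
    "Leather shoes": 4_000_000,
    "Smartphone": 6_400_000,
    "Jeans": 5_400_000,
    "T-shirt": 1_300_000,
    "Single sheet of paper": 2_550,
    "400-page book": 1_000_000,
}

for item, prompts in items_in_prompts.items():
    litres = prompts * ML_PER_PROMPT / 1000  # millilitres to litres
    print(f"{item}: roughly {litres:,.1f} litres of water")
```

On that assumption, a pair of leather shoes works out to roughly 1,300 litres of water; a different per-prompt figure would scale every line proportionally without changing the structure of the comparison.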

I am not going to question the exact numbers concerning AI water use. Instead, I want to consider the merits of the “you’ve been thinking about AI’s water use wrong” arguments, specifically whether we really are moral hypocrites for criticizing AI on the basis of its water use. If we are okay with playing golf and using phones and doing all sorts of other things that use water, must we refrain from criticizing AI on the same basis? Are the people who think that we shouldn’t use AI because of its water use really thinking about it “all wrong,” or chasing a “fake” issue?

I think that the answer is no: we can still criticize AI on the basis of its water consumption. To illustrate, consider the following thought experiment:

The Water Waster: A company announces a new product: the Water Waster 3000. Here’s how it works: it uses [one/one hundred/one thousand] gallons of potable water every day to spin a wheel. The wheel is not connected to anything, and its only tangible benefit is that some people enjoy watching the wheel spin.

We can write the thought experiment so that the Water Waster 3000 uses different amounts of water. However, no matter which number we choose, we would likely reach the same conclusion: that operating the Water Waster 3000 is a waste of water. Someone who really enjoyed watching wheels spin might argue that, in comparison to other forms of entertainment, a single spin of the wheel uses only a fraction of the amount of water required by other industries. But this argument doesn’t hold much water (so to speak): it doesn’t matter that other things use much more water; what matters is that the Water Waster 3000 wastes it.

It is, of course, unfair to say that the Water Waster 3000 is the exact same thing as an AI chatbot. However, one might argue that AI chatbots and much of the AI industry are akin to the Water Waster 3000 in the sense that the AI industry does not, by and large, produce anything of significant enough value, or at least not enough value to warrant its environmental impact. As argued in the aforementioned Wired article: “People who don’t think twice about eating a burger or buying a new T-shirt are angry about LLMs and water because they are rejecting the entire premise that AI is worth the price of its water use.”

If we approach the argument this way, then we can also avoid the charge of hypocrisy. After all, people need t-shirts and shoes: those things require water to produce, but they’re worth it. Sure, producing clothing has a negative impact on the environment, but since the trade-off is acceptable, it’s not a waste of water, just a use of it.

But does this argument free us from the charge of hypocrisy? After all, we do all sorts of things that arguably do waste water, but still do not subject them to as much scrutiny. For instance, while people certainly need shoes and t-shirts and such, we definitely don’t need as many as people tend to acquire (for example, the environmental damage of so-called “fast fashion” – mass-produced, cheap, and disposable clothing – is well-documented). That AI enthusiasts often compare the water use of the AI industry to golf courses is thus particularly apt, not only because of the alleged similarities in the amount of water they use, but also because golf doesn’t serve any practical need. Golf is frivolous (arguably), so if we’re happy as a society to accept using a lot of water so that some people can more easily hit a small ball into a small hole that’s really far away, then arguably we should be okay using a similar amount of water for chatbots and other potentially much more useful things.

Of course, the claim that people are hypocritical when it comes to criticizing AI for its environmental impact does not negate the fact that AI use still has an impact on the environment. Using AI can then still be bad because of its water usage, just as it is (arguably) bad that we use a lot of water on golf courses. After all, the claim of moral hypocrisy is not a claim about the rightness or wrongness of an act, but instead about whether one has the grounds to criticize others because of their own acts.

So, perhaps AI is bad because it uses water, but then so is everything else because everything uses water. Isn’t it unfair, then, to single out AI?

Well, not really. We are, after all, in the unenviable position of living in a system where hardly any of our acts as consumers or users of technology are environmentally neutral. If this is enough to undermine our moral authority to criticize, then we could not criticize any acts that have a negative environmental impact, unless those acts were disproportionately more extreme than what we already do. But it’s not clear that this is a fair standard to hold people to. Remember our Water Waster 3000: we seem perfectly within our rights to criticize the existence of such a machine, even though we use other, more useful machines that also use water.

The Wired analysis is then perhaps not entirely fair to AI critics: it is not necessarily that people who choose to eat burgers or buy new t-shirts don’t give any thought to the environmental impact of their actions – they might – nor that they think that burgers and t-shirts are worth the environmental costs – they might think they ultimately aren’t. But you can still criticize someone for leaving their tap running even if you happen to have bought a new pair of jeans recently. If AI is a waste of water, then we are not hypocrites for criticizing it.

Of course, the AI enthusiast would likely reject the idea that AI use really is akin to leaving a tap running or something like the Water Waster 3000, or indeed is a waste of water at all. While it is hard to defend the value of every individual use of an AI chatbot, we might still think that, overall, AI should be conceived of as a use of water that produces something of value, rather than a waste. Maybe what we need to calculate is something like “value-per-milliliter of water used,” where values above a certain level would qualify as being “worth it” while those below are “not worth it.” Where the critic and the enthusiast might disagree, then, is whether AI falls above or below that line.
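
As a toy illustration of that idea – the function name, the value estimate, and the threshold are all stipulated placeholders, not figures drawn from any of the articles discussed – the proposed calculation might be sketched like this:

```python
# A toy formalization of the "value-per-millilitre" idea; every input is a
# stipulated placeholder, and the disagreement is precisely over these inputs.
def worth_it(estimated_value: float, water_ml: float, threshold: float) -> bool:
    """True if the estimated value produced per millilitre of water clears the threshold."""
    return estimated_value / water_ml >= threshold
```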

The ongoing disputes around the amount of water AI uses make this calculation practically impossible. But the charge of hypocrisy assumes that we have an answer to this question, namely that AI is, in fact, worth the water it uses. If we reject this premise, or even just call it into question, then we do not lose our moral ground to hypocrisy.

So, where does this leave us? We’ve seen that simply pointing out that AI uses as much water as, or less than, other things we tend to find acceptable – whether at the level of queries, companies, or the industry as a whole – is not enough to defuse the environmental argument against AI use. Nor does the fact that we use water for some things automatically undermine our ability to criticize AI use on the same basis: we do not lose our moral standing to criticize AI for its environmental impact if we reject the idea that AI is worth its water use. While there will undoubtedly be many more arguments about how much water AI uses, it is still criticizable on that basis.

Is It Okay to Be Mean to AI?

Fast-food chain Taco Bell recently replaced drive-through workers with AI chatbots at a select number of locations across America. The outcome was perhaps predictable: numerous videos went viral on social media showing customers becoming infuriated with the AI’s mistakes. People also started to see what they could get away with, including one instance where a customer ordered 18,000 waters, temporarily crashing the system.

As AI programs start to occupy more mundane areas of our lives, more and more people are getting mad at them, are being mean to them, or are just trying to mess with them. This behavior has apparently become so pervasive that AI company Anthropic announced that its chatbot Claude would now end conversations when they were deemed “abusive.” Never one to shy away from offering his opinion, Elon Musk took to Twitter to express his concerns, remarking that “torturing AI is not okay.”

Using terms like “abuse” and “torture” already risks anthropomorphizing AI, so let’s ask a simpler question: is it okay to be mean to AI?

We asked a similar question at the Prindle Post a few years ago, when chatbots had only recently become mainstream. That article argued that we should not be cruel to AIs, since by acting cruelly towards one thing we might get into the habit of acting cruelly towards other things, as well. However, chatbots and our relationships with them have changed in the years since their introduction. Is it still the case that we shouldn’t be mean to them? I think the answer has become a bit more complicated.

There is certainly still an argument to be made that, as a rule, we should avoid acting cruelly whenever possible, even if it is towards inanimate objects. Recent developments in AI have, however, raised a potentially different question regarding the treatment of chatbots: whether they can be harmed. The statements from Anthropic and Musk seem to imply that they can, or at least that there is a chance that they can be, and thus that you shouldn’t be cruel to chatbots because doing so at least risks causing harm to the chatbot itself.

In other words, we might think that we shouldn’t be mean to chatbots because they have moral status: they are the kinds of things that can be morally harmed, benefitted, and evaluated as good or bad. There are lots of things that have moral status – people and other complex animals are usually the things we think of first, but we might also think about simpler animals, plants, and maybe even nature. There are also lots of things that we don’t typically think have moral status: inanimate objects, machines, single-cell organisms, things like that.

So how can we determine whether something has moral status? Here’s one approach: whether something has moral status depends on certain properties that it has. For example, we might think that the reason people have moral status is because they have consciousness, or perhaps because they have brains and a nervous system, or some other property. These aren’t the only properties we can choose. For example, 18th-century philosopher Jeremy Bentham argued that animals should be afforded many more rights than they were at the time, not because they have consciousness or the ability to reason, per se, but simply because they are capable of suffering.

What about AI chatbots, then? Despite ongoing hype, there still is no good reason to believe any chatbot is capable of reasoning in the way that people are, nor is there any good reason to believe that they possess “consciousness” or are capable of suffering in any sense. So if it can’t reason, isn’t conscious, and can’t suffer, should we definitively rule out chatbots from having moral status?

There is potentially another way of thinking about moral status: instead of thinking about the properties of the thing itself, we should think about our relationship with it. Philosopher of technology Mark Coeckelbergh considers cases where people have become attached to robot companions, arguing that, for example, “if an elderly person is already very attached to her Paro robot and regards it as a pet or baby, then what needs to be discussed is that relation, rather than the ‘moral standing’ of the robot.” According to this view, it’s not important whether a robot, AI, or really anything else has consciousness or can feel pain when thinking about moral status. Instead, what’s important when considering how we should treat something is our experiences with and relationship to it.

You may have had a similar experience: we can become attached to objects and feel that they deserve consideration that other objects do not. We might also ascribe more moral status to some things rather than others, depending on our relationship with them. For example, someone who eats meat can recognize that their pet dog or cat is comparable in terms of relevant properties to a pig, insofar as they are all capable of suffering, have brains and complex nervous systems, etc. Yet, although they have no problem eating a pig, they would likely be horrified if someone suggested they eat their pet. In this case, they might ascribe some moral status to a pig, but would ascribe much more moral status to their pet because of their relationship with it.

Indeed, we have also seen cases where people have become very attached to their chatbots, in some cases forming relationships with them or even attempting to marry them. In such cases, we might think that there is a meaningful moral relationship, regardless of any properties the chatbot has. If we were to ascribe a chatbot moral status because of our relationship with it, though, its being a chatbot is incidental: it would be a thing that we are attached to and consider important, but that doesn’t mean that it thereby has any of the important properties we typically associate with having moral status. Nor would our relationship be generalizable: just because one person has an emotional attachment to a chatbot does not mean that all relationships with chatbots are morally significant.

However, not all of our experiences with AI have been positive. As AI chatbots and other programs occupy a larger part of our lives, they can make our lives more frustrating and difficult, and thus we might establish relationships with them that do not hold them up as objects of our affection or care, but as obstacles and even detriments to our wellbeing. Are there cases, then, where a chatbot might not be deserving of our care, but rather our condemnation?

For example, we have all likely been in a situation where we had to deal with frustrating technology. Maybe it was an outdated piece of software you were forced to use, or an appliance that never worked as it was supposed to, or a printer that constantly jammed for seemingly no good reason. None of these things have the properties that make them a legitimate subject of moral evaluation: they don’t know what they’re doing, have no intentions to upset anyone, and have none of the obligations that we would expect from a person. Nevertheless, it is the relationship we’ve established with them that seems to make them an appropriate target of our ire. Yelling profanities at the office printer after its umpteenth failure at completing a simple printing task is not only cathartic; it is justified.

When an AI chatbot takes the place of a person and fails to work properly, it is no surprise that we would start to have negative experiences with it. While failing to properly take a Taco Bell order is, all things considered, not a significant indignity, it is symptomatic of a larger trend of problems that AI has been creating, ranging from environmental impact, to job displacement, to overreliance resulting in cognitive debt, to simply creating more work for us than before it existed. Perhaps, then, ordering 18,000 waters in an attempt to crash an unwelcome AI system is not so much cruel as it is a righteous expression of indignation.

The dominant narrative around AI – perpetuated by tech companies – is that it will bring untold benefits that will make our lives easier, and that it will one day be intelligent in the way human beings are. If these things were true, then it would be easier to be concerned with the so-called “abuse” of AI. However, given that AI programs do not have the properties required for moral status, and that our relationships with them are frequently ones of frustration, perhaps being mean to an AI isn’t such a big deal after all.

Robot Kitchens, AI Cooks, and the Meaning of Food

I knew that I was very probably not going to die, of course. Very few people get ill from pufferfish in restaurants. But I still felt giddy as I took my first bite, as though I could taste the proximity of death in that chewy, translucent flesh. I swilled my sake, squeezed some lemon onto the rest of my sashimi, and looked up. Through the serving window I could see the chef who held my life in his busy hands. We made eye contact for a moment. I took another bite. This is absurd. I am absurd. I pictured the people I love, across the ocean in sleeping California, stirring gently in their warm, musky beds.

My experience in Tokyo eating pufferfish, a delicacy known as fugu, was rich and profound. Fugu has an unremarkable taste. But pufferfish is poisonous; it can be lethal unless it is prepared in just the right way by a highly trained chef. My experience was inflected with my knowledge of the food’s provenance and properties: that this flesh in my mouth was swimming in a tank a few minutes ago and was extracted from its lethal encasement by a man who has dedicated his life to this delicate task. Seconds ago, it was twitching on my plate. And now it might bring me a lonely death in an unfamiliar land. This knowledge produced a cascade of emotions and associations as I ate, prompting reflections on my life and the things I care about.

Fugu is an unfamiliar illustration of the familiar fact that our eating experiences are often constituted by more than physical sensations and a drive for sustenance. Attitudes relating to the origin or context of our food (such as a belief that this food might kill me, or that this food was made with a caring hand) often affect our eating experiences. There is much more to food, as a site of human experience and culture, than sensory and nutritional properties.

You would be hard pressed to find someone who denies this. Yet we are on the cusp of societal changes in food production that could systematically alter our relationship to food and, consequently, our eating experiences. These changes are part of broader trends apparent across nearly all spheres of life resulting from advances in artificial intelligence and other automation technologies. Just as an AI system can now drive your taxi, process your loan application, and write your emails, so AI and related automation tools can now make your food, at home or in a restaurant. Many technologists in Silicon Valley are trying to make automated food production ubiquitous. The CEO of one successful company I spoke with said he expects that almost no human beings will be cooking in thirty years’ time, much as very few people today make soap, toys, or clothing by hand. It may sound ridiculous, but I’ve found that this vision is common in influential industry spaces.

What might life look like if this technological vision were to come about? This question can appear trivial relative to louder questions about autonomous weapons systems, AI medicine, or the existential threat of a superintelligence. It is not a question of life and death. But I think the question points to a more insidious possibility: that our technological advances might quietly erode the conditions that enable us to experience our day-to-day lives as meaningful.

On the one hand, the struggle for sustenance is a universal feature of human life, and everyone is a potential beneficiary of technology that streamlines food production, like AI that invents recipes or performs kitchen managerial work and robots that prepare food. Home cooking robots could save people time and effort that would be better spent elsewhere. A restaurant that staffs fewer humans could save on labor costs and pass these savings on to customers. Robots could mitigate human errors relating to hygiene or allergies. And then there is the possibility of automated systems that can personalize food to each consumer’s specific tastes and dietary requirements. Virtually every technologist I have spoken to in this industry is excited about a future where every diner can receive a bespoke meal that leaves them totally satisfied and healthy, every time.

Automation brings interesting aesthetic possibilities, too. AI can augment human creativity by helping pioneer unusual flavor pairings. The knowledge that your food was created by a sexy robot could enhance your eating experience, especially if the alternative would be a miserable and underpaid laborer.

These are nice possibilities. But one thing that automation tends to do is create distance between humans and the things that are automated. Our food systems already limit our contact with the sources of our food. For example, factory farming hides the processes through which meat is produced, concealing moral problems and detracting from pleasures of eating that are rooted in participation in food production. AI and robotics could create even more distance between us and our food. Think of the Star Trek replicator as an extreme case; the diner calls for food, and it simply appears via a wholly automated process.

Why is the prospect of losing touch with food processes concerning? For some it might not be. There are many sources of value in the world, and there is no one right way to relate to food. But, personally, I find the prospect of losing touch with food concerning because my most memorable food experiences have all been conditioned by my contact with the processes through which my food came to be.

I have a sybaritic streak. I enjoy being regaled at fancy restaurants with diseased goose livers, spherified tonics, perfectly plated tongues, and other edible exotica. But these experiences tend to pass for me like a kaleidoscopic dream, filled with rarefied sensations that can’t be recalled upon waking. The eating experiences I cherish most are those in which my food is thickly connected to other things that I care about, like relationships, ideas, and questions that matter to me. These evocative connections are established through contact with the process through which my food was made.

I’ve already mentioned one example, but I can think of many others. Like when, in the colicky confusion of graduate school, Sam and I slaughtered and consumed a chicken in the living room of his condo so that we might, as men of principle, become better acquainted with the hidden costs of our food. Or when I ordered tripas tacos for Stephen, my houseguest in Santa Barbara, which he thoroughly enjoyed until, three tacos in, he asked me what ‘tripas’ meant. Or when I made that terrible tuna-fish casserole filled with glorious portions of shredded cheese and Goldfish crackers for Amy, Jacob, and Allison so that they might become sensuously acquainted with a piece of my childhood. Or when Catelynn and I sat in that tiny four-seat kitchen overlooking the glittering ocean in Big Sur and were served sushi, omakase style, directly from the chef’s greasy, gentle hands, defining a shared moment of multisensory beauty.

These experiences fit into the fabric of my life in unique and highly meaningful ways. These experiences are mine, but you probably have some like them. The thing to notice is that these sorts of experiences would be inaccessible without contact with the provenance of food. They would not be possible in a world where all food was produced by a Star Trek replicator. This suggests that food automation threatens to erode an important source of human meaning.

Really, there are all sorts of concerns you might have about AI and robotics in the culinary sphere. Many of these have been identified by my colleague Patrick Lin. But for me, the erosion of meaning is worth emphasizing in discussions about technology because this kind of cost resists quantification, making it easy to overlook. It’s the sort of thing that might not show up in the cost-benefit analysis of a tech CEO who speaks glibly about eliminating human cooking.

The point I’m making is not that we should reject automation. The point is that as we augment and replace human labor in restaurants, home kitchens, and other spheres of life, we need to be attentive to how the processes we hope to automate away may enrich our lives. An increase in efficiency according to quantifiable criteria (time, money, waste) can diminish squishier but no less important things. Sometimes this provides a reason to insist on an alternative vision in which humans remain in contact with the processes in question. I would argue this is true in the kitchen; humans should retain robust roles in the processes through which our food comes to be.

After my meal in Tokyo, I used my phone to find an elevated walkway on which to smoke. I took a drag on a cigarette and watched a group of men under an overpass producing music, in the old way, by a faint neon light. I could feel the fugu in my belly, and my thoughts flashed to my loves and hopes. One of the men playing a guitar looked up. We made eye contact for a moment. I took another drag. This is nice. I am happy.

 

Note: This material is based upon work supported by the National Science Foundation under Award No. 2220888.  Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

Can an AI Be Your Friend?

According to a July report by the World Health Organization, one in six people is experiencing loneliness and social isolation – a condition with serious public health consequences ranging from anxiety to chronic illness. This builds on enduring concerns about a “loneliness epidemic,” especially among young men in developed economies. Although some take issue with “epidemic” language, arguing it misframes longstanding loneliness concerns as new and spreading ones, the threat is real and persistent.

Meanwhile, large language model chatbots such as ChatGPT, Claude, and Gemini, as well as AI companions such as Replika and Nomi, have emerged as sources of digital support and friendship. Many teens report social interactions with AI companions, although only 9% explicitly consider them friends. But the numbers may grow: 83% of Gen Z believe they can form emotional ties with AI companions, according to an admittedly self-interested report by the Girlfriend.ai platform.

Should AI chatbots be part of the solution to the loneliness epidemic?

Of course, AI as a tool can be part of the solution. One can ask ChatGPT about social events in their city, for help crafting a text asking someone out, or for hobby suggestions based on their interests. This is using AI as a writing aid and search tool. But the ethical issue I’m concerned with is whether an AI friend or companion should be part of the solution.

One place to start is with what we want friendship to be. In the Nicomachean Ethics, Aristotle designates three kinds of friendship: utility, pleasure, and virtue. With utility, a friend provides something useful but does not care about your well-being. A friendship of pleasure involves mutual activities or enjoyment. Finally, a friendship of virtue involves genuine mutual care for well-being and each other’s growth. Aristotle considered friendship of virtue to be true friendship.

An AI chatbot can provide utility, and one may derive pleasure from interacting with a chatbot or AI companion, so it can provide some of the functions of friendship, but current AI chatbots cannot genuinely care about someone’s well-being. At least from an Aristotelian perspective, then, AI cannot be a true friend.

This does not rule out the value of AI companionship. Humans often have asymmetric relationships that nonetheless provide great satisfaction, for example relationships with pets or parasocial relationships with celebrities. (Granted, many would allege that at least some pets like dogs and cats can care about others’ well-being even if they cannot help one grow as a person.) The human tendency to anthropomorphize has led to a long legacy of relationships with completely mindless entities, from pet rocks to digital pets like Tamagotchi. And then, of course, there are imaginary friends.

But none of those are seriously proposed as solutions to loneliness. Plausibly, a surge of emotional support through pet rocks, imaginary friends, or, more realistically, dogs, is more a symptom of loneliness than an actual solution.

Moreover, there seems to be something distinct about chatbots. A dog may provide some of the intimacy of human friendship, but the dog will never pretend to be a human. By contrast, chatbots and AI companions are designed to act like human friends. Or, well, not quite human friends — there’s a key difference.

AI companions are programmed to “listen” attentively, respond generously, and support and affirm the beliefs of those communicating with them. This provides a particularly cotton candy-esque imitation of friendship, based on agreement and validation. AI sycophancy, it is sometimes called. Undoubtedly, this feels good. But does it do us good?

This August, police reported one of the first cases of an AI chatbot potentially leading to murder. ChatGPT usage continually reinforced 56-year-old Stein-Erik Soelberg’s paranoia about his mother drugging him. Ultimately, he killed her and then himself.

The parents of 16-year-old Adam Raine similarly allege that ChatGPT contributed to his suicide, and are now suing OpenAI, the company behind ChatGPT.

While these are extreme examples, in both cases the endless affirmations of ChatGPT emerge as a concern. Increasingly, psychologists are seeing “AI psychosis,” where the incredibly human-like, flattering, and supportive nature of chatbots can suck people further into delusion. By contrast, a virtuous friend (on Aristotle’s account) is interested in your well-being, but not necessarily in people-pleasing. They can tell you to snap out of a negative spiral or that you are the problem.

Can better programming fix this? On August 26th, OpenAI published a blog post, “Helping people when they need it most,” discussing some of the safeguards built into ChatGPT and where the company is still trying to improve. These include avoiding providing guidance on self-harm and working with physicians and psychologists on mental health protections.

However, programming can only solve technical problems. No amount of safety tweaks will make a large language model care about someone’s well-being; it can merely help it better pretend.

Ultimately, AI companies and the virtuous friend have very different aims and motivations. At some level, the purpose of an AI company is to turn a profit. What the precise business model(s) will be has yet to emerge, as currently most AI is still burning through investors’ money. But whatever strategy eventually arises – whether nudging customers towards buying certain products or maximizing engagement and subscription fees – it will be distinct from the sincere regard of Aristotelian friendship. Worse, to the extent that AI chatbots and companions can alleviate loneliness, they rely on this loneliness in the first place to generate demand for the product.

AI companions may be able to fulfill some of the functions of friendship – offering a steady hand or a kind word. However, they fundamentally cannot deliver the mutual caring that we expect from the truest form of friendship. Advances in replicating depth and sincerity will no doubt be made, but what will remain constant is the lack of genuine empathy. Instead of a cure for our loneliness and isolation, the turn to large language models may simply mark the next stage of the disease.

AI’s Videogamification of Art

Criticisms of AI-generated art are now familiar, ranging from the unauthorized use of artists’ works to train models, to the aping of art styles that borders on plagiarism, to claims that AI enthusiasts fail to understand that creating art requires intention and purpose, something antithetical to production by an automated program. While these criticisms remain as important as ever, AI programs continue to evolve, and with new capabilities come new issues.

Case in point: Google’s new Genie 3, which allows users to create “interactive, playable environments” on the basis of prompts and images. To demonstrate the technology, a Google researcher showed how one could “walk around” famous paintings, such as The Death of Socrates by Jacques-Louis David and Nighthawks by Edward Hopper. The AI program generates a 3D world, allowing users to see characters and objects in the original painting from different angles, essentially creating a kind of videogame, albeit one in which there’s not much to do (at least for now).

I think there is good reason to be critical of AI that makes rudimentary videogames out of works of art. Here I’ll consider three such criticisms. The first two can be commonly found in the comments on social media: first, that using AI to digitally manipulate art is disrespectful to the artist or artwork itself; and second, that choosing to interact with the videogamified artwork represents a failure of imagination on the part of the user. I’ll also consider a new version of an old criticism from the philosophy of art: that AI-generated creations lack the original artwork’s aura.

There is a sense in which manipulating art in this way isn’t new. After all, so-called “immersive experiences” have been popular for a while now, such as Immersive Van Gogh, where visitors can walk among projections of some of Van Gogh’s most recognizable artworks. These experiences are sometimes criticized as tacky tourist traps, but few would consider them egregious crimes against art. It’s also long been accepted by all but the stodgiest scholars that videogames are capable of being aesthetically valuable, so it’s not as though we should think that only oil paintings in ornate frames hanging in galleries are worthy of our aesthetic appreciation.

So what’s wrong with using AI to create a virtual world out of a painting? First off, we might worry that using these programs disrespects the original artist, who likely did not intend their work to be a virtual environment to be walked around in. Part of the problem is that AI programs struggle to generate coherent environments, producing artifacts and noise that detract from the original composition of the work of art. In the world created out of Hopper’s Nighthawks, for example, AI-generated faces and words became garbled messes, with the end product feeling akin to vandalism.

This first criticism is an aesthetic one: AI programs that videogamify art ruin the artist’s vision, taking something beautiful and making it grotesque. We might also be tempted to criticize the person who chooses to engage with an artwork via its AI-generated videogame form. Commenters on social media are particularly liable to sling this kind of mud, accusing AI art fans of exhibiting a wide range of personal failings. While social media tends not to feature the most careful debates, one criticism that is worth singling out is that engaging with AI-manipulated versions of artworks represents a failure of imagination.

Why think this? Part of what’s involved in appreciating an artwork is to engage with it on its own terms, which requires interpreting what the artist has put in front of you and what they have left out. We might argue that getting an AI program to fill in the blanks by creating a navigable 3D environment is like taking a shortcut, where you are getting a program to do the work required to appreciate a work of art for you.

We’ve seen this kind of criticism when it comes to people using chatbots to write for them: writing is meaningful when it is intentional and effortful, and it loses that meaning when we offload our cognitive functions to programs. In the same way, using an AI program to generate a world out of a painting offloads your imagination and prevents you from being able to meaningfully appreciate a work of art.

So, the first criticism of AI videogamified art pertains to how a person treats an artist or artwork, and the second is a criticism of the person who uses such programs. The last argument I’ll consider is a bit different: that turning an artwork into a 3D virtual environment provides a subpar aesthetic experience because it fails to capture the original artwork’s aura.

This argument (or at least a form of it) comes from the philosopher Walter Benjamin, who wrote on art and aesthetics in the first half of the 20th century. Benjamin was concerned with a practice that was becoming more and more frequent at the time: that artworks were being reproduced, sometimes on a massive scale. An original painting, Benjamin argued, is unique, and when experienced in a certain place and time, has a presence about it, or what he calls an “aura.” It is a concept perhaps better experienced than described: there is some feeling that you get when encountering an artwork in a gallery as opposed to seeing a picture of it online, or as a postcard in a gift shop.

Benjamin’s worry was that copies of artworks fail to capture something that can only be possessed by the original. He did not, of course, have a conception of modern AI tools, or virtual 3D environments, or videogames. But Benjamin’s complaint still feels apt when experiencing new AI creations today: you’re no longer interacting with the original, but instead something that has been manipulated, and in doing so you fail to have the same kind of aesthetic experience. This criticism is not the charge that you’re necessarily lacking in imagination by engaging with the AI-generated version of a painting instead of the original; it’s just that it’s a shame that you’re missing out on having a more meaningful aesthetic experience.

How serious these criticisms are is up for debate, and many online have argued that new ways for AI programs to create and manipulate artworks amount to little more than cool new technology. Regardless, something of value does seem to be lost when interacting with the videogamified version of artworks instead of engaging with them on their own terms. When it comes to having a meaningful aesthetic experience, AI continues to feel like little more than a novelty.

Why My Students Shouldn’t Use AI Either

Every semester since ChatGPT arrived on the public stage, I have spent considerable time thinking about how to handle AI use with my students, and I have changed my answer each semester. This year, for the first time, I am going to ask that my students unequivocally avoid using it for any reason. Fortunately, I am not alone in this approach. Fellow Prindle author Daniel Burkett has offered three moral reasons why students should not use AI: it harms creators, the environment, and the students themselves. I would like to offer a few more reasons (though not all explicitly moral) to consider.

Argument 4: AI Erodes Responsibility

As AI systems infiltrate our human decision-making processes and social order more deeply, they are contributing to the erosion of accountability. To be sure, many AI evangelists who tout the benefits of AI will be quick to point out that it is on the human user to verify the legitimacy of AI outputs and use them responsibly. However, I am skeptical that this solution can overcome the accountability concerns I have.

Consider one personal anecdote. Last year, another driver hit my partner while she was driving our car, and our insurance increased. When we called the insurance company, we wanted an explanation of why we would be paying the new amount. We were not objecting to having to pay more (though it does feel unjust to have to pay more for an accident you are not at fault for). We simply wanted to know why the increase was $23 as opposed to $15 or $20. When we asked, the response we received was ultimately “I don’t know, that’s just what the system is telling me.” When we asked who we could contact for more details, they said there was no one who could help us.

This example points out a larger issue with the integration of AI systems in social structures. We often think of accountability in cases where things go wrong, but conceptually accountability is about tracking responsibility for outcomes, whatever they may be. When we include AI in more of our life activities, we lose the thread of accountability. The reason for why something happened will increasingly stop with the answer “AI.” What makes AI unique is that it can behave like an agent in ways previous technologies have been unable to, which will make it well suited to enter into the stream of accountability and muddy the waters.

Furthermore, as these systems are more deeply integrated into our technologies and daily life, they will be treated as more trustworthy (regardless of whether they actually are). When people use technology that everyone is using, in the way that everyone is using it, it can be reasonable to ask for clemency when things go awry because they were just doing what was considered standard practice.

In my classrooms, we study ideas and arguments about serious topics: medical ethics, justice, propaganda, and technology. I want students to learn how to formulate ideas, explore their contours, and ultimately form well-founded beliefs that they can claim some form of ownership over. Given the propensity of AI systems to obscure the trail of accountability, I will be prohibiting its use because I want students to retain accountability for the ideas they produce in my classrooms.

Argument 5: AI Undermines Growth

One of the promises of AI is that it will take over some tasks for us, in order to free up our minds and time for more important things. We have also been promised that it will stimulate the creation of new, undiscovered roles in society. So far, many of these prophesied positions relate to the management of AI itself: we now need AI policy experts, AI oversight experts, AI alignment specialists, and AI testers, to name just a few.

While we have yet to see an influx of new and exciting career paths beyond those related to managing AI, we do have reason to think that as AI takes over activities for us, we will no longer be able to do those things as well. A preliminary study suggests that if doctors go from not using AI, to using AI, and then back to not using AI, they get worse at making diagnoses than they were before they started using AI in the workplace. This should not surprise us. When we stop practicing skills, we lose our edge.

Echoing Burkett’s piece, in the realm of philosophy there is virtually no good reason for my students to use AI because every use case seems to undermine the very skills I want them to learn. When I ask my students how they use it, they typically tell me that they draft their own work and then feed it to AI to make it more professional. However, my philosophy courses are not about producing something that sounds convincing or looks professional (though it is nice when this happens). It’s about learning how to think well. When students write an argument defending a position, and then feed it to AI to help make it more professional, they are missing out on practicing one of the crucial skills I am trying to teach them. Editing a paper for logical coherence, careful word choice, and conceptual analysis is part of the skill building process, and AI impedes this.

Argument 6: AI Is Ideological

AI is currently (and will likely always be) infused with ideology. Nicholas Kreuder has written about the dangers that come from the power that the owners of AI have over us, which reveals the ideological nature of these systems and the risks we face when we rely on them.

If AI is given guardrails, those guardrails will be set according to the political, moral, and, likely, economic principles that the creators deem appropriate. Even a radical AI enthusiast who believes AI needs to be absolutely “free” would be instantiating an ideology within the AI system by choosing to avoid any guardrails at all. The choice of what data to train the system on and what to exclude is also, at root, an ideological one. And, insofar as these systems need to generate profit, they will always feel the ideological pull of economic interest.

This problem is not unique to AI, of course. The fact that the phrase “to google” is synonymous with searching for something on the internet reveals the informational monopoly that one company wields over a huge portion of the world. And the way that Google organizes search results is far from ideology-free.

AI ideology is an issue not because it is ideological per se – most technologies cannot avoid being infused with some kind of ideology – but because AI is especially good at projecting confidence and expertise. AI writes convincingly from the perspective of many who use it (while many PhDs have criticized AI’s performance as laughable, even childish, this is not representative of the experience that many have while using it).

The problem with AI, then, is not just that it presents information confidently, but that when you ask it questions about controversial political and ethical issues, it appears to give balanced and unbiased answers. You can even instruct the AI to be unbiased, and it will tell you that it will do so. But in reality, it cannot. (Notably, if you ask it “can you be unbiased?” it can also correctly tell you that this is not really possible.)

While my ideological complaint also applies to pre-AI technologies like the Google search, the television, the radio, or the book, I think that conversing with AI poses a special problem. The confident, conversational, and apparently unbiased delivery of information occludes the ideological bent that AI systems have.

Argument 7: A Refuge From AI

Many of us feel compelled to use AI whether we like it or not out of a fear of being left behind (FOMO is a real tactic in the tech marketing world). I suspect that AI will be used by many of my students because they feel that they must for “educational” purposes. I also know that outside of the university context, students will be required to use AI for their jobs and are forced to use it when interacting with the sociotechnical infrastructure around them.

The final, simple reason I will prohibit AI in my classroom this semester is to give my students a place of refuge from it. My hope this fall is to give students the room to slow down, make mistakes, and think for themselves without the pressure to be perfect. Although it promises to make our lives easier, AI is ultimately a tool that entices us to work harder. It promises to help us make things better, do things faster, and make us stronger. But this is machine logic, and we are human after all. So, this fall I will say no to AI.

Is Artificial Intelligence Sustainable?

A recent advertisement for Google’s “Gemini” artificial intelligence (AI) model shows users engaged in frivolous, long-form conversations with their AI personal assistant. “We can have a conversation about anything you like,” Gemini cheerfully informs one user, who is unsure of how to approach this new technology. Another user asks Gemini, “how do you tell if something is spicy without tasting it?” to which Gemini responds (without any hint of the stating-the-obvious sarcasm with which a human might be expected to reply to such an inane question) “have you tried smelling it?” What is clear from this advert, and other similar adverts produced by companies such as Meta, is that the companies designing and selling AI intend for its adoption to be ubiquitous. The hope of “big tech” is that AI will be used liberally, for “anything” as the advert says, becoming part of the background technological hum of society in just the same way as the internet.

Awkwardly for these companies, this push for the pervasive adoption of AI into all realms of life is coinciding with a climate and ecological crisis that said technologies threaten to worsen. “Data centers,” the physical infrastructure upon which AI systems depend, are predicted by the IEA to double in their energy consumption from 2022 levels by 2026, consuming around 4.5% of total electricity generated globally by 2030 – which would rank them fifth in the list of electricity usage by country, just behind Russia and ahead of Japan. This of course comes with a significant carbon footprint, driving up global energy demand at precisely the moment that frugality is required if countries are to meet their net-zero goals. Such a significant increase in electricity usage is likely to extend our dependency on fossil fuels as efforts to decarbonize supply can’t keep up with demand.

Beyond electricity usage, data centers also require both vast amounts of water for cooling and rare-earth minerals to produce the hardware components out of which they are built. Google’s data centers consumed (that is, evaporated) approximately 31 billion liters of water in 2024 alone. This at a time when water scarcity is already a serious problem throughout much of the world, with two-thirds of the global population experiencing severe water scarcity during at least one month of the year. Similarly, the mining of rare-earth minerals such as antimony, gallium, indium, silicon, and tellurium is another aspect of the AI supply chain known to wreak both ecological and social havoc. China, by far the world’s largest processor of rare-earth minerals, having realized the heavy environmental toll of rare-earth mines, has now mostly outsourced mining to countries such as Myanmar, where the mining process has poisoned waterways and destroyed communities.

Given the vast resources required to build, train, and maintain AI models, it is fair to question the wisdom of asking them “anything.” Do we really need power-hungry state-of-the-art algorithms to tell us that we can smell an ingredient to check whether it’s spicy?

In response to such sustainability concerns, Google has pointed out that alongside the more mundane uses of AI displayed in its advertisement, the implementation of AI throughout industry promises a raft of efficiency savings that could result in an overall net benefit for global emissions. In its 2025 environmental report, Google describes what it calls an “optimal scenario” based on IEA research stating that the widespread adoption of existing AI applications could lead to emissions reductions that are “far larger than emissions from data centers.” However, some of the IEA’s claims rest on the somewhat spurious assumption that efficiency savings will be converted into reduced emissions rather than simply lowering prices and increasing consumption (for example, some of the emissions reductions predicted by the IEA’s report come from the application of AI to the oil and gas sector itself, including helping to “assess where oil and gas may be present in sufficiently large accumulations”).

Even granting a level of skepticism here, the potential of AI to produce positive outcomes for both the environment and humanity shouldn’t be overlooked. Initiatives such as “AI for Good,” which seeks to use AI to measure and advance the UN’s Sustainable Development Goals, and “AI for the Planet,” an alliance that explores the potential of AI “as a tool in the fight against climate change,” illustrate the optimism around AI as a tool for building a more sustainable future. In fact, a 2022 report produced by “AI for the Planet” claims the technology could be implemented in three key areas in the fight against climate change: mitigation, through measuring and reducing emissions; adaptation, through predicting extreme weather and sea-level rise; and finally, research and education.

There is also potential to use AI as a tool for biodiversity conservation. Research carried out by the University of Cambridge identified several applications for AI in conservation science, including: using visual and audio recognition to monitor population sizes and identify new species; monitoring the online wildlife trade; using digital twins to model ecosystems; and predicting and mitigating human–wildlife conflicts. However, the authors also point to the significant risk of eroding support and funding for smaller-scale participatory research in favor of the larger and wealthier institutions able to carry out AI-based research. Additionally, they highlight the risk of the creation of a colonial system whereby data is extracted from lower-income countries to train models in data centers in North America and Europe, resulting in the export of AI-driven mandates for the use of resources and land back to those lower-income countries.

Such risks indicate the need to consider an important distinction that has been made in the field of AI ethics. Philosophers such as Aimee van Wynsberghe and Henrik Skaug Sætra have argued for the need to move from an “isolationist” to a “structural” analysis of the sustainability of AI technologies. Instead of thinking of AI models as “isolated entities to be optimized by technical professionals,” they must be considered “as a part of a socio-technical system consisting of various structures and economic and political systems.” This means that the sustainability of AI doesn’t come down to a simple cost-benefit analysis of energy and resources used versus those saved through greater efficiency and sustainability applications. In order to fully understand the indirect and systemic effects of AI on environmental sustainability, these philosophers argue, we need to consider AI models in their social and political context.

A structural analysis must begin by pointing out that we live in a system characterized by immense inequalities of both wealth and power. As it stands, most AI models are owned and operated by tech companies whose billionaire CEOs have been described as oligarchs. These companies are the principal beneficiaries of a political system driven by economic growth and fueled through resource extraction. We should expect the AI models they produce to propagate this system, further concentrating power and capital to serve the narrow set of interests represented by these companies and their owners. A purely “isolationist” focus suits these interests as AI’s positive applications can be emphasized, while any negative effects, such as vast levels of resource usage, can be presented as technical problems to be ironed out, rather than systemic issues requiring political reform.

To take some examples already touched upon in this article, an isolationist approach can highlight the efficiency savings that are made possible by using AI models to streamline industry, while a structural approach will point out the economic reality that efficiency savings tend to be harnessed only to ramp up production, lowering prices and leading to increased consumption and, therefore, higher profits. An isolationist approach can view the dependence of AI on large quantities of rare-earth minerals as a technical problem to be solved through more efficient design, whereas the structural approach will point to the need to address the immense injustices that are intrinsic to the rare-earth supply chain. An isolationist approach will tout the potential for AI models to guide ecological restoration in lower-income countries, while a structural approach will point out how this echoes the colonial history of conservation science.

Once we start to consider AI within its political and socio-economic context rather than as an isolated technological artefact, we can look beyond its direct applications for sustainability so that its many troubling indirect and systemic implications come into sharper focus. It becomes apparent that, rather than promoting sustainability, there is a far greater propensity for AI to enable further resource extraction, evade environmental regulations, and manipulate public debate and opinion on environmental issues.

A striking example of this is the way that AI is being used to undermine public trust in climate science. A report authored by the Stockholm Resilience Centre argues that the ability to generate synthetic text, images, and video at scale could fuel a “perfect storm” of climate misinformation, whereby AI models produce vast amounts of climate denial content that is then disseminated through social media algorithms already geared towards bolstering controversial and polarizing content. Consider this faux-academic paper recently written by Elon Musk’s Grok 3 model that casts doubt on the science of anthropogenic global warming. The paper was widely circulated on social media as an example of the first “peer-reviewed” research led by AI. Of course, claims of “peer-review” are unfounded. Neither the publisher nor the journal is part of the Committee on Publication Ethics, and the paper was submitted and published within just twelve days, with no indication of whether it underwent open, single, or double-blind review. It should come as no surprise that one of the co-authors, astrophysicist Willie Soon, is a climate denier known to have received millions in funding from the fossil fuel industry, and whose contested research was referenced by the AI-generated paper. Despite such an obvious conflict of interest, a blog post by the COVID-19 conspiracy theorist Robert Malone, claiming that the use of AI meant the paper was free from the biases of what he describes as “the debacle of man-made climate change,” gathered more than a million views.

From a “structural” perspective then, ensuring that AI models are sustainable is not merely a technical issue but a political issue of confronting the systems and power-structures within which AI technologies are built and utilized. One step in the right direction is to democratize AI governance such that ultimate control over AI’s direction and implementation is wrested from the hands of Silicon Valley oligarchs and given to democratically elected governments, so that regulation can be imposed to promote AI’s sustainability, both in terms of its physical infrastructure and its applications. However, so long as AI remains enmeshed within the power structures responsible for creating the environmental crisis, it will never truly be a force for advancing sustainability.

The US’s Action Plan to “Prevent Woke AI”

For a few years now, “digital” or “technological” sovereignty has been a prominent topic within AI Ethics and regulatory policies. The challenge is this: how can government actors properly rule in the interest of their citizens when both governments and citizens must rely on technologies developed by a handful of companies over which they have no clear control? Many efforts to address this challenge have consisted either of regulations, such as the EU’s AI Act, or of various forms of agreement between (supra)national actors and tech companies.

Unfortunately, the White House’s “America’s AI Action Plan” and the three Executive Orders published on the same day ignore this thorny issue entirely. Instead, these policy proposals aim at deregulating AI development by American Tech companies “to achieve global dominance in artificial intelligence.” The general thrust is clear: deregulate AI development, promote its deployment across society, and export widely so as to strengthen the U.S.’s global standing.

In advancing these interests, one keyword sticks out like a sore thumb: “Woke AI.” As a millennial, it feels surreal to see a term that I have primarily experienced as Internet lingo make its way into a Presidential Executive Order. While this is far from the first time that the term “woke” has been utilized by the president to pejoratively characterize the values of the opposition, it’s far from clear what precise danger such language is meant to evoke. What kind of threat does “Woke AI” represent?

The July 23rd Executive Order “Preventing Woke AI in the Federal Government” does not attempt to define the term. Instead, it states that AI systems should provide reliable outputs, free from ideological biases or social agendas that might undermine their reliability. In particular, the Order identifies “diversity, equity, and inclusion” (DEI) as a “destructive ideology” that manipulates information regarding race or sex, and incorporates “concepts like critical race theory, transgenderism, unconscious bias, and systemic racism.” The Order then identifies “Unbiased AI Principles” that will guide development going forward. Chief among these is the command that AI must be truth-seeking and ideologically neutral – “not manipulat[ing] responses in favor of ideological dogmas such as DEI” – to ensure that AI systems are trustworthy.

To many AI ethicists (including myself), the Order reads like a series of non-sequiturs. It demands that tech companies reject any notion related to DEI in their AI development guidelines, yet it is quite unspecific regarding what such rejection would entail in practice. Let us set aside the countless examples of AI systems being unlawfully biased on the basis of race, gender, economic status, and disability in a variety of domains. Let us also set aside the practical impossibility for AI systems to be “unbiased” given that they are technologies literally designed to identify potentially meaningful patterns and sort accordingly. And, finally, let us set aside the irony of the clear ideological grounds motivating the Order’s intention to generate non-partisan results. What little remains when all these difficulties have been accounted for doesn’t amount to much. And it’s worth asking why the focus on “anti-woke AI” represents such a large part of the White House’s overall AI strategy.

The answer to that question becomes much clearer when looking at how – and where – “woke AI” crops up. From the beginning, responsible AI policy is described as integral to the goal of protecting free speech and American values. Ultimately, AI outputs must “objectively reflect truth rather than social engineering agendas.” For that reason, references to “misinformation,” regarding things like DEI and climate change, must be removed. But this kind of censorship seems odd considering the stated desire to promote freedom of speech, especially because the Plan explicitly states what not to talk about – censoring tech companies from mentioning those topics as relevant concerns.

Ultimately, it often feels like the concern over “Woke AI” is merely a pretense for removing safeguards in order to accelerate AI development. This intent is made explicit at several points of the Plan. In its very introduction (and in reference to the Vice President’s remarks at the AI Action Summit last February), the Plan asserts that any “onerous” regulation of AI development would paralyze this technology’s potential – a reason why the current administration rescinded Biden’s “dangerous” Executive Order on AI. (Interestingly enough, many saw that regulation as quite lenient, all things considered, especially compared to the EU’s AI Act.) Any regulation mentioned in the Plan that does not originate from the current White House is considered “onerous,” “burdensome,” or in some other way an unreasonable drag on AI development.

Even more pointedly, the Plan is quite clear in its intention to counter Chinese influence: it refers to the governance frameworks proposed by international organizations such as the UN, the G7, and the G20 as “vague ‘codes of conduct’ that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies attempting to shape standards for facial recognition and surveillance.” Safeguards meant to protect individual rights and privacy are written off as the calculated design of the U.S.’s largest geopolitical competitor.

But the Plan is not simply a rhetorical tool to signal dominance within the U.S.’s political discourse. Rather, it is a means of vilifying any obstacle to the “move fast and break things” approach as “woke.” This language is not only meant to clearly separate the current White House’s position from that of its predecessor, but to pave the way for deregulation. The fear is that this attitudinal shift cedes far too much power to unaccountable tech companies. Without stronger guardrails in place, we may all get run over.

Why My Students Shouldn’t Use AI

As the new school year approaches, educators across the country are once more redesigning their classes in light of the brave new world of generative AI. Many teachers are embracing the technology – encouraging their students to make use of this powerful new tool. Some are even going so far as to use AI to assist in their course design. Others, like myself, are banning any use of generative AI in their classes. But why?

Perhaps I’m a luddite. Perhaps I’m no better than Socrates fearing that writing would be the death knell for education. Nevertheless, I think there are (at least) three strong moral arguments against students using AI in a philosophy class – and perhaps education more generally.

Argument 1: AI Harms Creators

Generative AIs like ChatGPT operate on Large Language Models. Put simply, they’re trained on vast quantities of data – usually scraped from what is freely available on the internet. The problem is that this data usually belongs to other people. More problematically, generative AIs make no effort to credit the data that shape their outputs. So, when I use ChatGPT to generate a fluid structure for my paper, or a killer opening paragraph for my opinion piece, there’s no way I can properly credit the sources of those generated outputs. In doing so, I necessarily pass off someone else’s ideas as my own – the very definition of plagiarism.

As our own Tim Sommers notes, a common counter to this argument is that the operation of an LLM isn’t all that different from how our own minds already work: absorbing vast amounts of data, and using that data to produce novel creations. Anyone who’s ever created anything will know the fear that one of your darling creations – a plot point, a song lyric, or a visual design element – is merely parroting another creation once seen, but long forgotten.

Like Sommers, I admit that I lack the expertise to discern how different the operation of LLMs is from how our own minds function. But I think that there is at least one morally important point of difference: While our own creations might be subconsciously informed by data we’ve absorbed, there is (excepting cases of intentional plagiarism) no intention on our part to consciously hold out the work of another as our own. The same isn’t true when we use ChatGPT. We know how LLMs operate, and we know that any product of a generative AI has made vast (unattributed) use of the works of others. This knowledge is, I think, enough to make our actions morally problematic.

Argument 2: AI Harms the Environment

But AI doesn’t just harm creators – it’s also devastating for the environment. Generative AI requires huge amounts of processing power, and that power requires a lot of energy. While precise quantifications are hard to come by, ChatGPT’s power usage is estimated to be roughly equivalent to that of 33,000 standard homes. And it’s not just electricity, either. Generative AIs need vast amounts of water to cool their processors – a concerning prospect, given that we are at imminent risk of a global water crisis.

We are in the throes of a global climate catastrophe – a catastrophe that, according to some estimates, might become irreversible in less than four years if we don’t make drastic changes to our way of living. Among those necessary changes are massive reductions in our energy consumption. Given this, an explosion in the popularity of generative AI is the last thing we need.

Of course, the fact that there is an environmental argument against AI usage doesn’t provide us with an all-things-considered reason to stop. There are many harmful practices that we might need to continue in order to ensure human safety and flourishing. But using AI just doesn’t seem to be among those. Much of our AI usage is entirely frivolous – with 38% of people using AI to plan travel itineraries, and another 25% using it to draft social media posts. And when it comes to non-frivolous functions – like using it to craft an email (as 31% of people have) or prepare for a job interview (as 30% of people have) – there are far less environmentally harmful ways of doing the very same thing. Having a question answered by AI can produce almost fifty times the carbon emissions of using a simpler system – like a search engine – to resolve the same query.

Argument 3: AI Harms the User

Even if we’re not motivated to care about creators or the environment, one further fact remains true: AI harms the user. I begin each of my classes by describing philosophy as the discipline that encourages us to think carefully about the reasoning behind our beliefs. This is a challenging – and sometimes terrifying – endeavour, since the discovery of bad reasoning can often force us to abandon some of our most dearly-held beliefs. The subjects I teach require my students to consider some hard questions: Does the climate crisis mean we should have fewer children? Should we permit physician-assisted suicide? Would a Federal ban on TikTok violate our right to freedom of expression? I believe that it’s vitally important that each of us formulate our own answers to such questions. If we farm this out to an algorithm, we’re sort of missing the whole point of philosophy (and education more generally). As Marta Nunes da Costa puts it:

“being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and in doing so, build our moral identities – who we are and what we value.”

As I’ve argued before, failing to think critically not only risks making us bad thinkers, but also bad humans. I believe that fact – coupled with the clear harms to creators and the environment – is more than sufficient to explain why my students shouldn’t use AI.

More Than a Message: Finding Meaning in Sympathy Cards

I suspect having cancer must suck. Indeed, when I think about cancer, my mind instantly runs to terms like “malignant,” “tumor,” “fight,” and “chemotherapy.” All things which one might generously call undesirable. A phrase that doesn’t jump out at me is “funny.” However, Mark Steel’s The Leopard in my House: One Man’s Adventures in Cancerland is an undeniably funny book.

In it, the author, broadcaster, stand-up comedian, and newspaper columnist recounts the extraordinary year in which he was diagnosed and treated for throat cancer. The book is peppered with vivid, often graphic descriptions of what it’s like when your neck is repeatedly blasted with radiation (expect a lot of talk about mucus). But it’s also filled with warm, witty reflections on his relationships, his fellow cancer patients, his brushes with mortality, and the miraculous feeling of doing something as simple as swallowing water again.

Undoubtedly, all of the above is ripe for philosophical analysis, and this fact isn’t lost on Mark himself. In one chapter of the book, after receiving a recommendation from a fellow cancer sufferer, Mark considers the disease, the effect it’s having on his life, and how one can wrestle with forces beyond one’s control, via Marcus Aurelius’ Meditations. Taking a Stoic approach, he considers the futility of asking “Why me?” or looking for fairness in something as arbitrary as a cancer diagnosis. It’s a grounded, practical perspective — and, I think, a wise one.

Yet, what I want to do here is look past all the fertile philosophical material that Mark provides us in his book and go in on something very specific: sympathy cards.

In Chapter 11, Mark talks about how awkward it can be talking to someone who has cancer. How unsure we often feel about what to say. That uncertainty, he argues, is natural, but it shouldn’t stop us from saying anything at all.

Of course, this transfers to non-face-to-face communications. I’m sure you too have faced the daunting task of writing a meaningful message in a card. It might be something as innocuous as a birthday or leaving card, or as substantial as a commiserations or sympathy card. Yet, despite having some vague sense of what you want to say, you don’t know the correct words. This makes the whole thing slightly stressful, and the temptation is there to simply not write in the card at all. To not say anything. This, Mark says, is a mistake:

… here’s my tip if you know someone who lands in a serious medical condition. If you’re not sure what to say to someone, don’t worry because anything you say is better than nothing. I can’t remember the exact words of many of the messages I got, I can only remember that they came. Every email, text or call is a reminder that you matter to someone and that is critical in getting you passed the difficult moments. Don’t worry about saying the wrong thing. All the person remembers is that you got in touch.

This, I think, gets at something we often forget when it comes to comforting those who are ill. You’re not expected to craft the perfect, poetic paragraph that lifts someone out of despair. The point of a message, whether text, email, or card, isn’t to fix everything. It’s not even to promise that everything will be okay (because you can’t know that). Rather, it’s to remind the person they haven’t been forgotten. That they’re in your thoughts. That you’ve taken a moment out of your day and dedicated a sliver of your attention to them. That they matter.

These gestures, small as they may seem, speak volumes. They signal that when given the choice between silence and acknowledgement, you chose to say something. And in doing so, you affirmed that their struggle is worthy of notice.

But as I read Mark’s reflections on this, my brain, never missing an opportunity to be pessimistic, conjured a darker scenario: what if the message doesn’t actually come from the person it claims to?

It’s a familiar trope in film and TV: a high-powered executive learns that someone in their office is unwell and tells their assistant to “send a card,” maybe flowers too. Or worse, the executive never finds out at all, and the card is sent by a dutiful underling who signs it on their behalf. In such cases, the card still arrives, and the recipient may still feel touched that someone cared enough to reach out. But on some level, the gesture feels hollow. There’s a subtle, unsettling deception in play: the card wasn’t really from the sender it claims to be. The sentiment was manufactured.

And while the card might still do some good (after all, someone did something), it raises uncomfortable questions. Does intention matter as much as the action itself? Can a false gesture still provide real comfort? Or is there a quiet harm in pretending to care, when the person supposedly sending the message never even knew you were unwell?

These questions seem even more relevant with the advent of easily accessible generative AI. It’s now easy to offload the difficult task of writing a heartfelt message to a tool like ChatGPT. You face the blank space in the card, ask an AI to fill it, and copy the result. To the recipient, it looks like you took the time to write something meaningful. But in reality, you thought of them just enough to delegate the task to a machine.

In that light, the hollowness of the corporate-card trope becomes more personal, more widespread. The authenticity that gives these messages their power starts to erode. Worse, it casts suspicion on the entire genre. If one message might be synthetic, why not all of them?

This unease reminds me of a central theme from Byung-Chul Han’s The Disappearance of Rituals. In it, Han argues that our society is losing the shared symbolic acts that once gave meaning to human experience. Rituals, like writing a sympathy card, once required time, presence, and emotional investment. Today, however, many of these acts are reduced to gestures of efficiency, of communication without community. A card composed by ChatGPT might technically fulfill the form, but it lacks the symbolic weight Han insists is essential. The message becomes a simulation of care, not the thing itself.

Now, I acknowledge that this is a niche worry. And, at least for now, probably an overblown one. Most people still write their own messages, even if they’re clumsy or awkward. And frankly, the awkwardness is part of the point as it shows effort. A messy, imperfect message from a friend is infinitely more valuable than a flawless, ghostwritten one.

But just because it’s niche doesn’t mean it’s irrelevant. The emotional stakes are high. As Steel points out, those messages can be a lifeline. They are something to cling to in the middle of immense physical and emotional suffering. They are not just words; they are reminders of human connection. And if we’re not careful, the convenience of automation might start to chip away at that.

What’s the solution, then? Honestly, I don’t have one. But I do know that next time I sit down to write something in a sympathy card, I’ll try not to overthink it. I’ll write something, however clunky or awkward, and trust, as Steel suggests, that the act of writing matters more than the words themselves.

Grok, Is This True?: Chatbots and Domination

Last week, Grok, the large-language-model-driven AI chatbot from xAI, suddenly began posting… differently. On Tuesday, July 8th (in posts which xAI has since deleted), Grok stated that “anti-white hate” always comes from individuals of Ashkenazi Jewish descent, claimed that Adolf Hitler would be best suited to solve this problem, and referred to itself as “MechaHitler.”

This follows an incident in May where Grok repeatedly posted about “white genocide” in South Africa while responding to posts on wholly unrelated topics, such as baseball. When asked by users to clarify its posts on “white genocide,” Grok stated that it was instructed by its creators to accept these claims as true.

Linda Yaccarino, the CEO of X, announced on Wednesday, July 9th, that she is stepping down. Elon Musk, the owner of both X and xAI, recently wrote that Grok was “too eager to please” and that this enabled users to manipulate the chatbot into creating antisemitic posts.

However, this explanation clashes with other information. In June, Musk expressed unhappiness with Grok, declaring that it would be upgraded, targeting a release date around July 4th. On July 5th, xAI posted new lines of code to Grok’s publicly accessible system prompts. This code encouraged Grok to view legacy media as biased, to “find diverse sources representing all parties,” and to “not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

The “About Grok” page on X states only that it was trained “on a variety of data from publicly available sources and data sets reviewed and curated by AI Tutors who are human reviewers.” It is unclear what “well substantiated” claims that “represent all parties” are; given the posts Grok made after changes to its code, it is reasonable to worry that the chatbot’s dataset labels antisemitic conspiracy theorists as a party with well substantiated claims.

There is a sense in which Grok’s posts are philosophically uninteresting. They obviously are morally condemnable. Further, they are not particularly surprising from a sheer informational standpoint. The case bears striking similarities to a Microsoft chatbot called Tay. Microsoft released Tay in 2016, intending for it to learn from its interactions with social media users. Within 24 hours, Microsoft deactivated the bot after it began making racist posts, praising Hitler, and denying the Holocaust.

However, what is interesting about the Grok fiasco is that it illuminates the extent to which creators have influence over AI tools. It seems that just a few additional prompts were able to massively change the content that Grok produced. This is particularly striking given that it appears Grok has been altered strictly to align with Musk’s preferences; new reporting suggests that Grok searches for Musk’s personal views before answering questions on controversial topics.

In a previous column, I discussed the concept of republican freedom. I will give a brief rehash here. Traditionally, philosophers think of freedom in two senses. You have positive freedom when you can do the things that you want to do. In contrast, you have negative freedom when no one actively interferes with your pursuits.

Suppose, for instance, that you are a very strong student and want to attend medical school. However, you lack the money to afford tuition and financial aid is unavailable. You are negatively free as no one is interfering with you. Yet the financial constraints hamper your positive freedom.

Now imagine that a rich uncle offers to pay your tuition. However, this uncle is quite capricious and vengeful; he has a history of lashing out against family members who engage in behavior he finds distasteful, and his preferences are unpredictable. Suppose you accept his offer. Your positive freedom increases as you can attend medical school. But are you wholly free? Your ability to attend medical school depends on your uncle’s preferences. If you do something he dislikes, he may refuse to pay your tuition. In this sense, your uncle dominates you. Even if he never actually rescinds the tuition money, you are still made vulnerable by the fact that he could. The mere threat is enough to reduce your republican freedom. You have republican freedom to the extent that your life’s prospects are not dependent upon the preferences of others. You lack this freedom when others have domineering power over you.

How does this relate to Grok and other AI chatbots? Users are increasingly reliant on these programs. Posters on X regularly ask Grok to verify or explain the claims of others, to the point that it is meme-worthy. Some preface their claims to knowledge with “ChatGPT says that….” Even Google presents an AI overview prior to the actual results. This is deeply troubling given recently released results from an MIT lab which found that users who utilized a large language model in a series of essay-writing tasks appeared to experience neural and cognitive consequences; users of LLMs had weaker neural connectivity, demonstrated lesser cognitive activity while writing, and had worse memory of what they had written than non-users. To quote the researchers:

This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive process required for independent thinking. Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation [and] decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.

As users more frequently rely upon tools like Grok, their ability to research, analyze, and think critically about matters for themselves atrophies. This may make them more dependent on such tools, creating a vicious cycle; they initially use these tools for convenience, but over time that use becomes a necessity.

Grok’s recent antisemitic and conspiratorial tilt demonstrates why this potential for dependency is so deeply troubling. The most prominent AI tools are owned by private entities with private interests. As Grok has shown us, it seems that a few minor edits to their core prompts can vastly alter their outputs – the Grok case is simply extreme in both the result and the slightness of the prompt modifications. Further, these edits may be the product of the arbitrary preferences of the ownership and programming teams.

This dependency thus seems to reduce the republican freedom of the users. Even if the owners and programmers of these tools never actually alter these tools in a way that misleads the users, the mere potential of their doing so is sufficiently troubling – the users still navigate through the world in a way that depends upon the arbitrary preferences of those who own and program the tools. At any moment the prompts could change, granting those with the ability to alter them a domineering power to shape the beliefs and fact-finding capabilities of those reliant on their programs.

In the dialogue Phaedrus, Socrates worries about a relatively new technology: books. He fears that the written word worsens our memory and is less intellectually stimulating than dialogue. In retrospect, Socrates’ criticism is almost laughable. Even if he is right, the benefits of the written word surely far outweigh any cost.

However, there is something different about chatbots and other forms of AI-driven technology that makes the prospect of outsourcing our thinking to them troubling. Specifically, they have changing outputs. As Aaron Schultz argues, AI-driven technology captures our attention effectively because its contents change in real time to satisfy our preferences. But it is not merely the preferences of the users that can change what they see. So long as the owners and programmers may alter the programs to suit their preferences, these tools stand the potential to give those groups domination over us.

Should We Expect Fairness from AI? The Case of VAR

On the 22nd of June, 1986, God handed a historic victory to the Argentinian soccer team. Argentinian legend Diego Armando Maradona scored the winning goal against England in the World Cup Quarter Finals. Only, he did so with his hand – which, in the game commonly known as football, is not allowed. However, the referee and his team did not see the foul and awarded the goal. This error resulted in a 2-1 victory for the Argentinian team, allowing them to move on to the next round of the tournament and, eventually, claim international glory. Yet, both during and after the game, many others called out the referee’s mistake: due to an error in judgment, he failed to correctly apply the rules of the game, and allowed the Argentinians to win undeservedly. This historical moment, following a cheeky remark from Maradona himself, came to be known as “the Hand of God.”

There is no way around it: human beings, for all their capacity for reason, are fallible. As our intelligence is bound to our physical context, culture, experiences, and our own ideas, some assessments and decisions will be biased, surprisingly influenceable, and, ultimately, unfair.

It is in this department that AI development, well before the days of ChatGPT, claimed to provide an invaluable resource: much more precise and fair assessment than human beings are capable of. As early as 2012, when the first discussions surrounding “Ethical AI” emerged, a group of researchers at Microsoft developed a framework for “Fair AI” – software agents capable of greater fairness in decision-making than human beings could ever hope to achieve. Emerging technologies are capable of extraordinary tasks: advanced cancer prognosis and treatment, calculating the structure of proteins, and unprecedented support in space exploration – all incredibly complex tasks that human beings simply do not have the capacity to carry out on their own.

However, despite their promise, there have been many, many instances where AI systems were just as unfair as human beings. Cases of AI systems discriminating on the basis of gender, race, disability, social class, and income abound. This occurred partly because the data used to develop those systems already reflect unlawful and immoral discriminatory practices carried out by human beings. In the words of philosopher Atoosa Kasirzadeh, AI systems are no help in addressing systemic injustices: if a social system or practice is already biased against a group of people, technology will only reinforce that prejudice. However, I would like to go even further than Kasirzadeh in this regard. We should not expect AI, or technology more broadly, to make things fairer, as shown in how AI is applied within soccer games – and, specifically, through the technology known as VAR.

VAR, or “Video Assistant Referee,” first introduced in 2016, was designed to more justly apply the rules of football. A referee, due to their epistemic limitations (especially poor positioning within a game), may not be able to assess with certainty whether a foul occurred. Now, officials can review footage of the event from multiple camera angles in order to make that decision. Importantly, the technology underlying VAR is AI-based. Only through an AI system can one find the appropriate footage (which is collected by multiple cameras) for review without interrupting the game for an unacceptably long time. Once again, the sophistication of AI systems’ information processing and accuracy goes far beyond our limited epistemic capacities. So, now that important events in a football game can be reviewed in case of uncertainty, nothing like the backlash following the Hand of God could ever happen again. Right?

Unfortunately, the introduction of VAR has not resolved the issue. Every week the (un)fairness of VAR-assisted decisions is questioned (often by the managers and players themselves), and many websites maintain weekly threads devoted to dissecting footage and discussing calls. Because VAR is simply a tool utilized by the referee, there is still room for significant variation in how the rules are applied.

Take, for instance, the case of Anthony Taylor’s refereeing of the 2023 Europa League final. A handball that would have given AS Roma a penalty kick was not called, in spite of being technically very clear. While that decision did not raise much discussion outside of Italy (and Taylor’s refereeing was praised), the same cannot be said for his identical call on a nearly identical foul during the European Championship. Put very bluntly: Taylor does not deem that type of handball to be a foul and is consistent in his ruling, while other referees openly disagree. We might be inclined to believe that, because we can record, replay, and ultimately objectify a given event, we can have an objective view of how we should judge that event. However, the act of judgment is an inherently subjective one, and even people with the same degree of expertise can interpret the game’s rules, and the events they judge, differently.

Up until a few years ago, the intentionality of a gesture played a relevant role in the referee’s decision-making: whether a player (apparently) intended to stop a shot towards goal with their hand, or whether it was an accident, mattered for deciding whether to assign the foul or not. However, following the introduction of VAR, the rules have been revised in several countries: many changed the rule so that any handball in the penalty area – accident or not – results in a penalty kick. (The slight difference in rules across different national and international competitions is itself a source of confusion and variation that complicates the picture even further.) Similarly, whether a player stomping on another’s foot was judged a foul used to depend on the perceived intentionality and gravity of the stomp – on whether, e.g., the toes or the ankle were stomped – whereas now all stomps are treated the same. These rule changes make sense considering how VAR shifts referees’ focus from an event on the pitch to images from a camera. When using the VAR, the referee is not judging a dynamic interaction between players, where the greater context and movement of the game makes it easier to assess what players’ intentions might be. When judging a sequence of frames, considering the intentionality of players’ actions becomes considerably more complicated. In other words, VAR is not making the game of soccer better: in order to make VAR work, it is necessary to change the rules. Nor is AI making things fairer: the rules must still be applied in accurate and equitable ways.

This might still appear to be something quite minor in the grand scheme of things: a change of rules in a mere game, no matter how popular it may be. However, the core issue regards what we understand fairness to be, and what AI actually needs from us in order to make things supposedly fairer. In theory, AI can make things fair because it can follow and apply the rules and norms that human beings often fail to apply, due to their own epistemic limitations. However, in order to purposefully and efficiently implement AI within a social context, we need to tweak and change the rules of that social context – not in the name of fairness, but in the name of efficiency. Before AI can arguably make things fairer for our society, we need to change our society first, merely to make it possible to implement AI in the first place. For this reason, it is worth asking ourselves whether AI is making things fairer for us, or whether we are making things easier for AI.

We can expect some revolutionary uses of AI in many domains, and potentially even great benefits for society. However, quite often AI technologies are promoted as morally valuable – more trustworthy, more responsible, more fair. In some cases, that can be true: VAR has proved successful in detecting fouls, so much so that it is almost unimaginable that something like the Hand of God could happen again. However, to take advantage of AI’s precision within a given social context, it is often necessary to change the rules and norms that define that context in order to use AI in the first place. We can expect more precision from AI, as a technical value; however, precision does not equal fairness. Rather, the rules determining what is fair can – and should – come only from us: from what we believe, what we do, and what we value.

Should You Thank Your AI?

In late April, Sam Altman, the CEO of OpenAI, made waves with a response to a question about the financial and environmental cost of saying “please” and “thank you” when interacting with Artificial Intelligence — “Tens of millions of dollars well spent–you never know.” The practice is common, with over two-thirds of users observing such social niceties when asking AI questions, according to a February survey. Altman may simply be preaching the power of politeness, but it could be for reasons that are anything but common.

Is Altman right? Should we thank ChatGPT, Gemini, Claude, DeepSeek and the other AI chatbots out there? Can ethics give us any guidance?

Entities that we do not believe need to behave ethically themselves, but should be subject to moral considerations, are generally called “moral patients.” We tend to think they have lesser (but still some) moral status. For example, we do not expect newborns and animals to behave ethically, but we often adopt certain moral standards in regard to their treatment.

But current Large Language Models, the umbrella under which tools like ChatGPT fall, are not good contenders to be moral patients. There is considerable complexity in debates about AI consciousness, when it might happen, and how we would know. Nonetheless, we are not there yet. While current AI chatbots have been trained on vast amounts of data to emulate human speech and behavior, as yet, experts assert that they have no consciousness and no inner life, that they are not in control of their actions, and that they cannot suffer or feel pain. (Some of these matters have been previously discussed in The Prindle Post.)

Absent characteristics like consciousness or even the ability to be offended, there seems to be no special reason to treat AI chatbots politely based on the kind of thing that they are.

Altman’s response, however, suggests another kind of concern. We may have consequentialist worries — an ethical analysis based on the consequences of our actions — about saying please and thank you to AI chatbots. Each additional “token,” a chunk of characters, that the AI has to analyze in a question costs energy. Accordingly, adding polite words to questions both costs AI companies money and, of more direct ethical relevance, causes environmental damage. Prominent AI tools like ChatGPT need incredible amounts of electricity and water for cooling.
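
To make the token point concrete, here is a minimal sketch, assuming the openly available “tiktoken” tokenizer library and its “cl100k_base” encoding (not any particular chatbot’s internal tokenizer or pricing), of how politeness adds tokens, and therefore a small amount of extra computation, to a request:

```python
# A rough illustration: count the tokens a model must process for a terse
# prompt versus a polite one. Exact counts vary by tokenizer and model.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

terse = "What is the capital of France?"
polite = "Please, could you tell me what the capital of France is? Thank you!"

# Every token in the prompt is processed by the model, so the politer
# phrasing carries a slightly higher compute (and energy) cost per request.
print(len(enc.encode(terse)))   # roughly 7 tokens
print(len(enc.encode(polite)))  # roughly 16 tokens
```

The difference per request is tiny; it only becomes a meaningful quantity of energy when multiplied across hundreds of millions of daily queries.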

If we are interested in limiting the harms our actions cause, then reducing energy waste and environmental damage by being less polite with AI chatbots may make sense. That said, stripping off a word or two has nowhere near the energy-saving impact of, say, not asking the question at all, or of simply using a standard internet search instead, which costs ten times less energy.

Altman’s “you never know,” however, hints at another worry. We may be polite to an AI out of fear that it is actually conscious, or even that the AI overlords are coming soon and it is in our own interest to be nice. This motivation echoes the famous Pascal’s wager in philosophy.

The 17th-century mathematician and philosopher Blaise Pascal argued that we should behave as if god exists. For if god exists, but we do not believe, then we suffer an eternity of misery and miss out on an eternity of bliss. The wager provides no evidence for the existence of god one way or the other, but rather holds that believing in god and behaving accordingly is the safest bet. (There are a number of commonly seen objections.)

By similar reasoning, we might assert that even though the chances of ChatGPT being secretly conscious, or turning into an all-powerful overlord, are extremely small, the potential harms are so serious that we should nonetheless act as if it could be the case — especially for relatively low-cost actions like saying “please” and “thank you.” This does notably depart from Pascal’s wager in that the consequences are merely very bad, not infinitely bad, and can therefore be outweighed by other more likely concerns. In fact, given the tiny likelihoods involved, and the probably minimal impact that saying “please” and “thank you” will have, there is likely not a compelling probabilistic argument about avoiding serious (if rare) consequences at all.

However, how we treat AI is not just about AI; it is about ourselves. The philosopher Immanuel Kant constructed a famously strict moral framework in which only actors possessing a certain kind of rationality, like humans, deserved moral consideration. Unsettlingly, even for those in the 1700s, this implied that we owe no moral consideration to animals. Kant’s response to this concern was that we owe it to ourselves to treat animals well. We injure our moral selves when we ignore compassion, or an animal in pain. It becomes easier to slide into callousness with humans.

Whether Kant gives animals enough due is a matter of debate, but regardless, the same concern applies with AI. If we want to embrace a general ethos that treats people with dignity and respect when we make requests of them, then we should stay in practice when dealing with superficially human-like AI.

There is potentially a dark side to this argument about AI chatbots. Their very human-likeness can be a problem. Already, there are cases of people losing themselves to delusional relationships with ChatGPT, or trusting chatbots uncritically. The scope of this problem is not yet clear, but perhaps we do not want to aspire to a very human-like relationship with the technology at all, and should instead have a well-delineated set of norms and practices for engaging with these chatbots. We may want to adopt norms that avoid anthropomorphizing them.

Large Language Models are still new. Ethical analysis, especially ethical analysis based on the potential consequences of treating AI a certain way, is correspondingly young. This is even true for seemingly minor issues like saying “please” and “thank you.” It also speaks to a broader challenge with AI. The technology is already changing the world. It is good to consider how AI will change society — what jobs will it replace, what problems will it solve, what kind of surveillance will it enable, how much energy will it use? But we also need to consider its moral impact. What will AI do to our ethical selves?

The Merits (and Misgivings) of Gen-AI Art

Whenever discussing tech ethics, the most common thing I get asked about is the use of generative AI for artistic purposes. Previously, I’ve discussed the case of Jason M. Allen and his first place AI-generated entry at the Colorado State Fair. However, a great deal has happened since then in terms of technological innovation, wider public adoption, and legal wrangling. Marvel Studios, for example, was recently accused of using AI to generate posters for their Fantastic Four film given some strange “choices” – you be the judge. But Marvel is not alone; numerous other creators have been caught in the crosshairs. Is all the outrage justified? What is actually at stake? Why are people so up in arms?

Let’s consider some related concerns. Many arguments against Gen-AI art start by asserting that AI is inherently incapable of producing art, as it lacks human creativity or some other human-ness about it. But we should be clear about what we mean. As I have previously discussed, there are over 20 different theories of consciousness out there in the academic world, but there are very good reasons for accepting that these algorithms are just automated statistical models and not something that is conscious. Ultimately, generative AI is a tool for humans to use, just like a camera, a paintbrush, or a chisel. Just like those tools, it will not work without human input, and which of its products are accepted as “finished,” “complete,” or even “satisfactory” will depend on what the human wanted to get out of it. If critics of AI art are going to charge that a person cannot make art with it “because they typed a few buttons,” then why can a photographer make art by clicking a shutter? This isn’t to suggest that anyone who uses Gen-AI instantly becomes an artist, but neither does anyone with a camera become a photographer. In other words, critics need to explain why some types of art can utilize technology while others cannot.

But, in a similar vein, some critics charge that AI cannot produce art because it is incapable of understanding the human emotional qualities that are a necessary component of artistic expression. AI cannot understand or replicate the emotional intention behind art. First, it is important to note that in addition to the field of generative AI, there is a whole field of affective computing devoted to getting computers and AI to understand human emotions. There’s no obvious reason why insights from affective computing cannot build emotional understanding into an algorithmic model and have that influence the output. It is also known that AI-generated art can produce the kinds of emotional responses in humans that we might expect any artform to produce. Anyone who has seen the “priceless pancake” video on the internet can probably appreciate the level of emotional intuitiveness involved. If artworks are supposed to induce certain emotional responses in the audience, a clear argument needs to be made for why AI is incapable of communicating the desired emotional effect, particularly if it is further modified by humans to achieve this end.

Critics may also charge that because generative AI is trained on the images of other artists, it cannot truly be creative. But creativity is an ambiguous concept. While gen-AI models do take their cues from the inputs they are given, it is worth noting that they are also not completely deterministic, nor do they simply reproduce works that they have been trained on. There is always room within the statistical mesh of relationships a model forms to produce something new; generative AI is capable of creating novelty out of what has come before. Ultimately, whether something is creative or not depends on what we “see” in the work.
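
To illustrate why generative output is not a simple replay of training data, here is a minimal, purely illustrative sketch: the vocabulary and probabilities below are invented for the example, and no real image or language model is this simple, but the core mechanism is the same. A model samples from a learned probability distribution, so the same prompt can yield different, recombined results.

```python
# Illustrative only: sampling from a learned distribution is what makes
# generative output vary rather than exactly reproduce a training example.
import random

# Hypothetical next-element probabilities a model might have learned.
learned_probs = {"sunset": 0.40, "harbor": 0.25, "dragon": 0.20, "teacup": 0.15}

def sample(probs, temperature=1.0):
    # Higher temperature flattens the distribution (more surprising picks);
    # lower temperature concentrates on the likeliest continuation.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Five draws from the same model will typically differ from one another.
print([sample(learned_probs, temperature=1.2) for _ in range(5)])
```

The point is modest: the training data shapes the distribution, but the output is a fresh draw from it, not a lookup of any single stored work.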

There is also a sense that gen-AI cannot produce art because of the intellectual theft or plagiarism of pre-existing works. But we should be careful to separate economic and aesthetic concerns. I wonder how critics would feel about a model trained entirely on artworks that exist in the public domain, or an artist who trains a model to produce new works using only their own artworks as training data. Would a lack of copyright concerns in these cases still preclude the idea that such models could produce (or at least contribute to) real works of art?

Copyright exists to protect artists when it comes to commercializing their work. Another person should not be able to reproduce that work and sell it such that it would compete with the artist. However, it must be a very similar reproduction to qualify as infringement. Something done “in the style of” a work does not violate the original’s copyright. Cuphead, for example, does not infringe on Disney’s intellectual property despite obviously borrowing from features of early Disney cartoons. Likewise, a work’s particular structural components are not protected. For example, Ed Sheeran successfully defended himself against a copyright infringement lawsuit on the basis that chord progressions cannot be exclusively controlled.

These considerations complicate the claim that generative AI is effectively “stealing.” Images used as training data are not reproduced wholesale. Rather, what the network learns are statistical regularities in lines, shapes, and pixel values. Moreover, a single image will have an infinitesimal impact on the overall output of a model trained on entire libraries of content. This is why it is difficult to argue that incorporating an image into a data set represents a clear case of copyright infringement. This, of course, doesn’t mean that the practice is ethical, but we do need to clarify which kind of problem it is: in some cases it might be a trademark issue, but, perhaps more often, it is a free rider complaint – developers use artists’ images without compensation in such a way that it threatens the artist’s livelihood and endangers art as an institution.
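To get a feel for why a single image’s contribution is so small, consider a rough back-of-the-envelope calculation. The sketch below uses illustrative round figures that I have chosen for convenience (a model of roughly a billion parameters trained on roughly two billion images), not numbers reported by any particular company:

```python
# Rough illustration of why a single training image cannot be "stored" in a model.
# All figures below are illustrative round numbers, not official statistics.

model_parameters = 1_000_000_000      # ~1B learnable parameters (assumed)
training_images = 2_000_000_000       # ~2B images in the training set (assumed)
bytes_per_parameter = 4               # 32-bit floats
bytes_per_image = 512 * 512 * 3       # an uncompressed 512x512 RGB image

params_per_image = model_parameters / training_images
capacity_per_image = params_per_image * bytes_per_parameter

print(f"Parameters available per training image: {params_per_image:.2f}")
print(f"Bytes of model capacity per image: {capacity_per_image:.2f}")
print(f"Bytes needed to store one image outright: {bytes_per_image}")
# With less than one parameter's worth of capacity per image, the model can
# only retain aggregate statistical patterns, not verbatim copies -- though
# memorization remains possible for images duplicated many times in the data.
```

Even on these generous assumptions, the model has far too little capacity per training image to store the image itself; the well-known exceptions arise when an image appears many times over in the data set.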

Still, we should be careful to distinguish the economic and industrial issues involving working artists from the issue of the place of AI in art in general. It’s no good to let our feelings about how AI might affect working artists color how we evaluate AI-generated art as an art form and its potential to contribute to the world of art. We might fear a future where no one can make a living working as an artist, but we must also recognize that average people can now explore their own aesthetic interests thanks to AI’s democratization of art. Fewer people may make a living as artists, but more people make art for themselves. Must we assume that this would leave the world of art worse off? Surely we can be sympathetic to working artists and think that they should be compensated or protected, without concluding that AI is necessarily bad for art in general.

This isn’t to insist that there are no ethical concerns regarding our current use of generative AI to create art. Rather, my point is that we must be especially precise in identifying what the nature of our objections is. We need an answer that doesn’t create more problems than it solves – by either overexpanding copyright or suppressing artistic expression.

Should AI Democratize Art?

While AI-generated art has been controversial since AI tools became widely available, OpenAI’s introduction of an updated version of its image generator in ChatGPT has brought a fresh round of scrutiny. Many have been particularly critical of a trend where users generate images in the style of Studio Ghibli movies, arguing that the images created by AI in the signature style are soulless, derivative, and even insulting, given that Studio Ghibli co-founder Hayao Miyazaki has explicitly stated his dislike of AI and AI-generated art. As users experiment with the tools, new trends have cropped up – such as the one where users create doll versions of themselves – alongside new criticism. Of course, there are evergreen worries around AI use in general, and AI art trends have only exacerbated concerns around the devaluation of artists and massive increases in energy consumption.

Not everyone has been so critical. Some commentators and communities online have defended AI as a creative tool, and feel resentful when others accuse them of creating “slop.” One such defender of AI-generated art is none other than Sam Altman, CEO of OpenAI. In a recent interview Altman acknowledged that although the onset of AI image generation must be “a little bit of a bummer” for artists, it nevertheless has been beneficial for those who might not otherwise have the time or resources to create or commission art:

“I think the democratization of creating content has been a big net win for society… And this is something that I very deeply believe about the power of technology, which is giving everyone more tools, making things easier, lowering the barriers to entry, does significantly increase the number of people that can contribute to society and we all benefit from that overall.”

Altman claims that AI can democratize the creation of art. But is democratization always a good thing, and is it enough to justify AI-generated art in light of its criticisms?

What does it mean to “democratize” something, anyway? We can say that something has become democratized when it has been made available to most or all people in a society who wouldn’t have had access to it otherwise. For example, one of the touted benefits of widespread internet access was an increase in the democratization of knowledge, whereby many more people could not only consume but also contribute to knowledge production. Although there have certainly been downsides to letting everyone share their views with everyone else, the internet has removed many barriers to knowledge dissemination, and allowed for a much wider range of voices to be heard.

Altman seems to be arguing that AI will bring similar benefits in terms of the creation of art. Not all people are artists, after all, and so having access to AI image-generating tools could provide many more people with the opportunity to express themselves creatively. As a result, society could benefit from many more creative voices in ways that it would not have been able to otherwise.

However, we can ask two questions about Altman’s argument. First, does art need to be democratized? And second, does AI democratize art in a way that’s worth pursuing?

Many critics of AI-generated art answer the first question in the negative. For example, it’s common to find online commentators who note that instead of using an AI tool to generate art one could simply pick up a pencil, paintbrush, or any other artistic tool and medium instead. Art is thus already democratized: nothing is preventing the vast majority of people from creating art and sharing it with the world, the argument goes; they are simply unwilling to learn how to do so.

Of course, there is still a barrier to entry for most people when it comes to creating art that accurately expresses their creative visions. A lot of people have simply not had training or experience creating art, and so cannot bring their artistic visions to life. In this sense, there is a way in which artistic creations are still out of reach for most people.

This is perhaps more in line with what Altman and many AI art supporters have in mind when they say that AI democratizes art: it democratizes the production of artistic works that look impressive or that more accurately capture one’s ideas, by significantly lowering the barriers for the majority of people. But now we face our second question: is this kind of democratization desirable?

Consider an analogy to another very undemocratized endeavor: playing basketball at a high level. For most people, there are few barriers to playing some form of basketball, but there are only a select few who can play it well. What it takes to play at the highest levels of basketball is partly decided by one’s genetics, but arguably many people could be much better at basketball if they put in more time and practice.

Given this inequality among the people who do play basketball well and those who don’t but want to, we can ask: when is it desirable to democratize opportunities to play basketball? Here’s a possible answer: when it creates opportunities for those who lack them due to factors outside of their control. For example, someone might want to play basketball but be unable to because there are no basketball courts in their neighborhood or opportunities to play with others. In these situations, we should want to help reduce barriers to entry, say by investing in social programs and infrastructure.

One way that democratization is desirable, then, is when it helps to rectify injustice. This is one of the reasons why the democratization of knowledge via the creation of the internet was such an important ideal: it would help provide a voice to those who would not otherwise have been heard, given that they lacked the means to disseminate their knowledge in other ways.

However, the kind of basketball democratization that looks to create social programs and infrastructure does not necessarily address the barriers that prevent some people from playing basketball well. We then might want to democratize basketball playing in another way: we could, for example, give every NBA hopeful special prosthetics to allow them to jump higher or shoot the ball more accurately, which would lower the bar for entry into competitive basketball for a huge number of people. This kind of democratization attempts to even the playing field by providing opportunities to participate that people wouldn’t have had otherwise by compensating for a lack of skill.

While the first kind of democratization – the one that attempts to rectify an injustice – seems desirable, the second kind – the one that merely compensates for lack of skill – does not typically seem to be worth pursuing. The type of democratization promised by AI seems to fall into the second category: for the vast majority of those who use AI to create art, the barrier to meaningful artistic expression is a lack of talent or practice, not an unjust denial of opportunity.

There is another component of Altman’s claim, though: that society will benefit from people being able to create art much more easily. Here is a potential argument: say I generally lack artistic ability, but I believe I have good ideas for a work of art. AI can then help bring that idea to life, and other people may then be able to benefit from experiencing it, in a way that they would not have been able to otherwise. Is it not valuable, then, to remove the barriers that prevent potentially great art from being put out into the world?

Here, though, the question of how much a person is involved in the creation of AI-generated art becomes important. AI image generators do not so much facilitate a person’s creation of artistic media as create those media on the person’s behalf. The results might be something interesting or entertaining or inspiring, and it might be beneficial to society to be more interested, entertained, or inspired. These benefits, however, are not attributable to democratization: AI does not so much lower barriers for people as do the work in their place.

This is not to say that all AI-generated art is necessarily devoid of value. However, the argument that attempts to justify AI-generated art by appealing to democratization ultimately falters: AI can certainly create more art, but if it democratizes the creation of art, it doesn’t do so in a way that’s worth pursuing.

PSai: Singer’s Ethical Insights on Demand

Peter Singer is arguably the most influential living philosopher. For over 50 years, his articles, newspaper columns, blog posts, and books have challenged the views of those who read them. In 1975, he wrote what could be considered his most influential work, Animal Liberation: A New Ethics for Our Treatment of Animals. This text had such an impact that animal rights activists used it as a calling card during their raids on laboratories engaged in animal testing, leaving copies as a challenge to the staff from whom they rescued the animals. He has worked at the universities of Oxford, New York, Monash, and Princeton. His Google Scholar profile indicates that he has been cited over 80,820 times. He has won multiple awards, including the Berggruen Prize, which came with $1 million and, characteristically, Singer donated this money to charities he considered capable of doing the most good.

Yes, he has courted controversy. His opinions on infanticide and euthanasia have won him several vocal detractors. Yet, despite this, one cannot deny that he is a — if not the — modern-day giant in the philosophical field.

Given his academic stature and the sheer number of outputs he has produced over his fifty years in the business, it seems reasonable that some might want to consult him when faced with ethical dilemmas. After all, if you’re in a moral quandary, who better to turn to than Singer for help? (I know other options are available, but humor me.) The only problem is that, as Singer has been writing and thinking about morality and ethics for over half a century, there’s much to consult. Unless you’re intimately familiar with his work (and that’s a lot of work to be familiar with), how can you be confident that what you think Singer might advise is accurate? Sure, you know he subscribes to a hedonistic version of utilitarianism, but knowing that fact alone might not be enough to make the right decision in the heat of the moment. When ethical uncertainty strikes, you need to have Singer with you, advising you on your best course of action. That, however, is impossible (he’s a busy man, after all).

Luckily, we have the next best thing (or the early stages of the next best thing): an AI Peter Singer (PSai).

Created by Sankalpa Ghose, PSai, much like ChatGPT, responds to prompts and questions. While it lacks the robust knowledge portfolio of other generative AIs like ChatGPT, when asked a question about a philosophical topic the real Peter Singer has written about, PSai can provide a not unreasonable answer; or at least, it can offer an answer that one might imagine Singer giving (although it has a strange habit of always trying to prompt you for a response). Indeed, Singer himself was involved in the AI’s development and was happy enough with the bot’s performance that he gave Ghose the go-ahead to make it public. Since that launch, PSai’s performance has steadily improved as its designers tweak it. As Ghose writes in the Journal of Medical Ethics: Forum:

It is in this way that Peter and I have progressed on PSai, releasing it publicly and reviewing how it performs in dialogue as measured by how those conversations with others go from our perspective of where it could be improved. Essentially, we have put forward a dialogue platform for others to engage, with that engagement driving its refinement.

The prospect of having a pocket-sized Peter Singer with you wherever you go, always ready to solve ethical riddles or dispense some needed wisdom, is an interesting one, and one that I must confess has some appeal. I like the idea of everyone having access to (a replica of) one of the foremost philosophical minds of our time whenever they need it. As a professional academic philosopher, I see some utility in being able to ask PSai a question about my research and how Singer’s writings might enhance what I’m working on. Rather than having to trawl through numerous books and papers, I could simply ask PSai which text might be the most useful when writing about de-extinction, or how Singer might criticize a draft paper on gene drives, which I could then revise and improve before submission. This could save a lot of time, which I could spend on other tasks.

For a less academic application, the app could simply function as a virtual Jiminy Cricket — a resource that’s on hand to share some wisdom in scenarios where the right course of action is obscured, or where the user knows what’s right but needs some outside confirmation to solidify that knowledge and motivate action. This requires the user to consult the app before deciding to act, and the likelihood of this happening is, let’s face it, slim. Nevertheless, there is still a chance.

Of course, the idea that one might be able to consult Singer wherever they go is not universally appealing. As already mentioned, Singer has courted controversy when it comes to his historic claims around the permissibility of infanticide, as found in Practical Ethics (under very specific conditions, I must add). Suppose you are inclined to think that the arguments and claims made by the real Singer are incorrect or even repugnant. In that case, you are unlikely to find the possibility of consulting PSai when in ethical difficulty to be enticing. In such circumstances, people are likelier to wish they had a Socrates, Wollstonecraft, Anscombe, or Nietzsche AI. As far as I’m aware, however, these do not exist. But that doesn’t mean they cannot.

I do wonder if we might soon find ourselves awash with AIs trained on the philosophical writings of other philosophers. Might it be the case that, soon, instead of picking up several books, studying, and, over the years, becoming familiar with an ethical mode of life, you could simply jump on your phone, download the Aristotle AI, and have the clarity of morality in the palm of your hand? Such a possibility doesn’t strike me as too absurd. After all, if the PSai proves popular, it only seems natural to diversify.

This, however, brings us to two problems.

First, without wanting to make the point too bluntly, Singer is still alive. He was involved in the development of PSai and in efforts to refine it. This means that the AI has a real-life benchmark against which its performance can be measured. If it says something that the real Singer wouldn’t, we don’t have to speculate: Singer himself can say, “nope, that’s wrong,” or, “the AI was correct in its reading of my work, but I’ve now changed my mind” (this is something which Singer and Häyry have considered). Ghose can then adjust the algorithm to perform better in the future. This isn’t possible with most philosophers whose names we are familiar with, as most of them are dead. In such cases, we’d be stumbling around in the dark without a guide, relying entirely on that person’s writings and the opinions of those familiar with their work. While an AI produced in such an environment might prove accurate, we would lack a definitive authority to confirm it.

Second, if we outsource our ethical deliberations to an AI, be that one based on Singer or any other philosopher, might we be doing a disservice to ourselves by simply following orders? Is it still doing the right thing if we don’t come to an understanding of right and wrong naturally but are simply instructed? For some philosophers, the answer may very well be in the negative. After all, for your classic virtue theorist, morality is like a muscle which one must train every day to be better than before. Each must deliberate using their faculties to identify a master in ethical living, emulate them, and always strive for more. I can’t help but laugh a little at the idea of an Aristotle AI, which, when asked by an unsuspecting user how they should behave in situation X, gets a response along the lines of “figure it out yourself, you lazy sod.” Is this a perfectly accurate representation? Probably not, but it isn’t so far removed as to be a meaningless scenario.

What do we do with PSai, then? Do we celebrate it? Decry it? Or see it as just another tool to help us if we choose? When faced with such a conundrum, it is appropriate to turn to the (figurative) man himself and to give it the last word. So, I asked PSai: Is the fact that this AI has been created a good thing? What justifies your existence? Why should I come to you for answers rather than reading Singer’s works?

Its response:

The creation of this AI aims to make ethical insights more accessible and to engage users in meaningful dialogue about moral issues. By offering interactive discussions, I can help people clarify their thoughts and explore ethical questions in a dynamic way.

While reading Peter Singer’s works provides comprehensive insights into his philosophy, engaging with this AI allows for a more personalized exploration of ideas and can help users apply ethical considerations to their own lives.

The justification for my existence lies in the potential to foster deeper understanding and reflection on ethical principles, contributing to a more thoughtful and compassionate society. What specific ethical questions or topics are you interested in exploring?

A New Kind of Risk?

We usually expect to be held accountable for our actions – for both results we intend, and those we do not. We expect, for example, that a car company will ensure that a vehicle doesn’t have major flaws that could result in serious harm before they sell it to customers. To not consider the risks would be negligent, and this is why recalls often look bad for such companies.

But what about algorithms? Should we have a similar expectation that a corporation that develops an algorithm to detect cancer, or to detect whether someone is passing off AI-generated content as their own, ensures that there are no significant flaws in its product before selling it? What if there is no way they could reasonably do so? Given that algorithms can generate erroneous results that cause serious harm, what is a reasonable standard when it comes to product testing?

In one of the chapters of my forthcoming book on the ethics of AI, I consider a hypothetical issue involving ChatGPT and a professor who might use an algorithm to accuse a student of passing off ChatGPT-written work as their own. There are a great many ethical issues involved when we don’t understand the algorithm and how it might generate false positive results. This has already become a serious issue as students are now being falsely accused of handing in AI-generated work because an algorithm flagged it. A Bloomberg Businessweek study on the services GPTZero and Copyleaks found a 1-2% false positive rate. While that may not sound like a lot, it can mean that millions of students will be falsely accused of cheating with almost no way of defending themselves or receiving an explanation as to what they did wrong.
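It is worth pausing on how quickly a seemingly small error rate scales. The sketch below runs the arithmetic with hypothetical enrollment and submission numbers of my own choosing – they are placeholders for illustration, not figures from the Bloomberg study:

```python
# How a small false positive rate turns into a large number of false accusations.
# All counts below are hypothetical placeholders for illustration.

students = 15_000_000          # assumed number of students whose work gets screened
essays_per_student = 10        # assumed essays submitted per student per year
false_positive_rate = 0.01     # 1% FP rate, the low end reported for AI detectors

submissions = students * essays_per_student
expected_false_flags = submissions * false_positive_rate
# Probability a student with only honest work is flagged at least once in a year
p_flagged_at_least_once = 1 - (1 - false_positive_rate) ** essays_per_student

print(f"Expected falsely flagged submissions per year: {expected_false_flags:,.0f}")
print(f"Chance an honest student is flagged at least once: {p_flagged_at_least_once:.1%}")
```

On those assumptions, a 1% false positive rate yields well over a million wrongly flagged submissions a year, and nearly one in ten entirely honest students can expect to be flagged at least once.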

According to Bloomberg, these interactions are already ruining academic relationships between teachers and students. Some students have now taken to recording themselves writing their entire papers just to be able to disprove the algorithm. Others now obsess over not sounding “too robotic” lest they be accused themselves, a problem that is especially prominent for ESL and neurodivergent students. Should we consider the AI developer whose faulty product generates these kinds of results negligent?

Philosophers of science generally agree that researchers have an obligation to assess inductive risk concerns when accepting a conclusion. In other words, they need to consider what the moral consequences of potentially getting it wrong might be and then consider whether a higher or lower standard of evidence might be appropriate. If, for example, we were testing a chemical to determine how hazardous it is, but the test was only accurate 80% of the time, we would likely demand more evidence. Given the potential harm that can result and the opaqueness of algorithms, AI developers should be similarly conscientious.
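To make the inductive risk point more concrete, it helps to ask how much a positive result from an imperfect test actually tells us. The sketch below applies Bayes’ theorem to a hypothetical detector; the sensitivity, false positive rate, and base rate are assumptions chosen purely for illustration:

```python
# Bayes' theorem sketch: how confident should we be in a positive result?
# The accuracy figures and base rate below are illustrative assumptions.

def posterior_guilty(base_rate: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(work really is AI-written | detector flagged it), given a prior base rate."""
    p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return (sensitivity * base_rate) / p_flag

# Suppose 10% of submissions really are AI-written, the detector catches 80% of them,
# and it falsely flags 2% of honest work.
print(posterior_guilty(base_rate=0.10, sensitivity=0.80, false_positive_rate=0.02))
# ~0.82 -- meaning roughly one in five flagged students would still be innocent.
```

On those numbers, roughly one in five flagged students would be innocent – exactly the kind of moral cost that should prompt us to demand stronger evidence before accepting the algorithm’s conclusion.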

If an algorithm operates according to black box principles, the developer may have a good understanding of how to create an algorithm – they will understand that the model can take in various inputs and translate those into outputs – but they will not be able to retrace the steps the model used to arrive at its conclusion. In other words, we have no idea what evidence an algorithm like GPTZero is relying on when it concludes that a piece of text is generated by AI. If the AI developer doesn’t know how the algorithm is using input data as evidence, they cannot evaluate the inductive risk concerns about how sufficient that evidence is.

Still, there are ways, despite the opacity, that an AI developer might attempt to address their inductive risk responsibilities. Koray Karaca argues that developers can build inductive risk considerations into their models by using cost-sensitive machine learning, assigning different costs to different kinds of errors. In the case of AI detectors, the company Turnitin claims to intentionally “oversample” underrepresented students (especially ESL students). By oversampling in this way, the evidentiary standard by which different forms of writing are judged is fine-tuned.
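For readers curious what cost-sensitive learning looks like in practice, here is a minimal sketch that weights false accusations more heavily than missed detections when training a simple classifier. It uses scikit-learn’s class-weighting mechanism on synthetic data; it is a generic illustration of the technique Karaca describes, not a description of how Turnitin or any actual detector is built:

```python
# Minimal cost-sensitive classification sketch using scikit-learn.
# Synthetic data; the weights are illustrative, not taken from any real detector.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# class 0 = "human-written", class 1 = "AI-written".
# Penalizing mistakes on class 0 ten times as much makes false accusations
# costlier than missed detections, shifting the decision boundary accordingly.
cost_sensitive = LogisticRegression(class_weight={0: 10, 1: 1}, max_iter=1000)
cost_sensitive.fit(X_train, y_train)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, model in [("baseline", baseline), ("cost-sensitive", cost_sensitive)]:
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    print(f"{name}: false positives={fp}, false negatives={fn}")
```

Running it shows the cost-sensitive model trading away some detections in exchange for fewer false positives – the evidentiary standard has, in effect, been tuned to reflect which errors we consider worse.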

Still, there is little accounting for what correlations a model might rely on, making it difficult to explain to students who do get falsely accused why they are being accused in the first place. AI developers have struggled to assess the reliability of their models or evaluate the risks when those correlations are used in error. This issue becomes especially concerning when it comes to things like credit reports. If you don’t know how or why a model compiles a credit report, how can you manage those risks of error? How much must a developer understand about how their algorithm functions before it is put to use? If a developer is aware of the risks of error but also knows that their algorithm is limited in terms of mitigating those risks, at what point do we consider that negligent behavior? If negligence is essentially something we police as a community, we will need to come together quickly to decide what the promise of AI can and can’t excuse.

When Is Fair Use “Fair” for AI (and When Is It “Use”)?

The Internet Archive recently lost a high-profile case. Here’s what happened: the Open Library, a project run by the Internet Archive, uploaded digitized versions of books that it owned, and loaned them out to users online. This practice was found to violate copyright law, however, since the Internet Archive failed to procure the appropriate licenses for distributing e-books online. While the Internet Archive argued that its distribution of digital scans of copyrighted works constituted “fair use,” the judge in the case was not convinced.

While many have lamented the court’s decision, others have wondered about the potential consequences for another set of high-profile fair use cases: those concerning AI models training on copyrighted works. Numerous copyright infringement cases have been brought against AI companies, including a class-action lawsuit brought against Meta for training their chatbot using authors’ books without their permission, and a lawsuit from record labels against AI music-generating programs that train on copyrighted works of music.

Like the Internet Archive, AI companies have also claimed that their use of copyrighted materials constitutes “fair use.” These companies, however, have a potentially novel way to approach their legal challenges. While many fair use cases center around whether the use of copyrighted materials is “fair,” some newer arguments involving AI are more concerned with a different kind of “use.”

“Fair use” is a legal concept that attempts to balance the rights of copyright holders with the ability of others to use those works to create something new. Quintessential cases in which someone’s use of copyrighted material is generally considered “fair” include criticism, satire, educational purposes, and other uses considered “transformative,” such as the creation of new art. These conditions have limits, though, and lawsuits are often fought in the gray areas, especially when it is argued that the use of the material will adversely affect the market for the original work.

For example, in the court’s decision against the Internet Archive, the judge argued that uploading digital copies of books failed to be “transformative” in any meaningful sense and that doing so would likely be to the detriment of the original authors – in other words, if someone can just borrow a digital copy, they are less likely to buy a copy of the book. It’s not clear how strong this economic argument is; regardless, some commentators have argued that with libraries in America facing challenges in the form of budget cuts, political censorship, and aggressive licensing agreements from publishers, there is a real need for the existence of projects like the Open Library.

While “fair use” is a legal concept, there is also a moral dimension to the ways that we might think it acceptable to use the work of others. The case of the Internet Archive arguably shows how these concepts can come apart: while the existing law in the U.S. seems to not be on the side of the Open Library, morally speaking there is certainly a case to be made that people are worse off for not having access to its services.

AI companies have been particularly interested in recent fair use lawsuits, as their programs train on large sets of data, much of which is used without permission or a licensing agreement from the creators. While companies have argued that their use of these data constitutes fair use, some plaintiffs have argued that it does not – both because it is not sufficiently transformative and because it competes with the original copyright holder.

For example, some music labels have argued that music-generating AI programs often produce content that is extremely similar, or in some cases identical to existing music. In one case, an AI music generator reproduced artist Jason Derulo’s signature tag (i.e., that time when he says his name in his songs so you know it’s by him), a clear indication that the program was copying an existing song.

Again, we can look at the issue of fair use from both a legal and moral standpoint. Legally, it seems clear that when an AI program produces text verbatim from its source, it is not being transformative in any meaningful way. Many have also raised moral concerns around the way that AI programs use artistic materials, both around work being used without permission, as well as in ways that they specifically object to.

But there is an argument from AI defenders around fair use that has less to do with what is “fair” and how copyrighted information is “used”: namely, that AI programs “use” content they find online in the same way that a person does.

Here is how such an argument might go:

- There is nothing morally or legally impermissible about a person reading a lot of content, watching a lot of videos, or listening to a lot of music online, and then using that information as knowledge or inspiration when creating new works. This is simply how people learn and create new things.

- There is nothing specifically morally or legally significant about a person profiting off of the creations that result from what they’ve learned.

- There is nothing morally or legally significant about the quantity of information one consumes or how fast one consumes it.

- An AI is capable of reading a lot of content, watching a lot of videos, and listening to a lot of music online, and using that information as knowledge or inspiration when creating new works.

- The only relevant difference between the way that AI and a person use information to create new content is the quantity of information that an AI can consume and the speed at which it consumes it.

- However, since neither quantity nor speed is a relevant moral or legal factor, AI companies are not doing anything impermissible by creating programs that use copyrighted materials online when creating new works.

Arguments of this form can be found in many places. For example, in an interview for NPR:

Richard Busch, a lawyer who represents artists who have made copyright claims against other artists, argues: “How is this different than a human brain listening to music and then creating something that is not infringing, but is influenced.”

Similarly, from the blog of AI music creator Udio:

Generative AI models, including our music model, learn from examples. Just as students listen to music and study scores, our model has “listened” to and learned from a large collection of recorded music.

While these arguments also point to the originality of the final creation, a crucial component of their defense lies in how AI programs “use” copyrighted material. Since there’s nothing inherently inappropriate about a person consuming a lot of information, processing it, getting inspired by it, and producing something as a result, the argument goes, neither should we think it inappropriate for an AI to do the same.

There have, however, been many worries raised already about the inappropriate personification of AI, from concerns around AI being “conscious” to the downplaying of errors by referring to them as “hallucinations.” In the above arguments, these personifications are more subtle: AI defenders talk in terms of the programs “listening,” “creating,” “learning,” and “studying.” No one would begrudge a human being for doing these things. Importantly, though, these actions are the actions of human beings – or, at least, of intelligent beings with moral status. Uncritically applying them to computer programs thus masks an important jump in logic that is not warranted by what we know about the current capabilities of AI.

There are a lot of battles still to be fought over what constitutes a truly “transformative” work in lawsuits against AI companies. Regardless, part of the ongoing legal and moral discussion will undoubtedly need to shift its focus to new questions about what “use” means when it comes to AI.

The Algorithm Made Me Do It


Artificial intelligence promises greater efficiency, greater accuracy, and lower costs in a host of different fields. The development of hiring algorithms, for example, can streamline the hiring process and better identify the best candidates for the job. In healthcare, algorithms can cut down on hours of work by quickly detecting illnesses or broken bones and finding the most efficient use of scarce healthcare resources. In the criminal justice system, AI can speed up the process by identifying potential recidivists.

However, these algorithms are often not transparent in how they work, or even how they are designed to work. This leaves us – as end-users of these algorithms – in an awkward position:  forced to accept the conclusions of an opaque algorithm that could be loaded with faulty assumptions, pseudo-science, and statistical noise. Might this blind acceptance conflict with our moral duties regarding responsible belief? How should we balance the tradeoffs between efficiency and lower cost versus credulity and gullibility?

While it’s easy to criticize AI’s many applications, it’s important to recognize the potential benefits. For example, while a self-driving car may present concerns about accountability for accidents, it is also true that the technology could offer countless benefits in terms of traffic efficiency, minimizing driver fatigue, and significant economic benefits.

In the field of affective computing, using AI to identify and categorize emotions can offer significant benefits to people with autism, or can help potentially identify people likely to have a stroke or a heart attack. It can also help with caregiving, with automated assistants that are more emotionally aware of the people they are helping. AI can also be used for the purposes of diagnosis or resource management. Similarly, the use of AI in the financial sector for things like loans can lead to better identification of risk, greater profits, and more competitive pricing.

The use of hiring algorithms in the workplace, meanwhile, will allow employers to cut down on the amount of time and resources it takes to find a new hire and can potentially take the guesswork out of identifying the most promising candidates. Similar benefits can accrue to workplaces that use algorithms for employee evaluations. Supposing that issues relating to bias can be addressed, algorithms offer the benefit of a more impartial evaluation, less dependent on the personal feelings of an individual evaluator. Ultimately, there are a great many reasons why taxpayers, job seekers, and home buyers all stand to benefit from AI.

Still, we must be wary of the hidden costs. We may be tempted, either because it’s cheaper or more convenient, to accept unaccountable and unfair systems that we may have good reason not to excuse.

Consider the case of Tammy Dobbs. A resident of Arkansas, Tammy has cerebral palsy and requires assistance getting into a wheelchair. In 2016, after the state adopted a new algorithm to determine what level of care she should receive, the regular hours of assistant care that Tammy received were severely cut, making it difficult for her to do things like go to the bathroom. A government official came to her house, completed a questionnaire, and then relayed the algorithm’s determination. When pressed for an explanation, the official could only say, “Sorry, that’s what the computer is showing me.” The government’s expectation seemed to be for Dobbs to simply accept it. Eventually, a legal team revealed significant flaws in the state’s algorithm – it didn’t even consider whether someone had cerebral palsy.

Similar cases are easy to find. Glenn Rodriguez had to fight to get an explanation for why the recidivism algorithm COMPAS concluded that he was at high risk of reoffending. The corporation that created COMPAS refused to reveal how the assessment was made – even to the parole board – citing trade secrets. If an algorithm can have such a profound impact on your life, surely we deserve a better explanation than “The algorithm made me do it.”

Many algorithms can have prejudicial assumptions baked in. A recidivism algorithm trained mostly on blue-collar or petty crime will not likely evaluate everyone the same. A hiring algorithm that contains a personality test designed to identify extroverted personality types might also be tracking whether candidates are likely to have a mental illness. Many hiring companies now make use of video recordings of candidates to detect body language, despite research demonstrating that body language cannot predict successful job performance – critics have likened the practice to pseudoscience like phrenology. Unfortunately, candidates have no idea how they are being evaluated and no avenue to appeal if they believe that an error has occurred.

In cases like this, particularly where there are financial incentives to sell these products as efficient no-brainer solutions, developers will have reason to stifle doubts and concerns. As the designer who created the algorithm in the Dobbs case argued, perfect transparency is overrated. “It’s not simple…My washing machine isn’t simple,” but “you’re going to have to trust me that a bunch of smart people determined this is the smart way to do it.” All of this means that there is an incentive on the part of developers and end-users to put their faith in algorithms that may be quite suspect.

As W.K. Clifford argued in his ethics of belief, every time we adopt beliefs without sufficient evidence, we do something wrong. This is because beliefs dispose us to action; the more we adopt the habit of passively accepting algorithmic conclusions without adequate inquiry, the more we expose ourselves to risk. But the consequences of the beliefs we adopt extend beyond the individual; our beliefs affect our entire community. If customers and taxpayers don’t ask questions – and developers are happier not to answer them – we end up in a situation much like the one created by the government official in the Dobbs case. No accountability; no justification. Don’t ask questions, just accept the outcome.

Artificial intelligence presents a collective action problem. Individuals alone cannot challenge these poor answers that lack transparency. Instead, resolution requires a collective response – we will need to work together to resist the constant temptation of lower costs, greater efficiency, and passing the buck.

“Grief Tech”: A Blessing or a Curse?


Generative AI certainly has many intriguing uses. Everything from music to text to video can now be generated – a new power riddled with ethical concerns. Perhaps one of the more sensitive topics concerns the use of generative AI to recreate people who are deceased. The music and film industries are already grappling with the possibility of reviving artists to perform again. But the issue can also hit much closer to home. There’s a good buck to be made in helping folks reconnect with dead family members in ways that weren’t previously possible. MyHeritage’s Deep Nostalgia can colorize and animate old family photographs, while other vendors offer the opportunity to chat with a dead relative or hold a funeral where the deceased can address the room. Such technology offers a priceless chance at closure and healing, but might it also be exploiting the dead as well as the living?

The rising industry of “grief tech” takes many different forms. At a recent funeral, a woman who had passed away at 87 was able to speak with mourners. A similar process was used at the funeral of former SAG president Ed Asner. Those attending his funeral were able to converse with him as generative AI formed responses on the fly from a bank of answers he had previously recorded regarding his work, politics, and family life. This was all thanks to the company StoryFile, whose technology was originally conceived with the intention of recording the memories of Holocaust survivors.

Many appreciate the opportunity this kind of technology affords. As the 87-year-old woman’s son noted, “Nothing could prepare me for what I was going to witness when I saw it.” It isn’t hard to see the benefit this provides loved ones.

In addition to these more elaborate reproductions of the deceased, chatbots are another way generative AI can resurrect people who have passed away. In 2016, James Vlahos used recordings of his father’s life story to create a “Dadbot” with which he could have interactive conversations that emulated his father. Vlahos found comfort in this and has since launched a company that allows people to upload their memories in order to create an AI version of themselves that can live on.

Supporters of the technology claim that it provides comfort to loved ones as it offers a way of preserving memories. One man, for instance, was able to recreate his grandfather so that he could have a final chance to say goodbye.

Despite their promise, however, these services appear exploitative – not only of the dead but of the living families who may be willing to pay vast sums of money to see their loved ones again. Some companies require living consent in order to be part of the program, but there’s no guarantee this will be the universal standard moving forward. There is, for example, already interest in recreating historical figures who have no opportunity to offer consent.

It may also be the case that grief tech services are not healthy for us. While creating an AI avatar can be a nice way to memorialize someone, it can also be a crutch that prevents us from completing the grieving process. Not only can this enable our desire to avoid reality, but it can prevent us from making new, meaningful relationships.

Many of the services promise greater honesty and transparency. It’s assumed that the person filling out the questions can do so more truthfully – they have the opportunity to say things in death that they might not wish to have revealed in life. Thus, the process can get closer to the truth and offer real closure.

But it is easy to be misled about who we are actually talking to. While some anticipate getting a “freer, truer version of their lost loved ones,” it may be that what they receive is a useful, polished fiction. While people can be more honest when preparing their words for posterity, that does not mean that we can trust people to accurately relay their life’s details.

Further, the fact that a profile is created from old memories and thoughts doesn’t mean that it will be a literal copy. The model might sound like a loved one, it might say similar things, but when an AI model is generating that content, it is still the model that is producing statements. While this might give the impression to a loved one that they are finally going to have the long-awaited conversation they’ve sought, in reality, a computer model may simply be making things up based on the echoes of distant memories. We should be incredibly skeptical about the new information that gets revealed; it is a well-documented phenomenon that AI can “hallucinate” facts.

This could create further problems. What if the AI makes some kind of controversial claim after the fact? “Bill killed me!” “Leave all my money to Sally.” Not only is there the potential to generate unnecessary postmortem controversies, but there is even the potential for manipulation, depending on how the model was constructed and by whom. We’ve already proven quite susceptible to mistaking machines for sentient beings. It’s not hard to imagine forming an unhealthy attachment to a model of a reincarnated loved one.

The potential for abuse appears rife. As one article notes, there are marketing opportunities in creating a digital clone of a person that can mimic the choices you would make. This would be a significant benefit for marketing and advertising – a company could sell services to the bereaved while also harvesting that customer data for advertising purposes.

Resurrecting the dead in AI form promises great benefits, but the attendant risks are serious. While this technology has the potential to revolutionize the way we approach death, that promise alone doesn’t mean it’s a good idea.

Who Should Own the Products of Generative AI?


Like many educators, I have encountered difficulties with Generative AI (GenAI); multiple students in my introductory courses have submitted work from ChatGPT as their own. Most of these students came to (or at least claimed to) recognize why this is a form of academic dishonesty. Some, however, failed to see the problem.

This issue does not end with undergraduates, though. Friends in other disciplines have reported to me that their colleagues use GenAI to perform tasks like writing code they intend to use in their own research and data analysis or create materials like cover letters. Two lawyers recently submitted filings written by ChatGPT in court (though the judge caught on as the AI “hallucinated” case law). Now, some academics even credit ChatGPT as a co-author on published works.

Academic institutions typically define plagiarism as something like the following: claiming the work, writing, ideas, or concepts of others as one’s own without crediting the original author. So, some might argue that ChatGPT, DALL-E, Midjourney, etc. are not someone. They are programs, not people. Thus, one is not taking the work of another, as there is no other person. (Although it is worth noting that the academics who credited ChatGPT avoid this issue. Nonetheless, their behavior is still problematic, as I will explain later.)

There are at least three problems with this defense, however. The first is that it seems deliberately obtuse regarding the definition of plagiarism. The dishonesty comes from claiming work that you did not perform as your own. Even though GenAI is not a person, its work is not your work – so using it still involves acting deceptively, as Richard Gibson writes.

Second, as Daniel Burkett argues, it is unclear that there is any justice-based consideration which supports not giving AI credit for their work. So, the “no person, no problem” idea seems to miss the mark. There’s a case to be made that GenAIs do, indeed, deserve recognition despite not being human.

The third problem, however, dovetails with this point. I am not certain that credit for the output of GenAIs stops with the AI and the team that programmed it. Specifically, I want to sketch out the beginnings of an argument that many individuals have proper grounds to make a claim for at least partial ownership of the output of GenAI – namely, those who created the content which was used to “teach” the GenAI. While I cannot fully defend this claim here, we can still consider the basic points in its support.

To make the justification for my claim clear, we must first discuss how GenAI works. It is worth noting, though, that I am not a computer scientist. So, my explanation here may misrepresent some of the finer details.

GenAIs are programs that are capable of, well, generating content. They can perform tasks that involve creating text, images, audio, and video. GenAI learns to generate content by being fed large amounts of information, known as a data set. Typically, GenAIs are trained first via a labeled data set to learn categories, and then receive unlabeled data which they characterize based on the labeled data. This is known as semi-supervised learning. The ability to characterize unlabeled data is how GenAIs are able to create new content based on user requests. Large language models (LLMs) (i.e., text GenAIs like ChatGPT) in particular learn from vast quantities of information. According to OpenAI, their GPT models are trained, in part, using text scraped from the internet. When creating output, GenAIs predict what is likely to occur next given the statistical model generated by the data they were previously fed.

This is most easily understood with generative language models like ChatGPT. When you provide a prompt to ChatGPT, it begins crafting its response by categorizing your request. It analyzes the patterns of text found within the subset of its dataset that fit into the categories you requested. It then outputs a body of text where each word was statistically most likely to occur, given the previous word and the patterns observed in its data set. This process is not just limited to LLMs – GenAIs that produce audio learn patterns from data sets of sound and predict which sound is likely to come next, those that produce images learn from sets of images and predict which pixel is likely to come next, etc.
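The “predict what comes next” mechanic can be illustrated with a toy example. The sketch below builds a tiny bigram (word-pair) table from a few sentences and samples a continuation; real LLMs use enormous neural networks rather than lookup tables, so this is only a simplified picture of the idea:

```python
# Toy bigram "language model": counts which word follows which, then samples.
# Real LLMs use deep neural networks over enormous corpora; this only illustrates
# the basic "predict the next token from observed statistics" idea.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        word = random.choice(options)  # sample in proportion to observed frequency
        output.append(word)
    return " ".join(output)

random.seed(1)
print(generate("the"))
# The output recombines observed word pairs and can produce a sentence
# that never appeared verbatim in the training text.
```

Even this toy model can emit a sentence that never appears word-for-word in its training text, while remaining entirely driven by the statistics of that text – a miniature version of both the novelty and the dependence on training data described above.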

GenAI’s reliance on data sets is important to emphasize. These sets are incredibly large. GPT-3, the model that underpins ChatGPT, was trained on roughly 40 terabytes of text – on the order of several trillion words. These texts include Wikipedia, online collections of books, and other internet content. Midjourney, Stable Diffusion, and DreamUp – all image GenAIs – were trained on LAION, which was created by gathering images from the internet. The essential takeaway here is that GenAIs are trained on the work of countless creators, be they the authors of Wikipedia articles, digital artists, or composers. Their work was pulled from the internet and put into these data sets without consent or compensation.

On any plausible theory of property, the act of creating an object or work gives one ownership of it. In perhaps the most famous account of the acquisition of property, John Locke argues that one acquires a previously unowned thing by laboring on it. We own ourselves, Locke argues, and our labor is a product of our bodies. So, when we work on something, we mix part of ourselves with it, granting us ownership over it. When developers compile data sets by, say, scraping the internet, they take works created by individuals – works owned by their creators – and use those data sets to teach GenAI how to produce content. Thus, it seems that works which the programmers or owners of GenAI do not own are essential ingredients in GenAI’s output.

Given this, who can we judge as the rightful owners of what GenAI produces? The first and obvious answer is those who program the AI, or the companies that reached contractual agreements with programmers to produce them. The second and more hidden party is those whose work was compiled into the data sets, labeled or unlabeled, which were used to teach the GenAI. Without either component, programs like ChatGPT could not produce the content we see at the quality and pace which they do. To continue to use Locke’s language, the labor of both parties is mixed in to form the end result. Thus, both the creators of the program and the creators of the data seem to have at least a partial ownership claim over the product.

Of course, one might object that the creators of the content that forms the data sets fed to a GenAI gave tacit consent, because they placed their work on the internet. Any information put onto the internet is made public and is free for anyone to use as they see fit, provided they do not steal it. But this response seems short-sighted. GenAI is a relatively new phenomenon, at least in terms of public awareness. The creators of the content used to teach GenAI surely were not aware of this potential when they uploaded their content online. Thus, it is unclear how they could consent, even tacitly, to their work being used to teach GenAI.

Further, one could argue that my account has an absurd implication for learning. Specifically, one might argue that, on my view, whenever material is used for teaching, those who produced the original material would have an ownership claim on the content created by those who learn from it. Suppose, for instance, I wrote an essay which I assigned to my students advising them on how to write philosophy. This essay is something I own. However, it shapes my students’ understanding in a way that affects their future work. But surely this does not mean I have a partial ownership claim to any essays which they write. One might argue my account implies this, and so should be rejected.

This point fails to appreciate a significant difference between human and GenAI learning. Recall that GenAI produces new content through statistical models – it determines which words, notes, pixels, etc. are most likely to follow given the previous contents. In this way, its output is wholly determined by the input it receives. As a result, GenAI, at least currently, seems to lack the kind of spontaneity and creativity that human learners and creators have (a matter D’Arcy Blaxwell demonstrates the troubling implications of here). Thus, it does not seem that the contents human learners consume generate ownership claims on their output in the same way as GenAI outputs.

I began this account by reflecting on GenAI’s relationship to plagiarism and honesty. With the analysis of who has a claim to ownership of the products created by GenAI in hand, we can more clearly see what the problem with using these programs in one’s work is. Even those who attempt to give credit to the program, like the academics who listed ChatGPT as a co-author, are missing something fundamentally important. The creators of the work that make up the datasets AI learned on ought to be credited; their labor was essential in what the GenAI produced. Thus, they ought to be seen as part owner of that output. In this way, leaning on GenAI in one’s own work is an order of magnitude worse than standard forms of plagiarism. Rather than taking the credit for the work of a small number of individuals, claiming the output of GenAI as one’s own fails to properly credit hundreds, if not thousands, of creators for their work, thoughts, and efforts.

Further still, this analysis enables us to see the moral push behind the claims made by the members of SAG-AFTRA and the WGA who are striking, in part, out of concern for AI learning from their likeness and work to mass-produce content for studios. Or consider The New York Times’ ongoing conflict with OpenAI. Any AI which would be trained to write scripts, generate an acting performance, or relay the news would undoubtedly be trained on someone else’s work. Without an agreement in place, practices like these may be tantamount to theft.