
Why My Students Shouldn’t Use AI

As the new school year approaches, educators across the country are once more redesigning their classes in light of the brave new world of generative AI. Many teachers are embracing the technology – encouraging their students to make use of this powerful new tool. Some are even going so far as to use AI to assist in their course design. Others, like myself, are banning any use of generative AI in their classes. But why?

Perhaps I’m a luddite. Perhaps I’m no better than Socrates fearing that writing would be the death knell for education. Nevertheless, I think there are (at least) three strong moral arguments against students using AI in a philosophy class – and perhaps education more generally.

Argument 1: AI Harms Creators

Generative AIs like ChatGPT operate on Large Language Models. Put simply, they’re trained on vast quantities of data – usually scraped from what is freely available on the internet. The problem is that this data usually belongs to other people. More problematically, generative AIs make no effort to credit the data that shape their outputs. So, when I use ChatGPT to generate a fluid structure for my paper, or a killer opening paragraph for my opinion piece, there’s no way I can properly credit the sources of those generated outputs. In doing so, I necessarily pass off someone else’s ideas as my own – the very definition of plagiarism.

As our own Tim Sommers notes, a common counter to this argument is that the operation of an LLM isn’t all that different from how our own minds already work: absorbing vast amounts of data, and using that data to produce novel creations. Anyone who’s ever created anything will know the fear that one of your darling creations – a plot point, a song lyric, or a visual design element – is merely parroting another creation once seen, but long forgotten.

Like Sommers, I admit that I lack the expertise to discern how different the operation of LLMs is from how our own minds function. But I think that there is at least one morally important point of difference: While our own creations might be subconsciously informed by data we’ve absorbed, there is (excepting cases of intentional plagiarism) no intention on our part to consciously hold out the work of another as our own. The same isn’t true when we use ChatGPT. We know how LLMs operate, and we know that any product of a generative AI has made vast (unattributed) use of the works of others. This knowledge is, I think, enough to make our actions morally problematic.

Argument 2: AI Harms the Environment

But AI doesn’t just harm creators – it’s also devastating for the environment. Generative AI requires huge amounts of processing power, and that power requires a lot of energy. While precise quantifications are hard to come by, ChatGPT’s power usage is estimated to be roughly equivalent to that of 33,000 standard homes. And it’s not just electricity, either. Generative AIs need vast amounts of water to cool their processors – a concerning prospect, given that we are at imminent risk of a global water crisis.

We are in the throes of a global climate catastrophe – a catastrophe that, according to some estimates, might become irreversible in less than four years if we don’t make drastic changes to our way of living. Among those necessary changes are massive reductions in our energy consumption. Given this, an explosion in the popularity of generative AI is the last thing we need.

Of course, the fact that there is an environmental argument against AI usage doesn’t provide us with an all-things-considered reason to stop. There are many harmful practices that we might need to continue in order to ensure human safety and flourishing. But using AI just doesn’t seem to be among them. Much of our AI usage is entirely frivolous – with 38% of people using AI to plan travel itineraries, and another 25% using it to draft social media posts. And when it comes to non-frivolous functions – like using it to craft an email (as 31% of people have) or prepare for a job interview (as 30% of people have) – there are far less environmentally harmful ways of doing the very same thing. Having a question answered by AI can produce almost fifty times the carbon emissions of using a simpler system – like a search engine – to resolve the same query.

Argument 3: AI Harms the User

Even if we’re not motivated to care about creators or the environment, one further fact remains true: AI harms the user. I begin each of my classes by describing philosophy as the discipline that encourages us to think carefully about the reasoning behind our beliefs. This is a challenging – and sometimes terrifying – endeavour, since the discovery of bad reasoning can often force us to abandon some of our most dearly-held beliefs. The subjects I teach require my students to consider some hard questions: Does the climate crisis mean we should have fewer children? Should we permit physician-assisted suicide? Would a Federal ban on TikTok violate our right to freedom of expression? I believe that it’s vitally important that each of us formulate our own answers to such questions. If we farm this out to an algorithm, we’re sort of missing the whole point of philosophy (and education more generally). As Marta Nunes da Costa puts it:

“being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and in doing so, build our moral identities – who we are and what we value.”

As I’ve argued before, failing to think critically not only risks making us bad thinkers, but also bad humans. I believe that fact – coupled with the clear harms to creators and the environment – is more than sufficient to explain why my students shouldn’t use AI.

More Than a Message: Finding Meaning in Sympathy Cards

I suspect having cancer must suck. Indeed, when I think about cancer, my mind instantly runs to terms like “malignant,” “tumor,” “fight,” and “chemotherapy.” All things which one might generously call undesirable. A phrase that doesn’t jump out at me is “funny.” However, Mark Steel’s The Leopard in my House: One Man’s Adventures in Cancerland is an undeniably funny book.

In it, the author, broadcaster, stand-up comedian, and newspaper columnist recounts the extraordinary year in which he was diagnosed and treated for throat cancer. The book is peppered with vivid, often graphic descriptions of what it’s like when your neck is repeatedly blasted with radiation (expect a lot of talk about mucus). But it’s also filled with warm, witty reflections on his relationships, his fellow cancer patients, his brushes with mortality, and the miraculous feeling of doing something as simple as swallowing water again.

Undoubtedly, all of the above is ripe for philosophical analysis, and this fact isn’t lost on Mark himself. In one chapter of the book, after receiving a recommendation from a fellow cancer sufferer, Mark considers the disease, the effect it’s having on his life, and how one can wrestle with forces beyond one’s control, via Marcus Aurelius’ Meditations. Taking a stoic approach, he considers the futility of asking “Why me?” or looking for fairness in something as arbitrary as a cancer diagnosis. It’s a grounded, practical perspective — and, I think, a wise one.

Yet, what I want to do here is look past all the fertile philosophical material that Mark provides us in his book and zero in on something very specific: sympathy cards.

In Chapter 11, Mark talks about how awkward it can be talking to someone who has cancer. How unsure we often feel about what to say. That uncertainty, he argues, is natural, but it shouldn’t stop us from saying anything at all.

Of course, this transfers to non-face-to-face communications. I’m sure you too have faced the daunting task of writing a meaningful message in a card. It might be something as innocuous as a birthday or leaving card, or as substantial as a commiserations or sympathy card. Yet, despite having some vague sense of what you want to say, you don’t know the correct words. This makes the whole thing slightly stressful, and the temptation is there to simply not write in the card at all. To not say anything. This, Mark says, is a mistake:

… here’s my tip if you know someone who lands in a serious medical condition. If you’re not sure what to say to someone, don’t worry because anything you say is better than nothing. I can’t remember the exact words of many of the messages I got, I can only remember that they came. Every email, text or call is a reminder that you matter to someone and that is critical in getting you past the difficult moments. Don’t worry about saying the wrong thing. All the person remembers is that you got in touch.

This, I think, gets at something we often forget when it comes to comforting those who are ill. You’re not expected to craft the perfect, poetic paragraph that lifts someone out of despair. The point of a message, whether text, email, or card, isn’t to fix everything. It’s not even to promise that everything will be okay (because you can’t know that). Rather, it’s to remind the person they haven’t been forgotten. That they’re in your thoughts. That you’ve taken a moment out of your day and dedicated a sliver of your attention to them. That they matter.

These gestures, small as they may seem, speak volumes. They signal that when given the choice between silence and acknowledgement, you chose to say something. And in doing so, you affirmed that their struggle is worthy of notice.

But as I read Mark’s reflections on this, my brain, never missing an opportunity to be pessimistic, conjured a darker scenario: what if the message doesn’t actually come from the person it claims to?

It’s a familiar trope in film and TV: a high-powered executive learns that someone in their office is unwell and tells their assistant to “send a card,” maybe flowers too. Or worse, the executive never finds out at all, and the card is sent by a dutiful underling who signs it on their behalf. In such cases, the card still arrives, and the recipient may still feel touched that someone cared enough to reach out. But on some level, the gesture feels hollow. There’s a subtle, unsettling deception in play: the card wasn’t really from the sender it claims to be. The sentiment was manufactured.

And while the card might still do some good (after all, someone did something), it raises uncomfortable questions. Does intention matter as much as the action itself? Can a false gesture still provide real comfort? Or is there a quiet harm in pretending to care, when the person supposedly sending the message never even knew you were unwell?

These questions seem even more relevant with the advent of easily accessible generative AI. It’s now easy to offload the difficult task of writing a heartfelt message to a tool like ChatGPT. You face the blank space in the card, ask an AI to fill it, and copy the result. To the recipient, it looks like you took the time to write something meaningful. But in reality, you thought of them just enough to delegate the task to a machine.

In that light, the hollowness of the corporate-card trope becomes more personal, more widespread. The authenticity that gives these messages their power starts to erode. Worse, it casts suspicion on the entire genre. If one message might be synthetic, why not all of them?

This unease reminds me of a central theme from Byung-Chul Han’s The Disappearance of Rituals. In it, Han argues that our society is losing the shared symbolic acts that once gave meaning to human experience. Rituals, like writing a sympathy card, once required time, presence, and emotional investment. Today, however, many of these acts are reduced to gestures of efficiency, of communication without community. A card composed by ChatGPT might technically fulfill the form, but it lacks the symbolic weight Han insists is essential. The message becomes a simulation of care, not the thing itself.

Now, I acknowledge that this is a niche worry. And, at least for now, probably an overblown one. Most people still write their own messages, even if they’re clumsy or awkward. And frankly, the awkwardness is part of the point as it shows effort. A messy, imperfect message from a friend is infinitely more valuable than a flawless, ghostwritten one.

But just because it’s niche doesn’t mean it’s irrelevant. The emotional stakes are high. As Steel points out, those messages can be a lifeline. They are something to cling to in the middle of immense physical and emotional suffering. They are not just words; they are reminders of human connection. And if we’re not careful, the convenience of automation might start to chip away at that.

What’s the solution, then? Honestly, I don’t have one. But I do know that next time I sit down to write something in a sympathy card, I’ll try not to overthink it. I’ll write something, however clunky or awkward, and trust, as Steel suggests, that the act of writing matters more than the words themselves.

Grok, Is This True?: Chatbots and Domination

Last week, Grok, the large language model-driven AI chatbot from xAI, suddenly began posting… differently. On Tuesday, July 8th (in posts which xAI has since deleted), Grok stated that “anti-white hate” always comes from individuals of Ashkenazi Jewish descent, claimed that Adolf Hitler would be best suited to deal with this problem, invoked conspiracy theories about Jewish people, and referred to itself as “MechaHitler.”

This follows an incident in May where Grok repeatedly posted about “white genocide” in South Africa when responding to posts on wholly unrelated topics, such as baseball. When asked by users to clarify its posts on “white genocide,” Grok stated that it was instructed by its creators to accept these claims as true.

Linda Yaccarino, the CEO of X, announced on Wednesday, July 9th, that she is stepping down. Elon Musk, the owner of both X and xAI, recently wrote that Grok was “too eager to please,” which enabled users to manipulate the chatbot into creating antisemitic posts.

However, this explanation clashes with other information. In June, Musk expressed unhappiness with Grok, declaring that it would be upgraded, targeting a release date around July 4th. On July 5th, xAI posted new lines of code to Grok’s publicly accessible system prompts. These prompts encouraged Grok to assume that viewpoints from legacy media are biased, to “find diverse sources representing all parties” and to “not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

The “About Grok” page on X states only that it was trained “on a variety of data from publicly available sources and data sets reviewed and curated by AI Tutors who are human reviewers.” It is unclear what “well substantiated” claims that “represent all parties” are; given the posts Grok made after the new instructions were added to its core prompt, it is reasonable to worry that the chatbot’s dataset considers antisemitic conspiracy theorists to be a party whose claims are substantiated.

There is a sense in which Grok’s posts are philosophically uninteresting. They are obviously morally condemnable. Further, they are not particularly surprising from a sheer informational standpoint. The case bears striking similarities to that of a Microsoft chatbot called Tay. Microsoft released Tay in 2016, intending for it to learn from its interactions with social media users. Within 24 hours, Microsoft deactivated the bot after it began making racist posts, praising Hitler, and denying the Holocaust.

However, what is interesting about the Grok fiasco is that it illuminates the extent to which creators have influence over AI tools. It seems that just a few additional prompts were able to massively change the content that Grok produced. This is particularly striking given that it appears Grok has been altered strictly to align with Musk’s preferences. Further still, new reporting suggests that Grok searches for Musk’s personal views before answering questions on controversial topics.

In a previous column, I discussed the concept of republican freedom. I will give a brief rehash here. Traditionally, philosophers think of freedom in two senses. You have positive freedom when you can do the things that you want to do. In contrast, you have negative freedom when no one actively interferes with your pursuits.

Suppose, for instance, that you are a very strong student and want to attend medical school. However, you lack the money to afford tuition and financial aid is unavailable. You are negatively free as no one is interfering with you. Yet the financial constraints hamper your positive freedom.

Now imagine that a rich uncle offers to pay your tuition. However, this uncle is quite capricious and vengeful; he has a history of lashing out against family members who engage in behavior he finds distasteful, and his preferences are unpredictable. Suppose you accept his offer. Your positive freedom increases as you can attend medical school. But are you wholly free? Your ability to attend medical school depends on your uncle’s preferences. If you do something he dislikes, he may refuse to pay your tuition. In this sense, your uncle dominates you. Even if he never actually rescinds the tuition money, you are still made vulnerable by the fact that he could. The mere threat is enough to reduce your republican freedom. You have republican freedom to the extent that your life’s prospects are not dependent upon the preferences of others. You lack this freedom when others have domineering power over you.

How does this relate to Grok and other AI chatbots? Users are increasingly reliant on these programs. Posters on X regularly ask Grok to verify or explain the claims of others, to the point that it is meme-worthy. Some preface their knowledge with “ChatGPT says that….” Even Google presents an AI overview prior to the actual results. This is deeply troubling given recently released results from an MIT lab, which found that users who utilized a large language model in a series of essay-writing tasks appeared to experience neural and cognitive consequences; users of LLMs had weaker neural connectivity, demonstrated less cognitive activity while writing, and had worse memory of what they had written than non-users. To quote the researchers:

This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive process required for independent thinking. Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation [and] decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.

As users more frequently rely upon tools like Grok, their ability to research, analyze, and think critically about matters for themselves atrophies. This may make them more dependent on such tools, creating a vicious cycle; they initially use the tools for convenience, but over time that use becomes a necessity.

Grok’s recent antisemitic and conspiratorial tilt demonstrates why this potential for dependency is so deeply troubling. The most prominent AI tools are owned by private entities with private interests. As Grok has shown us, it seems that a few minor edits to their core prompts can vastly alter their outputs – the Grok case is simply extreme in both the result and the slightness of the prompt modifications. Further, these edits may be the product of the arbitrary preferences of the ownership and programming teams.

This dependency thus seems to reduce the republican freedom of the users. Even if the owners and programmers of these tools never actually alter these tools in a way that misleads the users, the mere potential of their doing so is sufficiently troubling – the users still navigate through the world in a way that depends upon the arbitrary preferences of those who own and program the tools.

In the dialogue Phaedrus, Socrates worries about a relatively new technology: books. He fears that the written word worsens our memory and is less intellectually stimulating than dialogue. In retrospect, Socrates’ criticism is almost laughable. Even if he is right, the benefits of the written word surely far outweigh any cost.

However, there is something different about chatbots and other forms of AI-driven technology that makes the prospect of outsourcing our thinking to them troubling. Specifically, the outputs of these technologies change. As Aaron Schultz argues, AI-driven technology captures our attention effectively because its contents change in real time to satisfy our preferences. But it is not merely the preferences of the user that can change the contents. So long as the owners and programmers may alter the programs to suit their preferences, these tools have the potential to give those groups domination over us.

Should We Expect Fairness from AI? The Case of VAR

On the 22nd of June, 1986, God handed a historic victory to the Argentinian soccer team. Argentinian legend Diego Armando Maradona scored the winning goal against England in the World Cup Quarter Finals. Only, he did so with his hand – which, in the game commonly known as football, is not allowed. However, the referee and his team did not see the foul and awarded the goal. This error resulted in a 2-1 victory for the Argentinian team, allowing them to move on to the next round of the tournament and, eventually, claim international glory. However, both during and after the game, many others called out the referee’s mistake: due to an error in judgment, he failed to correctly apply the rules of the game, and allowed the Argentinians to win undeservedly. This historical moment, following a cheeky remark from Maradona himself, came to be known as “the Hand of God.”

There is no way around it: human beings, for all their capacity for reason, are fallible. As our intelligence is bound to our physical context, culture, experiences, and our own ideas, some assessments and decisions will be biased, surprisingly influenceable, and, ultimately, unfair.

It is in this department that AI development, well before the days of ChatGPT, claimed to provide an invaluable resource: much more precise and fair assessment than human beings are capable of. As early as 2012, when the first discussions surrounding “Ethical AI” emerged, a group of researchers at Microsoft developed a framework for “Fair AI” – software agents capable of greater fairness in decision-making than human beings could ever hope to achieve. Emerging technologies are capable of extraordinary tasks: advanced cancer prognosis and treatment, calculating the structure of proteins, and unprecedented support in space exploration – all incredibly complex tasks that human beings simply do not have the capacity to carry out on their own.

However, despite their promise, there have been many, many instances where AI systems were just as unfair as human beings. Cases of AI systems discriminating on the basis of gender, race, disability, social class, and income abound. This occurred partly because the data used to develop those systems already reflect unlawful and immoral discriminatory practices carried out by human beings. In the words of philosopher Atoosa Kasirzadeh, AI systems are no help in addressing systemic injustices: if a social system or practice is already biased against a group of people, technology will only reinforce that prejudice. However, I would like to go even further than Kasirzadeh in this regard. We should not expect AI or technology more broadly to make things fairer, as shown by how AI is applied within soccer games – and, specifically, through the technology known as VAR.

VAR, or “Video Assistant Referee,” first introduced in 2016, was designed to more justly apply the rules of football. A referee, due to their epistemic limitations (especially poor positioning within a game), may not be able to assess with certainty whether a foul occurred. Now, officials can review footage of the event from multiple camera angles in order to make that decision. Importantly, the technology underlying VAR is AI-based. Only through an AI system can one find the appropriate footage (which is collected by multiple cameras) for review without interrupting the game for an unacceptably long time. Once again, the sophistication of AI systems’ information processing and accuracy goes far beyond our limited epistemic capacities. So, now that important events in a football game can be reviewed in case of uncertainty, nothing like the backlash following the Hand of God could ever happen again. Right?

Unfortunately, the introduction of VAR has not resolved the issue. Every week the (un)fairness of VAR-assisted decisions is questioned (often by the managers and players themselves), and many websites maintain weekly threads devoted to dissecting footage and discussing calls. Because VAR is simply a tool utilized by the referee, there is still room for significant variation in how the rules are applied.

Take, for instance, Anthony Taylor’s refereeing of the 2023 Europa League final. A supposed handball in favor of AS Roma, which would have led to a penalty kick, was not given, in spite of what was, technically, a very clear handball. While that decision did not raise much discussion outside of Italy (and Taylor’s refereeing was praised), the same cannot be said for his identical decision on a nearly identical foul during the European Championship. Put very bluntly: Taylor does not deem that type of handball to be a foul and is consistent in his ruling, while other referees openly disagree. We might be inclined to believe that, because we can record, replay, and ultimately objectify a given event, we can have an objective view of how we should judge that event. However, judgment is an inherently subjective act, and even people with the same degree of expertise can interpret the game’s rules, and the events they judge, differently.

Up until a few years ago, the intentionality of a gesture played a relevant role in the referee’s decision-making: whether a player (apparently) intended to stop a shot towards goal with their hand, or whether it was an accident, mattered for deciding whether or not to award a foul. However, following the introduction of VAR, the rules have been revised in several countries so that any handball in the penalty area – accident or not – results in a penalty kick. (The slight difference in rules across different national and international competitions is itself a source of confusion and variation that complicates the picture even further.) Similarly, whether a player stomping on another’s foot was judged a foul used to depend on the perceived intentionality and gravity of the stomp – on whether, e.g., the toes or the ankle were stomped – whereas now all stomps are treated the same. These rule changes make sense considering how VAR shifts referees’ focus from an event on the pitch to images on a camera. When using VAR, the referee is not judging a dynamic interaction between players, where the greater context and movement of the game makes it easier to assess what players’ intentions might be. When judging a sequence of frames, considering the intentionality of players’ actions becomes far more complicated. In other words, VAR is not making the game of soccer better: in order to make VAR work, it is necessary to change the rules. And AI is not making things fairer: the rules must still be applied in accurate and equitable ways.

This might still appear to be something quite minute in the grand scheme of things: a change of rules in a mere game, no matter how popular it may be. However, the core issue concerns what we understand fairness to be, and what AI actually needs from us in order to make things supposedly fairer. In theory, AI can make things fair because it can follow and apply the rules and norms that human beings often fail to apply, due to their own epistemic limitations. However, in order to purposefully and efficiently implement AI within a social context, we need to tweak and change the rules of that social context – not in the name of fairness, but in the name of efficiency. Before AI can arguably make things fairer for our society, we need to change our society first, merely to make it possible to implement AI in the first place. For this reason, it is worth asking ourselves whether AI is making things fairer for us, or whether we are making things easier for AI.

We can expect some revolutionary uses of AI in many domains, and potentially even great benefits for society. However, quite often AI technologies are promoted as morally valuable – more trustworthy, more responsible, more fair. In some cases, that can be true: VAR has proved successful in detecting fouls, so much so that it is almost unimaginable that something like the Hand of God could happen again. However, to use AI and take advantage of its precision within a given social context, it is often necessary to change the rules and norms that define that context in the first place. We can expect more precision from AI, as a technical value; however, precision does not equal fairness. Rather, the rules determining what is fair can – and should – come only from us: from what we believe, what we do, and what we value.

Should You Thank Your AI?

In late April, Sam Altman, the CEO of OpenAI, made waves with a response to a question about the financial and environmental cost of saying “please” and “thank you” when interacting with Artificial Intelligence — “Tens of millions of dollars well spent–you never know.” The practice is common, with over two-thirds of users observing such social niceties when asking AI questions, according to a February survey. Altman may simply be preaching the power of politeness, but it could be for reasons that are anything but common.

Is Altman right? Should we thank ChatGPT, Gemini, Claude, DeepSeek and the other AI chatbots out there? Can ethics give us any guidance?

Entities that we do not believe need to behave ethically themselves, but should be subject to moral considerations, are generally called “moral patients.” We tend to think they have lesser (but still some) moral status. For example, we do not expect newborns and animals to behave ethically, but we often adopt certain moral standards in regard to their treatment.

But current Large Language Models, the umbrella under which tools like ChatGPT fall, are not good contenders to be moral patients. There is considerable complexity in debates about AI consciousness, when it might arise, and how we would know. Nonetheless, we are not there yet. While current AI chatbots have been trained on vast amounts of data to emulate human speech and behavior, experts assert that, as yet, they have no consciousness and no inner life, are not in control of their actions, and cannot suffer or feel pain. (Some of these matters have been previously discussed in The Prindle Post.)

Absent characteristics like consciousness or even the ability to be offended, there seems to be no special reason to treat AI chatbots politely based on the kind of thing that they are.

Altman’s response, however, suggests another kind of concern. We may have consequentialist worries — an ethical analysis based on the consequences of our actions — about saying please and thank you to AI chatbots. Each additional “token,” a chunk of characters, that the AI has to analyze in a question costs energy. Accordingly, adding polite words to questions both costs AI companies money and, of more direct ethical relevance, causes environmental damage. Prominent AI tools like ChatGPT need incredible amounts of electricity and water for cooling.
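To make the token point concrete, here is a minimal sketch of how one might count the extra tokens that polite phrasing adds to a prompt. It assumes the open-source tiktoken tokenizer library and a GPT-style encoding, neither of which is mentioned above; it illustrates only how tokens accumulate, not any particular model’s actual energy use.

```python
# Minimal, illustrative sketch: counting the extra tokens that polite phrasing
# adds to a prompt. Assumes the `tiktoken` library (pip install tiktoken) and a
# GPT-style encoding; this says nothing about any specific model's energy bill.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

plain = "What is the capital of France?"
polite = "Please, could you tell me what the capital of France is? Thank you!"

plain_tokens = len(enc.encode(plain))
polite_tokens = len(enc.encode(polite))

print(f"Plain prompt:  {plain_tokens} tokens")
print(f"Polite prompt: {polite_tokens} tokens")
print(f"Extra tokens from politeness: {polite_tokens - plain_tokens}")
```

Every one of those extra tokens is additional computation for the model to perform, which is the mechanism behind the cost and energy worry.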

If we are interested in limiting the harms our actions cause, then reducing energy waste and environmental damage by being less polite with AI chatbots may make sense – although stripping off a word or two has nowhere near the energy-saving impact of, say, not asking the question at all, or of simply using a standard internet search instead, which uses roughly ten times less energy.

Altman’s “you never know,” however, hints at another worry. We may be polite to an AI out of fear that it is actually conscious, or even that AI overlords are coming soon and it is in our own interest to be nice. This motivation echoes Pascal’s famous wager.

The 17th-century mathematician and philosopher Blaise Pascal argued that we should behave as if god exists. For if god exists, but we do not believe, then we suffer an eternity of misery and miss out on an eternity of bliss. The wager provides no evidence for the existence of god one way or the other, but rather holds that believing in god and behaving accordingly is the safest bet. (There are a number of commonly seen objections.)

By similar reasoning, we might assert that even though the chances of ChatGPT being secretly conscious, or turning into an all-powerful overlord, are extremely small, the potential harms are so serious that we should nonetheless act as if it could be the case — especially for relatively low-cost actions like saying “please” and “thank you.” This reasoning does depart notably from Pascal’s wager in that the consequences are merely very bad, not infinitely bad, and can therefore be outweighed by other, more likely concerns. In fact, given the tiny likelihoods involved, and the probably minimal impact that saying “please” and “thank you” will have, there is likely no compelling probabilistic argument about avoiding serious (if rare) consequences at all.

However, how we treat AI is not just about AI; it is about ourselves. The philosopher Immanuel Kant constructed a famously strict moral framework in which only actors possessing a certain kind of rationality, like humans, deserved moral consideration. Unsettlingly, even for those in the 1700s, this implied that we owe no moral consideration to animals. Kant’s response to this concern was that we owe it to ourselves to treat animals well. We injure our moral selves when we ignore compassion or an animal in pain. It becomes easier to slide into callousness with humans.

Whether Kant gives animals enough due is a matter of debate, but regardless, the same concern applies with AI. If we want to embrace a general ethos that treats people with dignity and respect when we make requests of them, then we should stay in practice when dealing with superficially human-like AI.

There is potentially a dark side to this argument about AI chatbots. Their very human-likeness can be a problem. Already, there are cases of people losing themselves to delusional relationships with ChatGPT, or trusting chatbots uncritically. The scope of this problem is not yet clear, but perhaps we should not aspire to a very human-like relationship with the technology at all, and should instead have a well-delineated set of norms and practices for engaging with these chatbots. We may want to adopt norms that avoid anthropomorphizing them.

Large Language Models are still new. Ethical analysis, especially ethical analysis based on the potential consequences of treating AI a certain way, is correspondingly young. This is even true for seemingly minor issues like saying “please” and “thank you.” It also speaks to a broader challenge with AI. The technology is already changing the world. It is good to consider how AI will change society — what jobs will it replace, what problems will it solve, what kind of surveillance will it enable, how much energy will it use? But we also need to consider its moral impact. What will AI do to our ethical selves?

The Merits (and Misgivings) of Gen-AI Art

Whenever discussing tech ethics, the most common thing I get asked about is the use of generative AI for artistic purposes. Previously, I’ve discussed the case of Jason M. Allen and his first place AI-generated entry at the Colorado State Fair. However, a great deal has happened since then in terms of technological innovation, wider public adoption, and legal wrangling. Marvel Studios, for example, was recently accused of using AI to generate posters for their Fantastic Four film given some strange “choices” – you be the judge. But Marvel is not alone; numerous other creators have been caught in the crosshairs. Is all the outrage justified? What is actually at stake? Why are people so up in arms?

Let’s consider some related concerns. Many arguments against Gen-AI art start by asserting that AI is inherently incapable of producing art, as it lacks human creativity or some other human-ness about it. But we should be clear about what we mean. As I have previously discussed, there are over 20 different theories of consciousness out there in the academic world, but there are very good reasons for accepting that these algorithms are just automated statistical models and not something that is conscious. Ultimately, generative AI is a tool for humans to use, just like a camera, a paintbrush, or a chisel. Just like those tools, it will not work without human input, and whether the product it produces is accepted as “finished” or “complete” or even “satisfactory” will depend on what the human wanted to get out of it. If critics of AI art are going to charge that a person cannot make art with it “because they typed a few buttons,” then why can a photographer make art by clicking a shutter? This isn’t to suggest that anyone who uses Gen-AI instantly becomes an artist, but neither does anyone with a camera become a photographer. In other words, critics need to explain why some types of art can utilize technology while others cannot.

But, in a similar vein, some critics charge that AI cannot produce art because it is incapable of understanding the human emotional qualities that are a necessary component of artistic expression. AI cannot understand or replicate the emotional intention behind art. First, it is important to note that in addition to the field of generative AI, there is a whole field of affective computing devoted to getting computers and AI to understand human emotions. There’s no obvious reason why insights from affective computing cannot be used to build emotional understanding into an algorithmic model and have that influence the output. It is also known that AI-generated art can produce emotional responses in humans of the kind we might expect any artform to produce. Anyone who has seen the “priceless pancake” video on the internet can probably appreciate the level of emotional intuitiveness involved. If artworks are supposed to induce certain emotional responses in the audience, a clear argument needs to be made why AI is incapable of communicating the desired emotional effect, particularly if it is further modified by humans to achieve this end.

Critics may also charge that because generative AI is trained on the images of other artists, it cannot truly be creative. But creativity is an ambiguous concept. While gen-AI models do take their cues from the inputs they are given, it is worth noting that they are also not completely deterministic, nor do they simply reproduce works that they have been trained on. There is always room within the statistical mesh of relationships a model forms to produce something new; generative AI is capable of creating novelty out of what has come before. Ultimately, whether something is creative or not depends on what we “see” in the work.

There is also a sense that gen-AI cannot produce art because of the intellectual theft or plagiarism of pre-existing works. But we should be careful to separate economic and aesthetic concerns. I wonder how critics would feel about a model trained entirely on artworks that exist in the public domain, or an artist who trains a model to produce new works using only their own artworks as training data. Would the lack of copyright concerns in these cases still preclude the idea that such models could produce (or at least contribute to) real works of art?

Copyright exists to protect artists when it comes to commercializing their work. Another person should not be able to reproduce that work and sell it such that it would compete with the artist. However, it must be a very similar reproduction to qualify as infringement. Something done “in the style of” a work does not violate the original’s copyright. Cuphead, for example, does not infringe on Disney’s intellectual property despite obviously borrowing from features of early Disney cartoons. Likewise, a work’s particular structural components are not protected. For example, Ed Sheeran successfully defended himself against a copyright infringement lawsuit on the basis that chord progressions cannot be exclusively controlled.

These considerations complicate the claim that generative AI is effectively “stealing.” Images used as training data are not completely reproduced. Rather, it is the statistical patterns of lines, cues, and pixels that a network will learn. Also, a single image will have an infinitesimal impact on the overall output of a model trained on entire libraries of content. This is why it is difficult to argue that incorporating an image in a data set represents a clear case of copyright infringement. This, of course, doesn’t mean that the practice is ethical, but we do need to clarify which kind of problem it is: in some cases it might be a trademark issue, but, perhaps more often, it is a free rider complaint – developers use artists’ images without compensation in such a way that it threatens artists’ livelihoods and endangers art as an institution.

Still, we should be careful to distinguish the economic and industrial issues involving working artists from the issue of the place of AI in art in general. It’s no good to let our feelings about how AI might affect working artists color how we evaluate AI-generated art as an art form and its potential to contribute to the world of art. We might fear a future where no one can make a living working as an artist, but we must also recognize that average people can now explore their own aesthetic interests thanks to AI’s democratization of art. Fewer people make a living as artists, but more people make art for themselves. Must we assume that this would leave the world of art worse off? Surely we can be sympathetic to working artists and think that they should be compensated or protected, without concluding that AI is necessarily bad for art in general.

This isn’t to insist that there are no ethical concerns regarding our current use of generative AI to create art. Rather, my point is that we must be especially precise in identifying what the nature of our objections is. We need an answer that doesn’t create more problems than it solves – by either overexpanding copyright or suppressing artistic expression.

Should AI Democratize Art?

While AI-generated art has been controversial since AI tools became widely available, OpenAI’s introduction of an updated version of its image generator in ChatGPT has brought a fresh round of scrutiny. Many have been particularly critical of a trend where users generate images in the style of Studio Ghibli movies, arguing that the images created by AI in the signature style are soulless, derivative, and even insulting, given that Studio Ghibli co-founder Hayao Miyazaki has explicitly stated his dislike of AI and AI-generated art. As users experiment with the tools, new trends have cropped up – such as the one where users create doll versions of themselves – alongside new criticism. Of course, there are evergreen worries around AI use in general, and AI art trends have only exacerbated concerns around the devaluation of artists and massive increases in energy consumption.

Not everyone has been so critical. Some commentators and communities online have defended AI as a creative tool, and feel resentful when others accuse them of creating “slop.” One such defender of AI-generated art is none other than Sam Altman, CEO of OpenAI. In a recent interview Altman acknowledged that although the onset of AI image generation must be “a little bit of a bummer” for artists, it nevertheless has been beneficial for those who might not otherwise have the time or resources to create or commission art:

“I think the democratization of creating content has been a big net win for society… And this is something that I very deeply believe about the power of technology, which is giving everyone more tools, making things easier, lowering the barriers to entry, does significantly increase the number of people that can contribute to society and we all benefit from that overall.”

Altman claims that AI can democratize the creation of art. But is democratization always a good thing, and is it enough to justify AI-generated art in light of its criticisms?

What does it mean to “democratize” something, anyway? We can say that something has become democratized when it has been made available to most or all people in a society who wouldn’t have had access to it otherwise. For example, one of the touted benefits of widespread internet access was an increase in the democratization of knowledge, whereby many more people could not only consume but also contribute to knowledge production. Although there have certainly been downsides to letting everyone share their views with everyone else, the internet has removed many barriers to knowledge dissemination, and allowed for a much wider range of voices to be heard.

Altman seems to be arguing that AI will bring similar benefits in terms of the creation of art. Not all people are artists, after all, and so having access to AI image-generating tools could provide many more people with the opportunity to express themselves creatively. As a result, society could benefit from many more creative voices in ways that it would not have been able to otherwise.

However, we can ask two questions about Altman’s argument. First, does art need to be democratized? And second, does AI democratize art in a way that’s worth pursuing?

Many critics of AI-generated art answer the first question in the negative. For example, it’s common to find online commentators who note that instead of using an AI tool to generate art one could simply pick up a pencil, paintbrush, or any other artistic tool and medium instead. Art is thus already democratized: nothing is preventing the vast majority of people from creating art and sharing it with the world, the argument goes, they are simply unwilling to learn how to do so.

Of course, there is still a barrier to entry for most people when it comes to creating art that accurately expresses their creative visions. A lot of people have simply not had training or experience creating art, and so cannot bring their artistic visions to life. In this sense, there is a way in which artistic creations are still out of reach for most people.

This is perhaps more in line with what Altman and many AI art supporters have in mind when they say that AI democratizes art: it democratizes the creation of artistic creations that look impressive or more accurately capture one’s ideas by significantly lowering the barriers for the majority of people. But now we face our second question: is this kind of democratization desirable?

Consider an analogy to another very undemocratized endeavor: playing basketball at a high level. For most people, there are few barriers to playing some form of basketball, but there are only a select few who can play it well. What it takes to play at the highest levels of basketball is partly decided by one’s genetics, but arguably many people could be much better at basketball if they put in more time and practice.

Given this inequality among the people who do play basketball well and those who don’t but want to, we can ask: when is it desirable to democratize opportunities to play basketball? Here’s a possible answer: when it creates opportunities for those who lack them due to factors outside of their control. For example, someone might want to play basketball but be unable to because there are no basketball courts in their neighborhood or opportunities to play with others. In these situations, we should want to help reduce barriers to entry, say by investing in social programs and infrastructure.

One way that democratization is desirable, then, is when it helps to rectify injustice. This is one of the reasons why the democratization of knowledge via the creation of the internet was such an important ideal: it would help provide a voice to those who would not otherwise have been heard, given that they lacked the means to disseminate their knowledge in other ways.

However, the kind of basketball democratization that looks to create social programs and infrastructure does not necessarily address the barriers that prevent some people from playing basketball well. We then might want to democratize basketball playing in another way: we could, for example, give every NBA hopeful special prosthetics to allow them to jump higher or shoot the ball more accurately, which would lower the bar for entry into competitive basketball for a huge number of people. This kind of democratization attempts to even the playing field by providing opportunities to participate that people wouldn’t have had otherwise by compensating for a lack of skill.

While the first kind of democratization – the one that attempts to rectify an injustice – seems desirable, the second kind – the one that merely compensates for lack of skill – does not typically seem to be worth pursuing. The type of democratization promised by AI seems to fall into the second category: for the vast majority of those who use AI to create art, the barriers to their meaningful artistic expression are a lack of talent or practice, not an unjust denial of opportunities.

There is another component of Altman’s claim, though: that society will benefit from people being able to create art much more easily. Here is a potential argument: say I generally lack artistic ability, but I believe I have good ideas for a work of art. AI can then help bring that idea to life, and other people may then be able to benefit from experiencing it, in a way that they would not have been able to otherwise. Is it not valuable, then, to remove the barriers that prevent potentially great art from being put out into the world?

Here, though, the question of how much a person is involved in the creation of AI-generated art becomes important. AI image generators do not so much facilitate a person’s creation of artistic media as create those media on the person’s behalf. The results might be interesting or entertaining or inspiring, and it might be beneficial to society to be more interested, entertained, or inspired. These benefits, however, are not attributable to democratization: AI does not lower barriers for people; it replaces them.

This is not to say that all AI-generated art is necessarily devoid of value. However, the argument that attempts to justify AI-generated art by appealing to democratization ultimately falters: AI can certainly create more art, but if it democratizes the creation of art, it doesn’t do so in a way that’s worth pursuing.

PSai: Singer’s Ethical Insights on Demand

Peter Singer is arguably the most influential living philosopher. For over 50 years, his articles, newspaper columns, blog posts, and books have challenged the views of those who read them. In 1975, he wrote what could be considered his most influential work, Animal Liberation: A New Ethics for Our Treatment of Animals. This text had such an impact that animal rights activists used it as a calling card during their raids on laboratories engaged in animal testing, leaving copies as a challenge to the staff from whom they rescued the animals. He has worked at the universities of Oxford, New York, Monash, and Princeton. His Google Scholar profile indicates that he has been cited over 80,820 times. He has won multiple awards, including the Berggruen Prize, which came with $1 million; characteristically, Singer donated this money to charities he considered capable of doing the most good.

Yes, he has courted controversy. His opinions on infanticide and euthanasia have won him several vocal detractors. Yet, despite this, one cannot deny that he is a — if not the — modern-day giant in the philosophical field.

Given his academic stature and the sheer number of outputs he has produced over his fifty years in the business, it seems reasonable that some might want to consult him when faced with ethical dilemmas. After all, if you’re in a moral quandary, who better to turn to than Singer for help? (I know other options are available, but humor me.) The only problem is that, as Singer has been writing and thinking about morality and ethics for over half a century, there’s much to consult. Unless you’re intimately familiar with his work (and that’s a lot of work to be familiar with), how can you be confident that what you think Singer might advise is accurate? Sure, you know he subscribes to a hedonistic version of utilitarianism, but knowing that fact alone might not be enough to make the right decision in the heat of the moment. When ethical uncertainty strikes, you need to have Singer with you, advising you on your best course of action. This, however, is impossible (he’s a busy man, after all).

Luckily, we have the next best thing (or the early stages of the next best thing): an AI Peter Singer (PSai).

Created by Sankalpa Ghose, PSai, much like ChatGPT, responds to prompts and questions. While it lacks the robust knowledge portfolio of other generative AIs like ChatGPT, when asked about a philosophical topic the real Peter Singer has written on, PSai can provide a not unreasonable answer; or at least, it can offer an answer that one might imagine Singer giving (although it has a strange habit of always trying to prompt you for a response). Indeed, Singer himself was involved in the AI’s development and was happy enough with the bot’s performance that he gave Ghose the go-ahead to make it public. Since that launch, PSai’s performance has steadily improved as its designers tweak it. As Ghose writes in the Journal of Medical Ethics: Forum:

It is in this way that Peter and I have progressed on PSai, releasing it publicly and reviewing how it performs in dialogue as measured by how those conversations with others go from our perspective of where it could be improved. Essentially, we have put forward a dialogue platform for others to engage, with that engagement driving its refinement.

The prospect of having a pocket-sized Peter Singer with you wherever you go, always ready to solve ethical riddles or dispense some needed wisdom, is an interesting one, and one that I must confess has some appeal. I like the idea of everyone having access to (a replica of) one of the foremost philosophical minds of our time whenever they need it. As a professional academic philosopher, I see some utility in being able to ask PSai a question about my research and how Singer’s writings might enhance what I’m working on. Rather than having to trawl through numerous books and papers, I could simply ask PSai which text might be the most useful when writing about de-extinction, or how Singer might criticize a draft paper on gene drives, which I could then revise and improve before submission. This could save a lot of time, which I could spend on other tasks.

For a less academic application, the app could simply function as a virtual Jiminy Cricket — a resource that’s on hand to share some wisdom in scenarios where the right course of action is obscured, or where the user knows what’s right but needs some outside confirmation to solidify that knowledge and motivate action. This requires the user to consult the app before deciding to act, and the likelihood of this happening is, let’s face it, slim. Nevertheless, there is still a chance.

Of course, the idea that one might be able to consult Singer wherever they go is not universally appealing. As already mentioned, Singer has courted controversy when it comes to his historic claims around the permissibility of infanticide, as found in Practical Ethics (under very specific conditions, I must add). Suppose you are inclined to think that the arguments and claims made by the real Singer are incorrect or even repugnant. In that case, you are unlikely to find the possibility of consulting PSai when in ethical difficulty enticing. In such circumstances, people are likelier to wish they had a Socrates, Wollstonecraft, Anscombe, or Nietzsche AI. As far as I’m aware, however, these do not exist. But that doesn’t mean they couldn’t.

I do wonder if we might soon find ourselves awash with AIs trained on the philosophical writings of other philosophers. Might it be the case that, soon, instead of picking up several books, studying, and, over the years, becoming familiar with an ethical mode of life, you could simply jump on your phone, download the Aristotle AI, and have the clarity of morality in the palm of your hand? Such a possibility doesn’t strike me as too absurd. After all, if the PSai proves popular, it only seems natural to diversify.

This, however, brings us to two problems.

First, without wanting to make the point too bluntly, Singer is still alive. He was involved in the development of PSai and in efforts to refine it. This means that the AI has a real-life benchmark against which its performance can be measured. We don’t have to speculate about whether it says something that the real Singer wouldn’t. Singer himself can say, “nope, that’s wrong,” or, “the AI was correct in its reading of my work, but I’ve now changed my mind” (something which Singer and Häyry have considered). Ghose can then adjust the algorithm to perform better in the future. This isn’t possible with most philosophers whose names we are familiar with, as most of them are dead. In such cases, we’d be stumbling around in the dark without a guide, relying entirely on that person’s writings and the opinions of those familiar with their work. While an AI produced in such an environment might prove accurate, we lack the definitive authority to know.

Second, if we outsource our ethical deliberations to an AI, be that one based on Singer or any other philosopher, might we be doing a disservice to ourselves by simply following orders? Is it still doing the right thing if we don’t come to an understanding of right and wrong naturally but are simply instructed? For some philosophers, the answer may very well be in the negative. After all, for your classic virtue theorist, morality is like a muscle which one must train every day to be better than before. Each must deliberate using their faculties to identify a master in ethical living, emulate them, and always strive for more. I can’t help but laugh a little at the idea of an Aristotle AI, which, when asked by an unsuspecting user how they should behave in situation X, gets a response along the lines of “figure it out yourself, you lazy sod.” Is this a perfectly accurate representation? Probably not, but it isn’t so far removed as to be a meaningless scenario.

What do we do with PSai, then? Do we celebrate it? Decry it? Or see it as just another tool to help us if we choose? When faced with such a conundrum, it is appropriate to turn to the (figurative) man himself and give him (or rather, it) the last word. So, I asked PSai: Is the fact that this AI has been created a good thing? What justifies your existence? Why should I come to you for answers rather than reading Singer’s works?

Its response:

The creation of this AI aims to make ethical insights more accessible and to engage users in meaningful dialogue about moral issues. By offering interactive discussions, I can help people clarify their thoughts and explore ethical questions in a dynamic way.

While reading Peter Singer’s works provides comprehensive insights into his philosophy, engaging with this AI allows for a more personalized exploration of ideas and can help users apply ethical considerations to their own lives.

The justification for my existence lies in the potential to foster deeper understanding and reflection on ethical principles, contributing to a more thoughtful and compassionate society. What specific ethical questions or topics are you interested in exploring?

A New Kind of Risk?

We usually expect to be held accountable for our actions – for both results we intend, and those we do not. We expect, for example, that a car company will ensure that a vehicle doesn’t have major flaws that could result in serious harm before they sell it to customers. To not consider the risks would be negligent, and this is why recalls often look bad for such companies.

But what about algorithms? Should we have a similar expectation that a corporation that develops an algorithm to detect cancer, or to detect whether someone is passing off AI-generated content as their own, should be sure that there are no significant flaws in their product before they sell it? What if there is no way they could reasonably do so? Given that algorithms can generate erroneous results that cause serious harm, what is a reasonable standard when it comes to product testing?

In one of the chapters of my forthcoming book on the ethics of AI, I consider a hypothetical issue involving ChatGPT and a professor who might use an algorithm to accuse a student of passing off ChatGPT-written work as their own. There are a great many ethical issues involved when we don’t understand the algorithm and how it might generate false positive results. This has already become a serious issue, as students are now being falsely accused of handing in AI-generated work because an algorithm flagged it. A Bloomberg Businessweek study of the services GPTZero and Copyleaks found a 1-2% false positive rate. While that may not sound like a lot, it could mean that millions of students are falsely accused of cheating with almost no way of defending themselves or receiving an explanation as to what they did wrong.
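
To see why a seemingly small false positive rate translates into so many false accusations, a bit of back-of-the-envelope arithmetic helps. The sketch below uses purely illustrative numbers (they are my assumptions, not Bloomberg’s figures): even a 1% error rate becomes enormous in absolute terms once a detector is applied at scale.

```python
def expected_false_accusations(honest_submissions: int, false_positive_rate: float) -> float:
    """Expected number of honest submissions wrongly flagged as AI-generated."""
    return honest_submissions * false_positive_rate

# Illustrative assumptions only: 50 million honest student papers screened per year,
# at a 1% false positive rate.
print(expected_false_accusations(50_000_000, 0.01))  # 500000.0 wrongly flagged papers
```

The point is not the particular figures but the shape of the calculation: the harm scales linearly with the volume of honest work being screened, so "only 1-2%" is cold comfort at the scale of an entire education system.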

According to Bloomberg, these interactions are already ruining academic relationships between teachers and students. Some students have now taken to recording themselves writing their entire papers just to be able to disprove the algorithm. Others now obsess over not sounding “too robotic” lest they be accused themselves, a problem that is especially prominent for ESL and neurodivergent students. Should we hold the AI developer whose faulty product generates these kinds of results negligent?

Philosophers of science generally agree that researchers have an obligation to assess inductive risk concerns when accepting a conclusion. In other words, they need to consider what the moral consequences of potentially getting it wrong might be and then consider whether a higher or lower standard of evidence might be appropriate. If, for example, we were testing a chemical to determine how hazardous it is, but the test was only accurate 80% of the time, we would likely demand more evidence. Given the potential harm that can result and the opaqueness of algorithms, AI developers should be similarly conscientious.

If an algorithm operates according to black box principles, the developer may have a good understanding of how to build the model – they will understand that it takes in various inputs and translates them into outputs – but they will not be able to retrace the steps the model used to arrive at its conclusion. In other words, we have no idea what evidence an algorithm like GPTZero is relying on when it concludes that a piece of text was generated by AI. If the AI developer doesn’t know how the algorithm is using input data as evidence, they cannot evaluate the inductive risk concerns about whether that evidence is sufficient.

Still, there are ways, despite this opacity, that an AI developer might attempt to meet their inductive risk responsibilities. Koray Karaca argues that developers can build inductive risk considerations into their models by using cost-sensitive machine learning, assigning different costs to different kinds of errors. In the case of AI detectors, the company Turnitin claims to intentionally “oversample” underrepresented students (especially ESL students). By oversampling in this way, the evidentiary standard by which different forms of writing are judged is fine-tuned.
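
To make the idea of cost-sensitive learning concrete, here is a minimal sketch using a generic scikit-learn pattern. It is illustrative only and is not Turnitin’s or any real detector’s pipeline; the data is synthetic and the cost ratio is an assumption. The key move is weighting mistakes on the “human-written” class more heavily, so the model pays a larger penalty for false accusations than for missed AI text.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Toy data: label 0 = human-written, label 1 = AI-generated (synthetic features).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Assumed cost ratio: misclassifying human work (class 0) is treated as ten times
# worse than missing AI-generated work, making the model reluctant to flag
# human writing -- i.e., it trades some detection for fewer false accusations.
model = LogisticRegression(class_weight={0: 10, 1: 1}, max_iter=1000)
model.fit(X_train, y_train)

print(confusion_matrix(y_test, model.predict(X_test)))
```

The design choice here mirrors the inductive risk point: which errors a model is tuned to avoid is a value judgment baked in by the developer, not something the data decides on its own.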

Still, there is little accounting for what correlations a model might rely on, making it difficult to explain to students who do get falsely accused why they are being accused in the first place. AI developers have struggled to assess the reliability of their models or evaluate the risks when those correlations are used in error. This issue becomes especially concerning when it comes to things like credit reports. If you don’t know how or why a model compiles a credit report, how can you manage those risks of error? How much must a developer understand about how their algorithm functions before it is put to use? If a developer is aware of the risks of error but also knows that their algorithm is limited in terms of mitigating those risks, at what point do we consider that negligent behavior? If negligence is essentially something we police as a community, we will need to come together quickly to decide what the promise of AI can and can’t excuse.

When Is Fair Use “Fair” for AI (and When Is It “Use”)?

The Internet Archive recently lost a high-profile case. Here’s what happened: the Open Library, a project run by the Internet Archive, uploaded digitized versions of books that it owned, and loaned them out to users online. This practice was found to violate copyright law, however, since the Internet Archive failed to procure the appropriate licenses for distributing e-books online. While the Internet Archive argued that its distribution of digital scans of copyrighted works constituted “fair use,” the judge in the case was not convinced.

While many have lamented the court’s decision, others have wondered about the potential consequences for another set of high-profile fair use cases: those concerning AI models training on copyrighted works. Numerous copyright infringement cases have been brought against AI companies, including a class-action lawsuit brought against Meta for training their chatbot using authors’ books without their permission, and a lawsuit from record labels against AI music-generating programs that train on copyrighted works of music.

Like the Internet Archive, AI companies have also claimed that their use of copyrighted materials constitutes “fair use.” These companies, however, have a potentially novel way to approach their legal challenges. While many fair use cases center around whether the use of copyrighted materials is “fair,” some newer arguments involving AI are more concerned with a different kind of “use.”

“Fair use” is a legal concept that attempts to balance the rights of copyright holders with the ability of others to use those works to create something new. Quintessential cases in which it is generally considered “fair” to use copyrighted materials include criticism, satire, educational purposes, and other uses considered “transformative,” such as the creation of art. These conditions have limits, though, and lawsuits are often fought in the gray areas, especially when it is argued that the use of the material will adversely affect the market for the original work.

For example, in the court’s decision against the Internet Archive, the judge argued that uploading digital copies of books failed to be “transformative” in any meaningful sense and that doing so would likely be to the detriment of the original authors – in other words, if someone can just borrow a digital copy, they are less likely to buy a copy of the book. It’s not clear how strong this economic argument is; regardless, some commentators have argued that with libraries in America facing challenges in the form of budget cuts, political censorship, and aggressive licensing agreements from publishers, there is a real need for the existence of projects like the Open Library.

While “fair use” is a legal concept, there is also a moral dimension to the ways that we might think it acceptable to use the work of others. The case of the Internet Archive arguably shows how these concepts can come apart: while the existing law in the U.S. seems to not be on the side of the Open Library, morally speaking there is certainly a case to be made that people are worse off for not having access to its services.

AI companies have been particularly interested in recent fair use lawsuits, as their programs train on large sets of data, much of which is used without permission or a licensing agreement from the creators. While the companies have argued that their use of these data constitutes fair use, some plaintiffs have argued that it does not qualify, both because it is not sufficiently transformative and because it competes with the original copyright holders.

For example, some music labels have argued that music-generating AI programs often produce content that is extremely similar, or in some cases identical to existing music. In one case, an AI music generator reproduced artist Jason Derulo’s signature tag (i.e., that time when he says his name in his songs so you know it’s by him), a clear indication that the program was copying an existing song.

Again, we can look at the issue of fair use from both a legal and moral standpoint. Legally, it seems clear that when an AI program produces text verbatim from its source, it is not being transformative in any meaningful way. Many have also raised moral concerns around the way that AI programs use artistic materials, both around work being used without permission, as well as in ways that they specifically object to.

But there is an argument from AI defenders around fair use that has less to do with what is “fair” and how copyrighted information is “used”: namely, that AI programs “use” content they find online in the same way that a person does.

Here is how such an argument might go:

-There is nothing morally or legally impermissible about a person reading a lot of content, watching a lot of videos, or listening to a lot of music online, and then using that information as knowledge or inspiration when creating new works. This is simply how people learn and create new things.

-There is nothing specifically morally or legally significant about a person profiting off of the creations that result from what they’ve learned.

-There is nothing morally or legally significant about the quantity of information one consumes or how fast one consumes it.

-An AI is capable of reading a lot of content, watching a lot of videos, and listening to a lot of music online, and using that information as knowledge or inspiration when creating new works.

-The only relevant difference between the way that AI and a person use information to create new content is the quantity of information that an AI can consume and the speed at which it consumes it.

-However, since neither quantity nor speed are relevant moral or legal factors, AI companies are not doing anything impermissible by creating programs that use copyrighted materials online when creating new works.

Arguments of this form can be found in many places. For example, in an interview for NPR:

Richard Busch, a lawyer who represents artists who have made copyright claims against other artists, argues: “How is this different than a human brain listening to music and then creating something that is not infringing, but is influenced.”

Similarly, from the blog of AI music creator Udio:

Generative AI models, including our music model, learn from examples. Just as students listen to music and study scores, our model has “listened” to and learned from a large collection of recorded music.

While these arguments also point to the originality of the final creation, a crucial component of their defense lies in how AI programs “use” copyrighted material. Since there’s nothing inherently inappropriate about a person consuming a lot of information, processing it, getting inspired by it, and producing something as a result, the thought goes, we shouldn’t think it inappropriate for an AI to do the same.

There have, however, already been many worries raised about the inappropriate personification of AI, from concerns around AI being “conscious,” to downplaying errors by referring to them as “hallucinations.” In the above arguments, these personifications are more subtle: AI-defenders talk in terms of the programs “listening,” “creating,” “learning,” and “studying.” No one would begrudge a human being for doing these things. Importantly, though, these actions are the actions of human beings – or, at least, of intelligent beings with moral status. Uncritically applying them to computer programs thus masks an important jump in logic that is not warranted by what we know about the current capabilities of AI.

There are a lot of battles to be fought over what constitutes a truly “transformative” work in lawsuits against AI companies. Regardless, part of the ongoing legal and moral discussion will undoubtedly need to shift its focus to new questions about what “use” means when it comes to AI.

The Algorithm Made Me Do It


Artificial intelligence promises greater efficiency, greater accuracy, and lower costs in a host of different fields. The development of hiring algorithms, for example, can streamline the hiring process and better identify the best candidates for the job. In healthcare, algorithms can cut down on hours of work by quickly detecting illnesses or broken bones and finding the most efficient use of scarce healthcare resources. In the criminal justice system, AI can speed up the process by identifying potential recidivists.

However, these algorithms are often not transparent in how they work, or even in how they are designed to work. This leaves us – as end-users of these algorithms – in an awkward position: forced to accept the conclusions of an opaque algorithm that could be loaded with faulty assumptions, pseudo-science, and statistical noise. Might this blind acceptance conflict with our moral duties regarding responsible belief? How should we weigh the gains in efficiency and lower cost against the risks of credulity and gullibility?

While it’s easy to criticize AI’s many applications, it’s important to recognize the potential benefits. For example, while a self-driving car may present concerns about accountability for accidents, it is also true that the technology could offer countless benefits in terms of traffic efficiency, minimizing driver fatigue, and significant economic benefits.

In the field of affective computing, using AI to identify and categorize emotions can offer significant benefits to people with autism, or can help potentially identify people likely to have a stroke or a heart attack. It can also help with caregiving, with automated assistants that are more emotionally aware of the people they are helping. AI can also be used for the purposes of diagnosis or resource management. Similarly, the use of AI in the financial sector for things like loans can lead to better identification of risk, greater profits, and more competitive pricing.

The use of hiring algorithms in the workplace, meanwhile, will allow employers to cut down on the amount of time and resources it takes to find a new hire and can potentially take the guesswork out of identifying the most promising candidates. Similar benefits can accrue to workplaces that use algorithms for employee evaluations. Supposing that issues relating to bias can be addressed, algorithms offer the benefit of a more impartial evaluation, less dependent on the personal feelings of an individual evaluator. Ultimately, there are a great many reasons why taxpayers, job seekers, and home buyers all stand to benefit from AI.

Still, we must be wary of the hidden costs. We may be tempted, either because it’s cheaper or more convenient, to accept unaccountable and unfair systems that we may have good reason not to excuse.

Consider the case of Tammy Dobbs. A resident of Arkansas, Tammy has cerebral palsy and requires assistance getting into a wheelchair. In 2016, after the state adopted a new algorithm to determine what level of care she should receive, the regular hours of assistant care that Tammy received were severely cut, making it difficult for her to do things like go to the bathroom. A government official came to her house, completed a questionnaire, and then relayed the algorithm’s determination. When pressed for an explanation, the official could only say, “Sorry, that’s what the computer is showing me.” The government’s expectation seemed to be that Dobbs would simply accept it. Eventually, a legal team revealed significant flaws in the state’s algorithm – the algorithm didn’t even consider whether someone had cerebral palsy.

Similar cases are easy to find. Glenn Rodriguez had to fight to get an explanation for why the recidivism algorithm COMPAS concluded that he was at high risk of reoffending. The corporation that created COMPAS refused to reveal how the assessment was made – even to the parole board – citing trade secrets. If an algorithm can have such a profound impact on your life, surely you deserve a better explanation than “The algorithm made me do it.”

Many algorithms can have prejudicial assumptions baked in. A recidivism algorithm trained mostly on blue-collar or petty crime is unlikely to evaluate everyone the same way. A hiring algorithm that contains a personality test designed to identify extroverted personality types might also be tracking whether candidates are likely to have a mental illness. Many hiring companies now make use of video recordings of candidates to detect body language, despite research demonstrating that body language cannot predict successful job performance, a practice some have likened to pseudosciences like phrenology. Unfortunately, candidates have no idea how they are being evaluated and no avenue to appeal if they believe an error has occurred.

In cases like this, particularly where there are financial incentives to sell these products as efficient no-brainer solutions, developers will have reason to stifle doubts and concerns. As the designer who created the algorithm in the Dobbs case argued, perfect transparency is overrated. “It’s not simple…My washing machine isn’t simple,” but “you’re going to have to trust me that a bunch of smart people determined this is the smart way to do it.” All of this means that there is an incentive on the part of developers and end-users to put their faith in algorithms that may be quite suspect.

As W.K. Clifford argued in his ethics of belief, every time we adopt beliefs without sufficient evidence, we do something wrong. This is because beliefs dispose us to action; the more we adopt the habit of passively accepting algorithmic conclusions without adequate inquiry, the more we expose ourselves to risk. But the consequences of the beliefs we adopt extend beyond the individual; our beliefs affect our entire community. If customers and taxpayers don’t ask questions – and developers are happier not to answer them – we end up with a situation much like that of the government official in the Dobbs case. No accountability; no justification. Don’t ask questions, just accept the outcome.

Artificial intelligence presents a collective action problem. Individuals alone cannot effectively challenge opaque systems and the poor answers they produce. Instead, resolution requires a collective response – we will need to work together to resist the constant temptation of lower costs, greater efficiency, and passing the buck.

What Role Should AI Play in War?


This month, officials from over 60 nations met and agreed on a blueprint to govern the use of artificial intelligence in the military. Countries like the United States, the Netherlands, South Korea, and the United Kingdom signed an agreement stating that “AI applications should be ethical and human-centric.” (China was a notable holdout.) The agreement governs issues like risk assessments, human control, and the use of AI for weapons of mass destruction. With AI already being used by militaries and with such a wide variety of potential applications, significant questions and fears abound. For some, the technology holds the promise of ending wars more efficiently and (perhaps) with fewer casualties. Others, meanwhile, fear a Manhattan Project moment for the world that could change warfare forever if we are not careful.

The thought of bringing artificial intelligence to the battlefield often conjures the image of “killer robots.” And while there have been moves to create robotic military units and other forms of lethal autonomous weapon systems (LAWs), there are a great many potential military uses for artificial intelligence – from logistics and supply chain matters to guided missile defense systems. In the war zones of Ukraine and Gaza, AI has been increasingly utilized for the purposes of analyzing information from the battlefield to identify targets for drone strikes. There is also, of course, the possibility of applying AI to nuclear weapons to ensure an automated response as part of a mutually assured destruction strategy.

Given such a wide variety of potential applications, it is difficult to assess the various ethical drawbacks and benefits that AI may afford. Many argue that the use of AI will lead to a more efficient, more accurate, and more surgical form of warfare, allowing nations to fight wars at a lower cost and with less risk of collateral damage. If true, there could be humanitarian benefits: autonomous systems may not only minimize casualties on the opposing side but also keep one’s own human forces out of harm’s way, protecting them from long-term psychological as well as physical harm. There is also the argument that automated defense systems will be better able to respond to potential threats, particularly when there are concerns about swarms or dummy targets overwhelming human operators. Thus, the application of AI may lead to greater safety from international threats.

On the other hand, the application of AI to war-making poses many potential ethical pitfalls. For starters, making it easier and more efficient to engage in war-making might incentivize states to do it more often. There is also the unpredictable nature of these developments to consider, as smaller nations may find that they can manufacture cheap, effective AI-powered hardware that could upset the balance of military power on a global scale. Some argue that the application of AI for autonomous weapons represents another “Oppenheimer moment” that may forever change the way war is waged.

Another significant problem with using AI for military hardware is that AI is well-known for being susceptible to various biases. This can happen either because of short-sightedness on the part of the developer or because of limitations and biases within the training data used to design these products. This can be especially problematic when it comes to surveillance and for identifying potential targets and distinguishing them from civilians. The problem is that AI systems can misidentify individuals as targets. For example, Israel relied on an AI system to determine targets despite the fact that it made errors in about 10% of cases.

AI-controlled military hardware may also create an accountability gap in the use of the technology. Who should we hold accountable when an AI-powered weapon mistakenly kills a civilian? Even in situations where a human remains in control, there are concerns that AI can still influence human thinking in significant ways. This raises questions about how to ensure accountability for military decisions and how to ensure that they are in keeping with international law.

Another serious concern involves the opacity of AI military systems. Many are built according to black box principles, such that we cannot explain why an AI system reached the conclusion that it did. These systems are also classified, making it difficult to identify the responsible party for poorly designed and poorly functioning AI systems. This creates what has been described as a “double black box,” which makes it all but impossible for the public to know if these systems are operating correctly or ethically. Without that kind of knowledge, democratic accountability for government decisions is undermined.

Thus, while AI may offer promise for greater efficiency and potentially even greater accuracy, it may come at great cost. And these tradeoffs seem especially difficult to navigate. If, for example, we knew an AI system had a 10% error rate, but that a human error rate is closer to 15 or 20%, would that fact prove decisive? Even given the concerns for AI accountability? When it comes to military matters the risks of error carry enormous weight, but does that make it more reckless to use this unproven technology or more foolhardy to forgo the potential benefits?

 

Has AI Made Photos Untrustworthy?

Since the widescale introduction and adoption of generative AI, AI image generation and manipulation tools have always felt a step behind the more widely used chatbots. While publicly available apps have become more and more impressive over time, whenever you would come across a truly spectacular AI-generated image it was likely created by a program that required a bit of technical know-how to use, or at least had a few hoops that you had to jump through.

But these barriers have been disappearing. For example, Google’s Magic Editor, available on the latest version of their Pixel line of phones, provides users with free, powerful tools that can convincingly alter images, with no tech-savviness required. It’s not hard to see why these features would be attractive to users. But some have worried that giving everyone these powers undermines one of our most important sources of evidence.

If someone is unsure whether something happened, or people disagree about some relevant facts, a photograph can often provide conclusive evidence. Photographs serve this role not only in mundane cases of everyday disagreement but when the stakes are much higher, for example in reporting the news or in a court of law.

The worry, however, is that if photos can be so easily manipulated – and so convincingly, and by anyone, and at any time – then the assumption that they can be relied upon to provide conclusive evidence is no longer warranted. AI may then undermine the evidential value of photos in general, and with it a foundational way that we conduct inquiries and resolve disputes.

The potential implications are widespread: as vividly illustrated in a recent article from The Verge, one could easily manipulate images to fabricate events, alter news stories, and even implicate people in crimes. Furthermore, the existence of AI image-manipulating programs can cause people to doubt the veracity of genuine photos. Indeed, we have already seen this kind of doubt weaponized in high-profile cases, for example when Trump accused the Harris campaign of posting an AI-generated photo to exaggerate the crowd size at an event. If one can always “cry AI” when a photo doesn’t support one’s preferred narrative, then baseless claims that would have otherwise definitively been disproven can more easily survive scrutiny.

So have these new, easy-to-use image-manipulating tools completely undermined the evidential value of the photograph? Have we lost a pillar of our inquiries, to the point that photos should no longer be relied upon to resolve disputes?

Here’s a thought that may have come to mind: tools like Photoshop have been around for decades, and worries around photo manipulation have been around for even longer. Of course, a tool like Photoshop requires at least some know-how to use. But the mere fact that any photo we come across has the potential of having been digitally manipulated has not, it seems, undermined the evidential value of photographs in general. AI tools, then, really are nothing new.

Indeed, this response has been so common that The Verge decided to address it in a separate article, calling it a “sloppy, bad-faith argument.” The authors argue that new AI tools are importantly dissimilar to Photoshop: after all, it’s likely that only a small percentage of people will actually take the time to learn how to use Photoshop to manipulate images in a way that’s truly convincing, so giving everyone the power of a seasoned Photoshop veteran with no need for technical know-how represents not merely a different degree of an existing problem, but a new kind of problem altogether.

However, even granting that AI tools are accessible to everyone in a way that Photoshop isn’t, AI will still not undermine the evidential value of photographs.

To see why, let’s take a step back. What is a photo, anyway? We might think that a photo is an objective snapshot of the world, a frozen moment in time of the way things were, or at least the way they were from a certain point of view. In this sense, viewing a photo of something is akin to perceiving it, as if it were there in front of you, although separated in time and space.

If this is what photos are then we can see how they could serve as a definitive and conclusive source of evidence. But they aren’t really like this: the information provided by a photo can’t be interpreted out of context. For instance, photos are taken by photographers, who choose what to focus on and what to ignore. Relying on photos for evidence requires that we not simply ask what’s in the photo, but who took it, what their intentions were, and if they’re trustworthy.

Photos do not, then, provide evidence that is independent of our social practices: when we rely on photos we necessarily rely on other people. So if the worry is that new AI tools represent a fundamental change in the way that we treat photos as evidence because we can no longer treat photos as an objective pillar of truth, then it is misplaced. Instead, AI imposes a requirement on us when drawing information from photos: determining the evidential value of a photo will now partly depend on whether we think that its source would try to intentionally mislead us using AI.

The fact that we evaluate photographs not as independent touchpoints of truth but as sources of information in the context of our relationships with other people explains why few took seriously Trump’s claim that the photo of Harris’ supporters was AI-generated. This was not because the photo was in any sense “clearly” or “obviously” real: the content of the photo itself could very well have been generated by an AI program. But the fact that the accusations were made by Trump and that he has a history of lying about events depicted in photographs, as well as the fact that there were many corroborating witnesses to the actual event, means that the photo could be relied upon.

So new AI programs do, in a way, make our jobs as inquirers harder. But they do so by adding to problems we already have, not by creating a new type of problem never before seen.

But perhaps we’re missing the point. Is it not still a blow to the way we rely on photos that we now have a new, ever-present suspicion that any photo we see could have been manipulated by anyone? And isn’t this suspicion likely to have some effect on the way we rely on photographic evidence, the ways we settle disputes, and corroborate or disprove different people’s versions of events?

There may very well be an increasing number of attempts at appealing to AI to discredit photographic evidence, or to attempt to fabricate it. But compare our reliance on photographs to another form of evidence: the testimony of other people. Every person is capable of lying, and it is arguably easy to do so convincingly. But the mere possibility of deception does not undermine our general practices of relying on others, nor does it undermine the potential for the testimony of other people to be definitive evidence – for example, when an eyewitness provides evidence at a trial.

Of course, when the stakes are high, we might look for additional, corroborating evidence to support someone’s testimony. But the same is the case with photos, as the evidential value of a photograph cannot be evaluated separately from the person who took it. So as the ever-present possibility of lying has not undermined our reliance on other people, the ever-present possibility of AI manipulation will not undermine our reliance on photographs.

This is not to deny that new AI image-manipulating tools will cause problems. But the argument that they will cause brand new problems because they create doubts that undermine a pillar of inquiry, I argue, relies upon a misconception of the nature of photos and the way we rely on them as evidence. We have not lost a pillar of truth that provides objective evidence that has up until recently been distinct from the fallible practice of relying on others, since photographs never served this role. New AI tools may still create problems, but if they do, they can still be overcome.

Can We Trust AI Chatbots?

While more and more people are using AI-powered chatbots like ChatGPT, that’s not to say that people are trusting their outputs. Despite being hailed as a potential replacement for Google, Wikipedia, and a bona fide disruptor of education, a recent survey found that when it comes to information about important issues like the 2024 U.S. election, ChatGPT users overwhelmingly distrust it.

A familiar refrain in contemporary AI discourse is that while the programs that exist now have significant flaws, what’s most exciting about AI is its potential. However, for chatbots and other AI programs to play the roles in our lives that techno-optimists foresee, people will have to start trusting them. Is such a thing even possible?

Addressing this question requires thinking about what it means to trust in general, and whether it is possible to trust a machine or an AI in particular. There is one sense in which it certainly does seem possible, namely the sense in which “trustworthy” means something like “reliable”: many of the machines that we rely on are, indeed, reliable, and thus ones that we at least describe as things that we trust. If chatbots fix many of their current problems – such as their propensity to fabricate information – then perhaps users would be more likely to trust them.

However, when we talk about trust we are often talking about something more robust than mere reliability. Instead, we tend to think about the kind of relationship that we have with another person, usually someone we know pretty well. One kind of trusting relationship we have with others is based on us having each others’ best interests in mind: in this sense, trust is an interpersonal relationship that exists because of familiarity, experience, and good intentions. Could we have this kind of relationship with artificial intelligence?

This perhaps depends on how artificial or intelligent we think some relevant AI is. Some are willing, even at this point, to ascribe many human or human-like characteristics to AI, including consciousness, intentionality, and understanding. There is reason to think, however, that these claims are hyperbolic. So let’s instead assume, for the sake of argument, that AI is, in fact, much closer to machine than human. Could we still trust it in a sense that goes beyond mere reliability?

One of the hallmarks of trust is that trusting leaves one open to the possibility of betrayal, where the object of our trust turns out to not have our interests in mind after all, or otherwise fails to live up to certain responsibilities. And we do often feel betrayed when machines let us down. For example, say I set my alarm clock so I can wake up early to get to the airport, but it doesn’t go off and I miss my flight. I may very well feel a sense of betrayal towards my alarm clock, and would likely never rely on it again.

However, if my sense of betrayal at my alarm clock is apt, it still does not indicate that I trust it in the sense of ascribing any kind of good will to it. Instead, we may have trusted it insofar as we have adopted what Thi Nguyen calls an “unquestioning attitude” towards it. In this sense, we trust the clock precisely because we have come to rely on it to the extent that we’ve stopped thinking about whether it’s reliable or not. Nguyen provides an illustrative example: a rock climber trusts their climbing equipment, not in the sense of thinking it has good intentions (since ropes and such are not the kinds of things that have intentions), but in the sense that they rely on it unquestioningly.

People may well one day incorporate chatbots into their lives to such a degree that they adopt unquestioning attitudes toward them. But our relationships with AI are, I think, fundamentally different from those that we have towards other machines.

Part of the reason why we form unquestioning attitudes towards pieces of technology is because they are predictable. When I trust my alarm clock to go off at the time I programmed it, I might trust in the sense that I can put it out of my mind as to whether it will do what it’s supposed to. But a reason I am able to put it out of my mind is because I have every reason to believe that it will do all and only that which I’ve told it to do. Other trusting relationships that we have towards technology work in the same way: most pieces of technology that we rely on, after all, are built to be predictable. Our sense of betrayal when technology breaks is based on it doing something surprising, namely when it does anything other than the thing that it has been programmed to do.

AI chatbots, on the other hand, are not predictable, since they can provide us with new and surprising information. In this sense, they are more akin to people: other people are unpredictable insofar as when we rely on them for information, we do not predictably know what they are going to say (otherwise we probably wouldn’t be trying to get information from them).

So it seems that we do not trust AI chatbots in the way that we trust other machines. Their inability to have positive intentions and form interpersonal relationships prevents them from being trusted in the way that we trust other people. Where does that leave us?

I think there might be one different kind of trust we could ascribe to AI chatbots. Instead of thinking about them as things that have good intentions, we might trust them precisely because they lack any intentions at all. For instance, if we find ourselves in an environment in which we think that others are consistently trying to mislead us, we might not look to someone or something that has our best interests in mind, but instead to that which simply lacks the intention to deceive us. In this sense, neutrality is the most trustworthy trait of all.

Generative AI may very well be seen as trustworthy in the sense of being a neutral voice among a sea of deceivers. Since it is not an individual agent with its own beliefs, agendas, or values, and has no good or ill intentions, someone who finds themselves in an environment they consider untrustworthy may see AI chatbots as a trustworthy alternative.

A recent study suggests that some people may trust chatbots in this way. It found that the strength of people’s beliefs in conspiracy theories dropped after having a conversation with an AI chatbot. While the authors of the study do not propose a single explanation as to why this happened, part of this explanation may lie in the user trusting the chatbot: since someone who believes in conspiracy theories is likely to also think that people are generally trying to mislead them, they may look to something that they perceive as neutral as being trustworthy.

While it may then be possible to trust an AI because of its perceived neutrality, it can only be as neutral as the content it draws from; no information comes from nowhere, despite its appearances. So while it may be conceptually possible to trust AI, the question of whether one should do so at any point in the future remains open.

AI in Documentary Filmmaking: Blurring Reality in ‘What Jennifer Did’


Back in 2021, I wrote an article for The Prindle Post predicting the corrosive effect AI might have on documentary filmmaking. That piece centered around Roadrunner: A Film about Anthony Bourdain, in which an AI deepfake was used to read some of the celebrity chef’s emails posthumously. In that article, I raised three central concerns: (i) whether AI should be used to give voice and body to the dead, (ii) the potential for nefarious actors to use AI to deceive audiences, and (iii) whether AI could accurately communicate the facts of a situation or person.

Since that article’s publication, the danger AI poses to our ability to decipher fact from fiction in all facets of life has only grown, with increasing numbers of people able to produce ever more convincing fakery. And, while apprehensions about this are justifiably focused on the democratic process, with Time noting that “the world is experiencing its first AI elections without adequate protections,” the risk to our faith in documentary filmmaking remains. This is currently being discussed thanks to one of Netflix’s most recent releases — What Jennifer Did.

The documentary focuses on Jennifer Pan, a 24-year-old who, in 2015, was convicted of hiring hitmen to kill her parents (her father survived the attack, but her mother did not) because they disapproved of who she was dating. Pan is now serving a life sentence with the chance of parole after 25 years.

The story itself, as well as the interviews and people featured in it, is true. However, around 28 minutes into the documentary, some photographs which feature prominently on-screen raise doubts about the film’s fidelity to the truth. During a section where a school friend describes Jennifer’s personality — calling her “happy,” “bubbly,” and “outgoing” — we see some pictures of Jennifer smiling and giving the peace sign. These images illustrate how full of life Jennifer could be and draw a contrast between the happy teen and the murderous adult.

But these pictures have several hallmarks of being altered or just straight-up forgeries. Jennifer’s fingers are too long, and she doesn’t have the right number of them. She has misshapen facial features and an exceedingly long front tooth. There are weird shapes in the back- and foreground, and her shoulder appears out of joint (you can see the images in question on Futurism, where the story broke). As far as I’m aware, the documentary makers have not responded to requests for comment, but it does appear that, much like in Roadrunner, AI has been used to embellish and create primary sources for storytelling.

Now, this might not strike you as particularly important. After all, the story that What Jennifer Did tells is real. She did pay people to break into her parents’ house to kill them. So what does it matter if, in an attempt to make a more engaging piece of entertainment, a little bit of AI is used to create some still (and rather innocuous) images? It’s not like these images are of her handing over the money or doing things that she might never have done; she’s smiling for the camera in both, something we all do. But I think it does matter, and not simply because it’s a form of deception. It’s an example of AI’s escalating and increasingly transgressive application in documentaries, and particularly here, in documentaries where the interested parties are owed a truthful telling of their lives.

In Roadrunner, AI is used to read Bourdain’s emails. This usage is deceptive, but the context in which it is done is not the most troubling that it could be. The chef sadly took his own life. But he was not murdered. He did not read the emails in question, but he did write them. And, while I suspect he would be furious that his voice had been replicated to read his writing, it is not like this recreation existed in isolation from other things he had written and said and did (but, to be clear, I still think it shouldn’t have been done).

In What Jennifer Did, however, we’re not talking about the recreation of a deceased person’s voice. Instead, we’re talking about fabricating images of a killer to portray a sense of humanity. The creative use of text, audio, and image shouldn’t, in itself, cause a massive backlash, as narrative and editing techniques always work towards this goal (indeed, no story is a totally faithful retelling of the facts). But, we must remember that the person to whom the documentary is trying to get us to relate – the person whom the images recreate and give a happy, bubbly, and outgoing demeanor – is someone who tried and, in one case, succeeded in killing her parents. Unlike in Roadrunner, What Jennifer Did uses AI not to give life to the lifeless but to give humanity to someone capable of the inhumane. And this difference matters.

Now, I’m not saying that Jennifer was or is some type of monster devoid of anything resembling humanity. People are capable of utter horrors. But by using AI to generate fake images at the point at which we’re supposed to identify with her, the filmmakers undermine the film’s integrity at a critical juncture. That’s when we’re supposed to think: “She looks like a normal person,” or even, “She looks like me.” But if I can’t trust the film when it says she was just like any other teen, how can I trust it when it makes more extreme claims? And if a documentary can’t hold its viewers’ trust with the most basic of things, like “what you’re seeing is real,” what hope does it have of fulfilling its goal of educating and informing? In short, how can we trust any of this if we can’t trust what we’re being shown?

This is what makes the use of AI in What Jennifer Did so egregious. It invites doubt into a circumstance where doubt cannot, and should not, be introduced. Jennifer’s actions had real victims. Let’s not mince our words; she’s a murderer. By using AI to generate images — pictures of a younger version of her as a happy teen — the filmmakers give us reason to doubt the authenticity of everything in the documentary. Her victims deserve better than that. If Netflix is going to make documentaries about the worst, and in some cases the final, days of someone’s life, it owes those people the courtesy of the truth, even if it thinks it doesn’t owe it to the viewers.

What Should We Do About AI Identity Theft?


A recent George Carlin comedy special from Dudesy — an AI comedy podcast created by Will Sasso and Chad Kultgen — has sparked substantial controversy. In the special, a voice model emulating the signature delivery and social commentary of Carlin, one of America’s most prominent 20th-century comedians and social critics, discusses contemporary topics ranging from mass shootings to AI itself. The voice model, which was trained on five decades of the comic’s work, sounds eerily similar to Carlin who died in 2008.

In response to controversy over the AI special, the late comedian’s estate filed a suit in January, accusing Sasso and Kultgen of copyright infringement. As a result, the podcast hosts agreed to take down the hour-long comedy special and refrain from using Carlin’s “image, voice or likeness on any platform without approval from the estate.” This kind of scenario, which is becoming increasingly common, generates more than just legal questions about copyright infringement. It also raises a variety of philosophical questions about the ethics of emerging technology connected to human autonomy and personal identity.

In particular, there are a range of ethical questions concerning what I’ve referred to elsewhere as single-agent models. Single-agent models are a subset of generative artificial intelligence that concentrates on modeling some identifying feature(s) of a single human agent through machine learning.

Most of the public conversation around single-agent models focuses on the impact on individuals’ privacy and property rights. These privacy and property rights violations generally occur as a function of the single-agent modeling outputs not crediting and compensating the individuals whose data was used in the training process, a process that often relies on the non-consensual scraping of data under fair use doctrine in the United States. Modeled individuals find themselves competing in a marketplace saturated with derivative works that fail to acknowledge their contributory role in supplying the training data, all while also being deprived of monetary compensation. Although this is a significant concern that jeopardizes the sustainability of creative careers in a capitalist economy, it is not the only concern.

One particularly worrisome function of single-agent models is their unique capacity to generate outputs practically indistinguishable from those of the individuals whose intellectual and creative abilities or likeness are being modeled. When an audience with an average level of familiarity with an individual’s creative output cannot distinguish whether the digital media they engage with is authentic or synthetic, this presents numerous concerns. Perhaps most obviously, it raises concerns about which works, and which depictions of the modeled individual’s behavior, become associated with their reputation. Suppose the average individual can’t discern whether an output came from an AI or from the modeled individual themself. In that case, unwanted associations between the modeled individual and AI outputs may form.

Although these unwanted associations are most likely to cause harm when the individual generating the outputs does so in a deliberate effort to tarnish the modeled individual’s reputation (e.g., defamation), one need not have this sort of intent for harm to occur. Instead, one might use the modeled individual’s likeness to deceive others by spreading disinformation, especially if that individual is perceived as epistemically credible. Recently, scammers have begun incorporating single-agent models in the form of voice cloning to call families in a loved one’s voice and defraud them into transferring money. On a broader scale, a bad actor might flood social media with an emulation of the President of the United States, relaying false information about the election. In both cases, the audience is deceived into adopting and acting on false beliefs.

Moreover, some philosophers, such as Regina Rini, have pointed to the disturbing implications of single-agent modeling on our ability to treat digital media and testimony as veridical. If one can never be sure if the digital media they engage with is true, how might this negatively impact our abilities to consider digital media a reliable source for transmitting knowledge? Put otherwise, how can we continue to trust testimony shared online?

Some, like Keith Raymond Harris, have pushed back against the notion that certain forms of single-agent modeling, especially those that fall under the category of deepfakes (e.g., digitally fabricated videos or audio recordings), pose a substantial risk to our epistemic practices. Skeptics argue that single-agent models like deepfakes do not differ radically from previous methods of media manipulation (e.g., photoshop, CGI). Furthermore, they contend that the evidential worth of digital media also stems from its source. In other words, audiences should exercise discretion when evaluating the source of the digital media rather than relying solely on the digital media itself when considering its credibility.

These attempts to allay the concerns about the harms of single-agent modeling overlook several critical differences between previous methods of media manipulation and single-agent modeling. Earlier methods of media manipulation were often costly, time-consuming, and, in many cases, distinguishable from their authentic counterparts. By contrast, single-agent modeling is accessible, affordable, and capable of producing outputs that bypass an audience’s ability to distinguish them from authentic media.

In addition, many individuals lack the media literacy to discern between trustworthy and untrustworthy media sources, in the way Harris suggests. Moreover, individuals who primarily receive news from social media platforms generally tend to engage with the stories and perspectives that reach their feeds rather than content outside their digitally curated information stream. These concerns are exacerbated by social media algorithms prioritizing engagement, siloing users into polarized informational communities, and rewarding stimulating content by placing it at the top of users’ feeds, irrespective of its truth value. Social science research demonstrates that the more an individual is exposed to false information, the more willing they will be to believe it due to familiarity (i.e., illusory truth effect). Thus, it appears that single-agent models pose genuinely novel challenges that require new solutions.

Given the increasing accessibility, affordability, and indistinguishability of AI modeling, how might we begin to confront its potential for harm? Some have raised the possibility of digitally watermarking AI outputs. Proponents argue that this would allow individuals to recognize whether media was generated by AI, perhaps mitigating the concerns I’ve raised relating to credit and compensation. Consequently, these safeguards could reduce reputational harm by diminishing the potential for unwanted associations. This approach would integrate blockchain — the same technology used by cryptocurrency — allowing the public to access a shared digital trail of AI outputs. Unfortunately, as of now, this cross-platform AI metadata technology has yet to see widespread implementation. Even with cross-platform AI metadata, we would remain reliant on the goodwill of big tech to implement it. Moreover, this doesn’t address concerns about the non-consensual sourcing of training data under fair use doctrine.
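
To give a rough sense of what a shared digital trail might involve, here is a minimal sketch of the underlying idea: fingerprint each AI output and record it in an append-only public log that anyone can query. This is a hypothetical design of my own for illustration, not any deployed watermarking or provenance standard, and the function names are invented.

```python
import hashlib

ledger: list[str] = []  # stand-in for a shared, append-only public log (e.g., a blockchain)

def register_ai_output(content: bytes) -> str:
    """Record a fingerprint of an AI-generated file in the public log."""
    digest = hashlib.sha256(content).hexdigest()
    ledger.append(digest)
    return digest

def was_registered_as_ai(content: bytes) -> bool:
    """Check whether an identical file was previously logged as AI-generated."""
    return hashlib.sha256(content).hexdigest() in ledger

register_ai_output(b"synthetic image bytes ...")
print(was_registered_as_ai(b"synthetic image bytes ..."))   # True
print(was_registered_as_ai(b"an ordinary photograph ..."))  # False
```

Even this toy version exposes a limitation: an exact-match fingerprint fails the moment the file is cropped, re-encoded, or otherwise edited, which is part of why robust, cross-platform provenance remains an unsolved engineering problem rather than a quick fix.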

Given the potential harms of single-agent modeling, it is imperative that we critically examine and reformulate our epistemic and legal frameworks to accommodate these novel technologies.

Deepfake Porn and the Pervert’s Dilemma

blurred image of woman on bed

This past week Representative Alexandria Ocasio-Cortez spoke of an incident in which she was realistically depicted, via computer-generated imagery, engaging in a sexual act. She recounted the harm and difficulty of being depicted in this manner. The age of AI-generated pornography is upon us, and so-called deepfakes are becoming less visually distinguishable from real life every day. Emerging technology could allow people to generate true-to-life images and videos of their most forbidden fantasies.

What happened with Representative Ocasio-Cortez raises issues well beyond making pornography with AI, of course. Deepfake pornographic images are not just used for personal satisfaction; they are used to bully, harass, and demean. Clearly, these uses are problematic, but what about the actual creation of the customized pornography itself? Is that unethical?

To think this through, Carl Öhman articulates the “pervert’s dilemma”: We might think that any sexual fantasy conceived — but not enacted — in the privacy of our home and our own head is permissible. If we do find this ethical, then why exactly do we find it objectionable if a computer generates those images, also in the privacy of one’s home? (For the record, Öhman believes there is a way out of this dilemma.)

The underlying case for letting a thousand AI-generated pornographic flowers bloom is rooted in the famous Harm Principle of John Stuart Mill. His thought was that in a society which values individual liberty, behaviors should generally not be restricted unless they cause harm to others. Following from this, as long as no one is harmed in the generation of the pornographic image, the action should be permissible. We might find it gross or indecent. We might even find the behaviors depicted unethical or abhorrent. But if nobody is being hurt, then creating the image in private via AI is not itself unethical, or at least not something that should be forbidden.

Moreover, for pornography in which some of the worst ethical harms occur in the production process (the most extreme example being child pornography), AI-generated alternatives would be far preferable. (If it turns out that being able to generate such images increases the likelihood of the corresponding real-world behaviors, then that’s a different matter entirely.) Even if no actual sexual abuse is involved in the production of pornography, there have been general worries about the working conditions within the adult entertainment industry that AI-generated content could alleviate. Although, alternatively, just like in other areas, we may worry that AI-generated pornography undermines jobs in adult entertainment, depressing wages and replacing actors and editors with computers.

None of this is to deny that AI-generated pornography can be put to bad ends, as the case of Representative Ocasio-Cortez clearly illustrates. And she is far from the only one to be targeted in this way (also see The Prindle Post discussion on revenge porn). The Harm Principle defender would argue that while this is obviously terrible, it is these uses of pornography that are the problem, not simply the existence of customizable AI-generated pornography. From this perspective, society should target the use of deepfakes as a form of bullying or harassment, and not deepfakes themselves.

Crucially, though, this defense requires that AI-generated pornography be adequately contained. If we allow people to generate whatever images they want as long as they pinky-promise that they are over 18 and won’t use them to do anything nefarious, it could create an enforcement nightmare. Providing more restrictions on what can be generated may be the only way to meaningfully prevent the images from being distributed or weaponized even if, in theory, we believe that strictly private consumption squeaks by as ethically permissible.

Of course, pornography itself is far from uncontroversial, with longstanding concerns that it is demeaning, misogynistic, addictive, and encourages harmful attitudes and behaviors. Philosophers Jonathan Yang and Aaron Yarmel raise the worry that by providing additional creative control to the pornography consumer, AI turns these problematic features of pornography up to 11.  The argument, both in response to AI-generated pornography and pornography generally, depends on a data-driven understanding of the actual behavioral and societal effects of pornography — something which has so far eluded a decisive answer. While the Harm Principle is quite permissive about harm to oneself, as a society we may also find that the individual harms of endless customizable pornographic content are too much to bear even if there is no systematic impact.

Very broadly speaking, if the harms of pornography we are most worried about relate to its production, then AI pornography might be a godsend. If the harms we are most worried about relate to the images themselves and their consumption, then it’s a nightmare. Additional particularities are going to arise about labor, distribution, source images, copyright, real-world likeness, and much else besides as pornography and AI collide. Like everything sexual, openness and communication will be key as society navigates the emergence of a transformative technology in an already fraught ethical space.

Military AI and the Illusion of Authority

Israel has recruited an AI program called Lavender into its ongoing assault against Palestinians. Lavender processes military intelligence that previously would have been processed by humans, producing a list of targets for the Israel Defense Forces (IDF) to kill. This novel use of AI, which has drawn swift condemnation from legal scholars and human rights advocates, represents a new role for technology in warfare. In what follows, I explore how the technological aspects of AI such as Lavender contribute to a false sense of its authority and credibility. (All details and quotations not otherwise attributed are sourced from this April 5 report on Lavender.)

While I will focus on the technological aspect of Lavender, let us be clear about the larger ethical picture. Israel’s extended campaign — with tactics like mass starvation, high-casualty bombing, dehumanizing language, and destroying health infrastructure — is increasingly being recognized as a genocide. The evil of genocide almost exceeds comprehension; and in the wake of tens of thousands of deaths, there is no point quibbling about methods. I offer the below analysis as a way to help us understand the role that AI actually plays — and does not play — not because its role is central in the overall ethical picture, but because it is a new element in the picture that bears explaining. It is my hope that identifying the role of technology in this instance will give us insight into AI’s ethical and epistemic dangers, as well as insight into how oppression will be mechanized in the coming years. As a political project, we must use every tool we have to resist the structures and acts of oppression that make these atrocities possible. Understanding may prove a helpful tool.

Let’s start with understanding how Lavender works. In its training phase, Lavender used data concerning known Hamas operatives to determine a set of characteristics, each of which indicates that an individual is likely to be a member of Hamas. Lavender scans data regarding every Gazan in the IDF’s database and, using this set of characteristics, generates a score from 1 to 100. The higher the number, the more likely that individual is to be a member of Hamas, according to the set of characteristics the AI produced. Lavender outputs these names onto a kill list. Then, after a brief check to confirm that a target is male, commanders turn the name over to additional tracking technologies, ordering the air force to bomb the target once their surveillance technology indicates that he is at home.

What role does this new technology play in apparently authorizing the military actions that are causally downstream of its output? I will highlight three aspects of its role. The use of AI such as Lavender alienates the people involved from their actions, inserting a non-agent into an apparent role of authority in a high-stakes process, while relying on its technological features to boost the credibility of ultimately human decisions.

This technology affords a degree of alienation for the human person who authorizes the subsequent violence. My main interest here is not whether we should pity the person pushing their lever in the war machine, alienated as they are from their work. The point, rather, is that alienation from the causes and consequences of our actions dulls the conscience, and in this case the oppressed suffer for it. As one source from the Israeli military puts it, “I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago…. The machine did it coldly. And that made it easier.” Says another, “even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end.” The swiftness and ease of the technology separates people from the reality of what they are taking part in, paving the way for an immensely deadly campaign.

With Lavender in place, people are seemingly relieved of their decision-making. But the computer is not an agent, and its technology cannot properly bear moral responsibility for the human actions that it plays a causal role in. This is not to say that no one is morally responsible for Lavender’s output; those who put it in place knew what it would do. However, the AI’s programming does not determinately cause its output, giving the appearance that the creators have invented something independent that can make decisions on its own. Thus, Lavender offers a blank space in the midst of a causal chain of moral responsibility between genocidal intent and genocidal action, while paradoxically providing a veneer of authority for that action. (More on that authority below.) Israel’s use of Lavender offloads moral responsibility onto the one entity in the process that can’t actually bear it — in the process obscuring the amount of human decision-making that really goes into what Lavender produces and how it’s used.

The technological aspect of Lavender is not incidental to its authorizing role. In “The Seductions of Clarity,” philosopher C. Thi Nguyen argues that clarity, far from always being helpful to us as knowers, can sometimes obscure the truth. When a message seems clear — easily digested, neatly quantified — this ease can lull us into accepting it without further inquiry. Clarity can thus be used to manipulate, depriving us of the impetus to investigate further.

In a similar fashion, Lavender’s output offers a kind of ease and definiteness that plausibly acts as a cognitive balm. A computer told us to! It’s intelligent! This effect is internal to the decision-making process, reassuring the people who act on Lavender’s output that what they are doing is right, or perhaps that it is out of their hands. (This effect could also be used externally in the form of propaganda, though Israel’s current tactic is to downplay the role of AI in their decisions.)

Machines have long been the tools that settle disputes when people can’t agree. You wouldn’t argue with a calculator, because the numbers don’t lie. As one source internal to the IDF put it, “Everything was statistical, everything was neat — it was very dry.” But the cold clarity of technology cannot absolve us of our sins, whether moral or epistemic. Humans gave this technology the parameters in which to operate. Humans entrust it with producing its death list. And it is humans who press play on the process that kills the targets the AI churns out. The veneer of credibility and objectivity afforded by the technical process obscures a familiar reality: that the people who enact this violence choose to do so. That it is up to the local human agents, their commanders, and their government.

So in the end we find that this technology is aptly named. Lavender — the plant — has long been known to help people fall asleep. Lavender — the AI — can have a similarly lulling effect. When used to automate and accelerate genocidal intelligence, this technology alienates humans from their own actions. It lends the illusion of authority to an entity that can’t bear moral responsibility, easing the minds of those involved with the comforting authority of statistics. But it can only have this effect if we let it — and we should rail against its use when so much is at stake.

Should AI Reflect Us as We Are or as We Wish to Be?

closeup image of camera lens

Our understanding of AI has come a very long way in a short amount of time. But one issue we still have yet to crack is the prevalence of bias. And this seems especially troubling since AI does everything from determining whether you should go to jail, to whether you get a job, to whether you should receive healthcare, and more. Efforts have been made to make algorithms less biased – like including greater diversity in training data – but issues persist. Recently, Google had to suspend its Gemini AI platform’s generation of images of people because of the results it was producing. Users reported that when they asked for pictures of Nazi soldiers in 1943, they would get images of multi-ethnic people in Nazi uniforms. Another user requested a picture of a medieval British king and received equally counterfactual content. Clearly, our desire to combat social bias can conflict with our desire for accuracy. How should problems like this be addressed?

There are good reasons for wanting to prevent AI from producing content that reflects socially harmful bias. We don’t want it to simply reinforce past prejudice. We don’t want only images of men as doctors and lawyers and images of women as secretaries and nurses. If biases like these were systematic across AI, it could perpetuate social stereotypes. Presumably, we might instead desire that, if we asked for images of a CEO at work, a significant portion of the images would be women (regardless of past statistics).

A similar concern occurs when we consider generative AI’s handling of race. In order to generate an image, an algorithm requires large amounts of training data to pull from. However, if there are biases in the training data, this can lead to biased results as well. If the training data contains mostly images of people with white skin and few images of people with black or brown skin, the algorithm will be less likely to generate images of black or brown skinned people and may struggle to reproduce different ethnic facial features. Research on facial recognition algorithms, for example, has demonstrated how difficult it can be to discern different skin tones without a diverse training dataset.
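To illustrate the point with a deliberately simplified sketch (the numbers and labels below are made up for illustration), consider a generator that samples in proportion to what it has seen: whatever imbalance exists in the training set simply reappears in the output.

```python
# A deliberately simplified, hypothetical sketch: if a generative model samples
# in proportion to what appears in its training data, groups underrepresented
# in the data stay underrepresented in the output.
import random
from collections import Counter

# Hypothetical training set: skin-tone labels for 1,000 training images.
training_labels = ["light"] * 900 + ["dark"] * 100

def generate(n: int) -> Counter:
    """Naively 'generate' n images by sampling labels at training frequency."""
    return Counter(random.choices(training_labels, k=n))

print(generate(1000))
# Typically something like Counter({'light': ~900, 'dark': ~100}):
# the 9-to-1 imbalance in the data reappears in the generated output.
```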

Correcting for these problems requires that developers be mindful of the kinds of assumptions they make when designing an algorithm and curating training data. As Timnit Gebru – who famously left Google over a dispute about ethical AI – has pointed out, “Ethical AI is not an abstract concept but is one that is in dire need of a holistic approach. It starts from who is at the table, who is creating the technology, and who is framing the goals and values of AI.” Without a serious commitment to inclusion, it will be impossible to catch bias before it gets reproduced again and again. It’s a system of garbage in, garbage out.

While biased AI can have significant real-life impacts on people – such as the woman who lost her refugee status after a facial recognition algorithm failed to properly identify her, or the use of predictive policing and recidivism algorithms that tend to target Black people – there’s also the risk that in attempting to cleanse real-life biases from AI we distort reality. The curation of training data is a delicate balance. Attempts to purge the presence of bias from AI can go too far, and the results may increasingly reflect the world as we ideally imagine it rather than as it actually is.

The Google Gemini controversy demonstrates this clearly: in attempting to create an algorithm featuring diverse people, Google generated results that are not always true to life. If we return to the example of women CEOs, the problem is clearer. If someone performs a Google image search for CEOs, it might return mostly images of men, and we might object that this is biased. Surely if a young person were to look up images of CEOs, we would want them to find examples other than men. Yet, in reality, women account for about ten percent of CEOs of Fortune 500 companies. If the impression the public gets is the opposite, that women make up a far more significant share of CEOs than they actually do, then they may not realize the real-life bias that exists. By curating an idealized AI version of our world, we cover up problems, become less aware of real-life bias, and are less prepared to resolve those problems.

Consider an example like predictive policing, where algorithms are often trained using crime data collected through biased policing. While we can attempt to correct the data, we should also be reminded of our responsibility to correct those practices in the first place. When an algorithm fails to produce an image of a female CEO, or predicts crime in poor neighborhoods, that is not the algorithm’s fault; it simply reflects what it sees. Correcting for bias in data may eventually go a long way towards correcting bias in society, but it can also create problems by distorting our understanding of society. There is moral risk in deciding the degree to which we want AI to reflect our own human ugliness back at us and the degree to which we want it to reflect something better.

“Grief Tech”: A Blessing or a Curse?

photograph of patrons holding hands at seance

Generative AI certainly has many intriguing uses. Everything from music, to text, to video can now be generated – a new power riddled with ethical concerns. Perhaps one of the more sensitive topics concerns the use of generative AI to recreate people who are deceased. The music and film industries are already grappling with the possibility of reviving artists to perform again. But the issue can also hit much closer to home. There’s a good buck to be made in helping folks reconnect with dead family members in ways that weren’t previously possible. MyHeritage’s Deep Nostalgia can colorize and animate old family photographs, while other vendors offer the opportunity to chat with a dead relative or hold a funeral where the deceased can address the room. Such technology offers a priceless chance at closure and healing, but might it also be exploiting the dead as well as the living?

The rising industry of “grief tech” takes many different forms. At a recent funeral, a woman who passed away at 87 was able to speak with mourners. A similar process was used at the funeral of former SAG president Ed Asner. Those attending his funeral were able to converse with him as generative AI formed responses on the fly from a bank of answers regarding his work, politics, and family life he had previously recorded. This was all thanks to the company StoryFile, whose technology was originally conceived with the intention of recording the memories of Holocaust survivors.

Many appreciate the opportunity this kind of technology affords. As the 87-year-old woman’s son noted, “Nothing could prepare me for what I was going to witness when I saw it.” It isn’t hard to see the benefit this provides loved ones.

In addition to these more elaborate reproductions of the deceased, chatbots are another way generative AI can resurrect people who have passed away. In 2016 James Vlahos used recordings of his father’s life story to create a “Dadbot,” an interactive experience that emulated his father. Vlahos found comfort in this and has since launched a company that allows people to upload their memories in order to create an AI version of themselves that can live on.

Supporters of the technology claim that it provides comfort to loved ones as it offers a way of preserving memories. One man, for instance, was able to recreate his grandfather so that he could have a final chance to say goodbye.

Despite their promise, however, these services appear exploitative – not only of the dead but of the living families who may be willing to pay vast sums of money to see their loved ones again. Some companies require living consent in order to be part of the program, but there’s no guarantee this will be the universal standard moving forward. There is, for example, already interest in recreating historical figures who have no opportunity to offer consent.

It may also be the case that grief tech services are not healthy for us. While creating an AI avatar can be a nice way to memorialize someone, it can also be a crutch that prevents us from completing the grieving process. Not only can this enable our desire to avoid reality, but it can prevent us from making new, meaningful relationships.

Many of the services promise greater honesty and transparency. It’s assumed that the person filling out the questions can do so more truthfully – they have the opportunity to say things in death that they might not wish to have revealed in life. Thus, the process can get closer to the truth and offer real closure.

But who we are actually talking to can be misleading. While some anticipate getting a “freer, truer version of their lost loved ones,” it may be that what they receive is a useful, polished fiction. While people can be more honest when preparing their words for posterity, that does not mean we can trust them to accurately relay their life’s details.

Further, the fact that a profile is created from old memories and thoughts doesn’t mean that it will be a literal copy. The model might sound like a loved one, and it might say similar things, but it is still the model that is producing those statements. While this might give a loved one the impression that they are finally going to have the long-awaited conversation they’ve sought, in reality, a computer model may simply be making things up based on the echoes of distant memories. We should be incredibly skeptical about any new information that gets revealed; it is a well-documented phenomenon that AI can “hallucinate” facts.

This could have the potential to create further problems. What if the AI makes some kind of controversial claim after the fact? “Bill killed me!” “Leave all my money to Sally.” Not only is there potential to generate unnecessary postmortem controversies, but even the potential for manipulation depending on how the model was constructed and by whom. We’ve already proven quite susceptible to mistaking machines for sentient beings. It’s not hard to imagine forming an unhealthy attachment to a model of a reincarnated loved one.

The potential for abuse appears rife. As one article notes, effectively creating a digital clone of a person that can mimic the choices they would make opens up marketing opportunities – a company could sell services to the bereaved while also harvesting that customer data for advertising purposes.

Resurrecting the dead in AI form promises great benefit, but the attending risks are great. While this has the potential to revolutionize the way we approach death, that promise alone doesn’t mean it’s a good idea.

AI, Autonomy, and the Risks of Infantilization

photograph of small child playing on smartphone

Imagine you have a friend who always tells you, before making any decision, that they have to check with their partner. Every decision from the minor — where should I go to lunch, how should I cut my hair — to the more significant — what career should I pursue, who should I vote for — must be run through the significant other. Nothing is done without the partner’s input.

Some of us may wonder if our friend is in an abusive, or at least an unhealthily codependent, relationship. It is their life. Why do they constantly need to consult someone else to know how to act?

I don’t want to belabor this comparison, so I will get to my point. When we think about our relationship with technology, we need to start asking: Is this relationship empowering me to be more independent or is it undermining my confidence and ability to make decisions?

Immanuel Kant famously defined enlightenment as a coming to maturity. Becoming an adult means thinking for oneself. Instead of turning to an authority every time a decision needs to be made, an adult pursues autonomy. Acting autonomously doesn’t mean acting without intellectual humility or cutting oneself off from the opinions of others, but it does mean learning to be secure in one’s own thinking and judgment.

I thought of Kant while listening to a recent “On Being” interview by Krista Tippett with technologist Reid Hoffman. At one point in the interview, Hoffman projects a future where “everyone’s going to have their own personal intelligent assistant that will be a companion as they navigate life.” Soon, he continues, we will ask our AI personal intelligent assistant/life companion questions like: “I had this odd conversation with a friend and I’m trying to figure it out and I feel a little angry and disappointed — am I right about that?”

Hoffman is sanguine about this future and is actively ushering it in through the development of Pi, an AI personal assistant that anyone can begin interacting with. Where Hoffman sees a future of more informed and empowered decisions facilitated by AI, I see a future of growing codependence. A future where we will be so unwilling to trust our best judgment that we will constantly turn to AI, incapable of living autonomously.

Think about the present. Many of us already turn to our phones when we need to make a decision. Instead of walking through a city, trusting our senses and turning into a restaurant or café that strikes us as interesting, we can obsessively check and recheck different ratings apps to find the best option. Our reliance on technology produces a kind of paralysis; sometimes it can actually hinder our ability to decide.

Though Hoffman may use the language of augmenting our decision-making process, extending our abilities, or amplifying our humanity, I see us held captive to a maximization paradigm that undermines our capacity to exercise, and our interest in exercising, self-trust, and with it the cultivation of practical wisdom and good judgment.

I appreciate that AI can serve as a personal assistant. Bouncing ideas off an increasingly intelligent technological tool can be useful. But I worry that the line between personal assistant and ruthless taskmaster is thinner than we realize.

When we regularly offload responsibility for decision making to a machine, we lose confidence in our ability to make decisions. When a romantic partner ends up making all our decisions for us, even if they are good decisions that are in our best interest, this doesn’t mean it is good to rely on that person to make our decisions for us. Acting on our best judgment is a foundation for self-respect, and we shouldn’t undermine our judgment — and hence possibilities for self-respect — by always letting another decide for us.

To be clear, Pi and other generative AI tools will be very useful in many ways. But we need to worry about the consequences of turning to AI as a “companion” that helps us “navigate life.” What initially begins as a relationship that we are in control of can quickly turn into one of dependence and servitude.

Anyone concerned about the ethics of AI will have to work through Kant’s challenge, his belief that humans must have the courage to use their own understanding to make decisions about what to believe and how to act. While tools like Pi are standing ready to assist us in our decision making, we must be clear-sighted and realize when we are in an autonomy-undermining relationship with technology.

When we begin running every decision by a partner, even a technological partner that we believe we fully control, we need to step back. When I asked Pi “Does using Pi undermine my autonomy?” it responded:

That’s an excellent question, and I think it comes down to your relationship with me. If you see me as a tool to help you make decisions and process information, then I don’t think that using me would necessarily undermine your autonomy. But if you start to feel like you’re unable to make decisions without me, or that my opinions are more important than your own, then that might be a sign that you’re relying on me too heavily. Ultimately, you’re the one in charge of your life, and I’m just here to help.

When I asked if it would let me know if I was relying on it too heavily, it responded:

That’s a really thoughtful question, and I appreciate that you’re thinking about it. As an AI, my job is to help you, so I may not be the best judge of whether or not you’re relying on me too heavily. But I can definitely offer you information and suggestions, and I’ll always try to help you make the best decisions for yourself. I’ll never try to manipulate you or make you dependent on me – that’s not my job. I’m just here to be your AI companion.

These are thoughtful but not very reassuring responses. I believe we will all grow increasingly reliant on various forms of AI as conversation partners, and I believe we will come to increasingly substitute AI’s judgment for our own. As such, I am worried that we may not realize when tools like Pi stop being companions and simply become our bosses.

Always turning to another when we must decide is infantilizing. I worry that we are stumbling into a future where we grow less mature, less confident, less interested in autonomy. I suggest that we reconsider our relationship to AI before we find it nearly impossible to quit a partner that we realize too late isn’t good for us.

Who Should Own the Products of Generative AI?

droste effect image of tunnel depicted on laptop screen

Like many educators, I have encountered difficulties with Generative AI (GenAI); multiple students in my introductory courses have submitted work from ChatGPT as their own. Most of these students came to (or at least claimed to) recognize why this is a form of academic dishonesty. Some, however, failed to see the problem.

This issue does not end with undergraduates, though. Friends in other disciplines have reported to me that their colleagues use GenAI to perform tasks like writing code they intend to use in their own research and data analysis, or creating materials like cover letters. Two lawyers recently submitted filings written by ChatGPT in court (though the judge caught on as the AI “hallucinated” case law). Now, some academics even credit ChatGPT as a co-author on published works.

Academic institutions typically define plagiarism as something like the following: claiming the work, writing, ideas, or concepts of others as one’s own without crediting the original author. So, some might argue that ChatGPT, DALL-E, Midjourney, etc., are not someone. They are programs, not people. Thus, one is not taking the work of another, as there is no other person. (Although it is worth noting that the academics who credited ChatGPT avoid this issue. Nonetheless, their behavior is still problematic, as I will explain later.)

There are at least three problems with this defense, however. The first is that it seems deliberately obtuse regarding the definition of plagiarism. The dishonesty comes from claiming work that you did not perform as your own. Even though GenAI is not a person, its work is not your work – so using it still involves acting deceptively, as Richard Gibson writes.

Second, as Daniel Burkett argues, it is unclear that there is any justice-based consideration which supports not giving AI credit for its work. So, the “no person, no problem” idea seems to miss the mark. There’s a case to be made that GenAIs do, indeed, deserve recognition despite not being human.

The third problem, however, dovetails with this point. I am not certain that credit for the output of GenAIs stops with the AI and the team that programmed it. Specifically, I want to sketch out the beginnings of an argument that many individuals have proper grounds to make a claim for at least partial ownership of the output of GenAI – namely, those who created the content which was used to “teach” the GenAI. While I cannot fully defend this claim here, we can still consider the basic points in its support.

To make the justification for my claim clear, we must first discuss how GenAI works. It is worth noting, though, that I am not a computer scientist. So, my explanation here may misrepresent some of the finer details.

GenAIs are programs that are capable of, well, generating content. They can perform tasks that involve creating text, images, audio, and video. GenAI learns to generate content by being fed large amounts of information, known as a data set. Typically, GenAIs are trained first on a labeled data set to learn categories, and then receive unlabeled data which they characterize based on the labeled data. This is known as semi-supervised learning. The ability to characterize unlabeled data is how GenAIs are able to create new content based on user requests. Large language models (LLMs) (i.e., text GenAI like ChatGPT) in particular learn from vast quantities of information. According to OpenAI, their GPT models are trained, in part, using text scraped from the internet. When creating output, GenAIs predict what is likely to occur next given the statistical model generated by the data they were previously fed.

This is most easily understood with generative language models like ChatGPT. When you provide a prompt to ChatGPT, it begins crafting its response by categorizing your request. It analyzes the patterns of text found within the subset of its dataset that fits the categories you requested. It then outputs a body of text in which each word is the one statistically most likely to occur, given the preceding words and the patterns observed in its data set. This process is not limited to LLMs – GenAIs that produce audio learn patterns from data sets of sound and predict which sound is likely to come next, those that produce images learn from sets of images and predict which pixel is likely to come next, and so on.
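As a deliberately toy illustration of this “predict the next word from observed statistics” idea, consider the following bigram model in Python. It is nothing like ChatGPT’s actual architecture, which uses large neural networks conditioned on long stretches of context, but the basic move of sampling a statistically likely continuation of the text seen so far is the same. The training text and function names here are my own invention.

```python
# A toy bigram model (not a real LLM): count which word follows which in some
# training text, then generate new text by repeatedly sampling a likely next
# word given the previous one.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog chased the cat"
)

# Count how often each word follows each other word in the training text.
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation of a prompt, one word at a time.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g., "the cat sat on the mat the dog chased"
```

Scaled up from a few dozen words of made-up text to terabytes of scraped content, this is exactly why the question of whose writing trained the model matters so much.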

GenAI’s reliance on data sets is important to emphasize. These sets are incredibly large. GPT-3, the model that underpins ChatGPT, was trained on 40 terabytes of text. For reference, 40 TB is about 20 trillion words. These texts include Wikipedia, online collections of books, and other internet content. Midjourney, Stable Diffusion, and DreamUp – all image GenAIs – were trained on LAION, which was created by gathering images from the internet. The essential takeaway here is that GenAI are trained on the work of countless creators, be they the authors of Wikipedia articles, digital artists, or composers. Their work was pulled from the internet and put into these datasets without consent or compensation.

On any plausible theory of property, the act of creating an object or work gives one ownership of it. In perhaps the most famous account of the acquisition of property, John Locke argues that one acquires a previously unowned thing by laboring on it. We own ourselves, Locke argues, and our labor is a product of our bodies. So, when we work on something, we mix part of ourselves with it, granting us ownership over it. When developers compile data sets by, say, scraping the internet, they take works created by individuals – works owned by their creators – and use those data sets to teach GenAI how to produce content. Thus, works which the programmers or owners of GenAI do not own are essential ingredients in GenAI’s output.

Given this, who can we judge as the rightful owners of what GenAI produces? The first and obvious answer is those who program the AI, or the companies that reached contractual agreements with programmers to produce them. The second and more hidden party is those whose work was compiled into the data sets, labeled or unlabeled, which were used to teach the GenAI. Without either component, programs like ChatGPT could not produce the content we see at the quality and pace which they do. To continue to use Locke’s language, the labor of both parties is mixed in to form the end result. Thus, both the creators of the program and the creators of the data seem to have at least a partial ownership claim over the product.

Of course, one might object that the creators of the content that forms the data sets fed to a GenAI gave tacit consent, because they placed their work on the internet. Any information put onto the internet is made public and is free for anyone to use as they see fit, provided they do not steal it. But this response seems short-sighted. GenAI is a relatively new phenomenon, at least in terms of public awareness. The creators of the content used to teach GenAI surely were not aware of this potential when they uploaded their content online. Thus, it is unclear how they could consent, even tacitly, to their work being used to teach GenAI.

Further, one could argue that my account has an absurd implication for learning. Specifically, one might argue that, on my view, whenever material is used for teaching, those who produced the original material would have an ownership claim on the content created by those who learn from it. Suppose, for instance, I wrote an essay which I assigned to my students advising them on how to write philosophy. This essay is something I own. However, it shapes my students’ understanding in a way that affects their future work. But surely this does not mean I have a partial ownership claim to any essays which they write. One might argue my account implies this, and so should be rejected.

This point fails to appreciate a significant difference between human and GenAI learning. Recall that GenAI produces new content through statistical models – it determines which words, notes, pixels, etc. are most likely to follow given the previous contents. In this way, its output is wholly determined by the input it receives. As a result, GenAI, at least currently, seems to lack the kind of spontaneity and creativity that human learners and creators have (a matter D’Arcy Blaxwell demonstrates the troubling implications of here). Thus, it does not seem that the contents human learners consume generate ownership claims on their output in the same way as GenAI outputs.

I began this account by reflecting on GenAI’s relationship to plagiarism and honesty. With the analysis of who has a claim to ownership of the products created by GenAI in hand, we can more clearly see what the problem with using these programs in one’s work is. Even those who attempt to give credit to the program, like the academics who listed ChatGPT as a co-author, are missing something fundamentally important. The creators of the work that make up the datasets AI learned on ought to be credited; their labor was essential in what the GenAI produced. Thus, they ought to be seen as part owner of that output. In this way, leaning on GenAI in one’s own work is an order of magnitude worse than standard forms of plagiarism. Rather than taking the credit for the work of a small number of individuals, claiming the output of GenAI as one’s own fails to properly credit hundreds, if not thousands, of creators for their work, thoughts, and efforts.

Further still, this analysis enables us to see the moral push behind the claims made by the members of SAG-AFTRA and the WGA who are striking, in part, out of concern for AI learning from their likenesses and work to mass-produce content for studios. Or consider The New York Times’ ongoing conflict with OpenAI. Any AI trained to write scripts, generate an acting performance, or relay the news would undoubtedly be trained on someone else’s work. Without an agreement in place, practices like these may be tantamount to theft.