
Engineered Appetite

For the longest time, we, as a species, have looked to improve the types of food and flavors to which we have access. From the ancient selective breeding practices that reshaped the remarkably large aurochs into today’s cows to the globe-spanning flavor trade that animated the “spice routes” of the Roman Empire and beyond, we have always been, if one puts it indelicately, led by our tongues and our stomachs.

Today, however, the scale of our ambition poses unprecedented challenges. Biodiverse ecosystems such as the Amazon rainforest are cleared for grazing land. Animals raised in factory farms often endure unsanitary conditions that can foster pathogens such as avian influenza. Our need to feed ourselves is increasingly entangled with environmental degradation and global health risks.

This is nothing new, though. We have long known that our food systems must change, whether to ensure food security or, more cynically, to secure greater profits. In the 1990s, this came in the form of genetically modified (GM) foods. Scientists, farmers, politicians, and business owners argued that if we were to continue to meet the demands of a growing global population, we would have to supercharge our food systems to better meet the challenges of the 21st century. As a representative for the bioscience company Zeneca said back in 1996, “Everybody wins; the farmer has a longer window for delivery, there is less mould damage, the tomatoes are easier to transport and they are better for processing.” Yet public resistance was fierce. In the UK, especially, media backlash against so-called “Frankenfoods” halted the rollout of the technology (it even featured as a major storyline within the farming-centric continuing drama The Archers). Indeed, in the aftermath, the Office for Science and Technology commissioned a report into the widespread rejection. And while countries such as the US went on to incorporate GM products into their agricultural systems, others rejected them outright or imposed strict regulatory limits.

The pressures that gave rise to GM crops, however, have not disappeared; if anything, they have intensified. As a result, new methods of accelerating agricultural improvement continue to emerge. The latest iteration is the development of “precision-bred organisms” (PBOs).

As described in a recent article in The Times, PBOs are created using gene-editing techniques to introduce traits that an organism might plausibly have acquired through natural mutation or selective breeding, but within a dramatically shortened timeframe. This distinguishes them from GM organisms, which have genetic material spliced into them that they could never have incorporated through sexual or asexual reproduction, such as material from other species. Whether this is a distinction without a difference is something I leave up to the reader: does opposition to GM food stem from the mere fact of alteration, from the introduction of “foreign” DNA, or from a broader discomfort with human intervention itself?

My concern, for the purposes of this piece, lies elsewhere. What interests me is what these technologies betray: not simply a desire for mastery over nature, but perhaps a failure of mastery over ourselves.

I should say that I have previously written about humanity’s efforts to control nature and how those efforts erode the giftedness of our embodied existence (you can find that here). Here, I want to focus on something slightly different: what might be called, for want of a better word, a kind of moral laziness.

By laziness, I do not mean that scientists, farmers, or supply-chain workers lack diligence or discipline. Rather, I mean an absence of collective aspiration. As a species, we possess unparalleled power. We reshape landscapes on a planetary scale; by some estimates, humans now move more earth each year than all natural processes combined. We are not merely inhabitants of the planet but one of its most transformative forces.

Yet for all this power, we often hesitate to change those things we find convenient. We know that demand for products such as palm oil contributes to rainforest destruction. We know that fossil-fuel dependence drives climate change. We know that high levels of meat consumption are linked to ecosystem degradation and animal suffering. Still, we persist along the same path. We work hard to shape the world around us, yet do so in ways that satisfy our most basic (perhaps animalistic) desires.

This, however, is not inherently a bad thing. Better access to nutrition and transport brings heightened economic prospects and lifts people out of poverty. Economic opportunity brings health, education, and security. The destruction of ecosystems is rarely driven solely by malice; more often, it is motivated by the pursuit of better livelihoods and prosperity in response to consumer demand. When meeting that demand is the clearest route to economic survival, resistance becomes difficult, even self-defeating. In other words, we can’t blame people for doing what they must to survive.

It is here that virtue ethics offers a useful lens. For Aristotle, moral virtue is like a muscle: it grows stronger through deliberate practice. As he writes in Book Two of the Nicomachean Ethics: “Men become builders by building and lyre-players by playing the lyre; so too we become just by doing just acts.” Virtue is not an abstract principle, but a habit cultivated through repeated choice and dedication. Much like the pursuit of a six-pack, becoming good is not something we achieve by taking the easy way out. Virtue requires taking the path of most (or at least, substantial) resistance. If we simply do what feels right in the moment, before we train ourselves to distinguish the good, then we’re being nothing more than morally lazy.

My worry is that each technological attempt to satisfy our expanding appetites, however ingenious, risks weakening that moral muscle. Instead of exercising restraint, reimagining consumption, or reshaping demand, we are once again looking at an innovation that allows us to continue down the same path we’ve been on. PBOs may be scientifically sophisticated and address economic, agricultural, and environmental needs, but they could also signal a deeper problem: a preference for altering our food rather than altering ourselves.

The question, then, is not only whether PBOs are safe or efficient. It is whether they represent progress in the fullest sense. Are they going to provide nutrition for the body while being yet another instance of us impoverishing our souls? I don’t have the answer. But, much like my fellow Brits, I harbor a lingering unease at the thought of food engineered in a lab, even if, as I readily admit, that unease may not be entirely rational.

(Not) Punishable By Death (Part 2)

Ethicists dedicate a lot of time to discussing whether the death penalty is morally permissible. Recently, I turned to consider a related – though often overlooked – question: does the death penalty even count as punishment in the first place? I noted that, on most standard accounts, a punishment must harm the offender. I considered three possible times at which that harm might occur: (1) during execution; (2) before execution; or (3) after execution. I argued that (3) is largely unknowable, while the harm associated with (1) and (2) is neither necessary nor sufficient for the successful completion of the death penalty.

But “harm” isn’t limited to merely giving us something bad. Harm can also come in the form of depriving us of something good. Perhaps the harm of the death penalty isn’t found in the bad things it might (though doesn’t have to) add to our lives, but rather in the good things it takes away. This provides us with a fourth option for how the death penalty might count as punishment.

Option 4: The Offender is Harmed via Deprivation

Epicurus famously argued that death can’t be bad for us. Why? Because if we don’t exist, we can’t be harmed. A key premise of Epicurus’s argument is that death means our total annihilation. This is often referred to as the “Termination Thesis.” It’s a controversial claim (anyone who believes in some kind of afterlife will necessarily reject the Termination Thesis), but let’s accept it, and see where it leads. If death means our total annihilation, then – by definition – we won’t be around to experience death. To use Epicurus’s own words:

“…so long as we exist, death is not with us; but when death comes, then we do not exist.”

Epicurus’s final move is to note that if we cannot experience something, then that thing cannot be bad for us.

It’s a comforting notion – but it rarely provides solace to those who have misgivings about death. Why? Because, for most of us, the badness of death isn’t about the experience of bad things, but rather our inability to experience good things. Put simply: death is bad because it deprives us. It deprives us of the ability to see our favorite team win the next Super Bowl, to watch the next installment in our favorite film franchise, or to experience the joy of our grandchildren graduating college.

If this deprivation explains the badness of death, then it might also describe the harm of the death penalty. Put simply: the death penalty isn’t harmful because it causes pain and suffering to the offender (though it may). Rather, it’s harmful because it deprives the offender of all of the good things they might otherwise have enjoyed.

In this way, the death penalty shares much with a prison sentence. Offenders are placed behind bars to deprive them of their freedom, and all of the things that they might have used that freedom to enjoy. The death penalty does the same thing – albeit on a more permanent basis.

But there’s a problem. Consider those other cases in which a punishment harms an offender via deprivation – like a fine or a prison sentence. In both cases, there is someone who is currently being deprived. The fined offender is now being deprived of their money. The imprisoned offender is now being deprived of their freedom. The same is not true of the executed offender. By the time that sentence deprives the offender – that is, by the time the death penalty has been carried out – the offender is no longer around to experience that deprivation. This is, in a way, a repackaging of Epicurus’s initial argument. In fact, we might reformulate his original claim to say precisely this:

“…so long as we exist, deprivation is not with us; but when deprivation comes, then we do not exist.”

But maybe this is a mistake. Fred Feldman argues that the question “when does death deprive us?” simply misses the point. All things considered, death causes us to live a shorter (and therefore less valuable) life. But when, precisely, is a shorter life worse than a longer life? According to Feldman, this question doesn’t really make sense. It’s sort of like asking “when is a math exam worse than a pizza party?” If there is an answer, it’s probably “eternally.”

What this means, then, is that the deprivation created by the death penalty doesn’t harm the offender at a time, but – rather – harms them eternally. But this leads to some unintuitive claims. Technically, an offender’s execution harms them prior to their execution. In fact, it harms them prior to the commission of their crime. It was even harming them as an infant – long before thoughts of wrongdoing even entered their mind.

Perhaps we’re willing to accept these unintuitive claims in order to preserve the death penalty’s status as a punishment. But even if we are, there is a larger concern at hand. Once again, this harm seems neither necessary nor sufficient for the satisfactory completion of the death penalty. In order to illustrate this, consider a case in which an offender’s remaining life would not in fact be filled with good things. In such a case, execution would not deprive them (and thus not harm them). Yet we would still see them as having received their sentence upon their death. This shows that the harm of deprivation isn’t necessary for the successful completion of the death penalty. Consider an alternative case in which the execution of an offender goes wrong, and the offender doesn’t die, but instead falls into a permanent coma. In such a case, the offender would be deprived of future goods, but we’d most likely see them as having avoided their sentence. This shows that the harm of deprivation is insufficient for the successful completion of the death penalty.

So where does this leave us? Ultimately, there appears to be no good way of fleshing out the harm of the death penalty. This leaves us with three options going forward: The first is to find an alternative basis for the harm of the death penalty. The second would be to revise our definition of punishment and remove the requirement of “harm” – though this would create a raft of other problems, and see us include many things in our definition of punishment that ought to be excluded. The third alternative is to merely acknowledge that – despite its widespread use as a response to wrongdoing – the death penalty does not, in fact, qualify as a “punishment.”

But what would be the implications of this third alternative? Does it really matter if the death penalty fails to meet our standard definition of “punishment”? It might – especially when we remember that the reason why this failure has occurred is because the death penalty (despite appearances to the contrary) doesn’t actually harm the offender. In particular, this might create serious problems for how we come to justify the practice in the first place.

Consider, for example, the consequentialist justification for the death penalty, which argues that we should execute a murderer because doing so will deter future murders. But how can the death penalty effectively deter when it’s entirely incapable of harming the offender? Consider, alternatively, the retributivist justification for the death penalty, which argues that we should execute a murderer merely because this is what he deserves. This is usually rooted in the idea that someone who commits the worst crime imaginable should receive the most harmful punishment at our disposal. But, if what we’ve said above is true, the death penalty isn’t harmful at all. So if the retributivist is looking to cause the most harm possible to the offender, they’ll need to look to a punishment other than the death penalty.

In this way, the inability of execution to harm the offender doesn’t just rob the death penalty of its status as a “punishment,” it also removes much of what we might’ve used to morally justify its use in the first place.

Control and the Dark Side of Technological Progress

Iran, with technological support from Chinese companies, has assembled a powerful system of digital censorship and surveillance over the past 15 years. That infrastructure was recently employed – using face recognition, internet blackouts, and AI – to brutally crush protests, resulting in at least 7,000 deaths. On the other side of the world, two senior AI researchers, Zoë Hitzig at OpenAI and Mrinank Sharma at Anthropic, resigned, citing concerns about the AI business model and AI safety, respectively. Underlying these seemingly dissimilar events is a shared worry about the dangers of technology and who controls it.

Our stories of innovation and technological progress tend to focus on the broader public. We will have access to new mind-bending entertainment sources, life-changing medical technologies, and a vast array of time-saving devices, so the narrative goes. Yet the most important impact of a technology may not be how it is enjoyed by the everyday consumer, but rather how it is wielded by powerful entities such as governments or large corporations – Iran has over 90 million people; OpenAI’s ChatGPT has over 700 million weekly users.

For many of us, especially in advanced economies, our lives are completely infused with technology. Communication with our friends, the news we read, our access to government services, the tools on which we work, recommendations for doctors and restaurants, our political engagement and activism, are all facilitated by either government- or corporate-controlled digital infrastructure. Often we are exchanging our personal details — birth date, favorite websites, anxieties, etc. — for access. Off our computers, we can be monitored by our phone’s GPS, or watched by our Ring cameras.

Increasingly, the tendency has been towards centralization and top-down control. The largest technology companies, such as Alphabet (Google), Microsoft, and Apple, have all embraced a platform approach, where they provide digital real estate and tools, which can then be “rented” by others. Likewise, major large language models, such as ChatGPT, charge users or product developers for access to their models. This has led to a digital landscape with very few owners and many borrowers. Even most e-books are simply licensed, rather than owned the way a paper copy is.

At the same time, countries are increasingly asserting digital sovereignty and their right to control digital infrastructure within their territorial domains. China’s Great Firewall is the most famous example, but nations such as Russia and Iran have also developed sophisticated ways to block and shut off internet access. Even the EU has come to embrace digital sovereignty, although its current concern is minimizing dependence on US tech companies.

This wraparound technological infrastructure – and the data it harvests – represents a great deal of potential control over our lives. This has its advantages. Powerful actors can secure data, fight cybercrime, and provide valuable tools and products. Digital surveillance can be used to fight terrorism. Advertising and data collection allow companies to provide their services at discounted rates.

However, these same powers greatly amplify a tendency already present in 20th-century politics: the drive of governments and corporations to translate power and knowledge into impact and influence. Their ability to track, monitor, and influence is historically unrivaled.

Given this reality, it is valuable to consider what protects us from the undue exercise of power.

At the most extreme is the nonexistence of that power. One way to prevent large corporations from wielding such awesome power, for example, is to simply break them up. Similarly, a weakened government can be limited in its capacity to oppress (at the cost of being limited in its capacity to help).

Less extreme are various restraints or counterweights to the exercise of power. For corporations, this includes regulations, supervisory bodies, robust consumer and worker protection laws, and competitive alternatives. For governments, this includes free and fair elections, an independent judiciary, and the separation of powers. A well-functioning government that is responsive to the interests of the people is, of course, better positioned to impose meaningful regulation on corporations than a government that is weak, corrupt, or malfeasant.

Finally, there is mere discretion. Here it is simply a matter of internal restraint whether corporations or governments exercise certain powers. As governments and, in a sense, corporations build up their data-gathering and surveillance architectures, we increasingly rely on trust to maintain data integrity and prevent abuse. This is especially the case for countries like the US with relatively lean regulations, consumer protections, and workers’ rights. On the topic of AI, the US administration asserted in a December executive order that “AI companies must be free to innovate without cumbersome regulation.” Given the known role such technology can play in deepfakes, data gathering, face recognition, and even cybercrime, this puts a lot of trust in these companies.

Some philosophers emphasize what is called non-domination or republican freedom. The key feature here is that the arbitrary exercise of power is not possible (or is, at least, prohibited), as opposed to merely voluntarily withheld. They emphasize that a slave with a permissive master is still not free.

By the same token, domination represents a particular risk for a world with extraordinarily powerful governmental and corporate actors. We need not just worry about what they do, but what they could do. Good governance may help take the edge off, but can it eliminate the risk entirely? Not every country is blessed with good governance.

We will have to think deeply if we want a world that contains such powerful actors and prevents potential abuses. Do the potential benefits they can provide through incredible resources and economies of scale outweigh the risk of abusing their power? Is it too late to go back?

The accumulation of digital power and the weaponization of technology raise a more general point about the complexity of technological progress. Technological improvement and societal improvement need not walk in lockstep. Certainly, some innovations and new technologies are nearly uncontroversial good things: antibiotics, seatbelts, sanitation, braille.

Still, technological growth is not without its costs and risks. We cannot always see the full effect of new products and innovations – there are always unforeseen dangers and unanticipated applications. It’s also good to remember that the effects of technologies may not be distributed evenly across a society. AI-fueled innovations that are good for landlords are not necessarily good for renters; those good for companies are not necessarily good for their workers. Technology can exacerbate existing power differentials in society. Nor can we see the combined effect of many different technologies and the often disorienting changes they can bring to a society. How will, for example, large language model chatbots like ChatGPT impact how we learn, think, and socialize? It is worth considering what we lose, not just what we gain, in the pursuit of progress.

Stoicism in Times of Unrest

We live in a turbulent time. As with many volatile periods in history, it seems like every day the news brings reports of struggle and upheaval. Many people even experience it first-hand. This anxiety has given birth to comparisons with the civil unrest of the 1960s, with the Civil Rights movement and the Vietnam War. Others have likened it to the 1930s and 1940s, with tariffs and Japanese internment camps. Still others see parallels to the 1890s, given the rise in unionizing, labor struggles, and wealth inequality. These comparisons give a helpful perspective. Times were hard then, too, but we got through them, often leaving society better off. Another window of history we can look to is the Hellenistic period in ancient Greece. After Alexander the Great’s death (323 BCE), the once mighty Macedonian Empire began to rapidly tear itself apart. This gave rise to several survival philosophies, schools of thought that aimed to provide inner peace and happiness during tough times.

The most popular of these was Stoicism. The Stoic philosophers said that unhappiness is caused by our passions – things outside of our control stir up an emotional response. The cure, they said, is to temper these harmful emotions through reasoning. Learning to accept the things we cannot change leads to a happy life. This conclusion has led to a common misconception that Stoicism means being cold and emotionless. After all, the experience of emotion is a fundamental part of our humanity. Should we really seek to rid ourselves of it? Additionally, doesn’t the focus on calmness and control encourage us to accept injustice and suffering rather than trying to resist it? Is this really what Stoicism teaches?

There are some helpful lessons to be learned here from Martin Luther King, Jr., who shows us what applied Stoicism can look like during challenging times. On April 16, 1963, he wrote an open letter after being jailed for protesting racial segregation. This Letter from Birmingham Jail includes powerful moral statements on the injuries inflicted when justice is delayed, as well as the dangers of prioritizing decorum over justice. If we are earnest about doing what’s right, responding to injustice could mean non-violent direct action such as protests, strikes, boycotts, or civil disobedience. King provides four steps to determine if action is needed, and if so, how to prepare. The first step requires gathering facts to determine if injustice has truly occurred. The second step is negotiation. If an injustice is identified, in an ideal world, we would bring it to the attention of those with power to stop it. However, if injustice occurs because those in power are either complicit or complacent, negotiations fail. What then? This brings us back to what the Stoics mean when they talk about accepting our suffering. Acceptance doesn’t mean to consent, but to come to terms with the reality of the situation so we can focus on what we can change. Stoicism is about “keeping your head in the game.”

Since we cannot control the past nor can we know the future, Stoicism helps us tune out noise so we can focus on what is in front of us. Stoic acceptance means not letting passions weigh you down so much that you lose focus on what you can do today. Likewise, Martin Luther King, Jr. shows that when negotiations fail, we must accept the reality of what happened without resigning ourselves to those failures.

The final step that King prescribes is direct action, a pressure campaign designed to bring people with power back to the bargaining table. This follows from his third step: self-purification for that campaign. He describes self-purification as preparing oneself to react peacefully in the face of violence. This takes incredible mental training and self-discipline! It also helps to clarify what the Stoics might mean by ridding oneself of passions. In the face of violence, it is only natural to have strong emotions. If someone harms you, a person you love, or a community member, reactions like anger and fear are to be expected, especially if the harm is unjust. The problem is not the sudden onset of these feelings (in other words, a passion in itself), but what we do with them. King’s step of self-purification readies us to focus on what’s most important (justice) and not to let even the most powerful passions break our focus. While it is easy to “lean into” emotions that make us feel defeated or that distract us from what’s important, we should pause and ask ourselves, “What is in my control? I may be angry or feel crushed, but what can I do today to make a difference?”

In turbulent times, even when we think about what is in our control, it may still seem like too much. The Stoic philosopher Epictetus reminds us that we are like actors in a play. We do not decide the life we are given, but it is up to us to live it as well as we can. If we do, we might find we are capable of more strength and resilience than we were expecting: “…each thing that happens to you, remember to turn to yourself and ask what capacity you have for dealing with it… If hardship comes to you, you will find endurance. If it is abuse, you will find patience.”

Perhaps it feels like things are out of control, and you want to act, but feel overwhelmed. Maybe you have been doing what you can to help others, but there’s so much to do that it feels hopeless. It may be that you are struggling with your mental health amid the daily onslaught of negative information in this time of uncertainty. These are times when passions can prove especially difficult to master. These passions get in our way and can be paralyzing. They keep us from doing good in the world or providing the self-care that we need. The Stoics help us focus on what’s important and tune out what is not. What’s important is what is in front of us today. As Martin Luther King, Jr. shows, this does not mean giving in or giving up. Quite the contrary! If we focus on what’s in front of us right now, we position ourselves to take on whatever comes next.

AI and the Water Wasting Machine

An argument that’s frequently made against the use of AI (specifically, popular chatbots based on LLMs, like ChatGPT, Gemini, Copilot, etc.) is that AI is harmful to the environment. The hardware required to create, train, and run chatbots consumes a significant amount of energy, which in turn requires the use of natural resources, specifically water. Many articles have been written recently that focus on AI’s water usage, with some saying that AI is “accelerating the loss of our scarcest natural resource,” “draining water from areas that need it most,” and that by 2030 it will “match the annual drinking water needs of the United States.”

While the fact that AI needs some amount of water to operate isn’t disputed, there is much more debate around how much it uses. Some AI defenders, for instance, have argued that the amount of water AI uses is either negligible or at least comparatively negligible given the amount of water that other things use and that we are seemingly okay with. Sam Altman, for example, postulated that a single ChatGPT query uses “roughly one fifteenth of a teaspoon” of water, while others have written that the AI industry as a whole in the US uses about as much water as its golf courses.

Indeed, “you’ve been thinking about AI’s water use wrong” content is a plentiful resource in 2026. For example, an article from Wired, fittingly titled “You’re Thinking About AI and Water All Wrong,” notes that many early estimates of AI water use are likely off by a not-insignificant amount. And a popular Substack article, less subtly titled “The AI water issue is fake,” lists the following items alongside the approximate amount of water they take to produce when compared to a prompt for a chatbot:

Leather Shoes – 4,000,000 prompts’ worth of water
Smartphone – 6,400,000 prompts
Jeans – 5,400,000 prompts
T-shirt – 1,300,000 prompts
A single piece of paper – 2,550 prompts
A 400-page book – 1,000,000 prompts

These kinds of comparisons are intended not only to challenge the narrative that AI uses a lot of water, but also to defuse the moral argument against AI use on the basis of water consumption. As the Substack author notes: “If you want to send 2500 ChatGPT prompts and feel bad about it, you can simply not buy a single additional piece of paper. If you want to save a lifetime supply’s worth of chatbot prompts, just don’t buy a single additional pair of jeans.” In other words, we risk being moral hypocrites if we criticize AI for its water usage while still reading books and wearing comfortable clothes.

I am not going to question the exact numbers concerning AI water use. Instead, I want to consider the merits of the “you’ve been thinking about AI’s water use wrong” arguments, specifically whether we really are moral hypocrites for criticizing AI on the basis of its water use. If we are okay with playing golf and using phones and doing all sorts of other things that use water, must we refrain from criticizing AI on the same basis? Are the people who think that we shouldn’t use AI because of its water use really thinking about it “all wrong,” or chasing a “fake” issue?

I think that the answer is no: we can still criticize AI on the basis of its water consumption. To illustrate, consider the following thought experiment:

The Water Waster: A company announces a new product: the Water Waster 3000. Here’s how it works: it uses [one/one hundred/one thousand] gallons of potable water every day to spin a wheel. The wheel is not connected to anything, and its only tangible benefit is that some people enjoy watching the wheel spin.

We can write the thought experiment so that the Water Waster 3000 uses different amounts of water. However, no matter which number we choose, we would likely reach the same conclusion: that operating the Water Waster 3000 is a waste of water. Someone who really enjoyed watching wheels spin might argue that, in comparison to other forms of entertainment, a single spin of the wheel uses only a fraction of the amount of water required by other industries. But this argument doesn’t hold much water (so to speak): it doesn’t matter that other things use much more water; what matters is that the Water Waster 3000 wastes it.

It is, of course, unfair to say that the Water Waster 3000 is the exact same thing as an AI chatbot. However, one might argue that AI chatbots and much of the AI industry are akin to the Water Waster 3000 in the sense that the AI industry does not, by and large, produce anything of significant enough value, or at least not enough value to warrant its environmental impact. As argued in the aforementioned Wired article: “People who don’t think twice about eating a burger or buying a new T-shirt are angry about LLMs and water because they are rejecting the entire premise that AI is worth the price of its water use.”

If we approach the argument this way, then we can also avoid the charge of hypocrisy. After all, people need t-shirts and shoes: those things require water to produce, but they’re worth it. Sure, producing clothing has a negative impact on the environment, but since the trade-off is acceptable, it’s not a waste of water, just a use of it.

But does this argument free us from the charge of hypocrisy? After all, we do all sorts of things that arguably do waste water, but still do not subject them to as much scrutiny. For instance, while people certainly need shoes and t-shirts and such, we definitely don’t need as many as people tend to acquire (for example, the environmental damage of so-called “fast fashion” – mass-produced, cheap, and disposable clothing – is well-documented). That AI enthusiasts often compare the water use of the AI industry to golf courses is thus particularly apt, not only because of the alleged similarities in the amount of water they use, but also because golf doesn’t serve any practical need. Golf is frivolous (arguably), so if we’re happy as a society to accept using a lot of water so that some people can more easily hit a small ball into a small hole that’s really far away, then arguably we should be okay using a similar amount of water for chatbots and other potentially much more useful things.

Of course, the claim that people are hypocritical when it comes to criticizing AI for its environmental impact does not negate the fact that AI use still has an impact on the environment. Using AI can then still be bad because of its water usage, just as it is (arguably) bad that we use a lot of water on golf courses. After all, the claim of moral hypocrisy is not a claim about the rightness or wrongness of an act, but instead about whether one has the grounds to criticize others because of their own acts.

So, perhaps AI is bad because it uses water, but then so is everything else because everything uses water. Isn’t it unfair, then, to single out AI?

Well, not really. We are, after all, in the unenviable position of living in a system where hardly any of our acts as consumers or users of technology are environmentally neutral. If this is enough to undermine our moral authority to criticize, then we could not criticize any acts that have a negative environmental impact unless those acts are disproportionately more extreme than our own. But it’s not clear that this is a fair standard to hold people to. Remember our Water Waster 3000: we seem perfectly within our rights to criticize the existence of such a machine, even though we use other, more useful machines that also use water.

The Wired analysis is then perhaps not entirely fair to AI critics: it is not necessarily that people who choose to eat burgers or buy new t-shirts don’t give any thought to the environmental impact of their actions – they might – nor that they think that burgers and t-shirts are worth the environmental costs – they might think they ultimately aren’t. But you can still criticize someone for leaving their tap running even if you happen to have bought a new pair of jeans recently. If AI is a waste of water, then we are not hypocrites for criticizing it.

Of course, the AI-enthusiast would likely reject the idea that AI use really is akin to leaving a tap running or something like the Water Waster 3000, or indeed is a waste of water at all. While it is hard to defend the value of every individual use of an AI chatbot, we might still think that, overall, AI should be conceived of as a use of water that produces something of value, rather than a waste. Maybe what we need to calculate is something like “value-per-milliliter of water used,” where values above a certain level would qualify as being “worth it” while those below are “not worth it.” Where the critic and the enthusiast might disagree, then, is whether AI falls above or below that line.

The ongoing disputes around the amount of water AI uses make this calculation practically impossible. But the charge of hypocrisy assumes that we have an answer to this question, namely that AI is, in fact, worth the water it uses. If we reject this premise, or even just call it into question, then we do not lose our moral ground to hypocrisy.

So, where does this leave us? We’ve seen that it is not enough to defuse the environmental argument against AI use to simply say that AI uses as much or less water than other things that we tend to think are acceptable, either at the level of queries, companies, or the industry as a whole. Nor does the fact that we use water for some things mean that we automatically undermine our ability to criticize AI use on the same basis: we do not lose our moral standing to criticize AI for its environmental impact if we reject the idea that AI is worth its water use. While there will undoubtedly be many more arguments about how much water AI uses, it is still criticizable on that basis.

(Not) Punishable By Death

Last week, Luigi Mangione – the man accused of fatally shooting UnitedHealthcare CEO Brian Thompson – received an unexpected reprieve. Two of the charges against Mangione were dropped, meaning that he is no longer eligible to receive the death penalty. The following week, New Hampshire voted against the reinstatement of the death penalty, while – on the same day – Alabama renewed its efforts to expand the use of executions to crimes beyond murder. While entirely unrelated, these events demonstrate the USA’s complicated relationship with the death penalty. And this is why it’s such a common topic of conversation in ethics classrooms. But there’s a more fundamental question that’s often overlooked: does the death penalty even count as punishment in the first place?

Providing a full and accurate definition of legal punishment is a big project. But, on most standard accounts, there’s widespread agreement: Whatever else it involves, punishment necessarily requires harm. Think about the many ways in which people are (and have been) punished for breaking the law: floggings, fines, prison sentences. In every case, the offender’s life is made worse-off in some way. So, in order for the death penalty to count as a punishment, it must harm the offender. But when, precisely, does it do this? As I see it, that harm has to occur either during, before, or after execution. So let’s consider these one at a time.

Option 1: The Offender is Harmed During Execution

Executions used to be very painful affairs. Whether it was the guillotine, the gallows, or the firing squad – a sentence of death was a sentence to a significant amount of physical suffering. Nowadays, things have (mostly) changed. Provided that an execution is carried out in the right kind of way, it’s possible to end an offender’s life in a way that causes no pain.

But what about the other kinds of suffering necessarily associated with an execution? Facing one’s imminent death is likely to cause immense fear and dread. Could this be the harm that cements the death penalty’s status as a punishment? It seems not. This is because the psychological suffering that occurs during an execution is neither necessary nor sufficient for the satisfactory completion of that sentence. Suppose, for example, that an offender faints thirty minutes prior to his execution, and remains unconscious for the entirety of the process. Chances are, we’d still see that offender as having received their sentence – even though they experienced no psychological suffering during their execution. This shows that such suffering isn’t necessary for the successful completion of the death penalty.

Consider an alternative scenario. Suppose that, through some elaborate (and somewhat sadistic) charade, we cause an offender to believe they are being executed – but instead merely render them unconscious, reviving them a few minutes later to, say, serve a life sentence instead. In such a case, we’d probably see the offender as having avoided their sentence – even though they experienced all the same psychological suffering as they would have had they been executed. This shows that such suffering isn’t sufficient for the successful completion of the death penalty.

Option 2: The Offender is Harmed Before Execution

Perhaps, then, we might look to the period prior to an offender’s execution to find the harm of the death penalty. The amount of time an offender spends on death row can be extensive, and is filled with the dread and existential fear associated with impending death. This might be compounded by the labyrinthine appeals process and the uncertainty surrounding whether and when their sentence will even be carried out.

Once again, however, this suffering is neither necessary nor sufficient for the satisfactory completion of the death penalty. Consider cases in which offenders – for various reasons – suffer no psychological harm while on death row. Perhaps their faith allows them to feel no fear of what is to come. Or perhaps they’re merely blessed with a particularly stoic demeanour. Consider, also, cases of summary execution where an offender simply doesn’t have time to consider their imminent demise. Whatever the cause, it’s likely that when these offenders are executed, we will still see them as having received their sentence. If this is the case, then suffering prior to execution isn’t necessary for the successful completion of the death penalty.

Consider, in the alternative, cases in which an offender is on death row for many years, but then receives last-minute clemency. In such a case, we would say that the offender has avoided their sentence – even though they might’ve gone through all of the associated psychological suffering beforehand. This, again, shows that such suffering is not sufficient for the successful completion of the death penalty.

Option 3: The Offender is Harmed After Execution

Of course, we might argue that the harm of the death penalty lies in what comes for the offender after death. Perhaps we feel confident that post-execution, they will find themselves in a place of perpetual torment. But this is a tricky line of reasoning. Ultimately it rests on two rather large assumptions: (1) that there is an afterlife; and (2) that said afterlife will involve harm for the offender. There are many who might reject one or both of these claims. What’s more, even if we believe both, evidence of either can be hard to provide. What this means, then, is that whether or not the death penalty causes harm to the offender – and whether, then, it meets the definition of being a punishment – becomes largely unknowable.

So where does this leave us? In order for the death penalty to count as a ‘punishment’ it must harm the offender. I’ve argued that any suffering that occurs during or before an execution is neither necessary nor sufficient for the successful completion of the death penalty. What’s more, the occurrence of any harm after execution is either non-existent or, at the very least, unprovable.

But here’s the thing: harm doesn’t come exclusively in the form of pain or suffering. Sure, many punishments harm us by giving us something bad. A public flogging, for example, does precisely this. But many other punishments instead harm us by depriving us of something good. This is what happens when we receive a fine (which deprives us of money) or a prison sentence (which deprives us of liberty). Perhaps this is what’s going on in the case of an execution. Perhaps the harm of the death penalty isn’t found in the bad things it might (though doesn’t have to) add to our lives, but rather in the good things it takes away. It’s this possibility I’ll turn to discuss next time.

The Problem of the Ethical Ethicist

I am lucky enough to have a job as an ethicist, which, statistically speaking, is unlikely. Most of the people with whom I completed my undergraduate degree in philosophy do not work in the field. They have what might be called normal jobs: working with spreadsheets, tending bar, or selling something to someone. While I am sure they still occasionally think about philosophy, and perhaps even about ethics, they are not immersed in it in the same way that I am. I am paid to do philosophy. Whether that is a good thing, or whether I have simply turned something I care deeply about into nothing more than a job, remains to be seen.

Still, I do feel fortunate to spend my days thinking about right and wrong. With that fortune, however, comes what I take to be a certain self-imposed expectation. If I make my living evaluating the reasons and frameworks according to which we pass value judgments on the world and on those around us, then surely I ought to be a good person myself. If I am not, then on what grounds is it fair or even appropriate for me to judge others, whether directly or indirectly through their actions?

An analogy might help here. Imagine you go to the doctor because you have a cough that simply will not go away. Your usual doctor is unavailable, so you see someone new. This doctor, however, is conspicuously unhealthy. Most notably, they are smoking while they examine you. After finishing the exam, they tell you that you need to take better care of your lungs; perhaps take up running, and for God’s sake, quit smoking. You would likely find this advice a little rich, perhaps even hypocritical, coming from someone who is, at that very moment, hacking on a dart.

To me, something similar applies to people who work in ethics. You cannot reasonably claim to be an ethicist if you do not, even at a very basic level, attempt to act ethically in your own life. If you are a bad person, where does your justification come from to judge others? It seems almost self-evident that if you make your living examining right and wrong, then you yourself would be a good person, or at least better than average. After all, if that is not the case, then what chance does the rest of the population have, those who do not spend their time thinking about morality for a living?

And yet, despite how intuitive this view may be, it simply is not true. Ethicists are no more moral than non-ethicists. In their 2016 meta-analysis, Eric Schwitzgebel and Joshua Rust found that “ethicists in the United States appear to behave no morally better overall than do non-ethicist professors.” While this study was limited to the United States, it seems reasonable — unless one believes there is something uniquely morally corrosive about that country — to assume that similar trends would be found elsewhere. In short, studying morality full-time does not necessarily make one a more moral person.

Still, I suspect many of the people included in that study would resist this conclusion. My intuition is that most of us see ourselves not as bad people, but at the very least as morally acceptable. We see ourselves as good. This is backed up by research by Ben M. Tappin and Ryan T. McKay, who found that “most people believe they are just, virtuous, and moral.” I am no exception. I think I’m a good person, or at least, I think I try to be.

This tension came into focus for me recently, in both a personal and professional capacity. I was scheduled to present at the 2026 Law and Society conference, which this year is being held in San Francisco. I have long wanted to visit the city, and I saw the trip as an opportunity to fulfill both professional obligations and personal aspirations. In early January, I began making plans, aware that the socio-political climate in the United States was less than ideal but believing I could set those concerns aside. After all, nothing too terrible had yet happened.

Then came January 7th and the shooting of Renee Good. I will not rehearse the details here; by now, most of us are familiar with what occurred, and you have likely seen the footage. In the aftermath, I found myself asking whether I could justify entering the United States given what was unfolding in Minneapolis and, presumably, in other parts of the country. I was undecided — until I received a personal message from a friend who told me they had been accosted by men claiming to be ICE agents. According to their account, which I have no reason to doubt, they were nearly disappeared off the street, prevented only by the presence of enough bystanders to make the public pressure unbearable for the self-identified “agents.”

That was the moment that tipped me over. I immediately contacted the conference organizers and informed them that I would no longer be presenting, citing the deteriorating conditions within the country. Then came more news of similar incidents; the most recent being the shooting of Alex Pretti.

Now, would I have been at personal risk had I gone? Almost certainly not. The conference was in San Francisco, a sanctuary city in a sanctuary state. Moreover, and not to be indelicate, I am white. Unless I open my mouth and expose my accent, those who profile based on skin color would be unlikely to identify me as a target. It’s safe to say that personal safety was not the motivating factor. I have little doubt I would have been physically fine.

Rather, the issue was ethical. I could not see going to the United States at this moment in time as an ethically acceptable thing to do. It would have been wrong. And if that is so, then as an ethicist, I take myself to be more obligated than most to avoid the wrong and to do the good — even when doing so comes at a personal or professional cost.

While I was considering whether to withdraw from the conference, my mind went to an unlikely place: Peter Singer’s Animal Liberation Now. On the back of my edition is a short list of endorsements, one of which comes from Richard Dawkins: “Peter Singer may be the most moral person on the planet.” I do not know what to make of that claim. Both Singer and Dawkins have attracted their share of controversy, and Dawkins’ assessment strikes me as, at best, hyperbolic. Still, the sentiment lodged itself with me. The idea that someone might look at a professional philosopher — someone paid to think about right and wrong — and conclude that they actually do the right thing was unexpectedly moving.

It forced me to reflect on my own behavior, and on what it would mean to deserve that kind of judgment.

Ultimately, I think what I am circling here is the idea that teaching and researching ethics are, on their own, insufficient. Is that work valuable? Perhaps. I might even have motivated a student to do the right thing at some point. But the harder question is whether I do the right thing when the opportunity presents itself, or whether ethics is merely the means by which I make a living.

I hope it is not the latter. I hope my decision not to present at the conference was the right one. I am unsure whether anyone will follow my lead, or whether that matters. But at least in this instance, that quiet moral confidence Tappin and McKay attribute to the population at large feels — just possibly — earned.

Why Ethics Classes Don’t Give Answers

When I started doing philosophy, I was convinced that if I studied it long enough, I might be able to solve moral problems with some degree of certainty. When I first encountered the trolley problem and found myself stumped by all the competing justifications, I assumed that with enough time I would eventually know what I should do, for real, if I ever had to decide whether to pull the lever.

It did not take very long for me to realize that this moral certainty I was chasing is most likely a fantasy. As I sit here some fourteen years later, the truth is that ethicists have not solved the trolley problem, at least not in the way we think about solving other kinds of problems. What we have instead is a range of carefully argued, though often mutually incompatible answers. And even with those answers, there is a whole lot of baggage that comes with accepting them.

This realization often returns to me when I teach ethics. Clearly, I am not teaching students how to solve moral problems the way a math teacher teaches students to solve quadratic equations. When I teach ethics, I rarely teach students the correct answer to moral questions. So what, exactly, am I teaching them?

Last year, in a medical ethics course, I asked students to discuss a morally controversial case involving the treatment of an incapacitated patient. When some students struggled to reach a conclusion, they confidently suggested that the morally responsible thing to do would be to defer the decision to the attending physician. I then asked a simple follow-up question: What if you were the attending physician? Their faces sank. In that moment, many seemed to realize that their profession may someday demand that they make decisions where the stakes are literally life and death and the responsibility for deciding falls to them.

Medical ethics is especially revealing in this respect because it overlaps so closely with law and policy. Many students enter the course expecting to learn what the law requires or what hospital policy dictates. But that is not the primary aim in my version of this course. We are examining how moral questions arise within medical practice, even when legal and institutional guidance is already in place, and how to work through ethical problems in a way that might lead to law and policy in the first place.

I do not take pleasure in making students uncomfortable, but moments like the one in my medical ethics class reveal something important about what ethics education is really doing. Learning ethics involves learning to take moral decisions seriously, and that means confronting our own role in other people’s lives. It also means understanding the complexity that can arise with a morally charged event. It requires understanding moral terminology and being able to cordon off moral reasons from non-moral reasons.

After discussing difficult cases, students often want to know what the real answer is. At that point, I usually have to shrug and admit that I do not know. I can explain how a utilitarian might answer, or a deontologist, or a Buddhist, or even how I myself am inclined to think about the case. But I cannot tell them the correct answer.

If the value of teaching ethics depended entirely on delivering unassailable answers to moral dilemmas, then ethics might not have very much to offer (as far as I can tell, nearly every answer that has been given to a moral question is assailable). But this misunderstands what ethics is for. The value of ethics does not lie solely in providing solutions for the students to memorize and apply; it lies in helping them understand the precise nature of the problems we are facing in the first place.

One of my favorite examples of this in medical ethics comes from Judith Jarvis Thomson’s famous argument about abortion. A strength of her argument is not that it definitively settles the issue, but that it reframes it. Thomson shows that granting the fetus a moral right to life does not automatically resolve questions about what others are morally required to do. In doing so, she helps clarify what is actually at stake in debates about abortion, even for those who ultimately disagree with her conclusions.

All of this is to say that ethical theorizing can be valuable even when it does not deliver final answers. Its point is not merely to tell us what to do, but to help us see more clearly what we are doing, what we are responsible for, and why certain decisions deserve our contemplation. What I want my students to take away from ethics classes is not how to solve moral problems with absolute certainty, but rather how to face moral problems in their entirety and with precision.

I cannot help but think about all of this in the shadow of AI. As debates continue to rage over whether students should use AI, whether workers should rely on it, or whether it provides genuine benefit at all, I find myself returning to the human work of reflection about how we ought to live. Socrates told us that the unexamined life is not worth living; Aristotle argued that the good life consists in the activity of the human soul in accordance with virtue. I think they were onto something and suspect this is why we don’t spend much time teaching students the right answers.