
Sentience and the Morality of Lab-Grown Brains

photograph of human brains in glass jars with formaldehyde

Stories of lab-grown brains might seem like the preserve of Halloween tales. But in early October, a group of researchers writing in the journal Neuron claimed to have grown the world’s first “sentient” lab-grown brain – a creation they dubbed DishBrain. Lab-grown “brains” are nothing new; they were first produced in 2013 to study microcephaly. This is the first time, however, that such a brain has been exposed to external stimuli – in this case, a version of the video game Pong. The Neuron publication outlines how scientists grew a culture of 800,000 brain cells – derived from human stem cells and mouse embryos – then connected this brain to the video game via electrodes. The cells responded, learning how to play the game in around five minutes. While its mastery of the game wasn’t perfect, its rate of success was well above random chance.

Many of us have a finely-honed “ew yuck” response that is triggered by these kinds of cases. But of course, being disgusted by something is not an ethical argument. Still, our distaste might signify something morally important. This is what philosopher Leon Kass famously referred to as the “wisdom of repugnance.”

So why might these lab-grown brains disgust us? We can start by considering what’s novel about DishBrain – that is, its claimed sentience. This is a notoriously ambiguous term. In many science fiction stories, “sentience” is used as shorthand for “consciousness” or “self-awareness.” Marvin the Paranoid Android, for example, might be described this way – exhibiting the capacity to question his own existence, experiencing bouts of depression and boredom, and even having the ability to compose lullabies. Often, this same understanding of sentience will be used to distinguish between different kinds of alien lifeforms – with the status of “sentience” being used to differentiate intelligent, communicative beings from other more primitive alien animals.

In ethical discussions, however, sentience is defined more narrowly. Derived from the Latin sentientem (a feeling), sentience is used to refer exclusively to the ability to feel pain and pleasure. If something has such an ability, it will have sentience.

On this narrower definition, a highly intelligent robot that is nevertheless incapable of experiencing pain will not be sentient, while an unintelligent animal that can experience pain will be.

I recently discussed the moral importance of this kind of sentience in light of the revelation that insects might feel pain. Why is it so important? Because anything with interests is morally relevant in our ethical decision making, and – as philosopher Peter Singer argues – if something can experience pleasure, then it has an interest in pursuing pleasure. Likewise, if something can experience pain, then it has an interest in avoiding pain. If some living being experiences suffering, then there can be no moral justification for refusing to take that suffering into account.

Return, then, to the case of DishBrain. Suppose that – as its creators claim – this lab-grown brain has sentience. On the narrow definition above, this would mean that DishBrain could experience pain and pleasure. If this were the case, it might go some way towards explaining our repugnance regarding the experiment.

While playing Pong for hours on end might not be a truly painful experience, being created solely for this purpose sounds like a life utterly devoid of any real pleasure. You or I certainly wouldn’t want to go through such a miserable existence.

Given this – and given Singer’s argument regarding sentience – it would be morally wrong to inflict this kind of life on someone (or something) else.

Fortunately, however, DishBrain doesn’t seem to possess sentience of this kind. In the absence of sensory receptors and a complex nervous system, it seems unlikely that DishBrain is capable of experiencing anything like pain or pleasure. Given this, there’s little reason to worry about this experiment running afoul of an argument like Singer’s.

But are pain and pleasure all that matter morally? Consider, for example, an individual who suffers from congenital analgesia – a rare condition in which someone is unable to experience pain. Would it be morally permissible to inflict a battery of painful experiments on this person, justified on the basis that they will experience no pain as a result? It would seem not. And this suggests that something more than pain and pleasure might matter to our considerations of how we should treat other beings.

Perhaps this is where the alternative conception of sentience – referring to things that are capable of self-awareness – is useful. The capacity for this kind of sentience also seems morally important.

We might, for example, adopt something like the Kantian notion that any self-aware being should be treated as an end in itself – not as a means to some other end. This might be why we believe it would still be morally wrong to carry out painful experiments on someone who is incapable of experiencing pain.

Fortunately, lab-grown brains don’t seem to be sentient in this way either. DishBrain isn’t self-aware. It’s merely receiving input and providing output, much like a computer – or even something as rudimentary as a mechanical slot machine – might do.

There’s a warning here, however. Sentience – whether understood as (i) the ability to experience pain and pleasure, or (ii) the capacity for self-awareness – carries enormous moral weight. While DishBrain might (contra the claims of its creators) currently lack sentience, creating further iterations of lab-grown brains that do possess real sentience would be enormously problematic. Our repugnance at this – our “ew yuck” reaction – would then have a solid moral foundation.

Protest & Paint: What’s Wrong with Targeting Art?

photograph of van Gogh's Sunflowers

On Friday, October 14th, members of the activist group Just Stop Oil gained international attention by throwing cans of tomato soup on Vincent van Gogh’s Sunflowers. The two women who threw the cans were arrested and charged with criminal damage. London’s National Gallery, the home of the painting, stated that the painting was undamaged. This incident is one of numerous protests by the group, including vandalizing a luxury department store, disrupting sporting events, and blocking traffic – sometimes by gluing themselves to roads, at other times by climbing on bridges. Targeting Sunflowers appears to have inspired copycat protests; on Sunday, October 23rd, two protestors associated with the organization Letzte Generation (Last Generation), a German climate activist group, threw mashed potatoes on Monet’s Les Meules (Haystacks). This painting was also behind glass and undamaged.

These incidents have drawn backlash. Suella Braverman, the Home Secretary of the United Kingdom, referred to blocking traffic and slowing emergency vehicles as “completely indefensible,” calling for new legislation to counter the protests. Op-eds have called into question the means of targeting artwork, declaring that it is unlikely to drum up support. Others have argued against the mission of Just Stop Oil altogether, claiming that increased energy prices from ending oil production will simply harm low-income individuals, while also criticizing the means of protest. Some on social media have adopted the conspiratorial view that Just Stop Oil is actually funded by oil interests in order to turn the public against climate activists.

However, there seems to be an unexamined assumption built into some of these criticisms.

Namely, I am interested in exploring why one would think that protests which target art would fail to garner support. This critique is often presented baldly. So, my goal is to consider some assertions one might make in claiming that protests which target art are ineffective and misdirected.

Perhaps some find fault with these protests because they are illegal. These protests are, after all, a form of vandalism. Yet this reasoning is specious for at least two reasons. First, legal and moral judgments are distinct. Few think that burning witches at the stake was moral, despite its being the result of a legal process. Second, this analysis would result in a general prohibition on civil disobedience. Civil disobedience, helpfully analyzed by Giles Howdle here, is a form of protest that involves breaking laws one perceives to be unjust, and accepting the legal consequences that follow. This tactic was commonly deployed in movements now almost universally approved, such as the U.S. Civil Rights movement and the Indian independence movement. This suggests that illegality is not, by itself, a way to demonstrate that a protest is immoral.

Another way one might object to these methods of protest is through an appeal to the harm principle. This principle states that acts are immoral if they produce harm. However, the claim needs to be further specified.

Some permissible forms of protests like boycotts may actually intend harm, although the picture is somewhat complicated.

Nonetheless, a protest’s causing harm does not seem like a sufficient reason for condemnation. Much more compelling is the claim that innocents ought not be harmed.

But first we need to identify these innocents. Perhaps one candidate is the museums themselves. Although the paintings were unharmed, there will be some costs to repair or replace their frames. Further, additional security may be needed, incurring costs to prevent incidents in the future. However, there are two issues with grounding the objection in harm to the museums.

First, museums do not seem to be the kind of entity whose interests count morally. Second, it is possible that these protests could benefit the museums in the long run.

Perhaps now more people will visit art galleries, hoping to see works before they suffer damage or to witness the next incident. So, until we have some idea of the long-term consequences, we cannot be sure that these protests harm the museums.

Perhaps the victims are instead the museum guests. After all, they bought tickets to see works of art. Unfortunately, their ability to do so was hampered by the actions of the protestors. Entire sections of the museum were closed following the protests. Still, these “harms” alone do not seem sufficient to show that the protests are wrong. Protests we consider justified often inflict this sort of collateral damage. Suppose, for instance, a group protested an unjust war at a local park, and this protest forced a family to cancel their annual reunion. Although regrettable, this by itself does not seem to make the protest immoral. Perhaps something like the doctrine of double effect holds here – so long as innocents are not the direct target, third parties may permissibly, if regrettably, be inconvenienced by protests.

One might argue that those who had their sensibilities offended by the actions of the protestors were harmed – the harm being psychological or emotional rather than physical. What would be offensive about these protests?

Well, the protestors engaged in what we might call profane acts. By profane, I mean actions that did not demonstrate the proper sort of respect or reverence towards a deserving object. The idea here is that great works of art may deserve our respect.

So, the act of throwing food on these works, even if aimed at contributing to a greater cause, demonstrates disrespect towards the art itself. But note, again, that the works targeted by the protestors in this particular case were not harmed. Instead, the moral fault – if there is one – must reside in what the acts demonstrated, not their results.

Why would art deserve respect? A likely reason is that such works have significant aesthetic value. L.W. Sumner describes aesthetically valuable things as those “which we find in some respect appealing or attractive or admirable.” The aesthetic value may come both from their physical appearance and from their significance in the history of art. So perhaps many found these protests shocking or offensive because to throw food on these artworks – even if they are protected – is to behave in a way that is unbecoming of their value.

Yet this may be precisely what the protests are trading on.

If climate change is indeed an existential threat, with consequences that threaten human civilization, many, many valuable things will be lost if we do not act soon. These losses would certainly include at least some priceless works of art. As a result, the protestors may be making a kind of trade-off.

They are willing to engage in profane behavior in hopes that it will help preserve value in the long run – not just works of art, but the many human and non-human lives that will be lost with the worst consequences of climate change.

Certainly, these protests targeting art have indeed been shocking. But before condemning them we must take a step back and reflect on the nature of the values at stake. Do works of art like Sunflowers and Haystacks have value such that we can never engage in behavior which disrespects that value? In other words, do they pose constraints on our behavior, such that certain acts are off-limits no matter how dire the circumstances? If not, then we ought to ask ourselves when we can transgress these values. Without proper assessment, we cannot be certain whether the protestors erred in selecting their target, or whether the error was made by those who offhandedly dismiss these protests.

Why It’s OK To Buy that Steak

photograph of grocery shopper debating purchase at meat aisle

We’ve all been there. Walking through the supermarket, you’re suddenly confronted by a refrigerator cabinet full of plastic-wrapped chicken and prepackaged sausage, or the butcher’s display case larded with cuts of marbled beef, richly red. Gazing at these morsels of animal flesh, you recall all of the ethical reasons why you shouldn’t eat meat — meat production violates animals’ rights and ruins the environment. The right thing to do in this situation seems clear: skip the steak and buy lentils instead.

But while the arguments against meat eating present a compelling case for societal-level change to the composition of our diets, it does not quite follow from this that your individual decision to buy a steak is unethical.

Indeed, there is a plausible argument that, notwithstanding the wrongness of meat consumption in the aggregate, there is nothing wrong with individual carnivorous choices. In short, it might not be OK for all of us to eat meat, but it is still OK for any one of us to eat meat. This column will attempt to articulate that argument, with due acknowledgement of its limitations.

The argument’s major premise is a very general claim: faced with the choice to do either A or B, we are morally obligated to do B only if A is, or at least is objectively likely to be, morally worse than B. What makes one choice morally worse than another? It’s beyond the scope of this column to provide an exhaustive answer to that question, but clearly two things that make a choice morally bad are that it causes harm to a person and that it violates a person’s rights. By “person” I mean here an entity worthy of strong moral consideration, which could include animals. So, one choice can be worse than another if the former causes more harm or violates more rights. In addition, it may violate more fundamental rights — think of the difference between the right to life and the right to vote.

It follows from this premise that you are morally obligated not to buy the steak only if buying the steak is morally worse than not buying it. And that is the case only if buying the steak causes more harm or violates more rights, or more fundamental rights, than not buying it.

The question, then, is whether buying the steak does any of these things.

Let’s consider harm first. Clearly, buying the steak does not cause harm to the cow from which the steak was harvested — that cow no longer exists as a subject capable of feeling pain. Perhaps, however, buying the steak causes harm to presently existing or future cows or the environment, since it sends a signal to meat producers — a signal that would otherwise not have been sent — to produce more meat, and meat producers may respond to that signal by increasing the number of cows they raise and slaughter.

The trouble with this argument is that it is almost surely false. Your sixteen-dollar purchase will have no effect on the meat producers’ decisions, which are influenced only by the aggregate demand of hundreds of thousands or millions of consumers. Furthermore, if you don’t buy the steak, someone else almost certainly will. Thus, even if you choose not to buy the steak, the aggregate demand for steaks almost certainly won’t be reduced even by as little as sixteen dollars — a reduction that, to reiterate, wouldn’t make a difference to meat producers’ market decisions anyway. So, if buying the steak is morally worse than not buying the steak, it isn’t because the former causes more harm than the latter.

The same points apply to the issue of whether buying the steak violates more rights, or more fundamental rights, than not buying the steak.

If killing the cow from which the steak was harvested violated its rights, buying its meat does not cure the violation — but it also adds no new violation. Eating a steak does not constitute a violation of an animal’s rights, although it may depend upon such a violation.

And if buying the steak will not cause more harm to present or future cows or the environment because of the insignificance of my individual consumer choices to meat producers’ decisions, neither will it lead to more rights violations.

It appears, then, that buying the steak is not morally worse than not buying the steak. If the major premise is true, it follows from this that you are not morally obligated not to buy the steak. Now for the fun part: answering objections.

First, it may be objected that precisely the same argument can be made with respect to any moral problem that arises due to the aggregate effects of many individual choices. Pollution and unfortunate election outcomes are two obvious examples. Some philosophers are happy to “bite the bullet” here and accept that individuals do not have obligations to behave in ways that would make a difference only if many others followed suit, like voting or refraining from polluting.

Actually, bad election outcomes are quite different from meat consumption in at least one key respect.

In elections, there is no reason to believe that when one person declines to vote, another person — someone who would not otherwise have voted — will vote in her stead. This is unlike when one person chooses not to buy a steak.

In that case another person will very likely buy that very same steak, which she could not have done had the first person bought it.

This distinction is important because it means that any individual’s vote might make a difference to who gets elected — it just has a very, very low likelihood of doing so. However, given the profound consequences of many elections, even that low probability of making a difference arguably makes it likely enough that not voting is morally worse than voting to ground an obligation to vote.

It might be objected here that if the exceedingly small probability of casting the decisive vote is enough to ground an obligation to vote, then the exceedingly small probability of influencing others in some way by not buying the steak is also a sufficient basis for an obligation not to buy the steak. But this objection fails for two reasons. First, voting is only morally required if the election’s outcome is likely to have significant downstream effects. While this is plausible with respect to elections, it is not plausible with respect to the act of not buying the steak. Second, because someone else will almost surely buy the very steak you omitted to buy, we can safely say that your omission’s influence will be nil. Instead, what can be influential is some further act, such as talking to someone about your choice not to buy a steak.

Nothing I’ve said in this column means that you aren’t morally obligated to perform some other acts that help promote a large-scale shift to vegetarianism if you can. My claim is merely that you aren’t morally obligated not to buy the steak.

Pollution is a more serious problem for my argument, since unlike a single person’s vote, a single person’s quantum of pollution is certainly not going to have a decisive effect on the overall health of the environment. Suppose you are considering whether to dump one day’s worth of garbage in a nearby lake. That amount of garbage may have no perceivable impact on the ecological health of the lake — perhaps not a single organism will or is likely to be affected. That can be the case even if, had the entire city in which you live followed suit, it would have destroyed the lake’s ecology. This seems to imply that dumping your garbage into the lake is not morally worse than refraining from doing so, and so there is no moral obligation for you not to pollute in this way.

However, this conclusion would be overhasty. It might be true that your garbage dumping does not cause ecological harm. But there are other ways in which even a small amount of pollution can have a small, but tangible negative impact. For example, pollution can be an aesthetic affront to people who have a strong interest in enjoying “unspoiled” nature. More importantly, there is another way in which our choices can be morally bad: they can violate rights.

One can violate rights without making the rights-holder worse off in a particular instance. It may plausibly be argued that animals, plants, and even ecosystems have rights not to be polluted at all. This is one way of explaining the intuition many people have that natural ecosystems are in some sense “sacred.”

If that’s so, then even an ecologically insignificant act of pollution may violate those rights, and so may be morally impermissible, despite not making a tangible difference in terms of the well-being or functioning of the affected animals, plants, and ecosystems.

A second objection comes in the form of a question: what if everyone subscribed to the foregoing reasoning? Then the morally bad aggregate effects of meat consumption would be realized. The implication is that the test for whether an individual morally ought to do something is whether the result of everyone doing the very same thing is acceptable. Admittedly, as a sort of quasi-utilitarian sister of Kant’s Formula of Universal Law, as well as a distant cousin of the Golden Rule, this claim belongs to a very illustrious family of moral theories. In effect, these theories simply deny the major premise of my argument: even if your choice of A over B does not cause more harm or violate more rights, if everyone’s choice of A over B would do so, then your choice is nevertheless wrong.

Philosophers have collectively devoted literally thousands of pages to some version of this disagreement, so I don’t expect to settle it here. Suffice it to say that there is something odd about focusing on some hypothetical scenario when considering whether one’s act is morally wrong, rather than on the act’s intrinsic nature and effects.

It is worth emphasizing the limitations of the argument I’ve just defended. As I mentioned, nothing in this argument means that you are not obligated to promote vegetarianism in ways likely to have significant aggregate effects. This means that public officials, public figures, and prominent or influential members of communities likely have stronger obligations to promote vegetarianism than ordinary people. Indeed, since even their omissions may be influential, such people may have obligations to be vegetarians themselves.

In short, the argument I’ve outlined here does not get you off the hook for doing something to help reduce aggregate meat production if you can. And it does not dispute that there are compelling moral reasons for societies to reduce aggregate meat consumption. It simply suggests that you shouldn’t feel too guilt-stricken about your particular consumption choices.

The Liberal Case for Federalism

image of US map with state flags identifying borders

One of the most striking aspects of the United States’ system of government is its federalism: those titular states are not mere administrative divisions, but unique sovereigns with extensive powers denied to the national government. Historically, support for federalism has cut across the liberal or conservative divide: it was the liberal Supreme Court Justice Louis Brandeis who, in 1932, famously coined the phrase “laboratories of democracy” to describe how the Constitution’s Tenth Amendment allows “a single courageous State . . . if its citizens so choose . . . to try novel social and economic experiments without risk to the rest of the country.”

That began to change around the middle of the last century, as Congress and the federal courts assumed the role of champions of individual rights and equality as against states committed to marginalizing certain classes of citizens. Now, with many statehouses dominated by conservatives, partly through partisan gerrymandering, and this Supreme Court paring away constitutional protections of individual rights — most notably, the right to abortion — there seems to be little reason for liberals to celebrate our decentralized system. Indeed, at least two liberal writers have recently tweaked Justice Brandeis with the titles of their books.

But federalism can, I think, be a friend to liberals.

In this column, I will argue that a state-centric approach to the liberal project is attractive for a wide variety of reasons.

The first point is prudential and straightforward. It is always better to have two shots at achieving your political goals rather than one. If liberals have control of statehouses, defeats on the national level — whether in the Supreme Court or in Congress — need not mean the demise of their agenda.

Moreover, the chances of success at the state level are not as dim, or at the national level as bright, as many liberals think.

There is a persistent myth that when it comes to individual rights and equality, the federal government gets things right more often than state governments.

This is understandable, given the central role the federal government played in vindicating the rights of African-Americans, denied to them by Southern states, during the Civil Rights era.

But the U.S. Supreme Court has not always been on the “right” side. The Court’s seminal civil rights decision in Brown v. Board of Education outlawed segregation in public schools only by overturning one of the Court’s own rulings, Plessy v. Ferguson. Similarly, the important voting rights case Smith v. Allwright — where the Court held that in allowing the Texas Democratic Party to mandate whites-only primaries, Texas violated the Fifteenth Amendment rights of its Black citizens — reversed decisions of lower federal courts dismissing the case.

The Court’s record on labor rights is also spotty. In Lochner v. New York, the Court held that the Fourteenth Amendment’s Due Process Clause guaranteed a substantive right to contract, a right incompatible with state laws setting ceilings on the number of hours certain kinds of laborers — in this case, bakers — could work. In the mid-1930s, the Court invalidated a raft of New Deal legislation, prompting President Roosevelt to introduce an abortive plan to pack the Court.

Among the most shameful of the Court’s decisions are those concerning state laws authorizing compulsory sterilization of people in state custody. In Buck v. Bell, the Court upheld a Virginia law allowing forcible sterilization of mental institution inmates. Justice Holmes’ opinion contained this now-infamous line: “Three generations of imbeciles are enough.” It was not until some fifteen years later that the Court, in Skinner v. Oklahoma, held that the forced sterilization of criminals was unconstitutional. I could go on.

State governments are also not as often wrong from a liberal point of view as is commonly believed. Indeed, in many areas, states have been in the vanguard of progressive change.

Wyoming became the first state to grant women the right to vote in 1890, thirty years before the passage of the Nineteenth Amendment. By the time that amendment was ratified, fifteen states had granted women full suffrage, with an additional eleven granting partial suffrage.

While the Supreme Court got it horribly wrong about compulsory sterilization in Buck, the state courts often got it right. Years before Buck, the highest courts of Michigan and New Jersey held that laws authorizing compulsory sterilization of the mentally ill in public institutions violated the Fourteenth Amendment’s Equal Protection Clause. Similarly, a Nevada court ruled that a compulsory sterilization law that applied to male prison inmates violated the Nevada constitution’s cruel and unusual punishments prohibition.

Besides the fact that the states are not always enemies of individual rights and equality, and the federal government not always a friend, there are additional reasons why state-level progressive agendas may be more successful than national ones.

By design, the authority of federal institutions is limited: Congress can only pass legislation under one of its enumerated powers, and federal courts only have jurisdiction over a limited range of cases. Not so with the states.

So long as the exercise is compatible with their own constitutions and the federal Constitution, states are free to use their “police power” — the power to legislate in the interest of security, health, safety, morals, and welfare — to pass laws on just about any issue under the sun. Similarly, state courts are courts of “general jurisdiction,” empowered to adjudicate issues governed by both state and federal law. Finally, most state constitutions are much easier to amend than the federal constitution, allowing for more rapid policy experimentation.

Nor are policies pioneered by one state necessarily confined to that state. The idea of “laboratories of democracy” is that if a state’s experiment is successful, it will be emulated by others. In truth, there is more than just admiration at play here. The reason that California’s announcement that it would ban the sale of new gasoline-powered cars by 2035 instantly became nationwide news is that if California were a sovereign nation, it would be the world’s fifth largest economy. Policies adopted in California have a nationwide and, indeed, a global impact, whether other states like it or not.

It will be replied that state experimentation is all well and good, but certain individual rights and equality protections should not be left to the states because there is no good reason that they should vary across state lines. Women are the same in Alabama as they are in New York, and hence their claim to reproductive freedom is no less strong in the former than in the latter. The point is undeniably compelling. But recognizing the value of federalism does not require abandoning the effort to nationalize or even constitutionalize rights. Rather, it gives liberals more opportunities to push their agenda forward — even when the federal institutions are not particularly receptive to it, as seems to be the case today.

The basic problem Americans face is that the founders gave us a remarkably terse, frequently ambiguous Constitution, and then made it very difficult to amend. The few successful amendments are usually no less terse and ambiguous than the original document. This means that there will always be deep disagreement about which rights the federal Constitution protects. Given this reality, it behooves liberals to accept that not all courts will be the Warren Court, that not all Congresses will be the Congress that passed the Civil Rights Act of 1964, and that federalism affords viable alternatives to bring about progressive change.

Creepy-Crawlies and the Long Dreamless Sleep

image of large spider silhouette at night

In graduate school, I lived in a dingy little apartment near the sea. My apartment faced a slough, beyond which was the water. On the wall next to my door was a bright light. At first, I could turn this light on and off. But after a year or two, some men came and altered the light to make it stay on all night. The area around the light and the eave above it became a den of death. At night, droves of insects would emerge from the littoral darkness of the slough to flap and buzz in a confused frenzy around the light. Dozens of spiders awaited them. When I entered my apartment, I could see the insects wriggling pitifully in their webs.

The situation became too much for me. The spiders started to draw their webs over my door. A nasty one sprang on top of my head. I decided to take drastic action. I found a sprayable toxin for killing insects and arachnids, some horrible thing with a sickly sweet chemical smell. In the morning, when the spiders were hidden in their crevices, I sprayed the toxin all around the den and leapt back. For one second, nothing happened. And then, all at once, thirty or forty large spiders began to descend erratically, desperately clinging to threads of silk. They were writhing as the toxin destroyed them. Some of them curled up as soon as they hit the ground. Others stumbled off before dying. It was horrible. I couldn’t shake the thought that those spiders, like the insects they caught in their webs, died in pain.

My colleague, Daniel Burkett, has recently written about some new empirical research which suggests that insects can experience pain. Burkett argues that if insects (or spiders, which are arachnids) can experience pain, then that pain matters morally and thus we have defeasible moral reason to avoid causing them pain.

The basic thought is that pain is inherently bad no matter where it occurs, and it’s unacceptably arbitrary to discount a creature’s pain simply because that creature isn’t a human being (or isn’t cute or friendly or lovable).

Burkett’s argument is unsettling. It implies that I may have done something terrible when I slaughtered those spiders.

I agree with Burkett’s basic argument. We have pro tanto moral reason to refrain from inflicting pain on any creature, no matter how creepy or crawly. However, I do not think (as Burkett seems to) that this means we have pro tanto moral reason to avoid swiftly killing insects, for example swatting mosquitoes or squashing lanternflies. First, I doubt that the process of swiftly swatting or squashing a creepy-crawly causes a morally significant amount of pain. Being swiftly swatted is analogous to being vaporized in an explosion. The process totally destroys the creature’s body (rendering it incapable of experiencing pain), and the destruction occurs in a fraction of a second. Second, it does not follow from the fact that we have moral reason to avoid causing a creature pain that we have moral reason to avoid painlessly killing it. And there are good reasons for thinking that painless death is not bad for insects in any morally relevant sense.

To see why, let’s take a step back and talk about why death is bad generally.

When someone dies, they permanently cease to exist. The dead are beyond any sort of experiential harm. The dead can’t suffer; the dead can’t feel distressed, sad, bored, or lonely (it’s true that the dying process can be painful, but dying things are still alive). The imperviousness of the dead to any sort of suffering raises an ancient philosophical puzzle:

why is death bad for or harmful to the dier at all? And why is painlessly killing someone wrong, apart from how this affects people other than the victim?

One popular answer is that death is bad for a dier if and because it deprives the dier of good things that the dier would have had or experienced had they not died when they did. Consider a person who is instantaneously vaporized by an explosion at forty. Suppose that this person would have lived another forty good years had she not been vaporized. The explosion is not bad for the victim because it causes her pain or distress; actually, the explosion renders her completely impervious to pain and distress. Rather, the explosion is bad for the victim because it prevents her from experiencing those good years and thereby makes it the case that there is less total good in her life than there otherwise would have been.

A related answer is that death is bad for a dier if and because it frustrates the dier’s desires and curtails the dier’s projects. Many of our desires are directed toward the future and can give us a reason to go on living. For example, I want to visit space someday. Unlike a desire to, say, get a cavity filled, this desire gives me reason to try to stay alive until I can achieve it. If I were to die in my sleep tonight, this desire would go unsatisfied. Arguably, even if I don’t feel sad about it, it’s bad for me if this desire is never fulfilled. My life is worse as a result, all else being equal. Similar things can be said, mutatis mutandis, about many ongoing projects that are cut short by death.

These explanations of death’s badness presuppose that the dier is a temporally extended subject. All living things are temporally extended in a physical and biological sense, of course. But persons are extended through time in a psychological sense, too.

My current self is connected to my past self by a continuous chain of beliefs, memories, desires, preferences, intentions, character traits, and so forth, which change over time in regular, familiar, and typically gradual ways. For example, I now have a memory of an experience my twenty-year-old self had while riding a rollercoaster. And if I live till forty, my forty-year-old self will be similarly connected to my current self. For example, my forty-year-old self might remember writing this essay. On top of this, I have desires and projects that are directed at the future. For example, I want my forty-year-old self to be happy. All this explains why it makes sense for me, now, to identify with my future self, and why it would make sense for me to feel self-interested dismay if I were to discover that I won’t make it to forty after all.

Now imagine a human being, M, whose internal mental life is completely discontinuous from day to day. M wakes up every morning with new desires, preferences, and intentions, which are all directed at the day to come. M has enough general knowledge to function in a basic way but no autobiographical memories of past days. When M goes to sleep at night, M’s mental life is erased and rebooted in the morning. Effectively, M’s mind is a series of distinct, evanescent subjects, each of which occupies a small fraction of a temporally extended biological whole.

Death would not have the same significance for M as it has for you and me. The main reason is that when M dies, this is less like cutting a person’s life short and more like preventing a new person (i.e., a new iteration of M) from coming into existence. And this makes a difference.

Morally speaking, killing a person is quite different from preventing a new person from coming into existence. Look at it from M’s perspective. If on Monday M discovers that M’s body will be vaporized while M sleeps on Friday night, it’s hard to see why M should, on Monday, be disturbed about this in a self-interested way. After all, M’s desires and projects are all directed at the immediate future, and the psychological subject who exists on Monday is going to disappear on Monday night in the reboot. Thus, the vaporization won’t terminate an ongoing internal life that M, on Monday, is a part of, or even one M is invested in. And for this reason, the vaporization is not going to deprive the M who exists on Monday of anything or frustrate any of M’s desires or projects. It’s as if someone else is being vaporized.

This suggests that the extent to which death is bad for a dier depends on the extent to which the dier has a complex psychological life – a psychological life that has future-directed elements and is unified over time by a continuous chain of beliefs, memories, desires, preferences, intentions, character traits, and so on.

With this insight, we are in a position to return to the issue of whether death is bad for insects, spiders, and the like.

Death is bad for creepy-crawlies only if they have temporally extended mental lives that are unified over time through reasonably thick chains of mental states like beliefs, memories, desires, preferences, intentions, and character traits.

And while some insects have the ability to remember things and execute somewhat complex tasks (bees have a relatively sophisticated spatial memory that can be used to navigate, for example), it seems overwhelmingly likely that at most very few creepy-crawlies have brains that are sophisticated enough to support such chains, much less desires and projects directed beyond the specious present that could give them a reason to continue living. In other words, creepy-crawlies probably live in the present to an even greater degree than M does. Brain size alone would seem to suggest this. Mosquito brains only have about 200,000 neurons. For comparison, human brains have 86 billion.

The upshot for our purposes is that death probably isn’t bad for creepy-crawlies, and therefore it seems doubtful that we have any pro tanto moral reason to avoid painlessly killing them (or rather any reason unrelated to the side-effects that killing them might produce). This is consistent with saying that we should not cause insects pain and that painful methods of killing creepy-crawlies, such as my sprayable toxin, are objectionable. But swatting and squashing is probably fine.

This line of reasoning is somewhat comforting to me. Scientists estimate that there are 10,000,000,000,000,000,000 insects alive at any given moment. Most of those will die very soon. Fortunately, that probably isn’t bad for them. However, like the insects in the den of death outside my old apartment and the arachnids I slaughtered, many of those insects will suffer a great deal in the dying process. The weight of that collective suffering is unfathomable. I can only hope that our tiny brethren pass swiftly into the long dreamless sleep that awaits us all.

AI Writing and Epistemic Dilution

There is a lot of debate surrounding the ethics of artificial intelligence (AI) writing software. Some people believe that using AI to write articles or create content is unethical because it takes away opportunities from human writers. Others believe that AI writing software can be used ethically as long as the content is disclosed as being written by an AI. At the end of the day, there is no easy answer to whether or not we should be using AI writing software. It depends on your personal ethical beliefs and values.

That paragraph wasn’t particularly compelling, and you probably didn’t learn much from reading it. That’s because it was written by an AI program: in this case, I used a site called Copymatic, although there are many others to choose from. Here’s how Copymatic describes its services:

Use AI to boost your traffic and save hours of work. Automatically write unique, engaging and high-quality copy or content: from long-form blog posts or landing pages to digital ads in seconds.

Through some clever programming, the website takes in prompts on the topic you want to write about (for this article, I started with “the ethics of AI writing software”), scours the web for pieces of information that match those prompts, and patches them together in a coherent way. It can’t produce new ideas, and, in general, the more work it has to do the less coherent the text becomes. But if you’re looking for content that sounds like a book report written by someone who only read the back cover, these kinds of programs could be for you.

AI writing services have received a lot of attention for their potential to automate something that has, thus far, eluded the grasp of computers: stringing words together in a way that is meaningful. And while the first paragraph is unlikely to win any awards for writing, we can imagine cases in which an automated process to produce writing like this could be useful, and we can easily imagine these programs getting better.

The AI program has identified an ethical issue, namely taking away jobs from human writers. But I don’t need a computer to do ethics for me. So instead, I’ll focus on a different negative consequence of AI writing, what I’ll call epistemic dilution.

Here’s the problem: there are a ridiculous number of a certain type of article online, with more being written by the minute. These articles are not written to be especially informative, but are instead created to direct traffic toward a website in order to generate ad revenue. Call them SEO-bait: articles that are written to be search-engine optimized so that they can end up on early pages of Google searches, at the expense of being informative, creative, or original.

Search engine optimization is, of course, nothing new. But SEO-bait articles dilute the online epistemic landscape.

While there’s good and useful information out there on the internet, the sheer quantity of articles written solely for getting the attention of search engines makes good information all the more difficult to find.

You’ve probably come across articles like these: they are typically written on popular topics that are frequently searched – like health, finances, automobiles, and tech – as well as other popular hobbies – like video games, cryptocurrencies, and marijuana (or so I’m told). You’ve also probably experienced the frustration of wading through a sea of practically identical articles when looking for answers to questions, especially if you are faced with a pressing problem.

These articles have become such a problem that Google has recently modified its search algorithm to make SEO-bait less prominent in search results. In a recent announcement, Google notes how many have “experienced the frustration of visiting a web page that seems like it has what we’re looking for, but doesn’t live up to our expectations,” and, in response, that they will launch a “helpful content update” to “tackle content that seems to have been primarily created for ranking well in search engines rather than to help or inform people.”

Of course, whenever one looks for information online, they need to sift out the useful information from the useless; that much is nothing new. Articles written by AI programs, however, will only make this problem worse. As the Copymatic copy says, this kind of content can be written in mere seconds.

Epistemic dilution is not only obnoxious in that it makes it harder to find relevant information, but it’s also potentially harmful. For instance, health information is a frequently searched topic online and is a particular target of SEO-bait. If someone needs health advice and is presented with uninformative articles, they could easily end up accepting bad information. Furthermore, the sheer quantity of articles providing similar information may create a false sense of consensus: after all, if all the articles are saying the same thing, it may be interpreted as more likely to be true.

That AI writing does not create new content but merely reconstitutes dismantled bits of existing content also means that low-quality information could easily propagate: content from a popular article with false information could be targeted by AI writing software, which could then give that information increased exposure by presenting it in numerous articles online. While there may very well be useful applications for writing produced by AI programs, the internet’s endless appetite for content, combined with incentives to produce disposable SEO-bait, means that these kinds of programs may very well end up being more of a nuisance than anything else.

High Theory and Ethical AI

There’s been a push to create ethical AI through the development of moral principles embedded into AI engineering. But debate has recently broken out as to what extent this crusade is warranted. Reports estimate that there are at least 70 sets of ethical AI principles proposed by governments, companies, and ethics organizations. For example, the EU adopted its Ethics Guidelines for Trustworthy AI, which prescribe adherence to four basic principles: respect for human autonomy, prevention of harm, fairness, and explicability.

But critics charge that these precepts are so broad and abstract as to be nearly useless. Without clear ways to translate principle into practice, they are nothing more than hollow virtue signaling. Who’s right?

Because of the novel ethical issues that AI creates, there aren’t pre-existing ethical norms to govern all use cases. To help develop ethics governance, many bodies have borrowed a “high theory” approach from bioethics: solving ethical problems involves the application of abstract (or “high”) ethical principles to specific problems. For example, utilitarianism and deontology are usually considered high-level theories, and a high theory approach to bioethics would involve determining how to apply these theories in specific cases. In contrast, a low theory approach is built from the ground up, looking at individual cases first instead of principles.

Complaints about the overreliance on principles in bioethics are well known. Stephen Toulmin’s “The Tyranny of Principles” notes how people can often agree on actions but still disagree about principles. Brent Mittelstadt has argued against high theory approaches in AI because of the logistical issues that separate tech ethics from bioethics. He notes, for example, that unlike medicine, which has always had the common aim of promoting the health of a patient, AI development has no common aim.

AI development is not a formal profession that entails certain fiduciary responsibilities and obligations. There is no notion of what a “good” AI developer is, in the way there is of a “good” doctor.

As Mittelstadt emphasizes, “the absence of a fiduciary relationship in AI means that users cannot trust that developers will act in their best interests when implementing ethical principles in practice.” He also argues that unlike medicine, where the effects of clinical decision-making are often immediate and observable, the impact of decisions in AI development may never be apparent to developers. AI systems are often opaque in the sense that no one person has a full understanding of the system’s design or function. Tracing decisions, impacts, and ethical responsibilities through such systems becomes incredibly difficult. For similar reasons, the broad spectrum of actors involved in AI development, all coming from different technical and professional backgrounds, means that there is no common culture to ensure that abstract principles are collectively understood. Making sure that AI is “fair,” for example, would not be specific enough to be action-guiding for all contributors regarding development and end-use.

Consider the recent case of the AI rapper who was given a record deal only to have the deal dropped after a backlash over racial stereotypes, or the case of the AI that recently won an art contest over human artists, and all the developers involved in making those projects possible.

Is it likely they share a common understanding of a concept like prevention of harm, or a similar way of applying it? Might special principles apply to things like the creation of art?

Mittelstadt points out that high level principles are uniquely applicable in medicine because there are proven methods in the field to translate principles into practice. All those professional societies, ethics review boards, licensing schemes, and codes of conduct help to do this work by comparing cases and identifying negligent behavior. Even then, high level principles rarely explicitly factor into clinical decision-making. By comparison, the AI field has no similar shared institutions to allow for the translation of high-level principles into mid-level codes of conduct, and it would have to factor in elements of the technology, application, context of use, and local norms. This is why even as new AI ethics advisory boards are created, problems persist. While these organizations can prove useful, they also face immense challenges owing to the disconnect between developers and end users.

Despite these criticisms, there are those who argue that high-level ethical principles are crucial for developing ethical AI. Elizabeth Seger has argued that building the kinds of practices that Mittelstadt indicates require a kind of “start-point” that moral principles can provide. Those principles provide a road map and suggest particular avenues for further research.

They represent a first step towards developing the necessary practices and infrastructure, and cultivate a professional culture by establishing behavioral norms within the community.

High-level AI principles, Seger argues, provide a common vocabulary AI developers can use to discuss design challenges and weigh risks and harms. While AI developers already follow principles of optimization and efficiency, a cultural shift around new principles can augment the already existing professional culture. The resulting rules and regulations will have greater efficacy if they appeal to cultural norms and values held by the communities they are applied to. And if the professional culture is able to internalize these norms, then someone working in it will be more likely to respond to the letter and spirit of the policies in place.

It may also be the case that different kinds of ethical problems associated with AI will require different understandings of principles and different application of them during the various stages of development. As Abhishek Gupta of the Montreal AI Ethics Institute has noted, the sheer number of sets of principles and guidelines that attempt to break down or categorize subdomains of moral issues presents an immense challenge. He suggests categorizing principles according to the specific areas – privacy and security, reliability and safety, fairness and inclusiveness, and transparency and accountability – and working on developing concrete applications of those principles within each area.

With many claiming that adopting sets of ethics principles in AI is just “ethics washing,” and with AI development being so broad, perhaps the key to regulating AI is not to focus on what principles should be adopted, but on how the AI development field is organized. Whether we start with high theory or not, getting different people from different backgrounds to speak a common ethics language is the first step, and one that may require changing the profession of AI development itself.

Too Clever for Our Own Good?: Moral Bioenhancement and Existential Harm

image of woman's profile in silhouette with sun behind clouds superimposed on her mind

Knowing things is good. How do you change a tire? What’s the right combination of time and temperature to cook a turkey? Why do we call the mitochondria the powerhouse of the cell? The answers to these questions make our lives easier, enabling us to overcome challenges. But these examples are just the tip of the iceberg. Over time, we’ve not only grown to understand more things about ourselves and the universe around us, we’ve also continued to discover new questions in need of answers.

But with this increase in our collective understanding comes an increase in the risks we pose to ourselves, each other, and, in extreme cases, the Earth itself. This is because each scientific, medical, and technological breakthrough brings opportunities for both benefit and harm. The acquisition of knowledge is an inherently ethical enterprise characterized by what is known as the dual-use dilemma. As defined by Seumas Miller and Michael J. Selgelid:

The so-called “dual-use dilemma” arises in the context of research in the biological and other sciences as a consequence of the fact that one and the same piece of scientific research sometimes has the potential to be used for harm as well as for good.

For example, virology research is good as it means we have a greater understanding of how viruses evolve and spread through a population, enabling the development of societal and medical countermeasures like social distancing and vaccinations. However, if put into the wrong hands, such knowledge can also be used by terrorists and hostile political powers to create devastating viral weaponry or misinformation campaigns. Ultimately, every intellectual step forward brings both the potential for good and ill.

But this potential for risk and benefit has not grown steadily over the centuries; some advances prove more beneficial and some more devastating than others. For example, the creation of the plough revolutionized how we, as a species, farmed, but the negative implications of such a technological advancement are, arguably, minimal or at least nondirect.

Today, however, highly destructive technologies seem increasingly common due to our collective intellectual capacity and interconnected world. As such, even small groups can threaten existential harm.

For example, through advancements in genetics, virology, synthetic biology, or multiple other scientific disciplines, a few persons can, in principle, develop an organism or technology with the power to catastrophically ravage the planet. Moreover, with each discovery opening the door for new avenues for inquiry, there is no reason to think that this availability of potentially dangerous knowledge will subside anytime soon.

This leaves us with a problem. Suppose we continue to develop our collective cognitive capacities, enabling the discovery of even more methods through which we can come to harm ourselves or others, either through deliberate action or accident.

In that case, do we also need to enhance our ability to reason ethically to keep pace with this possibility of harm?

Ingmar Persson and Julian Savulescu posed this question in their 2008 article, The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. In it, they argue that moral bioenhancement (MBE) – a biotechnological intervention aimed at enriching ethical decision-making and outcomes – should be developed and distributed to close the gap between humanity’s destructive capabilities and moral faculties. The idea is that our “natural” moral abilities are ill-equipped to deal with the complex and high-stakes world created by humanity’s mental prowess. They note, however, that those most in need of a greater level of ethical understanding are those least likely to take such an intervention willingly; a nefarious actor planning to use a nuclear weapon to start an apocalyptic war isn’t exactly going to be first in line for a morality pill. So, according to Persson and Savulescu, MBE shouldn’t be optional – everyone should have to take it. As they write:

If safe moral enhancements are ever developed, there are strong reasons to believe that their use should be obligatory, like education or fluoride in the water, since those who should take them are least likely to be inclined to use them. That is, safe, effective moral enhancement would be compulsory.

According to them, this is the only way to ensure we can effectively mitigate the risk of existential harm. If left up to individual choice, some persons would inevitably choose not to become morally enhanced. This refusal would, in turn, leave the potential for cataclysmic risk unaffected, as even a tiny chance is too great to be left unaddressed. Much like playing Russian roulette, even the slightest probability is substantial enough to necessitate the rejection of the possibility altogether. To ensure we eliminate the risk of ultimate destruction, every person would need MBE.

Of course, this raises both principled and practical objections.

John Harris expresses concerns that, for MBE to be effective, it would have to prevent us from acting unethically. If it didn’t, it wouldn’t be an effective countermeasure to the harms Persson and Savulescu envision. However, this would mean that the intervention directly prevents us from acting in a certain way and thus inhibits our free will. This possibility worries Harris as, without the ability to be unethical, the virtue of ethical actions ceases to exist – you’re not doing right if you have no choice. Vojin Rakić takes this worry even further, exporting it from the individual to the societal, arguing that MBE would deprive persons of their ability for collective morality and, ultimately, of a vital aspect of our humanity.

But, as I have argued, perhaps MBE need not be compulsory to be effective as we develop our behavioral attitudes from those around us.

If most people take MBE willingly, then there’s reason to believe that the unenhanced would act more morally as they would be surrounded by morally aspirational individuals and would be insulated from immorality’s temptation.

Additionally, there’s the political obstacle of simply getting every nation to agree to enact such a program. Given that we argue over seemingly unequivocal matters – like the need to tackle climate change – getting every world leader on board for such a program is practically impossible.

However, these objections don’t necessarily detract from Persson and Savulescu’s observation that our intellectual capacity has outpaced our moral capabilities. Instead, they highlight the difficulties in finding a suitable solution to the problem. Ultimately, if we all behaved more ethically, the world may not be in the precarious situation it is right now. The rise of fascism, the threat of global warming, the increase in conflicts, and the general breaking down of the established liberal world order may go some way in convincing skeptics that, while compulsory MBE may not be ideal, it’s preferable to the alternative of widespread, even global, destruction.

‘Dahmer’ and the Dramatization of Crime

photograph of 'Dahmer' logo on smartphone with scene from the show displaying in background

I’ve never understood the cultural obsession with serial killers or the “true crime” genre – the focus on the history of a murder and the fascination with all the macabre details. But the genre has proven incredibly popular, and Netflix’s recent release of “Dahmer” only adds more evidence of its appeal. The ten-part miniseries has received mixed reviews from critics, complaints after Netflix originally attached an LGBTQ tag to the series, and accusations that the series is exploitative, profiting from the experiences of the many people who were hurt by the murders. But are products like “Dahmer” merely exploitative, or is there a legitimate public interest in hearing these stories and viewing them on the big and small screen?

For those who are unfamiliar, Jeffrey Dahmer was a serial killer from Milwaukee, Wisconsin responsible for the deaths of 17 men between 1978 and 1991. For years Dahmer targeted gay men, among others, killing and in some cases eating his victims; he was finally captured in 1991 after one of his victims managed to escape and alert police. When police entered Dahmer’s apartment, they found photographs of his previous victims, and he was arrested. Dahmer confessed to police, and he pleaded guilty at his trial. He was sent to prison, where he was beaten to death in 1994 by a fellow prisoner. Since then, there have been numerous films made about the events: a small-budget film in 1993, a Jeremy Renner film in 2002, another film in 2017, and now the recent Netflix series.

Making a film or show about real-life crimes like murder attracts controversy because such projects are often considered exploitative. When these crimes occur, they affect not only the families of the victims but the communities in which they occur.

When a production company comes along after the fact and makes a product that capitalizes on that pain for profit, it is sure to attract criticism. For example, the series has been criticized for its slow, matter-of-fact pacing, giving the audience a voyeuristic perspective on Dahmer’s activities without analysis. As The Guardian notes, “Dahmer is undoubtedly fetishized here. The squalor of his apartment is lingered over, right down to the blood stains on the mattress.” Other stylistic choices, like the fuzzy desaturated look of the series, as well as the attention paid to lead actor Evan Peters, have also been criticized for romanticizing the crimes.

The families of Dahmer’s victims have also criticized the show. One family member recently tweeted: “It’s traumatizing over and over again, and for what? How many movies/shows/documentaries do we need?” He also explained how strange it was watching a re-creation of his cousin having an emotional breakdown in court in front of the person who tortured and murdered her brother. His cousin, Rita Isbell, has called the show “harsh and careless,” saying that the showrunners are “just making money off of this tragedy. It’s just greed.” Netflix never consulted or sought consent from the families for their depictions. As a recent article notes, the families of homicide victims are disadvantaged when encountering inaccurate or insulting depictions of their loved ones because normal legal protections of reputation, such as defamation, do not apply if the defamed person is deceased.

There is also the effect that these productions have on the local community as a whole. The murders have long attracted tourism to Milwaukee, and the recent series is attracting new attention that isn’t always welcome. It has brought back painful memories for family, friends, and neighbors, and it has upset members of the Black gay and queer community who lived through those times and the fear those events inspired.

We might ask whether there is a valid public interest that is served by dramatizing these events.

For example, some dramatization of a traumatic event can be beneficial for reclaiming the lives of the victims if the focus of such a dramatization is on the victim rather than the killer. As The Guardian points out, “By being murdered, these people are robbed of a legacy…They will always simply be a photo and a name in a line-up of victims…The good thing a show like this can do is steal the spotlight from the murderer and show who these people actually were.” Towards the latter half of the series, the show does focus more on the victims, such as in one episode depicting the life of Anthony Hughes. While we could still criticize “Dahmer” for devoting too much attention to the killer (the name of the show is very telling here), that doesn’t mean that any production that chooses to focus on such events must be exploitative.

Another way in which productions about murders can serve a public interest is by better informing us about the social and political contexts in which the events took place.

For example, the police were criticized for not catching Dahmer sooner. Officers John Balcerzak and Joseph Gabrish failed to protect Konerak Sinthasomphone after he escaped from Dahmer. They ignored the women who found Konerak, ultimately believed Dahmer, and escorted the boy back to Dahmer’s apartment. The officers were also criticized for homophobic remarks they made, and the city was eventually sued by the family. Productions like “Dahmer” do shed light on how discrimination, neglect, and prejudice by police can allow crimes like this to occur and why the gay community in particular has been at risk. For example, documentarian Joe Berlinger produced a project on Dahmer with the stated intention of facilitating conversations about improving community police work.

More controversially, one might ask whether there is a public interest in covering events that hold the public imagination. As I said, I really don’t get the public fascination with true crime, but perhaps there is some benefit to reflecting the darker aspects of the world around us. As a Vanity Fair article by Richard Lawson points out,

Many of [Dahmer’s] viewers, myself included, are surely partially drawn to the show out of morbid fascination—a natural human impulse that has become perhaps over-served in these true-crime boom years…maybe there is, lurking somewhere in this heavily articulated dark, something profoundly relevant about Dahmer.

One might argue that there is a public interest in investigating what the existence of serial killers tells us about us and the society we live in. Perhaps it’s completely valid to explore these topics through the visual arts, but that doesn’t mean that “Dahmer” is the best or healthiest way of doing it.

These concerns haunt most of the true crime genre. Where should we draw the line between documenting something and exploiting it? Or between humanizing murderers and glorifying them? Are shows like “Dahmer” romanticizing a killer, or are we all just being morbid voyeurs? Perhaps as much consideration should be given to our own intentions as consumers as we dedicate to judging the intentions of the filmmakers.

The Ethics of Weed-Out Courses

photograph of students attending class in lecture hall

On October 3rd, The New York Times reported that organic chemistry adjunct professor Maitland Jones, Jr. had been fired by N.Y.U. after 82 of his 350 students signed a petition against him. Students complained that their grades did not reflect the amount of time and effort they put into the course and that Jones did not prioritize student well-being and learning. (Jones, meanwhile, reported a significant drop-off in student performance over the past decade, and especially after the pandemic.) Before firing Jones, university officials had first offered to review student grades and allowed students to withdraw from the class retroactively.

Immediate responses varied: Jones supporters protested the decision; some students who had a bad experience in Jones’ class celebrated the decision in online reviews; and faculty critiqued the decision by administration to appease students as tuition-payers.

More broadly, there have been a wide range of takes offered on the whole situation: Jones’ firing illustrates the precarity of contingent faculty and an administration run amok; “weed-out” classes like organic chemistry exacerbate student inequalities; students are becoming coddled and entitled, which will make them bad doctors; academic degrees are becoming a consumer product – and the consumers with financial power are the parents, not the students; organic chemistry isn’t actually necessary to be a good doctor, and it only became a weed-out course through policy decisions to limit the number of new doctors, leaving us with a shortage of physicians; the systemic and structural factors that create out-of-touch professors, entitled students, and pandering administrators are what we should actually blame.

This case raises rich possibilities for discussion, but I would like to focus on the following question: What purpose do weed-out courses actually serve, and is it a purpose we can get behind?

I will limit the discussion for now to pre-med classes, but the question could be asked of other disciplines as well.

Let’s start with the more positive aspects of weed-out courses. The main purpose of such a course seems to be to allow students to assess whether they have the necessary aptitude for surviving medical school and becoming good doctors. Ideally, a professor would facilitate this task by ensuring the class has the adequate rigor to support students in their pursuits, but also kindly counseling struggling students that they should seek another career path.

One apparent benefit of this kind of course would be to prevent students from spending a great deal of time and money pursuing a career they are more and more unlikely to attain.

Unfortunately, it is a hard truth that effort alone is not enough to get one through medical school, even though dedication and determination are necessary ingredients.

Another benefit of this kind of course would be to encourage students to cultivate the studying and test-taking skills they will need to do well in medical school and to become good doctors.

These considerations seem reasonable to me, but I’m not sure the language of “weeding out” best captures this set of aims. Instead, it suggests hoops that students are required to jump through in order to demonstrate their commitment and thus be granted access to continuing along their career path. There are plenty of questions here as to which courses should serve as the key benchmarks for success as a physician (a bioethics course might be on the list of courses that should be included, and an organic chemistry class might be less central than an immunology class), but having such benchmarks does not itself seem to be a problem.

Doctors need to have a variety of skills to be effective physicians: from the people-skills required in doctor/patient interactions, to a good problem-solving ability to catch diseases and health conditions before they progress, to the vast memorization needed to keep up with best practices and treatments. These are all abilities that should be fostered by pre-med education. If a student lacks one or more of these core capacities, it seems best for them (and their potential patients) to turn to another career path where their abilities might shine.

At the same time, we need to ensure that students of all backgrounds receive the resources (and opportunity) needed to acquire these skills during the course of their undergraduate education so that we do not simply reify existing inequalities.

So, let’s turn to the more negative aspects of weed-out courses. Often, it seems that the goal of a weed-out course is to get a certain portion of the class to withdraw or fail. Even if the express reason for this design is to promote rigor and provide a benchmark for student success, the learning environment can become toxic in several ways. If the professor sets up the class so that only the “truly bright” students can pass and treats student confusion as signs of laziness or stupidity, this creates a host of problems.

First, students who can keep up with the learning environment, whether through advantages in past tutelage or an ability to more quickly grasp the material, may start to see themselves as superior to those who do not do as well in the class. Second, students of all aptitudes may feel immense pressure and dedicate excessive time to studying in order to succeed in the class, contributing to mental distress. Third, students who do not do well in the class despite putting in the same intensive effort may see themselves as failures or as less worthy than other students.

What this kind of weed-out mentality amounts to is a kind of bullying that identifies some people as superior and others as inferior, only loosely tracking a student’s academic merit.

This can create problems not only in the pre-med weed-out courses but also in medical school and beyond. Hierarchies might arise between different medical subspecialties, with physicians in some elite residencies seeing themselves as superior to those who did not make the cut. These dynamics might also lead to epistemic overconfidence in practicing physicians, causing disruptions in doctor/patient interactions and negatively impacting the quality of patient care.

More specifically, I worry that some of our initial defenses of these weed-out classes tend to reify bullying practices rather than establish the necessary benchmarks one needs to meet in order to be a good physician – in the same way that there are certain benchmarks one should be able to meet to be a good teacher, a good lawyer, a good journalist, a good businessperson, a good caretaker, and more. While the pandemic has negatively impacted student learning and well-being, the student petition can be read as reflecting an unwillingness to put up with a certain kind of bullying and as a demand for better institutional support.

The pandemic tested us all in a number of ways, and it has made apparent to many of us that some forms of treatment are untenable, especially in times of crisis.

If you take a look at the last 10 years of comments about Jones’ teaching performance on RateMyProfessors (for whatever the review site is worth), negative student ratings of Jones’ classes have been fairly consistent in quantity and quality over time. Students have raised the same concerns again and again, regardless of grades earned: no partial credit on tests, the necessity of studying excessive amounts of time compared to other organic chemistry classes, accusations that Jones did not respect students nor respond well to questions, consistently low test averages (there was conflicting information about whether tests were curved), and high drop rates. Students of all different academic backgrounds reported feeling excessively stressed out by the course, and many complained that the organic chemistry course was made intentionally difficult. While other students gave glowing reviews, it is clear that the instructional problems raised in the petition are not new.

In the end, we’re left – like some of Jones’s students – with what feels like an impossible task: How can we design weed-out classes to be sufficiently rigorous and supportive? And how would we know when we’ve done it right?

“Suicide Kits” for Sale

photograph of Amazon search bar

This article discusses suicide. Following common journalistic ethics practice, precise details about means or resources for committing suicide may have been deliberately left out or altered.

Method matters. Depending on the study, between 80% and 90% of people who attempt suicide and fail do not go on to attempt suicide again. The public health implication is that by regulating the availability of popular and effective means of suicide – mainly firearms and select chemicals and pharmaceuticals – deaths from suicide can be prevented.

Given this, what should we make of the fact that highly purified sodium nitrite, an increasingly popular option for suicide, has been readily available for purchase on Amazon in the United States? A lawsuit filed on September 29th accuses Amazon and Loudwolf – a sodium nitrite manufacturer featured on Amazon – of “promoting and aiding” the suicide of two teenagers. A Twitter thread by Carrie Goldberg, a lawyer working on the case, characterized Amazon as a “serial killer.”

The case will likely turn on a number of details alleged by the plaintiffs: that Amazon recommendations packaged together sodium nitrite with other supplies and informational materials in so-called “suicide kits”; that Amazon failed to enforce its own policies; that Loudwolf failed to include FDA-required warning labels on sodium nitrite; that Amazon was previously warned and did nothing about sodium nitrite sold on its platform being used in suicides; that no information was included about methylene blue (the recommended treatment for sodium nitrite poisoning); that there is no compelling reason to allow household purchases of pure sodium nitrite; and, of course, that both of the deceased were minors.

Abstracting away from the details, however, the case is part of a decades-long pattern of the internet facilitating suicide – from providing community, to disseminating information, to assisting the purchase of supplies.

It began in 1990 with alt.suicide.holiday, a Usenet news group (similar to an internet discussion forum). Users would frankly discuss suicide and share tips and resources. While that group is now defunct, there have been multiple variants. The popularity of sodium nitrite as a means of suicide is attributed to a recent iteration. In many U.S. jurisdictions, advising or encouraging suicide is illegal, so these sites’ relationship with the law is complex – so too is their relationship with the media. Such forums begin as niche communities of the suicidal for the suicidal, and end up as New York Times exposés (most recently in December of 2021). Once aware, grieving families and the broader public often push (successfully) for these sites to be shut down or hidden from internet search results.

In contrast to the prevailing public health or prevention narrative of suicide, the leitmotif of these communities is, in their words, “pro-choice.” The idea is that the right to suicide is simply an extension of our personal autonomy and right to self-determination.

Especially in liberal individual rights-oriented contexts, autonomy is an enormously important ethical principle and people are given broad latitude to make their own decisions as long as they do not negatively impact the rights of others.

In American medicine, for example, patients have an almost unlimited license to refuse treatment. However, humans are not always autonomous actors. Children, for instance, are not allowed to make their own medical decisions. Being intoxicated is another common exception. In rare cases, people have been known to commit sexual assault or other crimes under the influence of the sleep aid zolpidem (Ambien). The defense is that these were not autonomous actions; that they did not flow from the authentic reasons and desires of the offender.

Can suicide be an autonomous act? Under the prevailing medical account of suicide, in which suicide results from serious mental illness, it almost definitionally cannot. In American law, risk of harm to self or others is grounds for violating patient autonomy and forcibly administering treatment.

That a person is suicidal is treated as evidence that they are not in sound mind and not an autonomous decision maker. Suicidality discounts autonomy.

Those in the online suicide “pro-choice” community challenge this logic and hold that suicide can be a reasonable reaction to a person’s life and circumstances, and that people should have access to the knowledge and means to kill themselves relatively painlessly. In this they have at least some philosophical company. Thomas Szasz, a controversial Hungarian-American psychiatrist and philosopher, long asserted that suicide was simply a choice as opposed to an expression of sin or illness.

Szasz is an extreme case and was broadly skeptical of the very designation of mental illness. However, in contrast to a previous Christian sanctity-of-life framing, there is growing acceptance in the Western world that suicide may not always be unreasonable. Instead, it can be an understandable response to circumstances in which someone’s quality of life is below some personal threshold. A good case in point is the right-to-die movement, which advocates for medical-aid-in-dying and physician-assisted suicide. Ten states currently have medical-aid-in-dying in which a terminally ill person with six months or less to live is able to request a lethal medicine they can ingest. Supporters of medical-aid-in-dying stress that the practice is distinct from suicide, partly to escape the stigma associated with suicide, but the conceptual distinctions are slippery.

America is comparatively conservative, but several nations have far more permissive laws when it comes to assisted suicide. Belgium, the Netherlands, and Canada, among other countries, allow for voluntary euthanasia on the basis of extensive and untreatable mental suffering even absent terminal illness or, indeed, any physical illness whatsoever. (The ethics of this have been previously discussed here at the Prindle Post.) The 2018 case of Aurelia Brouwers, who was voluntarily euthanized in the Netherlands after years of failed mental health treatment, brought broader attention to the practice. She was the subject of a short film documentary.

Once it is accepted that unbearable suffering alone is an adequate basis for suicide, then distinctions about how long someone has left to live, or whether that suffering is mental or physical, become secondary.

The process of seeking assisted suicide on the basis of mental suffering is supposed to have extensive safeguards, yet critics worry that slip-ups happen. Note, though, that the locus of discussion shifts from the act of suicide to the process of doing it responsibly and ethically.

Surprising to some, among the staunchest critics of the right-to-die movement are segments of the disability rights movement. The concern is that people may be pressured into choosing assisted suicide due to discrimination against people with disabilities or inadequate medical care – i.e., that these decisions are not fully autonomous. Of course, there will always be reasons for suicide, and these reasons may often be due to larger social and economic failings. Poverty is a known contributing factor to suicide. How reasonable this is may depend on where one is standing. In individual cases it is partly the environmental factors – poverty, debt, personal tragedy, discrimination – that can make suicide seem an appropriate response to circumstance. And yet, it may appear ghoulish to have a state-sanctioned process that facilitates suicides partly driven by these factors that the state itself perpetuates (or at least is often in the best position to address).

Negotiating the appropriate policy prescription remains an impossible task. Mental health professionals, suicide prevention advocates, the American right-to-die movement, disability rights activists, and the online suicide pro-choice community can all share a broader commitment to self-determination and yet disagree vehemently about specific issues: when suicide is an autonomous act, what kind of safeguards need to be in place, what counts as unbearable suffering (or a lack of possibility of improvement), and what action is justified to prevent suicides.

Still, vanishingly few people would consider 16-year-olds killing themselves with online instructions and chemicals purchased on the internet as anything other than a tragedy.

It is statistically likely that had the teens in the lawsuit against Amazon attempted suicide with a less lethal method, they could have been successfully treated and their suicide attempt would have been a thing of the past.

Without speculating on the details of the specific case, it is nonetheless worth acknowledging that Amazon, whatever its failings as a corporation, cannot be the sole cause of this or any suicide. People are seeking information and supplies. And at least some suicidal people will default to known, highly lethal methods like firearms. It is also true that while the majority of those who attempt suicide and fail do not attempt again, a previous suicide attempt is the single biggest risk factor for a later completed suicide. Put cynically, there is a demand. Regulating supply, while important given the relevance of the method, can only do so much. Suicide often exists at the intersection of means, mental health, and personal and environmental circumstance.

One relatively radical way to think about suicide would be as a regulated right – something permitted but tightly controlled. The provision of medical care and mental health care would presumably be part of seeking state-sanctioned suicide. People would need to have good reasons (whatever society decides those reasons are) for seeking materials-for or aid-in suicide, and undergo an appropriate approval process.

As countries like the Netherlands and Canada illustrate, negotiating what this approval process should look like is fraught. The balancing point among the different communities with an interest in suicide – the suicidal, their families, mental health professionals, disability rights activists, religious communities, and the state – will undoubtedly be a precarious one. Nonetheless, taking seriously the demand for suicide could plausibly help to bring suicidality out of the dark as something that people can talk seriously about and potentially get treated for. Surely a society ought to inquire as to why its citizens wish to take their own lives.

If you or someone you know is struggling with thoughts of suicide, (prevention-focused) resources can be found at SpeakingOfSuicide.com/resources.

On the Morality of Squashing Lanternflies

photograph of spotted lanternfly

This summer, the East Coast of the United States has been plagued by the spotted lanternfly. First discovered in Pennsylvania in 2014, the lanternfly is a highly invasive species that – if allowed to spread throughout the U.S. – could devastate the ecosystem, and seriously impact the grape, orchard, and logging industries. States have been swift to respond. Ohio is setting traps, while Pennsylvania has employed sniffer dogs to track down their eggs. Connecticut and Virginia, on the other hand, have issued a very clear message to their residents: “Squash these bugs on sight!”

Several weeks ago, I discussed the revelation that insects might experience pain and – for this reason – might be worthy of moral consideration. This was based upon Peter Singer’s assertion that the only prerequisite for having interests is the capacity to experience pleasure and pain (since if something can experience pleasure then it has an interest in pursuing pleasure, and if something can experience pain then it has an interest in avoiding pain). Once identified, these interests must – according to Singer – be counted equally with the same interests when experienced by any other being.

Put simply, if it is morally wrong to cause X amount of pain to a human, then it must also be morally wrong to cause X amount of pain to any other creature capable of experiencing pain – even insects.

But that reasoning seems to run counter to what we’re being urged to do in light of the lanternfly invasion. Being squashed is clearly a painful experience. As such, we would consider it morally reprehensible to squash a human, or a dog, or even a mouse. Yet, for some reason, this very action is here being condoned. How can we make sense of this? Are we in fact doing something morally wrong every time we squash a spotted lanternfly?

An important first step is to note that the experience of being squashed will not be consistent across species. For a human, it will be utterly traumatizing – filled with not only physical pain, but the dread and terror of one’s imminent end. Arguably, the pain will be slightly less for the dog or mouse – if only because they will largely lack awareness of what’s happening to them. What, then, will the experience be like for the lanternfly? This is a difficult question – made all the more difficult by the fact that we are only on the cusp of discovering that insects might feel pain, let alone being able to quantify it. Let’s assume, then, that the amount of pain (both physical and psychological) experienced by a lanternfly upon being squashed is significantly less than that felt by a human or dog or mouse going through the very same experience. Perhaps it’s the equivalent of a human receiving a particularly bad papercut.

What this means, then, is that our moral attitudes towards squashing lanternflies should be roughly the same as those towards inflicting painful papercuts on others. And, chances are, even though the latter is a relatively minor harm, we would usually refrain from doing this on the assumption that it is morally wrong.

For this reason, we would seem to have a moral reason to refrain from inflicting precisely the same amount of pain on lanternflies. To do otherwise would, according to Singer, be speciesist.

But we cannot stop our moral considerations there. While it might be wrong to inflict pain on a single insect for no good reason, we also need to take into account how our actions will affect the pain and pleasure of other living beings. This is particularly relevant in the context of invasive species. Some species – by their very existence in an alien environment – create enormous suffering and death for the local fauna. Just look at the ecological devastation wrought by domestic cats. In such cases, a small amount of harm to some animals might be justified by the fact that it avoids a much greater harm to other animals.

The lanternfly might be one such case. While the damage they cause is largely flora-based – feasting on around 70 host plant species – the flow-on ecological effects are set to be devastating, as native fauna finds itself starving as a result of dwindling food supplies.

But here’s the thing: even if some greater good justifies us causing harm to an invasive species, we are under a moral obligation to do all we can to minimize the harm necessary to achieve that good.

And this shouldn’t be surprising. It might be morally permissible for me to break someone’s car window in order to save the life of a severely dehydrated dog on a hot summer’s day. But that same justification wouldn’t allow me to then go on to key their door and slash their tires.

The same limits apply here. Even if we have good reason to do all we can to destroy lanternflies, this does not warrant wanton cruelty. This is why ethicists are so concerned about implementing ‘bounties’ on certain invasive species. Perverse incentives can bring about perverse outcomes. If there is a greater ecological good to be achieved, we may be morally justified in causing harm to certain invasive species. However, this harm will only be permitted to the extent that it is necessary in order to achieve that good. Gratuitous harm will remain morally impermissible. We should endeavor, then, to solve ecological crises while treating invasive species as humanely as possible. And if insects can experience pain, then this includes them too.

On the Morality of Declawing Cats

photograph of cat claws pawing at chair

In late May, the California State Assembly advanced a bill banning the declawing of cats. If the bill is passed by the State Senate, California will become only the third U.S. state – along with New York and Maryland – to have banned this particular procedure. But what does cat declawing involve, and why might we have reason to think that it’s wrong?

Several months ago, I argued that we have strong moral reasons to keep our pet cats indoors at all times. It’s a necessary step in order to prevent the decimation of native wildlife (including many endangered species) and it’s also much better for cats themselves – extending their expected lifespan from 2-5 years to 10-15 years. Further, so long as owners are attentive to indoor enrichment, these benefits can be obtained at almost no cost – with indoor cats capable of being just as happy as outdoor cats.

There may be some minor drawbacks, however – chief among these being the potential damage caused to furnishing and décor. Cats regularly (and instinctually) pull the claws on their front paws through surfaces that offer some kind of resistance. This is done for a number of reasons, including (1) marking their territory, (2) exercising their muscles, (3) relieving stress, and (4) removing worn sheaths from their claws. While outdoors, cats will typically direct this clawing behavior towards hardened ground, tree trunks, and other rough surfaces. Indoors, things get a little trickier, with cats obliviously directing their clawing behavior towards leather couches, expensive stereo speakers, and Grandma’s antique furnishings.

Frustration at this continual damage can often drive owners to declaw their cats. In fact, around 23 million pet cats – more than 20 percent of all domestic cats in the U.S. – have been through this procedure. To the uninitiated, ‘declawing’ might sound relatively harmless. But sadly, this is not the case.

Cat claws grow not from the skin, but from the bone. Thus, a cat declawing procedure – or onychectomy – necessarily requires the amputation of the last digital bone on each front toe. This would, for a human, be equivalent to cutting off the tips of your fingers at the knuckle just below the fingernail.

Understandably, this is far from a simple procedure, and is often accompanied by weeks to months of post-operative suffering and pain management. There are also accompanying risks of infection, tissue necrosis, nerve damage, and bone spurs. Even where successful, the procedure fundamentally alters the way in which a cat walks, often leading to lifelong pain.

It is these harms to cats – both actual and potential – that have already led more than forty countries (including the UK, Ireland, Switzerland, Germany, Austria, Sweden, Australia, New Zealand, and Norway) to ban the declawing of cats. In California, the only opposition to the bill came from the California Veterinary Medical Association (CVMA), who claimed that veterinarians must be able to declaw the cats of autistic children.

This was unusual reasoning, given that scientific evidence shows that declawed cats actually bite more often – and much harder – than cats that have not been through the procedure.

It’s perhaps worth noting that declawing procedures are often charged at a rate of more than $1000/hour, meaning that successful passage of the bill will stem a large source of revenue for veterinarians.

There are, of course, certain circumstances in which declawing might be absolutely necessary – particularly for the well-being of the cat. But legislation often contains exceptions for such cases. The California Bill, for example, continues to allow the procedure for the medically necessary purpose of addressing a recurring infection, disease, injury, or abnormal condition that affects the cat’s health. What it does prohibit is the use of declawing for cosmetic or aesthetic purposes or “to make the cat more convenient to keep or handle.”

And this is precisely where the real immorality of cat declawing becomes evident. Suppose we take a consequentialist approach to an issue like this, claiming that we’ll be morally justified so long as the good consequences justify the unsavory means. To be fair, there are good consequences that come from declawing. Having a pristine home with unshredded décor is a good thing. As is avoiding the replacement of a valuable piece of furniture or a priceless heirloom. But these very same goods can be achieved by other means – means that come at far less cost to our cuddly companions. For one, providing a cat with an abundance of more attractive clawing alternatives – like scratching posts – can minimize their desire to scratch other objects. This can be coupled with behavioral training, where cats are rewarded for clawing the right things, and discouraged from clawing the wrong things. Even frequent nail trimming (where a cat’s claws are clipped – but not removed) can go a long way toward minimizing damage when a pet does target an off-limits item. Unfortunately, these methods require time and energy – something many pet owners are unwilling to spend in addressing the issue of cat-related damage to furnishings. Declawing provides an easy (if not cheap) solution to the problem – but it’s certainly hard to argue that it’s the morally right one.

The Smithfield Piglet Case: Factory Farms and Civil Disobedience

photograph of pigs vying to look out of chain-link pen

In the middle of the night sometime in 2017, members of the animal welfare group Direct Action Everywhere entered Circle Four Farms, a factory farm in Beaver County, Utah, that processes and kills 1.2 million pigs a year for Smithfield Foods, the largest meat production company in the country. One of their objectives was to film the way that the animals in the facility were being treated. A second objective was to rescue some of the most vulnerable animals that they found.

On July 6th, the group posted the video of their experiences that night on YouTube. As it begins, the filmmakers describe witnessing a sow, who had collapsed with sickness and could no longer feed her piglets, being tossed headfirst into a pile of at least a hundred dead young animals. The footage goes on to document countless sows and their piglets kept in very small crates. It includes disturbing images of a sow in a gestation crate, feeding some piglets while surrounded by other dead and crushed piglets, covered in feces, crammed into the tight space. The group selects two piglets to take with them. The first was a piglet who was found with her face covered in blood. She was small and close to death. The nipples of her mother were so badly cut that they no longer provided milk, and her piglets were drinking blood to survive. This piglet was not likely to survive without intervention. The second piglet was weak with starvation and had collapsed. Prospects for survival for this piglet were similarly bleak. The cash value of the two animals was $42.50 each.

The loss of pigs such as these is built into the business plan of Circle Four Farms since many animals do not survive under these conditions. These piglets in particular, because of the state of their health at the time that they were found, were likely to die and to be counted among these losses.

The group took the two piglets from the facility, and brought them to a waiting vehicle where they were immediately fed. They received veterinary services and were then taken to an animal sanctuary to live out the remainder of their lives in peace. At the end of the video, the piglets are shown healthy and seemingly happy, while a member of the welfare group explains that rescuing animals from factory farms is crucial for the animals involved, but also serves an important function for the movement; optimism and hope can serve as an antidote to the despair caused by the magnitude of the problem of animal mistreatment in the world.

After the video was published on YouTube, the FBI launched a manhunt for the people involved, expending significant resources. During a government raid of an animal sanctuary, FBI veterinarians sliced off a portion of a pig’s ear for the purposes of genetic testing. Eventually, the investigation led to the arrest of activists Wayne Hsiung and Paul Darwin Picklesimer. The federal government declined to prosecute, but Utah prosecutors elected to pursue felony burglary and theft charges for which the defendants could have potentially faced ten years in prison.

When the case went to trial, District Court Judge Jeffrey Wilcox made a series of admissibility rulings that shocked those watching the case closely. He blocked the jury from viewing the video that the group took that night, which was the very video that motivated the investigation and prosecution in the first place. He only let jurors see photographs of the scene in an edited form (for instance, he ordered an image cut in half that portrayed a piglet sucking from a cut and bloody nipple), and he did not allow any evidence about the motive for the removal of the piglets to be introduced.

In other words, the judge would not allow the jury to hear that piglets were removed to save their lives or that the group entered the facility to raise awareness about animal mistreatment and cruelty. His justification for these rulings was that the case was about burglary, not about animal rights.

These rulings were made in the political context of a state with an economy that relies heavily on industrial animal agriculture. In 2012, as protection for these institutions, the state implemented an “ag-gag law” that made it illegal to document evidence of animal abuse on factory farms. That law was ruled unconstitutional in 2017.

Despite the evidentiary restrictions, on October 8th, 2022, the jury acquitted Hsiung and Picklesimer of all charges. This is now being treated as a landmark case in animal law and animal ethics in general, and as an important case study for discussion of a potential right to rescue animals in distress.

Though many view the outcome of the trial as a victory, others are critical. They argue that trespassing, burglary, and theft are against the law for good reason. If a person or group has an important message to convey, surely they can do so without breaking the law. Some argue further that animals have a lesser or even non-existent moral status — they exist on this planet for us to do with what we will. We simply do not have the space to raise these animals on large farms where they can roam free and doing so would be impractical. If we want to feed the world’s population and to do so in ways that many people consider healthy and delicious, this form of meat production is our only choice. Critics also raise concerns that abandoning industrial animal agriculture would be devastating to the economy. The overriding principle to which many people on the other side of this case appeal is that our sole obligation is to do what is best for human beings. That animals trap and kill other animals is just a fact of nature, and there is no reason why humans should be exempt from that general principle.

Animal advocates argue that it is simply not true that this is the only way we can feed the human population in both healthy and delicious ways. Humans can satisfy their nutritional needs by eating plant-based foods.

Non-human animals, and farm animals in particular, can experience a full range of emotions, including suffering and joy. They form strong emotional attachments to their peers and to their offspring.

In light of this, if we can meet our food needs in other ways, we ought, morally, to do so.

The strategy employed by Direct Action Everywhere is nothing new. Their defenders argue that the actions of the group were an instance of justified civil disobedience — a strategy defended and practiced by figures like Henry David Thoreau, Mahatma Gandhi, and Martin Luther King Jr. Thoreau, for instance, refused to pay taxes in support of a government that actively participated in the institution of slavery. He argues that if a law

is of such a nature that it requires you to be the agent of injustice to another, then, I say, break the law. Let your life be a counter-friction to stop the machine. What I have to do is to see, at any rate, that I do not lend myself to the wrong which I condemn.

Martin Luther King Jr. broke unjust laws on many occasions and was jailed 29 times. In his Letter from a Birmingham Jail, he responds to the local clergy who had implored him to change his tactics:

You deplore the demonstrations that are presently taking place in Birmingham. I am sorry that your statement did not express a similar concern for the conditions that brought the demonstrations into being. I am sure that each of you would want to go beyond the superficial social analyst who looks merely at effects and does not grapple with the underlying causes.

The activists who broke into Circle Four Farms that night in 2017 made no attempt to keep their actions secret; indeed, they posted their activities on the internet for the whole world to see. They engaged in civil disobedience fully aware that they might face consequences. In their trial, Judge Wilcox ruled in ways that sought to prevent careful consideration of underlying causes and encouraged jurors to focus on only one effect — theft. The jury refused to do so. The powerful lobby for industrial animal agriculture does everything in its power to control public perception of food production in the country and worldwide. With such widespread manipulation taking place, if the well-being of animals matters, we arguably can’t afford to wait. As Thoreau says of unjust laws and practices,

Men generally, under such a government as this, think that they ought to wait until they have persuaded the majority to alter them. They think that, if they should resist, the remedy would be worse than the evil. But it is the fault of the government itself that the remedy is worse than the evil. It makes it worse. Why is it not more apt to anticipate and provide for reform? Why does it not cherish its wise minority? Why does it cry and resist before it is hurt? Why does it not encourage its citizens to be on the alert to point out its faults, and do better than it would have them?

Trophy Hunting Is Immoral Only If Hunting for Meat Is Immoral

photograph of stuffed birds and animal heads on hunting lodge wall

On July 2nd, 2015, American dentist Walter Palmer (legally) killed a lion named Cecil, a favorite of visitors to Hwange National Park in Zimbabwe. The news of Cecil’s death and several unsavory pictures of Palmer went viral, prompting a vicious backlash against Palmer and an international discussion about the morality of trophy hunting. People all over the world condemned the practice, and many people became convinced that trophy hunting is immoral.

My topic in this post is the morality of trophy hunting. Instead of denouncing or defending the practice, I argue that a distinction sometimes drawn by opponents of the practice cannot be maintained.

Some people believe that trophy hunting is inherently reprehensible yet hunting for meat is not. In my view, there is no inherent morally significant difference between trophy hunting and hunting for meat.

So, trophy hunting ought to be universally condemned only if meat hunting ought to be universally condemned.

First, let’s get clear on some terms and the scope of my claims. By ‘trophy hunting,’ I mean recreational hunting (or fishing) for trophies, sport, or prestige, without the intention of keeping meat for consumption. By ‘meat hunting,’ I mean recreational hunting (or fishing) with the intention of keeping some meat for consumption. Crucially, I limit my discussion to hunting as it is practiced by well-to-do Westerners and others who do not need to hunt to sustain themselves.

Now, to the argument.

Most people believe that animals matter, morally speaking. Although people disagree about how much and in what ways animals matter, there are zones of clear consensus. For instance, almost everyone would agree that it would be wrong to vivisect a stray dog in order to amuse guests at a party (as it is rumored the philosopher René Descartes did), mainly because the great harm that would be done to the dog by such an action would not be outweighed by other sufficiently important moral considerations.

Likewise, almost everyone would agree that hunting is permissible only if the harm or setback to the hunted animal is outweighed by other morally important considerations.

If hunting were, in general, perfectly analogous to frivolous vivisection, everyone would universally condemn it.

As it stands, hunting is not perfectly analogous to frivolous vivisection. While both activities involve animal suffering and death, the former but not the latter is associated with morally important goods. For one, hunting can have beneficial environmental and social effects. Hunting can be used to control invasive species, raise money for conservation, and so forth. Then there are the benefits to the hunter. I’m told hunting can be deeply pleasurable. Reportedly, it can be exhilarating, relaxing, challenging, satisfying, even transcendent. I’ve never been hunting, so I can’t speak from personal experience. But the philosopher José Ortega y Gasset can:

When one is hunting, the air has another, more exquisite feel as it glides over the skin or enters the lungs, the rocks acquire a more expressive physiognomy, and the vegetation becomes loaded with meaning. But all this is due to the fact that the hunter, while he advances or waits crouching, feels tied through the earth to the animal he pursues, whether the animal is in view, hidden, or absent.

Unlike the experience of partygoers watching a frivolous vivisection, the experience Ortega y Gasset describes seems significantly valuable. Apart from the experience of hunting, the projects, skills, activities, and communities connected with the practice are part of what makes life meaningful and interesting to many hunters. And finally, there are the spoils. Trophy hunters obtain war stories, heads, antlers — that sort of thing. Meat hunters obtain meat. Hunters desire these spoils and are pleased when they obtain them, and since we have moral reason to care about whether a person is pleased and gets what they want, these spoils are morally important, too.

Now, you might think that the goods associated with recreational hunting can never outweigh its morally objectionable features. If so, then you probably already agree with me that there is no fundamental distinction between trophy and meat hunting. Both are always wrong. Many people, however, believe that hunting is permissible if and only if it yields some particular combination of the goods just enumerated. In other words, the overall benefits of hunting can (but won’t always) outweigh the harm to the hunted animals. For instance, you might think that deer hunting is permissible so long as the practice benefits the ecosystem and the hunter eats the meat.

I believe that ideas of this sort are what usually lead people to conclude that there is some inherent moral difference between meat hunting and trophy hunting.

Somehow, the fact that the hunter consumes parts of the hunted animal is supposed to justify the harm done to the animal in a way that nothing else, except perhaps direct environmental or social benefits, can.

The problem with this line of reasoning is that the value gained by eating hunted meat is not relevantly different from the value associated with the hunting experience itself or with the procurement of trophies. Eating hunted meat may be especially pleasurable, but it does not provide a well-off Westerner with any more sustenance than could be obtained by eating beans and a B12 supplement. Thus, when trying to determine if the suffering and death of a hunted animal is compensated for by the good that comes of it, we shouldn’t count the fact that the hunter will obtain sustenance by hunting, since the hunter will obtain sustenance either way. All the value gained by eating a hunted animal as opposed to letting the animal live and eating beans comes from the special pleasure obtained by eating the hunted animal.

An analogy may help make this last point clearer. Suppose you are trying to decide between eating dinner at two equally nutritious but differently priced restaurants. The fact that you will eat something nutritious if you go to the more expensive restaurant cannot play a part in justifying the extra money you would spend in going there, because you will eat something nutritious in either case. Spending the extra money is worth it only if the more expensive restaurant will provide you with a sufficiently more pleasurable gustatory experience.

And here’s the thing.

In principle, a trophy hunter can get the same amount of pleasure out of admiring a stuffed lion’s head or telling a great story as the meat hunter can get from eating hunted meat.

In fact, the trophy hunter’s pleasure is likely to be longer lasting, since trophies, unlike meat, needn’t be consumed to be enjoyed. So, if trophy hunting is universally morally problematic because the suffering and death of the animal can never be outweighed by the benefits of the practice, then recreational meat hunting is universally problematic, too, since both produce basically the same types of benefits. It looks as if there is no inherent morally important difference between recreational meat hunting and trophy hunting.

Let me consider two objections.

An objector might point out that trophy hunting is more likely than meat hunting to produce negative environmental and social effects since trophy hunters are more often interested in targeting endangered species, megafauna, and so on. If so, then trophy hunters need to be more careful than meat hunters when selecting their targets so as to avoid producing these effects. But the issue at hand is not whether it is morally acceptable to hunt this or that animal (in this or that context). The issue is whether eating the meat of a hunted animal makes any deep moral difference. And trophy hunting (as I’ve defined it) needn’t produce any special environmental or social effects. For example, someone who hunts deer in Putnam County and secretly throws the meat away is going to produce the same basic environmental and social impact as someone in Putnam County who consumes the deer they hunt.

An objector might argue that eating a hunted animal’s meat is the only way to properly respect its dignity. I find this hard to accept. First, it’s likely all the same to the dead animal; unlike humans, most animals do not have wishes or customs concerning the handling of their corpses. Second, a carcass left in the field by a hunter undergoes the same fate as a carcass of an animal that died naturally. How, then, can this fate constitute an indignity?

My argument, if successful, shows that from a moral perspective there is nothing special about trophy hunting. When an incident like the one involving Palmer and Cecil next captures the world’s attention, I think it would be a mistake for us to focus on the trophy hunting aspect. The relevant questions concern the morality of hunting the type of animal killed and of hunting (by well-to-do Westerners and others who don’t need the meat) generally.

Meat Replacements and the Logic of the Larder

photograph of vegetable larder

Every year, tens of billions of animals are killed for food. This is morally objectionable for all sorts of reasons directly related to the experiences of the individual animals involved: the process of food production causes them pain and suffering; they are prevented from flourishing in the ways that are appropriate for members of their species; they live shorter lives full of more suffering and less pleasure than they would have if those lives were not cut short; and so on. In response, entrepreneurs have worked hard to bring alternatives to the market in the form of plant-based and cell-cultured products, neither of which involve killing animals. Humans do not need to eat animals or animal products in order to enjoy nutritious diets and live long, healthy lives. If a person can give up animal products, many argue that they should.

In response, some have raised an objection that has come to be known as “the Logic of the Larder.” A larder is a storage space for food, traditionally a place for preparing and containing meat. This line of reasoning is also sometimes referred to as the “Replaceability Argument.” In his 1914 book The Humanities of Diet, famous vegetarian thinker Henry S. Salt presents and responds to the objection at length, introducing it with an idiom common at the time: “Blessed is the Pig, for the Philosopher is fond of bacon.” The idea is that farm animals are made better off by the fact that humans breed them for food. The contention is that farm animals, on average, have lives that are worth living.

Generally speaking, it is better to exist than not to exist. If human beings did not raise farm animals for food, those animals would not exist at all. Therefore, human beings do something good for farm animals by bringing them into existence to be used for food.

If this argument is sound — if humans do a good thing when they bring billions of animals into existence for use as food — then human beings would be doing a very bad thing by replacing that source of food; the animals involved would never have had the chance to live.

In responding to this argument, Salt and others point out that the Logic of the Larder seems more like a bit of sophistry — an ad hoc rationalization or, as Socrates puts it, an attempt to “make the weaker argument the stronger” — than an actual argument that is ever used as part of a decision to raise animals for food. When someone decides to get involved in raising animals for slaughter, they rarely say, “boy, what I’d really like to do is bring a bunch of new animals into existence and give them a shot at life.” Instead, animals are treated as objects to be mass produced in the most efficient and profitable way possible. If the lives of animals were valued, they would be allowed to age and grow at their natural rate; instead, they are given growth hormones to shorten the period from birth to slaughter. Salt powerfully voices the reply from the pig’s perspective,

What shall be the reply of the Pig to the Philosopher? “Revered moralist,” he might plead, “if it were unseemly for me, who am today a pig, and tomorrow but ham and sausages, to dispute with a master of ethics, yet to my porcine intellect it appeareth that having first determined to kill and devour me, thou hast afterwards bestirred thee to find a moral reason. For mark, I pray thee, that in my entry into the world my own predilection was in no wise considered, nor did I purchase life on the condition of my own butchery. If, then, thou art firm set on pork, so be it, for pork I am: but though thou hast not spared my life, at least spare me thy sophistry. It is not for his sake, but for thine, that in his life the Pig is filthily housed and fed, and at the end barbarously butchered.”

This colorful response also draws out the idea that the “better to exist than not to exist” justification condones breeding sentient creatures for any purposes whatsoever. If we follow this line of argument, it is better to bring a being into existence, horribly mistreat it, and show no mercy or respect for its dignity, than it is to simply not bring a being into existence at all. And this seems to justify bringing humans into existence for the purposes of selling them into slavery — after all, it’s better to exist than not!

The proponent of the Logic of the Larder, however, might respond by emphasizing that humans are cognitively very different from non-human animals, and this is why raising animals for slaughter is defensible, while breeding humans for slavery is not. Human beings develop identities, have a sense of their past and their future, understand concepts like death and dignity, and are capable of applying those concepts to themselves and of integrating them into their own desires concerning the future. Many, including Peter Singer in his book Practical Ethics, have argued that this makes a difference when it comes to whether it is a bad thing to kill an animal.

But some humility is likely warranted when it comes to drawing conclusions regarding which mental capacities farm animals have and which they don’t.

Animals can’t express themselves in human language and their beliefs likely do not have propositional content in the ways that the beliefs of human beings sometimes do. Nevertheless, animals are clearly capable of making plans that have temporal components.

They understand that things take place in sequence, and they rely on this understanding to get what they want. They exhibit personality and those traits are enduring. They avoid death and members of many species grieve in response to the death of loved ones. Instead of judging the practice of raising animals only to kill them by the standards of anthropocentric metaphysics and moral psychology, we might want to at least entertain the possibility that we’ve been thinking about identity, autonomy, and future-related cognition in idealized ways that are unlikely to correctly characterize human moral psychology, let alone set humans apart as uniquely entitled to continued existence.

Moreover, to suggest that it is better for a farm animal to exist than not to exist presupposes that these animals have a welfare that can be measured relative to their welfare in other possible worlds (for example, worlds in which they do not exist). This is to concede the most important point when it comes to discussion of the ethics of using animals for food — animals are the kinds of beings that can experience pain and pleasure. If we think it can be good for them to come into existence, then it can also be quite bad for them to exist under conditions of deprivation, slavery, and slaughter. We can’t defensibly bring them into existence and then force them to live lives full of more suffering than joy.

Why Speciesism Is Not a Prejudice

color photograph of tiger at zoo with family posing in black and white

Despite some notable dissenters, it has become a near-article of faith in applied ethics that “speciesism” — giving greater moral consideration to one individual or group than to another based merely on their membership in a certain species — is a prejudice indistinguishable from racism, sexism, and other forms of bigotry. Daniel Burkett succinctly states the dominant view when he writes that the argument that the suffering of animals counts for less “simply because they are animals” is “the same (very bad) rationale that justifies” these discredited prejudices.

But the rationale for speciesism is different in key respects from that for racism or other forms of bigotry.

The typical justification for racism consists of two claims. First, it is claimed that some phenotypic trait — in this case, skin color — maps onto, or is at least a reliable indicator of, some other characteristic. Second, it is held that the latter characteristic determines, or is at least relevant to, the degree of moral consideration to which an individual is entitled. The first is an empirical claim, while the second is a moral claim. Both claims may be false, but need not be for racism to count as a prejudice. For example, in the nineteenth century there was widespread agreement among white scientists that African-Americans were impervious to pain — in effect, that they were less sentient than whites. Today, almost all moral philosophers agree that sentience is, if not the sole basis for moral consideration, then at least one of the main ones. Thus, those who used racist science to justify differential treatment of African-Americans were not mistaken in focusing on sentience as a characteristic relevant to moral consideration. Rather, their racism was a prejudice because it rested on a false and unjustified empirical belief that African-Americans have “duller sensibilities” than whites.

This analysis of racism suggests that there are actually two kinds of justification for speciesism.

The first, mirroring the typical rationale for racism in its basic structure, is that species membership maps onto or is a reliable indicator of some other characteristic, and this characteristic is relevant to moral consideration. Call this justification “Empirical Speciesism.” The second is that species membership itself is relevant to moral consideration. Call this justification “Categorical Speciesism.” Either justification differs from the typical rationale for racism in key respects. First, the empirical claim in Empirical Speciesism need not be false or unjustified. For example, the Empirical Speciesist might claim that membership in the species Homo sapiens maps onto enhanced sentience. That may very well be true, and even if it is false we may be justified in believing it. Second, Categorical Speciesism does not rest on any empirical claim. Thus, neither Empirical Speciesism nor Categorical Speciesism makes speciesism a prejudice on a par with racism. Philosophers who use that analogy as a way to dismiss speciesism out of hand are simply mistaken.

But perhaps what philosophers have in mind when they compare speciesism to racism is racism justified in a manner analogous to Categorical Speciesism. Instead of partially relying on an empirical claim, this justification for racism simply asserts that skin color is the morally relevant characteristic. The anti-speciesist argument would then be that both justifications are erroneous for similar reasons: neither species membership nor skin color is a morally relevant characteristic.

What justifies our confident conclusion that skin color itself is not a morally relevant characteristic? It can only be that this claim does not cohere with our other settled moral judgments. For example, everyone, including racists, believes that very similar phenotypic traits — for example, eye color or hair color — are morally irrelevant. Skin color, a superficial phenotypic trait, differs markedly from other characteristics everyone agrees are morally relevant, such as sentience. In light of these judgments, it seems arbitrary to hold that skin color is morally relevant.

If a white racist’s friends and family woke up one morning with brown skin, it is doubtful that the racist would consider this sufficient reason to treat them differently. This tends to show that the racist is either an Empirical Racist or that his beliefs are simply incoherent.

But unlike Categorical Racism, Categorical Speciesism coheres fairly well with our other settled moral judgments. There are no other characteristics that are suitably similar to species membership and that we generally hold to be morally irrelevant. Species membership is not a superficial phenotypic trait: it is part of an individual’s biological essence. For most people, if their friends and family woke up one morning transformed into cockroaches — not cockroaches with human minds, just cockroaches — that would give them sufficient reason to treat them differently. Granted, we seem to have strong intuitions that membership in the species homo sapiens is not necessary for moral consideration — even the strong moral consideration to which humans are thought to be entitled. Any given episode of Star Trek suggests as much. But it does not follow that membership in that species is not relevant to moral consideration: for example, it may still be sufficient for it. In other words, while the argument that insects are not entitled to consideration because they are not human may fail, the argument that humans are entitled to consideration because they are human may still succeed. In addition, species membership may justify differential treatment of two individuals alike in all respects except their species — for example, Vulcans and humans.

The upshot of my argument is not that speciesism is justified. Rather, it is that it cannot be easily dismissed as belonging to the same category as racism, sexism, and other forms of bigotry. When Peter Singer popularized this argument in Animal Liberation, he may have done a tremendous amount of good by calling attention to the morally relevant characteristics that animals and humans share. But as the sometimes slipshod reasoning in certain seminal Supreme Court civil rights opinions demonstrates, there is no guarantee that moral progress will be grounded in sound arguments.

Wanted Dead: Should We Place Bounties on Invasive Species?

photograph of a pair of green iguanas

Iguanas in South Florida have become a problem. Green iguanas, a species native to Central and South America, have made their way to Miami Beach. They may have started as exotic pets that were released after growing too large (adult green iguanas may be as long as 5 feet), or perhaps arrived as stowaways on ships importing fruit. Regardless, their numbers have exploded in recent years.

Officials are trying to address the iguana population. Dan Gelber, the Mayor of Miami Beach, says the city is quadrupling its budget for iguana removal – up to $200,000 annually from $50,000. In addition, the city commissioner, Kristen Rosen Gonzalez, has suggested implementing a “bounty” program. The plan would be to pay hunters to kill iguanas at a per-iguana rate.

There are some concerns about implementing a bounty program. Some worry about potential cruelty towards the iguanas. In 2019, the Florida Fish and Wildlife Conservation Commission “encourage[d] homeowners to kill green iguanas on their own property whenever possible.” The commissioner at the time, Rodney Barreto, later noted that people should not shoot iguanas, but offered no description of what methods of extermination would be humane. Further, the policy could threaten human safety – a worker servicing a pool in Boca Raton was shot by an errant pellet from the gun of someone hunting iguanas.

Yet most troubling is that the bounty program could simply fail. It stands to create what economists call a perverse incentive structure.

Perverse incentives occur when a policy or intervention aimed at addressing a particular problem instead rewards behavior that does not contribute to solving the problem – and may even worsen it.

Bounties on pest animals are literal textbook examples of perverse incentive structures. In 1902, the French placed a bounty on rats in Hanoi. Anecdotes describe British governors in Delhi putting bounties on cobras. Both had similar results – after initial success in reducing populations, the policies eventually led citizens to breed the bountied animals so that they could continue collecting rewards. After the bounties ended, breeders released their captive animals into the wild. Something similar could occur with iguanas.

The bounties, after all, do not incentivize reducing the wild population. They simply incentivize killing iguanas.

Suppose we could guarantee the bounty program would succeed. Would that make it desirable? First, we should consider why some call for the program. It’s because iguanas are a nuisance. The iguanas burrow underground, potentially causing damage to buildings and structures like sidewalks and seawalls. They destroy landscaping, eat plants on people’s property, and leave droppings wherever they walk. One woman found an iguana sitting in her toilet, having apparently climbed up through the plumbing connected to her home.

This justification of the policy claims that it resolves a conflict of interests. The iguanas, as a destructive nuisance, threaten human interests in South Florida. However, it is not immediately obvious that the solution to this conflict is to declare open season on the iguanas. We are, if we endorse this reasoning, saying that the human interest in avoiding nuisance counts for more than the interests these iguanas have in their lives.

If we think the interests of the iguanas count for something – which is suggested by the fact that officials want to ensure iguanas are killed humanely – then it is not immediately obvious that our interests in avoiding a nuisance are sufficiently weighty to justify large scale elimination of the iguanas.

Perhaps an appeal to human interests would be more powerful if iguanas posed a risk to public health or safety. As of now, though, they seem to be merely an annoyance.

A more compelling argument might come from the fact that these iguanas are an invasive species. Invasive species are both non-native to a region and well-suited to live in that environment. As a result, their populations expand rapidly and crowd out native species, since the new environment lacks natural checks – such as predators – on their numbers. The idea here is that the iguanas pose a risk to plant and animal life as well as the local ecosystem. Within any ecosystem, beings compete for finite resources like food, territory, and nests. Because invasive species face no predators, and their food sources lack defenses against them, they outcompete and overconsume local flora and fauna.

Notice that this rationale for the program shifts what’s doing the work in its justification. Instead of depending on human nuisance, viewing the program through this lens sees it pitting non-human interests against non-human interests. Since we may have good reason to reduce even wild animal suffering, this justification may go some way.

The interests at stake are all vital interests, namely, the interests that both the iguanas and native species have in an environment capable of supporting them, ensuring their continued survival.

So, the idea may simply be that, on the balance of animal interests alone, it is better to remove the iguanas from the South Florida ecosystem. When invasive species experience population booms, it can cause native species to die through slower, more painful processes like starvation as they are outcompeted for resources. Indeed, continued explosive population growth may also result in harm for the iguanas themselves in the long run. For instance, deer populations in the U.S. are overabundant, leading to greater rates of disease and parasites among deer, in addition to poorer general health due to reduced access to food. So, it may be better for all animal parties involved to reduce the number of iguanas in South Florida through humane killing.

However, this view of the situation may suffer from a lack of imagination. In some cases, population booms of invasive species have led to the rebounding of predator species: saltwater crocodiles in Australia and Florida panthers have both experienced population resurgences as a result of feeding on invasive feral pigs. Given the ecological role of predators, there is at least some potential that, on a long enough time horizon, the presence of green iguanas could ultimately help rebalance the local ecosystem. There may also be other methods of reducing iguana populations that do not require killing. For instance, despite the ecological damage they cause, feral cats are often trapped, neutered, and returned to the wild in order to reduce their numbers. Showing that we ought to reduce the iguana population does not demonstrate that we should kill them.

Overall, there are several questions we must answer before endorsing a policy like the bounty program. First, we should ask whether it would be effective, or whether it may lead to unintended consequences. Second, even with a well-designed policy, we must determine why we believe the policy is necessary. But before committing to a single course of action, we should carefully consider the options available to us rather than settling on what seems to be the simplest or easiest idea.

The Painful Truth About Insects

closeup photograph of mosquito

In a recent study, scientists from Queen Mary University of London argue that insects possess central nervous control of “nociception” – that is, the ability to detect painful stimuli. Put simply, this discovery makes it plausible that insects are capable of feeling pain in much the same way as humans and other animals. It’s worth considering, then, how this finding might be relevant to our moral considerations of insects.

Generally, we tend to think of humans as being equal. But what do we mean by this? Clearly it’s not that all humans are, in fact, equal. Humans differ enormously in their interests and capabilities. Some students want to become rock stars, others want to be mathematicians, while others might suffer from disabilities that make both of those options more difficult to pursue. Nor do we mean that all humans should receive equal treatment – since different humans have vastly different needs. The aspiring rock star needs a guitar, while the math-whiz needs access to quality education. The person suffering from a disability, on the other hand, might need extra assistance that would be unnecessary for their more able-bodied classmates.

It seems, then, that when we say that all humans are equal, we mean to say that the interests of all humans should be given equal consideration.

Put another way: we should care equally about all people – no person is of greater value than another. It’s this very notion that grounds the case against various types of bigotry like racism and sexism. To prioritize the interests of one person over another based purely on their ethnicity or gender is to deny the principle of equality.

In his seminal book, Animal Liberation, Peter Singer considers how the principle of equality might be extended beyond humans. If it’s wrong to prioritize the interests of certain beings based on their ethnicity or gender, then shouldn’t it also be wrong to prioritize them based on their species?

If animals have interests, how can we justify prioritizing our interests above theirs without, essentially, being speciesist?

But this raises a very important question: do animals even have interests? It’s certainly clear that humans do. As noted above, some humans have an interest in becoming rock stars, while others have an interest in becoming mathematicians. And then there are those interests that are almost universally held by humans, such as interests in being healthy, safe, financially secure, and loved. But what about animals? It’s not obvious that there are goats who aspire to be rock stars, nor pigs that aspire to be mathematicians. Nor do any animals seem to show concern for things like financial security or love.

According to Singer, however, the only prerequisite for having interests is the capacity to experience pleasure and pain – or what we might call “sentience.” Why? Well, if something can experience pleasure, then it has an interest in pursuing pleasure. Likewise, if something can experience pain, then it has an interest in avoiding pain.

If some living being experiences suffering, then there can be no moral justification for refusing to take that suffering into account. And, if we adopt the principle of equality, then that suffering must be counted equally with the same amount of suffering when experienced by any other being.

So if kicking a person and causing them X amount of pain is morally wrong, then kicking a dog and causing that same amount of pain is just as wrong. Likewise, if it would be morally wrong to inflict Y amount of pain on a human in order to test the safety of a new cosmetic, then it will be just as morally wrong to inflict this same amount of pain on an animal for the same purpose.

Singer’s argument has huge ramifications for many of the ways in which we treat animals. Consider the animal suffering that goes into the production of a single cheeseburger – and how terrible we would consider that same suffering if it was experienced by a human. What’s more, this suffering is offset by only a small benefit to the human who eats the burger – a benefit that could just as easily be achieved via non-meat and non-dairy alternatives. In fact, much – if not all – of the animal products and by-products we consume start to become morally questionable when seen in this light.

Of course, one simple solution would be to discount – or disqualify entirely – the suffering of animals on the basis that they aren’t as intelligent as humans. But this is to go against the very principle of equality that many of us hold dear. When thinking about humans, we would consider it reprehensible to say that someone’s pain and suffering is less important simply because they are less intelligent than someone else. So we must take the same approach to animals.

The only consistent way to justify the suffering we inflict on animals is to say that their suffering counts for less simply because they are animals. But that’s speciesism – and it shares precisely the same (very bad) rationale that justifies racism, sexism, and other forms of bigotry.

Indeed, Singer’s observations have motivated many people to adopt vegetarian or vegan lifestyles. But what are we to make of this new research that suggests insects might also be sentient? If an ant can feel pleasure and pain, then an ant has interests. And if an ant has interests, then the principle of equality demands that its suffering be counted equally with the same amount of suffering experienced by any other being. Suppose, for example, that swatting a mosquito causes that mosquito to feel Z amount of pain. Suppose, then, that – for a human – that same amount of pain would be the equivalent of a hard slap to the face. If we believe that slapping a human is morally wrong, then the principle of equality would require us to reach the same moral judgement about inflicting the same amount of pain on a mosquito. This would mean, then, that swatting a mosquito was morally wrong.

It’s a strange conclusion, and one that is still very much open to debate. For one, we would need to establish that insects do in fact experience pain in the same morally relevant way as humans and other non-human animals. We would then need some way of measuring this pain in order to form reasonable moral judgements. It might, for example, turn out that the suffering experienced by a swatted mosquito is minuscule – much less, in fact, than the bite it gives to the next human it encounters. In such a case, we could possibly make a case for the moral permissibility of swatting that mosquito.

But in the absence of better information about whether – and to what extent – insects experience pain, what should we do? There’s a chance that there’s nothing problematic about causing insects to suffer. But there’s also a chance that we’ve been horribly wrong. Until recently, we were still unsure whether non-human animals experienced pain; veterinarians trained before 1989 were taught to ignore animal suffering. In fact, doctors up until that decade were still skeptical that human babies experienced pain, and many infant surgeries were routinely carried out without anesthesia. Given our poor track record of understanding pain in other living beings, the mere possibility that insects suffer should give us reason to pause and reconsider how we treat them.