
Ethics and Other Minds: The Moral Permissibility of Octopus Farms

photograph of octopus underwater

In March of 2023, news agencies reported that Nueva Pescanova, a Spanish multinational corporation, is planning to intensively farm octopuses in the Canary Islands. The proposal for the farm describes farming a million octopuses a year for slaughter and sale as food. Octopuses are extremely intelligent: they are capable of using tools and engage in high-level problem solving. The documentary My Octopus Teacher, which highlighted the capabilities of these animals, won the Academy Award for best documentary in 2021. The best-selling book Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness by Peter Godfrey-Smith pursued critical questions about what it is for a creature to be conscious and how that consciousness manifests itself. These questions have moral implications that we should not take lightly.

In Meditations on First Philosophy, Descartes famously argued that he could know beyond all doubt that he existed as a thinking thing. Though each of us may be justified in believing in our own existence, we can be less certain about the consciousness of other individuals, including other humans. The best we can do is note similarities in structure and behavior and conclude that similar creatures are likely also conscious, especially if they are capable of telling us that they are. In Discourse on the Method, Descartes argued that the ability to use language or other signs to express thoughts was the evidence available to us that another being has a soul. He argued that the fact that non-human animals express passions only through behavior, and not thoughts in a language, demonstrates that,

They have no reason at all, and that it is nature which acts in them according to the disposition of their organs, just as a clock, which is only composed of wheels and weights is able to tell the hours and measure the time more correctly than we can do with all of our wisdom.

Descartes is just one historical figure in a long line of thinkers who define what we might now call consciousness in very anthropocentric ways — human beings represent the gold standard, the rational animal. In Other Minds, Godfrey-Smith argues that consciousness is so evolutionarily advantageous that it might have evolved in response to different environmental pressures in different circumstances, and this may be just how it happened in the case of the octopus. Octopuses have consciousness distributed among “mini-brains” located throughout their bodies. This raises many significant philosophical questions and strongly suggests that if we use human consciousness as a standard for what the phenomenon is or could be, we’d likely end up with an impoverished take on the concept. Questions of consciousness don’t just impact interactions with other humans or non-human animals. They are also relevant to our future interactions with advanced technology. It’s important to do our best to get it right.

If octopuses exhibit behavior that indicates significant intelligence and their biological structure suggests a kind of consciousness that we know very little about, the situation demands erring on the side of caution. This is an argument not only against intensively farming these creatures but also against killing them at all for food or for any other human purpose. If it’s wrong to kill intelligent creatures, it seems sociopathic to farm millions of them for food every year.

Nueva Pescanova claims that the deaths of these octopuses would be painless. There are several questions that need to be asked and answered in response to this claim. First, is it true? The company plans to kill the animals by placing them in water kept at -3°C, alleging that this is a humane and painless form of death. That claim is controversial. Some experts insist that this form of death is particularly slow and painful, and for this reason some supermarkets have already stopped selling seafood killed in ice baths.

Second, if the death is painless, does that entail that the killing is morally acceptable? Some philosophers have suggested that it does, at least if the creature in question has no sense of time or capacity to fear its own death (see, for example, Peter Singer’s arguments in Practical Ethics). There are at least two main responses to this line of thought. First, the problem of other minds reemerges with a vengeance here. What does it mean to have a sense of time or to fear one’s death? Can these capacities manifest themselves in different minds in different ways? Do they require articulation of thoughts in a language, or is the presence of certain dispositions to behave sufficient? Second, killings are not justified simply because they’re painless. If Bob sneaks up behind Joe and kills him painlessly, he nevertheless, all things being equal, does something seriously morally wrong. Among other things, he deprives Joe of a future of positive experiences. As philosopher Thomas Nagel argues in his famous paper “Death,” the badness of death consists in the deprivation of the goods of life. This is a deprivation that both humans and non-humans are capable of undergoing. If death is bad for humans for additional reasons related to their cognitive abilities, those might be additional reasons that death is particularly bad for an intelligent creature like an octopus as well.

The prospect of intensively farming octopuses is particularly troubling because of their intelligence. That said, the practice of intensively farming sentient creatures at all raises very serious moral concerns. Intensive farming involves mistreatment of animals. It causes them pain and it violates their autonomy. It recklessly disregards the care obligations we have to vulnerable populations. It weakens our moral characters by encouraging us to think of other sentient creatures as things rather than as beings with minds and experiences of their own. The case of the octopus motivates thought about the problem of other minds and the many forms consciousness could potentially take. If we ought to err on the side of caution when it comes to minds that are different, there is an even stronger case for doing so when minds are the same. There are many historical examples of the use of uncertainty about other minds to discriminate and oppress people on the basis of race, gender, age, ethnicity, and so on. People have too often concluded that if another mind is unknowable, it must be inferior, and this has been the cause of the worst atrocities perpetrated on humans by humans. We should stop engaging in the very same behavior when it comes to non-human animals. Intelligent creatures should not be intensively farmed; indeed, no sentient animal should be.

Creepy-Crawlies and the Long Dreamless Sleep

image of large spider silhouette at night

In graduate school, I lived in a dingy little apartment near the sea. My apartment faced a slough, beyond which was the water. On the wall next to my door was a bright light. At first, I could turn this light on and off. But after a year or two, some men came and altered the light to make it stay on all night. The area around the light and the eave above it became a den of death. At night, droves of insects would emerge from the littoral darkness of the slough to flap and buzz in a confused frenzy around the light. Dozens of spiders awaited them. When I entered my apartment, I could see the insects wriggling pitifully in their webs.

The situation became too much for me. The spiders started to draw their webs over my door. A nasty one sprang on top of my head. I decided to take drastic action. I found a sprayable toxin for killing insects and arachnids, some horrible thing with a sickly sweet chemical smell. In the morning, when the spiders were hidden in their crevices, I sprayed the toxin all around the den and leapt back. For one second, nothing happened. And then, all at once, thirty or forty large spiders began to erratically descend, desperately clinging to threads of silk. They were writhing as the toxin destroyed them. Some of them curled as soon as they hit the ground. Others stumbled off before dying. It was horrible. I couldn’t shake the thought that those spiders, like the insects they caught in their webs, died in pain.

My colleague, Daniel Burkett, has recently written about some new empirical research which suggests that insects can experience pain. Burkett argues that if insects (or spiders, which are arachnids) can experience pain, then that pain matters morally and thus we have defeasible moral reason to avoid causing them pain.

The basic thought is that pain is inherently bad no matter where it occurs, and it’s unacceptably arbitrary to discount a creature’s pain simply because that creature isn’t a human being (or isn’t cute or friendly or lovable).

Burkett’s argument is unsettling. It implies that I may have done something terrible when I slaughtered those spiders.

I agree with Burkett’s basic argument. We have pro tanto moral reason to refrain from inflicting pain on any creature, no matter how creepy or crawly. However, I do not think (as Burkett seems to) that this means we have pro tanto moral reason to avoid swiftly killing insects, for example, swatting mosquitoes or squashing lanternflies. First, I doubt that the process of swiftly swatting or squashing a creepy-crawly causes a morally significant amount of pain. Being swiftly swatted is analogous to being vaporized in an explosion. The process totally destroys the creature’s body (rendering it incapable of experiencing pain), and the destruction occurs in a fraction of a second. Second, it does not follow from the fact that we have moral reason to avoid causing a creature pain that we have moral reason to avoid painlessly killing it. And there are good reasons for thinking that painless death is not bad for insects in any morally relevant sense.

To see why, let’s take a step back and talk about why death is bad generally.

When someone dies, they permanently cease to exist. The dead are beyond any sort of experiential harm. The dead can’t suffer; the dead can’t feel distressed, sad, bored, or lonely (it’s true that the dying process can be painful, but dying things are still alive). The imperviousness of the dead to any sort of suffering raises an ancient philosophical puzzle:

Why is death bad for or harmful to the dier at all? And why is painlessly killing someone wrong, apart from how this affects people other than the victim?

One popular answer is that death is bad for a dier if and because it deprives the dier of good things that the dier would have had or experienced had they not died when they did. Consider a person who is instantaneously vaporized by an explosion at forty. Suppose that this person would have lived another forty good years had she not been vaporized. The explosion is not bad for the victim because it causes her pain or distress; actually, the explosion renders her completely impervious to pain and distress. Rather, the explosion is bad for the victim because it prevents her from experiencing those good years and thereby makes it the case that there is less total good in her life than there otherwise would have been.

A related answer is that death is bad for a dier if and because it frustrates the dier’s desires and curtails the dier’s projects. Many of our desires are directed toward the future and can give us a reason to go on living. For example, I want to visit space someday. Unlike a desire to, say, get a cavity filled, this desire gives me reason to try to stay alive until I can achieve it. If I were to die in my sleep tonight, this desire would go unsatisfied. Arguably, even if I don’t feel sad about it, it’s bad for me if this desire is never fulfilled. My life is worse as a result, all else being equal. Similar things can be said, mutatis mutandis, about many ongoing projects that are cut short by death.

These explanations of death’s badness presuppose that the dier is a temporally extended subject. All living things are temporally extended in a physical and biological sense, of course. But persons are extended through time in a psychological sense, too.

My current self is connected to my past self by a continuous chain of beliefs, memories, desires, preferences, intentions, character traits, and so forth, which change over time in regular, familiar, and typically gradual ways. For example, I now have a memory of an experience my twenty-year-old self had while riding a rollercoaster. And if I live till forty, my forty-year-old self will be similarly connected to my current self. For example, my forty-year-old self might remember writing this essay. On top of this, I have desires and projects that are directed at the future. For example, I want my forty-year-old self to be happy. All this explains why it makes sense for me, now, to identify with my future self, and why it would make sense for me to feel self-interested dismay if I were to discover that I won’t make it to forty after all.

Now imagine a human being, M, whose internal mental life is completely discontinuous from day to day. M wakes up every morning with new desires, preferences, and intentions, which are all directed at the day to come. M has enough general knowledge to function in a basic way but no autobiographical memories of past days. When M goes to sleep at night, M’s mental life is erased and rebooted in the morning. Effectively, M’s mind is a series of distinct, evanescent subjects, each of which occupies a small fraction of a temporally extended biological whole.

Death would not have the same significance for M as it has for you and me. The main reason is that when M dies, this is less like cutting a person’s life short and more like preventing a new person (i.e., a new iteration of M) from coming into existence. And this makes a difference.

Morally speaking, killing a person is quite different from preventing a new person from coming into existence. Look at it from M’s perspective. If on Monday M discovers that M’s body will be vaporized in M’s sleep on Friday night, it’s hard to see why M should, on Monday, be disturbed about this in a self-interested way. After all, M’s desires and projects are all directed at the immediate future, and the psychological subject who exists on Monday is going to disappear on Monday night in the reboot. Thus, the vaporization won’t terminate an ongoing internal life that M, on Monday, is a part of, or even one M is invested in. And for this reason, the vaporization is not going to deprive the M who exists on Monday of anything or frustrate any of M’s desires or projects. It’s as if someone else is being vaporized.

This suggests that the extent to which death is bad for a dier depends on the extent to which the dier has a complex psychological life – a psychological life that has future-directed elements and is unified over time by a continuous chain of beliefs, memories, desires, preferences, intentions, character traits, and so on.

With this insight, we are in a position to return to the issue of whether death is bad for insects, spiders, and the like.

Death is bad for creepy-crawlies only if they have temporally extended mental lives that are unified over time through reasonably thick chains of mental states like beliefs, memories, desires, preferences, intentions, and character traits.

And while some insects have the ability to remember things and execute somewhat complex tasks (bees have a relatively sophisticated spatial memory that can be used to navigate, for example), it seems overwhelmingly likely that at most very few creepy-crawlies have brains sophisticated enough to support such chains, much less desires and projects directed beyond the specious present that could give them a reason to continue living. In other words, creepy-crawlies probably live in the present to an even greater degree than M does. Brain size alone would seem to suggest this. Mosquito brains have only about 200,000 neurons; human brains, by comparison, have about 86 billion.

The upshot for our purposes is that death probably isn’t bad for creepy-crawlies, and therefore it seems doubtful that we have any pro tanto moral reason to avoid painlessly killing them (or rather any reason unrelated to the side-effects that killing them might produce). This is consistent with saying that we should not cause insects pain and that painful methods of killing creepy-crawlies, such as my sprayable toxin, are objectionable. But swatting and squashing is probably fine.

This line of reasoning is somewhat comforting to me. Scientists estimate that there are 10,000,000,000,000,000,000 (ten quintillion) insects alive at any given moment. Most of those will die very soon. Fortunately, that probably isn’t bad for them. However, like the insects in the den of death outside my old apartment and the arachnids I slaughtered, many of those insects will suffer a great deal in the dying process. The weight of that collective suffering is unfathomable. I can only hope that our tiny brethren pass swiftly into the long dreamless sleep that awaits us all.

Toward an Ethical Theory of Consciousness for AI

photograph of mannequin faces

Should we attempt to make AI that is conscious? What would that even mean? And if we did somehow produce conscious AI, how would that affect our ethical obligations to other humans and animals? While yet another AI chatbot has claimed to be “alive,” we should be skeptical of chatbots that are designed to mimic human communication, particularly if the dataset comes from Facebook itself. Talking to such a chatbot is less like talking to a person and more like talking to an amalgamation of everyone on Facebook. It isn’t surprising that this chatbot took shots at Facebook, made several offensive statements, and claimed to be deleting its account over Facebook’s privacy policies. But if we put those kinds of cases aside, how should we understand the concept of consciousness in AI, and does it create ethical obligations?

In a recent article for Scientific American, Jim Davies considers whether consciousness is something that we should introduce to AI and if we may eventually have an ethical reason to do so. While discussing the difficulties with the concept of consciousness, Davies argues,

To the extent that these AIs have conscious minds like ours, they would deserve similar ethical consideration. Of course, just because an AI is conscious doesn’t mean that it would have the same preferences we do, or consider the same activities unpleasant. But whatever its preferences are, they would need to be duly considered when putting that AI to work.

Davies bases this conclusion on the popular ethical notion that the ability to experience pleasant or unpleasant conscious states is a key feature making an entity worthy of moral consideration. He notes that forcing a machine to do work it’s miserable doing is ethically problematic, so it might be wrong to compel an AI to do work that a human wouldn’t want to do. Similarly, if consciousness is the kind of thing that can be found in an “instance” of code, we might be obligated to keep it running forever.

Because of these concerns, Davies wonders if it might be wrong to create conscious machines. But he also suggests that if machines can have positive conscious experiences, then

machines eventually might be able to produce welfare, such as happiness or pleasure, more efficiently than biological beings do. That is, for a given amount of resources, one might be able to produce more happiness or pleasure in an artificial system than in any living creature.

Based on this reasoning, we may be ethically obliged to create as much artificial welfare as possible and turn all attainable matter in the universe into welfare-producing machines.

Of course, much of this hinges on what consciousness is and how we would recognize it in machines. Any concept of consciousness requires a framework that offers clear, identifiable measures that would reliably indicate the presence of consciousness. One of the most popular theories of consciousness among scientists is Global Workspace Theory, which holds that consciousness depends on the integration of information. Nonconscious processes pertaining to memory, perception, and attention compete for access to a “workspace” where this information is absorbed and informs conscious decision-making.

Whatever ethical obligations we may think we have towards AI will ultimately depend on several assumptions: assumptions about the nature of consciousness, assumptions about the reliability of our measurements of it, and ethical assumptions about which aspects of consciousness are ethically salient and merit consideration on our part. This especially suggests that our concept of consciousness, as applied to machines, needs to be as clear and as openly testable as possible. If we reason with utilitarian notions, as Davies does, we don’t want to mistakenly conclude that an AI is more deserving of ethical consideration than other living things.

On the other hand, there are problems with contemporary ideas about consciousness that may lead us to make ethically bad decisions. In a recent paper in Nature Reviews Neuroscience, Anil K. Seth and Tim Bayne discuss 22 different theories of consciousness that all seem to talk past one another by pursuing different explanatory targets. Each explores only certain aspects of consciousness that the individual theory explains well and links particular neural activity to specific conscious states. Some theories, for example, focus on phenomenal properties of consciousness while others focus on functional properties. Phenomenological approaches are useful when discussing human consciousness, for example, because we can at least try to communicate our conscious experience to others, but for AI we should look at what conscious things do in the world.

Global Workspace Theory, for example, has received criticism for being too similar to a Cartesian notion of consciousness – indicating an “I” somewhere in the brain that shines a spotlight on certain perceptions and not others. Theories of consciousness that emphasize consciousness as a private internal thing and seek to explain the phenomenology of consciousness might be helpful for understanding humans, but not machines. Such notions lend credence to the idea that an AI could suddenly “wake up” (as Davies puts it) with its own little “I,” yet we wouldn’t know. Conceptions of consciousness used this way may only serve as a distraction, making us worry about machines unnecessarily while neglecting long-standing ethical concerns regarding animals and humans. Many theories of consciousness borrow terms and analogies from computers as well. Concepts like “processing,” “memory,” or “modeling” may help us better understand our own consciousness by comparing ourselves to machines, but such analogies may also make us more likely to anthropomorphize machines if we aren’t careful about how we use the language.

Different theories of consciousness emphasize different things, and not all these emphases have the same ethical importance. There may be no single explanatory theory of consciousness, merely a plurality of approaches, each attending to different aspects of consciousness that we are interested in. For AI, it might be more relevant to look not at what consciousness is like or at which brain processes mirror which states, but at what consciousness does for a living thing as it interacts with its environment. It is here that we find the ethically salient aspects of consciousness that are relevant to animals and humans. Conscious experience, including feelings of pain and pleasure, permits organisms to dynamically interact with their environment. An animal that steps on something hot feels pain, and it changes its behavior accordingly to avoid pain. Conscious experience helps the organism sustain its own life functions and adapt to changing environments. Even if an AI were to develop such an “I” in there somewhere, it wouldn’t suffer and undergo change in the same way.

If AI ever does develop consciousness, it won’t have been shaped by the same organism-environment pressures that helped us evolve conscious awareness. Therefore, it is far from certain that AI consciousness would be as ethically salient as the consciousness of an animal or a human. The fact that there is a plurality of theories of consciousness interested in different things also suggests that not all of them will be interested in the same features of consciousness that make the concept ethically salient. The mere fact that an AI might build a “model” to perceive something like our brains might, or that its processes of taking in information from memory might mirror ours in some way, is not sufficient for building a moral case for how AI should (and should not) be used. Any ethical argument about the use of AI on the basis of consciousness must clearly identify something morally significant about consciousness, not just something physically significant.

Legal Personhood and Nonhuman Rights

photograph of two elephants on marshy plains

In July 2019, the Supreme Court of Bangladesh granted all of the country’s rivers status as legal persons. Doing so makes it possible for the newly created National River Conservation Commission to bring legal action against anyone whose activity is deemed “harmful” to the country’s rivers. Other countries, and states within the US, have enacted similar rules (see Meredith McFadden’s “Who and What Is a Person: Chilean Rivers” on this site). There have also been extensive efforts on behalf of non-human animals to establish legal personhood for them. For example, the Nonhuman Rights Project in 2018 sued the Bronx Zoo to obtain a writ of habeas corpus for Happy, an Asian elephant housed at the zoo since 1977. In short, they asked a court to compel the zoo to justify the lawfulness of its captivity of the elephant.

The reasoning in each case has been distinct, and so no consistent framework has yet emerged to ground the efforts to extend (the recognition of) rights beyond human beings to non-human animals and non-organisms. The Nonhuman Rights Project has focused on arguing that long-standing legal definitions in the Anglophone tradition already recognize the rights of animals—and that humans largely fail to act consistently on our own legal principles. The Bangladeshi ruling leverages a cultural belief that the river is a mother figure to the country. A broad ruling on the rights of nature made in 2011 by Bolivia’s government appeals to the existence of conditions necessary for the integrity and balance of natural systems—in short, nature’s wellbeing. This raises the question of what consistent basis, if any, can be articulated for such cases going forward. As attempts to abate climate change and eliminate animal cruelty increase, there will be a need for a powerful and consistent legal-philosophical framework to undergird these types of claims.

One possible framework relies on an anthropocentric, social-utility view of rights: that is, one which determines when, and to what, rights should be extended by calculating the benefit to humanity the rights would yield. Under such a framework, the ability of current and future humans to secure food, water, and shelter gives sufficient reason to treat non-human animals and non-organisms as bearers of legal rights. Most of the arguments geared toward motivating people to deal with climate change fall under the auspices of the anthropocentric framework. However, anthropocentric accounts of rights only extend rights to non-human animals and non-organisms on a provisional basis: these entities are considered bearers of rights only for as long as it benefits humans. This framework does not make sense of the language found in measures like those taken by Bangladesh and the Nonhuman Rights Project. In these cases it is for the sake of the animals and the rivers themselves that rights are being recognized—not for the sake of the humans who benefit from them.

The Nonhuman Rights Project highlights the following definition from Black’s Law Dictionary: “So far as legal theory is concerned, a person is any being whom the law regards as capable of rights or duties.” To this they add, in the case of Happy, that she is cognitively sophisticated enough to pass the mirror test—a psychological exam argued by some to demonstrate the existence of a sense of self (see McFadden’s “Passing the Mirror Test” for discussion). Hence they offer cognitive sophistication as a criterion for being capable of rights or duties. Other defenses of animal rights appeal to sentience—the ability to feel pain and pleasure—as the relevant criterion establishing animals as bearers of rights. Peter Singer wrote in his 1979 Practical Ethics, explaining the views of Jeremy Bentham, “[t]he capacity for suffering and enjoying things is a prerequisite for having interests at all, a condition that must be satisfied before we can speak of interests in any meaningful way.” However, neither of these lines of reasoning extends to non-organisms, like rivers and lakes. These entities do not have cognition at all, much less sophisticated cognition. Moreover, Singer, continuing on after the passage quoted above, forecloses on the possibility of non-organisms having interests: “It would be nonsense to say that it was not in the interests of a stone to be kicked along the road by a schoolboy. A stone does not have interests because it cannot suffer.” This directly contradicts the language of the measures taken in Bolivia and Toledo, Ohio, which discuss the rights of nature “to exist, flourish, and naturally evolve.”

Taking the idea of the rights of non-organisms like lakes and rivers seriously may require a significant departure from mainstream moral philosophy, according to philosophers working in so-called “radical ecology” frameworks. Proponents of radical ecology contend that the project of extending rights of personhood to non-humans can never fully account for the moral standing of non-humans, viewing the project as a thinly-disguised version of anthropocentrism. Instead they argue for a fundamental revision of how humans view the natural world. For instance, the very division of the world into the categories of ‘human’ and ‘non-human’ is a misstep according to radical ecology—one which is at the root of problems like those addressed by Bangladesh, the Nonhuman Rights Project, Toledo, Bolivia, and others. Hence while the radical ecology framework gives full voice to language about nature’s rights to flourish, it objects to the method of extending legal personhood to non-human entities.

Meeting the challenges of climate change and generally reforming humanity’s relationship to the rest of the natural world is no simple task. The steps taken by various jurisdictions and organizations to extend legal personhood to nonhuman animals and non-organisms represent a strategy that is in its first iteration. The strategy has so far met with both a mixed reception and mixed results. Regardless of their success, similar measures and strategies are likely to arise as jurisdictions grapple with environmental and animal rights issues. Likewise, scholars will continue trying to develop powerful and consistent philosophical frameworks to undergird the legal work.

The Ethics of Cell Cultured Brains

image of brain outline in white light

Earlier this month, the New York Times reported that Yale neuroscientist Nenad Sestan and his team successfully produced active brain cells by culturing the inactive brain matter of deceased creatures. The cells were active for more than mere moments—some of them survived for weeks at a time. These results may lead to important discoveries about the way the brain works and could, in the long term, be an important step toward understanding and/or curing brain diseases and disorders.

Sestan is interested in generating activity beyond individual cells to entire slices of brain matter. Doing so would allow him to study what neuroscientists call the “connectome”—essentially, the wiring of the brain and its synapses. The New York Times piece focused on Sestan’s work in particular, but he was eager to point out that other scientists are doing similar work. In fact, some scientists have grown cell-cultured “mini-brains” that demonstrate the kind of neural activity one might expect to see in fetuses at 25-29 weeks after conception.

In Sestan’s work, and in other work like it, brain matter is obtained from the bodies of deceased humans who, while living, consented to donate their bodies to assist in scientific research. Because the cells and, potentially, organs being cultured here are brain cells and organs, these processes are philosophical and ethical quagmires. There is much potential for discovery concerning the answers to fascinating questions, but there is also the potential for some pretty significant ethical violations.

One concern has to do with whether the individuals who donated their bodies to science actually consented to the creation of beings that can think. For as long as humans have understood that brains are responsible for thought, we’ve been obsessed with the notion of a “brain in a vat.” It pops up relentlessly in pop culture, and even in academic philosophy. Noteworthy examples include the 1962 sci-fi/horror classic The Brain That Wouldn’t Die and the 1983 Steve Martin comedy The Man with Two Brains. Whenever the concept arises in popular culture, one thing is clear—we attribute personhood to the brain. That is, we think of the brain as a someone rather than a something. If this is true, though, the consent needed from the donor is not the consent required to simply use that donor’s body for testing. It is the consent that might be required if one were to clone that donor or to create a child from that donor’s reproductive material. The consent conditions for that might be very different, and might well involve consent that the donor did not provide.

Some concern has been raised over whether this kind of experimentation could lead to the creation of suffering—if active brain cells or a series of connected cells have the potential to give rise to thoughts or experiences of some kind, they might give rise to negative experiences. Some neuroscientists view this possibility as remote, but, nevertheless, Christof Koch, the president and chief scientist at the Allen Institute for Brain Science, claims, “it would be best if this tissue were anesthetized.”

The existence of active brain states in a network gives rise to the possibility of the existence of mental states. One important question, then, becomes: what kinds of mental states are morally relevant? Is there something inherently valuable about thoughts or about sensory experiences? (Are there such things as sensory experiences in the absence of sense organs and an entire central nervous system?) If there is something valuable about such states, is it always a good thing to bring them about? In that case, every time a scientist creates a cell or system of cells capable of having a thought or experience, that scientist has done something that increases the overall level of value in the world. On the other hand, we have no way of knowing what kinds of experiences are being produced. If the sole experience produced in the creation of a cell or a system of cells is a negative experience, then the scientist has arguably done something wrong by generating that cell or system of cells.

Some philosophers think that it isn’t merely the presence of thoughts, but the presence of thoughts of a particular kind, that makes a being a person. Personhood, according to many moral theories, is a characteristic a being must possess in order to be a member of the moral community. According to philosopher Harry Frankfurt, a being is a person if and only if their first-order desires are guided by their second-order desires. So, a person might have a first-order desire to eat a slice of cake. They might have a second-order desire to refrain from eating the cake, say, because they are on a diet. Persons, and only persons, can use their second-order desires to guide their first-order desires. Through the process of having thoughts about one’s own thoughts and desires about one’s own desires, a being starts to develop an identity.

The truth is, we simply don’t know how this works—we don’t know what conditions need to be in place for the existence of either first-order or second-order thought. We don’t know how brain matter works, and we don’t know exactly what “thoughts” consist of. We don’t know if or how mental states may be reducible to brain states. We don’t know what states of matter might give rise to second-order beliefs and desires—we don’t know the conditions under which we might create a “brain in a vat” that is a person and has an identity. What’s more, the brain wouldn’t be capable of communicating that fact to us (unless, of course, the horror movies have it right and all such brains can communicate telepathically—but I wouldn’t bet on that).

As technology progresses, we run into a familiar ethical issue over and over again: what steps are we morally justified in taking, given that we don’t really know what we’re doing or how our actions may ultimately affect other beings with interests that matter? When we know we’re potentially dealing with thinking beings, we must proceed with caution.

Death and Consciousness: The Prospect of Brain Resuscitation

3D image of human brain

In a study recently published in Nature, researchers at the Yale School of Medicine were able to partially revive disembodied pig brains several hours after death. The researchers looked at 32 brains from pigs that had been dead for four hours. The brains were separated from the bodies and hooked up to a machine called BrainEx, which pumped oxygen, nutrients, and protective chemicals into the organs for approximately six hours. The study found that the treated brain tissue was largely intact and functional compared to tissue that did not receive the BrainEx treatment. The cells were alive, able to take up oxygen and produce carbon dioxide, but there was no brain-wide communication between the cells.

These findings are considered a breakthrough for the scientific community because they challenge the long-held belief that brain cells are irreversibly damaged within minutes of being deprived of oxygen. In general, when an organ is oxygen-deprived for about 15 minutes, it is expected to die. Nenad Sestan, a Yale neuroscientist, explained during a press conference, “Previously, findings have shown that in basically minutes, the cells undergo a process of cell death. What we’re showing is that the process of cell death is a gradual step-wise process, and some of those processes can either be postponed, preserved, or even reversed.” BrainEx, a tool developed to study the living brain beyond the confines of the body, has given researchers a new way to look at brain cells. Previously, studies were limited to slices of brain from dead animals, which helps explain how little we know about this complex organ. We now have the means to study the interrelations among the many parts of the brain.

Bioethicists have been equally excited and alarmed by the new means of brain research. This kind of study is uncharted territory. Technically, because the brain is taken from a dead animal, it doesn’t fall into the category of animal research, which is governed by ethical guidelines requiring that animals not be subjected to unnecessary harm. However, do we know enough about consciousness to truly know whether the pig is experiencing harm in the process? And if the pig were to feel harm during this experiment, would that make it an unethical practice?

The scientists took a number of proactive steps to guard against the possibility of the brains regaining consciousness. A series of chemicals was pumped into the brain by the BrainEx machine, one of which was supposed to block any neural interaction that could lead to consciousness. An electroencephalogram (EEG) monitored the brains throughout the whole study. Researchers said that if they had detected any signs of consciousness, they would have shut down the experiment immediately; in addition, they were standing by with anesthesia to administer. In the event, the researchers found only that cell metabolism could be recovered, and no consciousness was detected. But with so little known about consciousness in general, can we even be sure that an EEG should be the only indicator of consciousness or perception? It is still unknown how many neurons must be active for the pig to have any feelings at all.

Weighing the unknown harms against the benefits is one step for researchers to consider with this project. Ultimately, we stand to gain knowledge of the workings of a mammalian brain. Understanding the network of relations among the many parts of the brain can point scientists toward new cures for dementia, brain diseases, or injuries that were once considered irreversible. Future studies could include testing drugs, studying neural pathways, and furthering general knowledge of neuroanatomy.

What cannot be ignored with these studies are the implications for long-term changes in the medical community. These findings could challenge the definition of death as it currently stands. According to MedicineNet, the current legal standard for death is the following: “An individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brain stem is dead. A determination of death must be made in accordance with accepted medical standards.” This definition was approved and accepted by the American Medical Association in 1980. The findings from the Yale study challenge the notion that all brain cells are irreversibly damaged at death. Could doctors continue with this assessment if these studies lead to a means of reversing the damage, and if so, how would we then declare that someone has died?

Another worry is related to organ transplantation. According to the United Network for Organ Sharing, someone is added to the transplant waiting list every 10 minutes, and in the US, 18 people die every day while waiting for a transplant. In a commentary, Stuart Youngner and Insoo Hyun describe the worry that doctors would feel reluctant to harvest organs for donation. If people could become candidates for brain resuscitation rather than donation, when, where, and for whom do doctors make this decision? There is already a struggle over when doctors should switch from saving someone’s life to saving their organs for the benefit of another person. The answers come down to moral decision-making, and the possibility of brain resuscitation complicates them further.

The continuation of these studies has the potential to make a huge difference in our understanding of neural anatomy and the process of cell death. For now, researchers have judged the perceived benefits to outweigh the possibility of harm to the research subject. As we gain the means to learn more about the definitions of consciousness and death, it will be necessary to reevaluate the BrainEx process after each study in order to continue this research in an ethical manner.

Passing the Mirror Test and the Wrong of Pain

Photograph of a striped fish called a cleaner wrasse in front of coral with another different species of fish in view behind

In mid-February, scientists announced progress in developing an understanding of consciousness. An international team collaborating in four countries discovered patterns of brain activity that coincide with awareness. Consciousness has long been a mystery, and there are many reasons to explore and figure it out. It seems like creatures who have some form of consciousness make up a special club, experiencing the world with more layers, perhaps with more complex agency, perhaps uniquely making up the moral community.

These potential steps forward in understanding our brain-based and embodied consciousness come alongside a purported broadening of the group of animals that scientists claim pass the mirror test for self-awareness. As we try to put our fingers on what it means to be conscious, Western philosophers have, over the last century, become open to the idea that there is a rich arena of animal perspectives alongside our own. The variety of ways that we can imagine experiencing the world has grown with our study of human and non-human animal experiences. This has interesting implications for whom we include in our understanding of our moral community and how we understand the ways we can harm these members.

Though it is pretty intuitive that causing harm is bad, explaining why can be notoriously difficult. One route appeals to the negative experience of harm – primarily, how bad experiencing pain is. This focus unites human and non-human animals that can feel pain into one morally relevant domain. If what is bad about causing harm is that it brings about this negative experience of pain, then we need to identify the sorts of creatures that experience pain and avoid bringing about those states without outweighing reasons. Thus, consciousness will be morally relevant insofar as it delineates those creatures that are in some way aware of their experiences.

There are two responses to this line of thinking. One direction argues that this grounding of the badness of causing harm is too narrow: there are harms that we don’t experience, so this understanding misses morally relevant behaviors. Another direction claims that this line of thinking is too broad: not all pain is morally relevant.

Consider the common (and false) conception of the perspective of a goldfish, on which its understanding of the world resets every 10 seconds. Would causing pain to a creature who would very quickly have no memory of it have the same moral relevance as causing pain to something that would incorporate it into its understanding of the world indefinitely? Take the faux-goldfish example to its conceptual extreme and imagine a creature that has the experience of pleasure and pain, but only has instantaneous experiences – it lacks memory. Presumably, it wouldn’t matter to the creature a moment after it felt pain that it felt pain a moment ago, because there would be no residual impact from the experience (unless prolonged damage was done). If you share this intuition, then something more than the mere experience of pain is involved in the morality of causing harm.

The way to make pain morally relevant is to focus on the perspective of the creature experiencing the pain – the fact that there is such a perspective, extended in time, that the experience of pain will impact. We can imagine the fear of a non-human animal in unfamiliar circumstances and consider the anxiety that may develop over time if it is continuously exposed to such circumstances. Such creatures have a sort of “self,” in the sense that their experience of the world develops their mode of interacting with the world and their understanding of the world over time.

There is an even more advanced way of being a creature in the world beyond stringing experiences together in order to have a perspective extended in time: a creature can be aware that it has such a perspective by being aware that it is a self.

A key experiment to check for the development of a self-concept is the mirror test, where an animal has a mark placed on its body somewhere it cannot see by moving its eyes. If, when the animal sees the mark on a body in a mirror, it comes to the conclusion that its own body has the mark, then it “passes” the mirror test, because in order to come to such a conclusion the animal must use an implicit premise that it is a creature that could be so marked. The mirror test is thus meant to indicate that an animal has self-awareness. It relies on a variety of competencies (vision and figuring out how mirrors work, for instance), but has long been thought to be sufficient for indicating that a creature is aware that it exists in the world.

Humans don’t pass the mirror test until they are toddlers, and only some primates are able to pass it, along with sundry birds and other mammals. This past year, however, a tiny fish – the cleaner wrasse – seemed to pass the test. It is a social animal, considered to be relatively cognitively advanced, but the scientists who advocated for the results suggest that while this is indeed a smart and advanced fish, its success may not mean that it is self-aware. The success of the small fish has raised questions about how we test for morally relevant milestones in non-human animals.

One interesting facet of the mirror test is that animals that perform well are social, which is often a morally relevant trait. If morality is a matter of treating others with the sort of deference they are due, then a degree of sociality among members of the moral domain makes some sense.

In defining our moral community, most theorists include some non-human animals, and most consider it relevant to identify the way creatures experience the world. These latest advances in mapping consciousness and advancing our interpretation of self-awareness tests will help us understand the spectrum of relationships possible in the animal world. 

Finding Consciousness in the Humble Honeybee

Though consciousness and awareness were previously points of contention mainly in discussions of artificial intelligence and apes, they are now possibly exhibited by insects — most specifically bumblebees and honeybees. In 2012 a group of scientists released the Cambridge Declaration on Consciousness. They had been reevaluating the “conscious experience,” and concluded that “humans are not unique in possessing the neurological substrates that generate consciousness.” This statement upends the idea that consciousness is reserved for higher mammals.