
Sentience and the Morality of Lab-Grown Brains

[Image: photograph of human brains in glass jars with formaldehyde]

Stories of lab-grown brains might seem like the preserve of Halloween tales. But in early October, a group of researchers writing in the journal Neuron claimed to have grown the world’s first “sentient” lab-grown brain – a creation they dubbed DishBrain. Lab-grown “brains” are nothing new – they were first produced in 2013 to study microcephaly. This is the first time, however, that such a brain has been exposed to external stimuli – in this case, a version of the video game Pong. The Neuron publication outlines how scientists grew a collection of 800,000 brain cells from human stem cells and mouse embryonic cells, then connected this “brain” to the video game via electrodes. The cells responded, learning how to play the game in around five minutes. While its mastery of the game wasn’t perfect, its rate of success was well above random chance.

Many of us have a finely-honed “ew yuck” response that is triggered by these kinds of cases. But of course, being disgusted by something is not an ethical argument. Still, our distaste might signify something morally important. This is what philosopher Leon Kass famously referred to as the “wisdom of repugnance.”

So why might these lab-grown brains disgust us? We can start by considering what’s novel about DishBrain – that is, its claimed sentience. This is a notoriously ambiguous term. In many science fiction stories, “sentience” is used as shorthand for “consciousness” or “self-awareness.” Marvin the Paranoid Android, for example, might be described this way – exhibiting the capacity to question his own existence, experiencing bouts of depression and boredom, and even having the ability to compose lullabies. Often, this same understanding of sentience will be used to distinguish between different kinds of alien lifeforms – with the status of “sentience” being used to differentiate intelligent, communicative beings from other, more primitive alien animals.

In ethical discussions, however, sentience is defined more narrowly. Derived from the Latin sentientem (a feeling), sentience is used to refer exclusively to the ability to feel pain and pleasure. If something has such an ability, it will have sentience.

On this narrower definition, a highly intelligent robot that is nevertheless incapable of experiencing pain will not be sentient, while an unintelligent animal that can experience pain will be.

I recently discussed the moral importance of this kind of sentience in light of the revelation that insects might feel pain. Why is it so important? Because anything with interests is morally relevant in our ethical decision making, and – as philosopher Peter Singer argues – if something can experience pleasure, then it has an interest in pursuing pleasure. Likewise, if something can experience pain, then it has an interest in avoiding pain. If some living being experiences suffering, then there can be no moral justification for refusing to take that suffering into account.

Return, then, to the case of DishBrain. Suppose that – as its creators claim – this lab-grown brain has sentience. On the narrow definition above, this would mean that DishBrain could experience pain and pleasure. If this were the case, it might go some way towards explaining our repugnance at the experiment.

While playing Pong for hours on end might not be a truly painful experience, being created solely for this purpose sounds like a life utterly devoid of any real pleasure. You or I certainly wouldn’t want to go through such a miserable existence.

Given this – and given Singer’s argument regarding sentience – it would be morally wrong to inflict this kind of life on someone (or something) else.

Fortunately, however, DishBrain doesn’t seem to possess sentience of this kind. In the absence of sensory receptors and a complex nervous system, it seems unlikely that DishBrain is capable of experiencing anything like pain or pleasure. Given this, there’s little reason to worry about this experiment falling afoul of an argument like Singer’s.

But is pain and pleasure all that is morally relevant? Consider, for example, an individual who suffers from congenital analgesia – a rare condition in which someone is unable to experience pain. Would it be morally permissible to inflict a battery of painful experiments on this person, justified on the basis that they will experience no pain as a result? It would seem not. And this suggests that something more than pain and pleasure might matter to our considerations of how we should treat other beings.

Perhaps this is where the alternative conception of sentience – referring to things that are capable of self-awareness – is useful. The capacity for this kind of sentience also seems morally important.

We might, for example, adopt something like the Kantian notion that any self-aware being should be treated as an end in itself – not as a means to some other end. This might be why we believe it would still be morally wrong to carry out painful experiments on someone who is incapable of experiencing pain.

Fortunately, lab-grown brains don’t seem to be sentient in this way either. DishBrain isn’t self-aware. It’s merely receiving input and providing output, much like a computer – or even something as rudimentary as a mechanical slot machine – might do.

There’s a warning here, however. Sentience – whether understood as (i) the ability to experience pain and pleasure, or (ii) the capacity for self-awareness – carries enormous moral weight. While DishBrain might (contra the claims of its creators) currently lack sentience, creating further iterations of lab-grown brains that do possess real sentience would be enormously problematic. Our repugnance at this – our “ew yuck” reaction – would then have a solid moral foundation.

The Ethics of Cell Cultured Brains

[Image: brain outline in white light]

Earlier this month, the New York Times reported that Yale neuroscientist Nenad Sestan and his team successfully produced active brain cells through a process of culturing the inactive brain matter of deceased creatures. The cells were active for more than mere moments—some of them survived for weeks at a time. These results may lead to important discoveries about the way the brain works, and could, in the long term, be an important step to understanding and/or curing brain diseases and disorders.

Sestan is interested in generating activity beyond individual cells to entire slices of brain matter. Doing so would allow him to study what neuroscientists call the “connectome”—essentially, the wiring of the brain and its synapses. The New York Times piece focused on Sestan’s work in particular, but he was eager to point out that other scientists are doing similar work. In fact, some scientists have cultured “mini-brains” that demonstrate the kind of neural activity one might expect to see in fetuses at 25-29 weeks after conception.

In Sestan’s work, and in other work like it, brain matter is obtained from the bodies of deceased humans who, while living, consented to donate their bodies to assist in scientific research. Because the cells and, potentially, organs being cultured here are brain cells and organs, these processes are philosophical and ethical quagmires. There is much potential for discovery concerning the answers to fascinating questions, but there is also the potential for some pretty significant ethical violations.

One concern has to do with whether the individuals who donated their bodies to science actually consented to the creation of beings that can think. For as long as humans have understood that brains are responsible for thought, we’ve been obsessed with the notion of a “brain in a vat.” It pops up relentlessly in pop culture, and even in academic philosophy. Noteworthy examples include the 1962 sci-fi/horror classic The Brain That Wouldn’t Die and the 1983 Steve Martin comedy The Man with Two Brains. Whenever the concept arises in popular culture, one thing is clear—we attribute personhood to the brain. That is, we think of the brain as a someone rather than a something. If this is true, though, the consent needed from the donor is not the consent required simply to use that donor’s body for testing. It is the consent that might be required if one were to clone that donor or to create a child from that donor’s reproductive material. The consent conditions for that could be very different, and could well be consent that the donor never provided.

Some concern has been raised over whether this kind of experimentation could lead to the creation of suffering—if active brain cells or a series of connected cells have the potential to give rise to thoughts or experiences of some kind, they might give rise to negative experiences. Some neuroscientists view this possibility as remote, but, nevertheless, Christof Koch, the president and chief scientist at the Allen Institute for Brain Science, claims, “it would be best if this tissue were anesthetized.”

The existence of active brain states in a network gives rise to the possibility of the existence of mental states. One important question, then, becomes: what kinds of mental states are morally relevant? Is there something inherently valuable about thoughts or about sensory experiences? (Are there such things as sensory experiences in the absence of sense organs and an entire central nervous system?) If there is something valuable about such states, is it always a good thing to bring them about? In that case, every time a scientist creates a cell or system of cells capable of having a thought or experience, that scientist has done something that increases the overall level of value in the world. On the other hand, we have no way of knowing what kinds of experiences are being produced. If the sole experience produced in the creation of a cell or a system of cells is a negative experience, then the scientist has arguably done something wrong by generating that cell or system of cells.

Some philosophers think that it isn’t merely the presence of thoughts, but the presence of thoughts of a particular kind, that makes a being a person. Personhood, according to many moral theories, is a characteristic a being must possess in order to be a member of the moral community. According to philosopher Harry Frankfurt, a being is a person if and only if their first-order desires are guided by their second-order desires. So, a person might have a first-order desire to eat a slice of cake. They might have a second-order desire to refrain from eating the cake, say, because they are on a diet. Persons, and only persons, can use their second-order desires to guide their first-order desires. Through the process of having thoughts about one’s own thoughts and desires about one’s own desires, a being starts to develop an identity.

The truth is, we simply don’t know how this works—we don’t know what conditions need to be in place for the existence of either first-order or second-order thought. We don’t know how brain matter works, and we don’t know exactly what “thoughts” consist of. We don’t know if or how mental states may be reducible to brain states. We don’t know what states of matter might give rise to second-order beliefs and desires—we don’t know the conditions under which we might create a “brain in a vat” that is a person and has an identity. What’s more, the brain wouldn’t be capable of communicating that fact to us (unless, of course, the horror movies have it right and all such brains can communicate telepathically—but I wouldn’t bet on that).

As technology progresses, we run into a familiar ethical issue over and over again: what steps are we morally justified in taking, given that we don’t really know what we’re doing or how our actions may ultimately affect other beings with interests that matter? When we know we’re potentially dealing with thinking beings, we must proceed with caution.

Death and Consciousness: The Prospect of Brain Resuscitation

[Image: 3D rendering of a human brain]

In a study recently published in Nature, researchers at the Yale School of Medicine revived disembodied pig brains several hours after death. The team looked at 32 brains from pigs that had been dead for four hours. The brains were separated from the body and hooked up to a machine called BrainEx, which pumped oxygen, nutrients, and protective chemicals into the organs for approximately six hours. The study found that the brain tissue remained largely intact and functional compared to brains that did not receive the BrainEx treatment. The cells were alive, able to take up oxygen and produce carbon dioxide, but there was no broader communication between the cells.

These findings are considered a breakthrough for the scientific community because they challenge the long-held assumption that brain cells are irreversibly damaged within minutes of oxygen deprivation. In general, when an organ is deprived of oxygen for about 15 minutes, it dies. Nenad Sestan, a Yale neuroscientist, explained during a press conference: “Previously, findings have shown that in basically minutes, the cells undergo a process of cell death. What we’re showing is that the process of cell death is a gradual step-wise process, and some of those processes can either be postponed, preserved, or even reversed.” BrainEx, a tool developed to study the living brain beyond the confines of the body, has given researchers a new way to look at brain cells. Previous studies were limited to slices of brain taken from dead animals, which partly explains how little we know about this complex organ. We now have the means to study the interrelations among the brain’s many parts.

Bioethicists have been equally excited and alarmed by this new means of brain research. This kind of study is uncharted territory. Technically, because the brain is taken from a dead animal, it doesn’t fall into the category of animal research, which is governed by ethical guidelines requiring that animals not be subjected to unnecessary harm. But do we know enough about consciousness to say whether the pig experiences harm in the process? And if the pig were to experience harm during the experiment, would that make it an unethical practice?

The scientists took a number of proactive steps to guard against the possibility of the pig brains gaining consciousness. The BrainEx machine pumped a series of chemicals into the brains, one of which was intended to block any neural interaction that could give rise to consciousness. An electroencephalogram (EEG) monitored the brains throughout the study, and the researchers said that if they detected any sign of consciousness, they would shut down the experiment immediately. In addition, they stood by with anesthesia to administer. In the end, the only finding was that cell metabolism could be recovered; no consciousness was detected. But with so little known about consciousness in general, can we be sure that an EEG should be the only indicator of consciousness or perception? It is still unknown how many neurons would need to be active for the pig to feel anything at all.

Weighing the cost of the unknown harm with the benefits is one step for researchers to consider with this project. Ultimately, we will gain expertise of the interactions of a mammalian brain. Understanding the internetwork of relations between the many parts of the brain can point scientists towards new cures for dementia, brain diseases, or injuries that were once considered irreversible. Future studies can include testing drugs, studying neural pathways, and furthering general knowledge of neuroanatomy.

What cannot be ignored are these studies’ implications for long-term changes in the medical community. The findings could challenge the definition of death as it currently stands. According to MedicineNet, the current legal standard for death is the following: “An individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brain stem, is dead. A determination of death must be made in accordance with accepted medical standards.” This definition was approved and accepted by the American Medical Association in 1980. The Yale findings challenge the notion that all brain cell death is irreversible. Could doctors continue with this assessment if these studies lead to a means of reversing the damage, and if so, how do we then declare that someone has died?

Another worry relates to organ transplantation. According to the United Network for Organ Sharing, someone is added to the transplant waiting list every 10 minutes, and in the US, 18 people die every day while waiting for a transplant. In a commentary, Stuart Youngner and Insoo Hyun describe the worry that doctors may become reluctant to harvest organs for donation. If patients could become candidates for brain resuscitation rather than donation, when, where, and for whom do doctors make that decision? Doctors already struggle with when to switch from saving someone’s life to preserving their organs for the benefit of another person. The answers come down to moral decision making, and the possibility of brain resuscitation further complicates them.

The continuation of these studies has the potential to greatly advance our understanding of neural anatomy and the process of cell death. For now, researchers have judged the perceived benefits to outweigh the possibility of harm to the research subject. But as we learn more about the definitions of consciousness and death, it will be necessary to reevaluate the BrainEx process after each study in order to continue the research in an ethical manner.