
Sentience and the Morality of Lab-Grown Brains

photograph of human brains in glass jars with formaldehyde

Stories of lab-grown brains might seem like the preserve of Halloween tales. But in early October, a group of researchers writing in the journal Neuron claimed to have grown the world’s first “sentient” lab-grown brain – a creation they dubbed DishBrain. Lab-grown “brains” are nothing new; they were first produced in 2013 to study microcephaly. This is the first time, however, that such a brain has been exposed to external stimuli – in this case, a version of the video game Pong. The Neuron publication outlines how scientists grew 800,000 brain cells from human stem cells and mouse embryonic cells, then connected this brain to the video game via electrodes. The cells responded – learning how to play the game in around five minutes. While its mastery of the game wasn’t perfect, its rate of success was well above random chance.

Many of us have a finely-honed “ew yuck” response that is triggered by these kinds of cases. But of course, being disgusted by something is not an ethical argument. Still, our distaste might signify something morally important. This is what philosopher Leon Kass famously referred to as the “wisdom of repugnance.”

So why might these lab-grown brains disgust us? We can start by considering what’s novel about DishBrain – that is, its claimed sentience. This is a notoriously ambiguous term. In many science fiction stories, “sentience” is used as shorthand for “consciousness” or “self-awareness.” Marvin the Paranoid Android, for example, might be described this way – exhibiting the capacity to question his own existence, experiencing bouts of depression and boredom, and even having the ability to compose lullabies. Often, this same understanding of sentience is used to distinguish between different kinds of alien lifeforms – with the status of “sentience” differentiating intelligent, communicative beings from other, more primitive alien animals.

In ethical discussions, however, sentience is defined more narrowly. Derived from the Latin sentientem (a feeling), sentience is used to refer exclusively to the ability to feel pain and pleasure. If something has such an ability, it will have sentience.

On this narrower definition, a highly intelligent robot that is nevertheless incapable of experiencing pain will not be sentient, while an unintelligent animal that can experience pain will be.

I recently discussed the moral importance of this kind of sentience in light of the revelation that insects might feel pain. Why is it so important? Because anything with interests is morally relevant in our ethical decision making, and – as philosopher Peter Singer argues – if something can experience pleasure, then it has an interest in pursuing pleasure. Likewise, if something can experience pain, then it has an interest in avoiding pain. If some living being experiences suffering, then there can be no moral justification for refusing to take that suffering into account.

Return, then, to the case of DishBrain. Suppose that – as its creators claim – this lab-grown brain has sentience. On the narrow definition above, this would mean that DishBrain could experience pain and pleasure. If so, this might go some way towards explaining our repugnance regarding the experiment.

While playing Pong for hours on end might not be a truly painful experience, being created solely for this purpose sounds like a life utterly devoid of any real pleasure. You or I certainly wouldn’t want to go through such a miserable existence.

Given this – and given Singer’s argument regarding sentience – it would be morally wrong to inflict this kind of life on someone (or something) else.

Fortunately, however, DishBrain doesn’t seem to possess sentience of this kind. In the absence of sensory receptors and a complex nervous system, it seems unlikely that DishBrain is capable of experiencing anything like pain or pleasure. Given this, there’s little reason to worry about this experiment falling afoul of an argument like Singer’s.

But is pain and pleasure all that is morally relevant? Consider, for example, an individual who suffers from congenital analgesia – a rare condition in which someone is unable to experience pain. Would it be morally permissible to inflict a battery of painful experiments on this person, justified on the basis that they will experience no pain as a result? It would seem not. And this suggests that something more than pain and pleasure might matter to our considerations of how we should treat other beings.

Perhaps this is where the alternative conception of sentience – referring to things that are capable of self-awareness – is useful. The capacity for this kind of sentience also seems morally important.

We might, for example, adopt something like the Kantian notion that any self-aware being should be treated as an end in itself – not as a means to some other end. This might be why we believe it would still be morally wrong to carry out painful experiments on someone who is incapable of experiencing pain.

Fortunately, lab-grown brains don’t seem to be sentient in this way either. DishBrain isn’t self-aware. It’s merely receiving input and providing output; much like a computer – or even something as rudimentary as a mechanical slot machine – might do.

There’s a warning here, however. Sentience – whether understood as (i) the ability to experience pain and pleasure, or (ii) the capacity for self-awareness – carries enormous moral weight. While DishBrain might (contra the claims of its creators) currently lack sentience, creating further iterations of lab-grown brains that do possess real sentience would be enormously problematic. Our repugnance at this – our “ew yuck” reaction – would then have a solid moral foundation.

Treating Psychopathy?

photograph of 1896 sketch of the brain and its parts

Most of us are familiar with the idea of the psychopath – emotionally vacant, devoid of empathy, and possessing poor behavioral control. Despite psychopathy not being a recognized mental condition in its own right (or at least, not in that exact terminology), as personality disorders go, it is almost undoubtedly archetypal. Many of the names we attach to the idea of evil certainly qualify for the label, including Ted Bundy, Charles Manson, Jeffrey Dahmer, and David Berkowitz. Beyond the real world, the psychopath is also a staple of fiction, with some of the most heartless villains being written with the disorder in mind, including Hannibal Lecter, Annie Wilkes, Patrick Bateman, and Norman Bates.

However, despite these colorful examples, not all psychopaths stalk the night looking for victims. Most real-life psychopaths navigate the world without making the headlines for slaughtering the innocent or starting nationwide manhunts. For every John Wayne Gacy, there are countless more who, while being manipulative and callous, get through their lives without turning their neighbors into a rug. Indeed, estimates place the prevalence of severe psychopathy in the general population at around 1%. That means, statistically speaking, you probably know a psychopath. And while this is the general prevalence, certain segments of society appear to have more psychopaths than others, such as those in corporate leadership positions (≈12%) and prisons (≈20-30%).

Several studies examining the brains of psychopaths have found that they appear to have abnormal neurological structures and functionality. Specifically, the areas of the brain associated with empathy are underdeveloped and show below-average responsiveness to external stimuli. Some suggest that this is evidence of a neurological basis for psychopathy and that the abnormal brain structure is why psychopaths behave in the way they do. Following this, others argue that, if possible, we might be justified in using medical techniques and technologies, such as neurosurgery, to alter the brains of criminally violent psychopaths, thereby removing the psychopathy and instilling a level of empathy previously absent.

But can medical techniques reducing or eliminating psychopathic tendencies be justified, or are we medicalizing a group of people out of existence to satisfy societal desire?

We often think that medical interventions, be they as minor as a course of antibiotics or as radical as brain surgery, should only occur when said intervention benefits that person. This requirement is one of the central components separating treatment from research; the former benefits the individual while the latter benefits society (and maybe the individual). While there are exceptions to this rule of thumb – living organ donation, for example – the idea that medical treatment must have some individual benefit is both widely accepted and intuitively appealing. For instance, it would be unjustifiable for a surgeon to operate on you if that operation knowingly provided no beneficial outcome. The idea of an intimate link between a medical treatment’s justification and its potential for a positive result is one of the central pillars underlying one of the most influential theories in medical ethics – principlism.

As conceptualized by philosophers Thomas L. Beauchamp and James F. Childress, and formulated in their book Principles of Biomedical Ethics (now in its 8th edition), beneficence is one of the four fundamental concerns when it comes to the ethical permissibility of medical interventions; the others being autonomy, non-maleficence, and justice. According to Beauchamp and Childress, each principle is equally important when looking for ‘reflective equilibrium’ (a coherence between the principles). However, here we’ll focus on beneficence, and specifically positive beneficence, which requires persons to provide benefits wherever possible.

So, would treating psychopathy have a beneficial effect on the psychopath?

This question can be broken down into two parts: (i) do psychopaths suffer as a direct result of their psychopathy? And (ii) do psychopaths suffer as an indirect result of their psychopathy?

Whether psychopaths suffer as a direct result of their psychopathy is, to a degree, disputed for several reasons.

First, it is unclear whether psychopathy is an illness or disease at all. While we might think it causes people to act in less than desirable ways from a social standpoint, this is very different from claiming that psychopathy represents an impairment to the health of the person who has it. If the disorder has a social (rather than medical) basis, then it would seem highly inappropriate to use medical techniques to remedy what is merely socially disvalued.

Second, even if we accept that psychopathy is predominately medical in nature, that doesn’t mean that its removal would provide a direct benefit. This is because the psychopath would need to experience relief from suffering in a subjective sense for such a direct benefit to occur. Much like how taking a painkiller can’t ease your suffering if you’re not in pain, psychopathy’s removal cannot provide the individual with relief if it didn’t cause suffering in the first place. From the evidence available, it’s not clear whether psychopathy does cause direct suffering. Unlike having a broken bone or terrifying delusions, there’s no clear causal line between psychopathy and suffering. Just because psychopathy is a disorder doesn’t mean it is harmful.

However, psychopaths don’t exist in a vacuum. Like all of us, they’re situated in the world around them, alongside its complex social, economic, religious, educational, and legal systems. And psychopathy might cause suffering by separating the individual from those systems and, more generally, from society. For example, I suspect many of us would experience suffering if we went to prison for committing a crime. This type of suffering exists regardless of our personality, whether ordered or disordered, because prisons are subjectively unpleasant environments that frustrate our life plans. This is just as true for psychopaths as for everyone else; psychopaths generally don’t want to go to jail. So, by eliminating the root cause of psychopathy, we might be able to prevent psychopaths from being sentenced to prison and thus help them avoid the indirect suffering they would otherwise experience.

This line of argument applies beyond prisons, though. Without their psychopathic tendencies, such persons might be better equipped to engage with society, form meaningful connections with others, and empathize with the rest of humanity.

However, appeals to the avoidance of such indirect suffering rarely justify medical treatments in other contexts, especially when the therapy offered has the potential to alter one of the foundations of a person’s personality. For example, we wouldn’t think that someone who lives on the street as a matter of personal preference should have their mental state altered simply because their choice results in ostracization from society’s mainstream.

We might think their choice is odd, and we might try to convince them that they would be better off living another way of life. But this is very different from using their disordered lifestyle as a justification for a medical procedure based on the idea of harm prevention and reduction.

So what does this mean for our psychopaths? Other arguments might justify medical intervention. For example, it could be that removing psychopathy may restore that person’s autonomous decision-making (although psychopathy’s coercive potential is disputed). One might argue that, as psychopathy is so prevalent in prison populations, its elimination from that sector of society might reduce the pressure on valuable social resources (although this opens up a can of worms regarding the value of autonomy vs the interests of the state).

At the end of the day, if the availability of psychopath-centric media is any indication, the question of how society handles psychopaths isn’t going away anytime soon, and neither are the psychopaths.

Death and Consciousness: The Prospect of Brain Resuscitation

3D image of human brain

In a study recently published in Nature, researchers at the Yale School of Medicine revived disembodied pig brains several hours after death. They examined 32 brains from pigs that had been dead for four hours. The brains were separated from the bodies and hooked up to a machine called BrainEx, which pumped oxygen, nutrients, and protective chemicals into the organs for approximately six hours. The study found that the treated brain tissue remained largely intact and functional compared to brains that did not receive the BrainEx treatment. The cells were alive – able to take up oxygen and produce carbon dioxide – but there was no broader communication between the cells.

These findings are considered a breakthrough for the scientific community because they challenge the long-held assumption that brain cells are irreversibly damaged within minutes of oxygen deprivation. In general, when an organ is deprived of oxygen for about 15 minutes, it should die. Nenad Sestan, a Yale neuroscientist, explained during a press conference: “Previously, findings have shown that in basically minutes, the cells undergo a process of cell death. What we’re showing is that the process of cell death is a gradual step-wise process, and some of those processes can either be postponed, preserved, or even reversed.” BrainEx, a tool developed to study the living brain beyond the confines of the body, has given researchers a new way to look at brain cells. Previous studies were limited to slices of brain from dead animals, which partly explains how little we know about this complex organ. We now have the means to study the interrelations between the many parts of the brain.

Bioethicists have been equally excited and alarmed by this new avenue of brain research. Studies of this kind are uncharted territory. Technically, because the brain is taken from a dead animal, the work doesn’t fall into the category of animal research, which is governed by ethical guidelines requiring that animals not be subjected to unnecessary harm. However, do we know enough about consciousness to truly know whether the pig is experiencing harm in the process? And if the pig were to feel harm during the experiment, would that make it an unethical practice?

The scientists took a number of proactive steps to guard against the possibility of the brains regaining consciousness. A series of chemicals was pumped into the brains by the BrainEx machine, one of which was intended to block any neural interaction that could lead to consciousness. An electroencephalogram (EEG) monitored the brains throughout the study, and the researchers said that if they had detected any sign of consciousness, they would have shut down the experiment immediately; they were also standing by with anesthesia to administer. In the end, the only finding was that cell metabolism could be recovered – no consciousness was detected. But with so little known about consciousness in general, can we be sure that an EEG should be the only indicator of consciousness or perception? It is still unknown how many neurons must be active for the pig to feel anything at all.

Weighing the unknown harms against the benefits is one step for researchers to consider with this project. Ultimately, we stand to gain expertise in the workings of the mammalian brain. Understanding the network of relations between its many parts could point scientists towards new cures for dementia, brain diseases, or injuries that were once considered irreversible. Future studies could include testing drugs, studying neural pathways, and furthering general knowledge of neuroanatomy.

What cannot be ignored are these studies’ implications for long-term changes in the medical community. The findings could challenge the definition of death as it currently stands. According to MedicineNet, the current legal standard for death is the following: “An individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brain stem, is dead. A determination of death must be made in accordance with accepted medical standards.” This definition was approved and accepted by the American Medical Association in 1980. The findings from the Yale study challenge the notion that all brain cells are irreversibly damaged within minutes. Could doctors continue to rely on this standard if these studies lead to a means of reversing the damage – and if so, how do we declare when someone has died?

Another worry relates to organ transplantation. According to the United Network for Organ Sharing, someone is added to the transplant waiting list every 10 minutes, and in the US, 18 people die every day while waiting for a transplant. In a commentary, bioethicists Stuart Youngner and Insoo Hyun describe the worry that doctors might grow reluctant to harvest organs for donation. If people could become candidates for brain resuscitation rather than donation, when, where, and for whom do doctors make this decision? Doctors already struggle with when to switch from saving someone’s life to preserving their organs for the benefit of another person. The answers come down to moral decision making, and the possibility of brain resuscitation complicates them further.

The continuation of these studies has the potential to make a huge difference to our understanding of neural anatomy and the process of cell death. For now, researchers have judged the anticipated benefits to outweigh the possibility of harm to the research subject. But as we learn more about the boundaries of consciousness and death, it will be necessary to reevaluate the BrainEx process after each study in order to continue the research ethically.