Stories of lab-grown brains might seem like the preserve of Halloween tales. But in early October, a group of researchers writing in the journal Neuron claimed to have grown the world’s first “sentient” lab-grown brain – a creation they dubbed DishBrain. Lab-grown “brains” are nothing new: they were first produced in 2013 to study microcephaly. This is the first time, however, that such a brain has been exposed to external stimuli – in this case, a version of the video game Pong. The Neuron publication outlines how scientists grew a collection of 800,000 brain cells – some derived from human stem cells, others from mouse embryos – and connected this “brain” to the video game via electrodes. The cells responded, learning how to play the game in around five minutes. While its mastery of the game wasn’t perfect, its rate of success was well above random chance.
Many of us have a finely honed “ew yuck” response that is triggered by these kinds of cases. But of course, being disgusted by something is not an ethical argument. Still, our distaste might signify something morally important. This is what philosopher Leon Kass famously referred to as the “wisdom of repugnance.”
So why might these lab-grown brains disgust us? We can start by considering what’s novel about DishBrain – that is, its claimed sentience. This is a notoriously ambiguous term. In many science fiction stories, “sentience” is used as shorthand for “consciousness” or “self-awareness.” Marvin the Paranoid Android, for example, might be described this way – he questions his own existence, suffers bouts of depression and boredom, and can even compose lullabies. Often, this same understanding of sentience is used to distinguish between different kinds of alien lifeforms, with the status of “sentience” marking off intelligent, communicative beings from more primitive alien animals.
In ethical discussions, however, sentience is defined more narrowly. Derived from the Latin sentientem (“feeling”), sentience refers exclusively to the ability to feel pain and pleasure. Anything with this ability is sentient.
On this narrower definition, a highly intelligent robot that is nevertheless incapable of experiencing pain will not be sentient, while an unintelligent animal that can experience pain will be.
I recently discussed the moral importance of this kind of sentience in light of the revelation that insects might feel pain. Why is it so important? Because anything with interests is morally relevant to our ethical decision-making, and – as philosopher Peter Singer argues – if something can experience pleasure, then it has an interest in pursuing pleasure. Likewise, if something can experience pain, then it has an interest in avoiding pain. If some living being experiences suffering, then there can be no moral justification for refusing to take that suffering into account.
Return, then, to the case of DishBrain. Suppose that – as its creators claim – this lab-grown brain is sentient. On the narrow definition above, this would mean that DishBrain could experience pain and pleasure. If this were the case, it might go some way towards explaining our repugnance at the experiment.
While playing Pong for hours on end might not be a truly painful experience, being created solely for this purpose sounds like a life utterly devoid of any real pleasure. You or I certainly wouldn’t want to go through such a miserable existence.
Given this – and given Singer’s argument regarding sentience – it would be morally wrong to inflict this kind of life on someone (or something) else.
Fortunately, however, DishBrain doesn’t seem to possess sentience of this kind. In the absence of sensory receptors and a complex nervous system, it seems unlikely that DishBrain is capable of experiencing anything like pain or pleasure. Given this, there’s little reason to worry about this experiment falling afoul of an argument like Singer’s.
But are pain and pleasure all that is morally relevant? Consider, for example, an individual who suffers from congenital analgesia – a rare condition in which someone is unable to experience pain. Would it be morally permissible to subject this person to a battery of experiments that would ordinarily be excruciating, justified on the basis that they will feel no pain as a result? It would seem not. And this suggests that something more than pain and pleasure might matter to our considerations of how we should treat other beings.
Perhaps this is where the alternative conception of sentience – referring to things that are capable of self-awareness – is useful. The capacity for this kind of sentience also seems morally important.
We might, for example, adopt something like the Kantian notion that any self-aware being should be treated as an end in itself – never merely as a means to some other end. This might be why we believe it would still be morally wrong to carry out such experiments on someone who is incapable of experiencing pain.
Fortunately, lab-grown brains don’t seem to be sentient in this way either. DishBrain isn’t self-aware. It merely receives input and provides output, much like a computer – or even something as rudimentary as a mechanical slot machine – might do.
There’s a warning here, however. Sentience – whether understood as (i) the ability to experience pain and pleasure, or (ii) the capacity for self-awareness – carries enormous moral weight. While DishBrain might (contra the claims of its creators) currently lack sentience, creating further iterations of lab-grown brains that do possess real sentience would be enormously problematic. Our repugnance at this – our “ew yuck” reaction – would then have a solid moral foundation.