
Bodies Out of Time: Rethinking Age in the Alien Universe

Warning: This article contains spoilers for several entries in the Alien franchise.

Recently, I’ve been trying to catch up on all the shows sitting on my ever-growing to-watch list. One that I was especially excited about was Alien: Earth, the latest installment in the Alien franchise, which began way back in 1979 with Ridley Scott’s science-fiction horror masterpiece, simply titled Alien. And, as it happens, it’s Halloween while I’m writing this, so it feels fitting to reflect on one of the many philosophical questions the new series raises: age.

The passage of time — and its effects on our bodies, minds, and relationships — is hardly a new theme in science fiction. In 1986’s Aliens, yet another masterpiece, Ellen Ripley, having survived the events of the first film, awakens after spending fifty-seven years in stasis aboard an escape pod. She soon learns that her daughter has grown up, lived an entire life, and died during Ripley’s suspended sleep (though expanded materials elaborate on her daughter’s adventures). A similar leap occurs in Alien: Resurrection, set more than two centuries after the original. In each case, this temporal dislocation isolates Ripley from any semblance of home or continuity.

This unmooring from time and place also runs through Alien: Earth, emerging in two key forms.

The first, and one consistent with the series’ legacy, is cryonics. Space is vast, and travel across it takes time. To traverse the immense distances between systems, the crew of the lone spaceship seen in Alien: Earth, the Maginot, much like Ripley before them, enters suspended animation. By “freezing” themselves, they avoid the ravages of age and survive journeys that would otherwise outlast a human lifespan. It’s worth noting that both NASA and ESA (the European Space Agency) have explored similar ideas — hibernation, torpor, and metabolic suspension — as potential tools for long-duration spaceflight.

However, while the ethics of cryonics and its implications for our understanding of age are intriguing, they are not the focus here. Instead, Alien: Earth complicates our everyday sense of time’s passing in a second, more existential way: it unsettles our notions of what it means to grow older and to live a whole life.

The series invites us to question what it means to age through its portrayal of the artificial lifeforms that populate its world. Androids are, of course, a familiar presence in the Alien universe. They have appeared in every film, sometimes as antagonists, sometimes as uneasy allies. The relationship between creators and their creations has been a central concern in the prequel films — Prometheus, Alien: Covenant, and Alien: Romulus. Yet while those stories focus on the nature of life itself and the moral bonds between makers and made, one question has remained largely unexplored: what does it mean for an artificial being to grow, to mature, to age? Alien: Earth addresses this question through its introduction of hybrids — synthetic bodies into which human consciousness has been uploaded.

At first glance, this might not seem problematic. If someone transfers their mind into a synthetic body identical to their original one, we would likely still regard them as the same age. A twenty-year-old who moves into an artificial body resembling their twenty-year-old self would, intuitively, remain twenty. Yet Alien: Earth upends this expectation. In the series, consciousness transfer is still experimental, and only a handful of test subjects, known as the “lost boys,” have undergone the procedure. Each of the five children, terminally ill in their biological forms, has had their consciousness transferred into a cutting-edge synthetic body. But these new bodies are not like-for-like replacements: they resemble those of adults in their late twenties. The result is five individuals who possess the physical form of adults but the minds and behaviors of children. And the bodies themselves are newly manufactured. So the synthetic bodies resemble adults, the minds inhabiting them belong to children, and the bodies have existed for hardly any time at all.

What Alien: Earth reveals is how fragile our notion of age really is. We like to imagine age as a simple measure counted in years, written in the wear and tear on the body, reflected in our behavior. But when these elements drift apart, as they do for the lost boys, our categories of child and adult blur. The series forces us to ask what it truly means to grow up when body, mind, and time no longer move in step.

For much of Western thought, age and development have been bound together. In Book 1 of the Nicomachean Ethics, Aristotle posits that every living thing has a natural trajectory, a movement from potential to fulfillment. A child is an unfinished being, one whose form is still unfolding toward its purpose. The hybrids in Alien: Earth violate this order. Their bodies appear to have reached completion, but their minds remain suspended in childhood. Indeed, it is made clear in the first episode that the development, even the very functioning, of their minds may be different because those minds now inhabit bodies devoid of hormones and other biological substances that motivate and shape cognitive functioning. The lost boys embody an unnatural disjunction between form and essence, adulthood without maturity, completion without growth. In Aristotle’s terms, they have been forced into their ends before their time.

On the other end of the spectrum sits John Locke, who offers an alternative take. For Locke, as argued in An Essay Concerning Human Understanding, what makes someone the same person through time is not the body at all but the continuity of consciousness and memory. It is the awareness of oneself as the same thinking thing that secures identity. Judged by this standard, the lost boys remain children, no matter how adult their bodies appear. Their minds carry the same memories, emotions, and perspectives they held before their transfer. Their new bodies may be stronger, faster, and ageless, but the selves within them are still those of children.

Between Aristotle and Locke, the hybrids occupy an unsettling middle ground. They are caught between two incompatible notions of growth. Aristotle would see them as beings who have skipped the natural stages of life; Locke would see them as persons whose identity is unchanged despite physical transformation. Alien: Earth leaves us in this tension. It refuses to tell us whether maturity resides in the body’s development or in the mind’s continuity, and this uncertainty makes the lost boys so haunting. Indeed, the group itself reflects on this difficulty and eventually reaches no clear answer.

By imagining beings who age in one sense but not another, the series asks us to reconsider how much of our humanity depends on time itself. If growth can be engineered, if bodies can leap ahead of the selves that inhabit them, then age ceases to be a simple measure of experience. Alien: Earth reminds us that aging is not only about the number of years we live, but also about how our bodies and minds keep pace — or fail to.

Memory Erasure and Forgetting Who You Are

Earlier this month, MSN News detailed new research coming out of Japan that may revolutionize how we approach memory. The technology – being developed at Tohoku University – allows for the selective deletion of traumatic memories. While the technique has so far been tested only on mice, human applications could provide widespread benefits – particularly for those who suffer from post-traumatic stress disorder (PTSD). But, as with the advent of any new technology, it’s important to pause and consider the implications of such a development. Might we have moral reason to not remove certain memories – even the really bad ones?

Immanuel Kant had much to say on matters like these. For Kant, our rationality – our ability to reach reasoned decisions – was of paramount importance. This is why he saw the circumvention of our rational processes as one of the worst things we could do. Kant’s fundamental moral rule – the “Categorical Imperative” – demands that we always treat people (including ourselves) as ends in themselves, and never merely as a means to an end. It’d be wrong, for example, to befriend a lawyer merely for free legal advice. This formulation of Kant’s rule also creates strong prohibitions against lying and other forms of deception. Why? Because feeding someone false information will necessarily derail their ability to make informed – and therefore rational – decisions. That’s what’s so egregious about the blatant spread of misinformation by politicians and pundits.

It’s worth noting that, according to Kant, deception isn’t wrong just some of the time, but – rather – all of the time. And, for many, this might seem too strict. Consider, for example, the concept of “white lies.” Many of us believe that it’s morally permissible to engage in occasional harmless deceits – especially where doing so spares the feelings of others. Suppose that I was to play you a piece of music on my mandolin (an instrument I’m only just beginning to learn) and ask you what you thought of it. Suppose, further, that – owing to my inexperience – the performance wasn’t very good. It would be tempting to tell a white lie, and compliment my performance; but – according to Kant – it would be wrong for you to do so. And maybe we can see why. Among my ends, we might assume, is the desire to be a good mandolin player – but I won’t be able to achieve that end if I believe that my amateurish fumblings already sound excellent. In order to rationally decide how much practice I need, I require honest feedback – even if it comes at the cost of my feelings.

Let’s return, then, to the technology offered by Tohoku University. Removing one’s memories might be seen as a sort of “self-deception” – a lie we tell ourselves in order to feel better. But, if Kant’s approach is correct, then it’s morally impermissible for us to do this. Our experiences – even those that are traumatic – inform our rational decision-making. Put another way, they allow us to make better decisions going forward. To remove such memories might therefore be a case of failing to respect our own ends, instead treating ourselves as a mere means to the end of greater happiness.

Of course, subscribers to certain other ethical theories would say that this is precisely the point. Hedonistic utilitarians, for example, claim that an action is right so long as it maximizes pleasure (or, at the very least, minimizes pain). Utilitarians of this stripe are all for lying if it achieves some greater good. On this approach, then, the removal of one’s memories would be the right thing to do if it genuinely made the person happier.

But whether it’s morally permissible to remove one’s memories is only half the concern here. A more troubling consideration is whether we can even make sense of you removing your memories in the first place.

Philosophers spend a lot of time thinking about the problem of “personal identity” – that is, what makes someone the same person over time. To be clear, there are many ways in which we aren’t the same person over time. We obviously change – inside and out. We grow, we age, we learn, and we change our minds about a great many things. But in spite of all these qualitative changes, there is a sense in which we still persist as the same person over time. When I look at a photo of my ten-year-old self, I know that’s me – even though he looks, acts, and thinks very differently to the me who sits here now. What, then, makes that ten-year-old the same person as me? This is the problem of personal identity.

A number of different answers have been given to this question (including the answer that there is no good answer). But one of the most popular solutions suggests that it’s psychological continuity – specifically, our memories – that makes us the same person over time. In other words, the ten-year-old in that picture is me simply because I remember being him.

What this means, then, is that if I lost all of my memories, I would cease to exist. Sure, there would still be someone sitting here in this body, but that person wouldn’t be me. For many, this fits well with their intuitions regarding personal identity. What’s unclear, however, is how many of our memories we can afford to lose while continuing to be the same person. It seems that the answer is, at least, “some.” I have no memory of writing my first piece for The Prindle Post, but I can still confidently claim that the person who wrote that piece was me. But there is, it seems, a “critical mass” of memories (especially important, character-forming memories) that, if lost, would mean that I had gone out of existence. There’s a chance, then, that utilizing – or, at least, over-utilizing – technology like that being developed at Tohoku University might not merely raise moral concerns, but threaten our continued existence altogether.
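
One standard way of squaring these two intuitions – that I can survive forgetting my first Prindle Post piece, but not the loss of a critical mass of memories – is to model identity as a chain of overlapping memory links rather than direct recall. The minimal sketch below illustrates this refined memory criterion; the stage names and links are invented purely for illustration:

```python
# A toy model of psychological continuity: identity is preserved by a chain
# of overlapping memory links, even where direct memory fails. Stage names
# and links are invented for illustration.

directly_remembers = {
    "me_now": {"me_at_25"},    # I remember my 25-year-old self...
    "me_at_25": {"me_at_10"},  # ...who remembered the ten-year-old...
    "me_at_10": set(),
}

def continuous(later, earlier):
    """True if a chain of direct memory links connects later to earlier."""
    frontier, seen = {later}, set()
    while frontier:
        stage = frontier.pop()
        if stage == earlier:
            return True
        seen.add(stage)
        frontier |= directly_remembers.get(stage, set()) - seen
    return False

# No direct link from "me_now" to "me_at_10", but the chain runs through
# "me_at_25", so on the continuity view they are the same person.
print(continuous("me_now", "me_at_10"))  # True
```

On this picture, deleting a memory here and there leaves the chain intact; delete enough links at once and the chain – and with it, arguably, the person – breaks.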

Fantastic Beasts and How to Categorize Them


Fantastic Beasts and Where to Find Them is both a film franchise and a book. But the book doesn’t have a narrative; it is formatted like a textbook assigned in the Care of Magical Creatures course at Hogwarts. It’s ‘written’ by Newt Scamander and comes with scribbles from Harry and Ron commenting on its contents.

Before the creature entries begin there is a multipart introduction. One part, entitled “What is a Beast?” seeks to articulate a distinction between creatures who are ‘beasts’ and those that are ‘beings.’ The text notes that a being is “a creature worthy of legal rights and a voice in the governance of the magical world.” But how do we distinguish between beasts and beings? This is one of the main questions central to the topic of moral status.

So, the intro asks two questions: who is worthy and how do we know? The first question seeks to determine who is in the moral community and thus deserving of rights and a voice. This is a question concerning whether an entity has the property of ‘moral standing’ or ‘moral considerability.’ The second question seeks to identify what properties an entity must have to be a member of the moral community. In other words, how does one ground a claim that a particular entity is morally considerable? We can call this a question about the grounds of moral considerability. It is the main question of the short introduction to Fantastic Beasts:

What are the properties that a creature has to have in order to be in the category ‘beast’ (outside the moral community) or ‘being’ (inside the moral community)?

Attempts to resolve a question of moral considerability confront a particular problem. Call it the Goldilocks Problem. Goldilocks wants porridge that is just right, neither too hot nor too cold. We want definitions of the moral community to be just right and avoid leaving out entities that should be in (under-inclusion) and avoid including entities that should be left out (over-inclusion). When it comes to porridge it is hard to imagine one bowl being both too hot and too cold at the same time. But in the case of definitions of the grounds of moral considerability, this happens often. We can see this in the attempts to define ‘being’ in the text of Fantastic Beasts.

Fantastic Beasts looks at three definitions of the grounds of being a ‘being.’ According to the text, “Burdock Muldoon, Chief of the Wizard Council in the fourteenth century, decreed that any member of the magical community that walked on two legs would henceforth be granted the status of ‘being,’ all others to remain ‘beasts.’” This resulted in a clear case of over-inclusion. Diricawls, Augureys, Pixies and other creatures were included in the moral community of beings, but should not have been. The text states that “the mere possession of two legs was no guarantee that a magical creature could or would take an interest in the affairs of wizard government.”

What really mattered was not the physical characteristic of being bipedal but the psychological characteristic of having interests. By focusing on the wrong property this definition accidentally included entities that did not belong.

This is, of course, reminiscent of the famous anecdote in which Plato lectured on his definition of a human as a featherless biped, only to have Diogenes show up the next day with a plucked chicken, announcing “Behold! A man.”

At the same time, however, this definition is under-inclusive. Centaurs are entities that could take an interest in the affairs of wizards, but they have four legs and thus are left out. Merpeople also could take an interest in the affairs of wizards, but have no legs and thus are left out. Clearly, this definition will not do.

And it is not surprising that the definition fails. Using a physical characteristic to determine whether an entity will have the right psychological characteristics is not likely to work.

So what is a wizard to do but try to find a property more closely linked to the relevant psychological characteristic? Interests — for example, wants and needs — are often expressed linguistically: “I want chocolate chip cookies”; “I need more vegetables.” This apparently led Madame Elfrida Clagg to define a being as “those who could speak with the human tongue.” But, again, we have an example where the definition is over- and under-inclusive. Trolls could be taught to say, but not understand, a few human sentences and were included in the community but should have been excluded. Once again, the merpeople, who could only speak Mermish, a non-human language, were left out when they should have been included.
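
The pattern in both failed definitions can be made concrete with a small sketch. Treating over-inclusion as false positives and under-inclusion as false negatives, we can test each decree against the property that actually matters – having interests in the affairs of governance. The code below is purely illustrative; the creature data is a simplification of the examples in the text:

```python
# Testing each wizarding definition of 'being' against the property that
# actually matters (having interests in governance). Creature data is a
# simplification of the examples discussed in the text.

creatures = {
    # name: (bipedal, speaks_human_tongue, has_interests)
    "pixie":     (True,  False, False),
    "troll":     (True,  True,  False),  # can parrot speech, not understand it
    "centaur":   (False, False, True),
    "merperson": (False, False, True),   # speaks only Mermish
    "wizard":    (True,  True,  True),
}

def evaluate(definition):
    """Return the over- and under-inclusions of a proposed definition."""
    over = [n for n, (b, s, i) in creatures.items() if definition(b, s) and not i]
    under = [n for n, (b, s, i) in creatures.items() if not definition(b, s) and i]
    return over, under

# Muldoon's decree: a being is anything that walks on two legs.
print(evaluate(lambda bipedal, speaks: bipedal))
# (['pixie', 'troll'], ['centaur', 'merperson'])

# Clagg's definition: a being is anything that speaks with the human tongue.
print(evaluate(lambda bipedal, speaks: speaks))
# (['troll'], ['centaur', 'merperson'])
```

Both definitions fail in both directions at once because the proxy property only loosely tracks the property that matters.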

In our own world, the focus on language and other activities as proxies for cognitive traits has likewise been used to discuss the moral status of animals. Attempts to exclude animals from the moral community did, in fact, use speech-use and tool-use as reasons to exclude animals. Descartes famously claimed in Part V of the Discourse on Method that animals did not use language but were mere automatons. But apes can use sign language, and crows, elephants, otters and other animals can use tools. So, for many who want to include only humans in the category of ‘being,’ these activity-based definitions turn out to be over-inclusive. But again, given the incapacity of newborn humans to use language or tools, they would also leave out some humans and be under-inclusive. So, using a non-psychological property (an activity) to identify a psychological property is unsurprisingly problematic.

Apparently, the wizarding world got the memo regarding the problem of these definitions by the 19th century. In 1811, Minister of Magic Grogan Stump defined a being as “any creature that has sufficient intelligence to understand the laws of the magical community and to bear part of the responsibility in shaping those laws.” The philosophical term for this set of capabilities is autonomy, at least in the way Immanuel Kant defined the term.

One way to express Kant’s view is that the morally considerable beings, the beings that could be called ‘persons,’ were those that had the capacity to rationally identify their interests and then have the will to execute plans to see those interests realized.

Persons are also capable of seeing that others have this capacity and thus rationally adopt rules that limit what we can do to other persons. These are the moral rules that guide our interactions, ground our rights, legal and moral, and give us a voice in self- and communal governance. In other words, the term ‘being’ in Fantastic Beasts is just the text’s term for ‘moral person.’ Furthermore, the relevant psychological characteristic of persons is autonomy as defined by Kant.

There is something questionable about this Kantian view of being-hood or person-hood. On this view, persons need sophisticated cognitive abilities to be identified as persons. Any entity that lacks the cognitive abilities needed for moral judgment is a non-person and thus wholly outside the moral community. In other words, non-persons are things, have only instrumental value, and can be equated with tools: you can own them and dispose of them without morally harming them. But this definition also excludes human infants and humans with diminished cognitive abilities, whom we do not think of as outside the moral community.

Surely these implications for humans are unacceptable. They would probably be unacceptable to the fictional Newt Scamander as well as to people who fight for animal rights. But the Kantian view is binary: you are a person/being or a beast/thing. Those who find such a stark choice unappealing can and do recognize another category between persons and things. This would be something that has interests, but not interests in having a voice in governance. These entities often are vulnerable to damaging impacts of the behavior of persons and have an interest in not suffering those impacts, even if they cannot directly communicate them.

So, we need a new set of terms to describe the new possible categories of moral considerability. Instead of just the categories being/person and beast/thing, we can discuss the categories of moral agent, moral patient, and thing.

A moral agent is an entity that meets the Kantian definition of person. It is an entity who is in the moral community and also shapes it. A thing is something that does not have interests and thus is outside the moral community. But a moral patient is an entity that has interests, specifically interests against harm and for beneficence, that should be morally protected. Thus, they are members of the moral community, just not governing members. So, Centaurs and Merpeople and Muggles can all be considered moral agents and thus can, if they so desire, contribute to the governance of the magical community. But even if they don’t want to participate in governance, the magical community should still recognize them as moral patients, as beings who can be impacted by the actions of persons and whose interests should therefore be included in the discussion of governance. The giants, trolls, werewolves in werewolf form, and pixies should at least fall into this category of patient as well. In the human world, infants, young children, and those with cognitive impairment would also fall into this category.

To sum up, then, the text of Fantastic Beasts presents a view similar to Kant’s of the grounds of moral status, but it can be improved upon by recognizing the category of moral patients. Furthermore, Fantastic Beasts clearly supports psychological accounts of the grounds of moral status over physical accounts. In other words, what matters to many questions of identity and morality are psychological properties and not physical properties or behavioral capacities. This is consistent with a theme of the Harry Potter novels where the main villains focus on the physical characteristic of whether an entity has the right blood-status to be part of the wizarding community. In other words, only a villain would solely focus on physical characteristics as a source of moral value.

Were Parts of Your Mind Made in a Factory?


You, dear reader, are a wonderfully unique thing.

Humor me for a moment, and think of your mother. Now, think of your most significant achievement, a long-unfulfilled desire, your favorite movie, and something you are ashamed of.

If I were to ask every other intelligent being that will ever exist to think of these and other such things, not a single one would think of all the same things you did. You possess a uniqueness that sets you apart. And the elements of your uniqueness – your particular experiences, relationships, projects, predilections, desires – have accumulated over time to give your life its distinctive, ongoing character. They configure your particular perspective on the world. They make you who you are.

One of the great obscenities of human life is that this personal uniqueness is not yours to keep. There will come a time when you will be unable to perform my exercise. The details of your life will cease to configure a unified perspective that can be called yours. For we are organisms that decay and die.

In particular, the organ of the mind, the brain, deteriorates, one way or another. The lucky among us will hold on until we are annihilated. But, if we don’t die prematurely, half of us, perhaps more, will be gradually dispossessed before that.

We have a name for this dispossession. Dementia is that condition characterized by the deterioration of cognitive functions relating to memory, reasoning, and planning. It is the main cause of disability in old age. New medical treatments, the discovery of modifiable risk factors, and greater understanding of the disorder and its causes may allow some of us to hold on longer than would otherwise be possible. But so long as we are fleshy things, our minds are vulnerable.

*****

The idea that our minds are made of such delicate stuff as brain matter is odious.

Many people simply refuse to believe the idea. Descartes could not be moved by his formidable reason (or his formidable critics) to relinquish the idea that the mind is a non-physical substance. We are in no position to laugh at his intransigence. The conviction that a person’s brain and a person’s mind are separate entities survived disenchantment and neuroscience. It has the enviable durability we can only aspire to.

Many other people believe the idea but desperately wish it weren’t so. We fantasize incessantly about leaving our squishy bodies behind and transferring our minds to a more resilient medium. How could we not? Even the most undignified thing in the virtual world (which, of course, is increasingly our world) has an enviable advantage over us, and more. It’s unrottable. It’s copyable. If we could only step into that world, we could become like gods. But we are stuck. The technology doesn’t exist.

And yet, although we can’t escape our squishy bodies, something curious is happening.

Some people whose brains have lost significant functioning as a result of neurodegenerative disorders are able to do things, all on their own, that go well beyond what their brain state suggests they are capable of – things that would have been infeasible for someone with the same condition a few decades ago.

Edith has mild dementia but arrives at appointments, returns phone calls, and pays bills on time; Henry has moderate dementia but can recall the names and likenesses of his family members; Maya has severe dementia but is able to visualize her grandchildren’s faces and contact them when she wants to. These capacities are not fluky or localized. Edith shows up to her appointments purposefully and reliably; Henry doesn’t have to be at home with his leatherbound photo album to recall his family.

The capacities I’m speaking of are not the result of new medical treatments. They are achieved through ordinary information and communication technologies like smartphones, smartwatches, and smart speakers. Edith uses Google Maps and a calendar app with dynamic notifications to encode and utilize the information needed to effectively navigate day-to-day life; Henry uses a special app designed for people with memory problems to catalog details of his loved ones; Maya possesses a simple phone with pictures of her grandchildren that she can press to call them. These technologies are reliable and available to them virtually all the time, strapped to a wrist or snug in a pocket.

Each person has regained something lost to dementia not by leaving behind their squishy body and its attendant vulnerabilities but by transferring something crucial, which was once based in the brain, to a more resilient medium. They haven’t uploaded their minds. But they’ve done something that produces some of the same effects.

*****

What is your mind made of?

This question is ambiguous. Suppose I ask what your car is made of. You might answer: metal, rubber, glass (etc.). Or you might answer: engine, tires, windows (etc.). Both answers are accurate. They differ because they presuppose different descriptive frameworks. The former answer describes your car’s makeup in terms of its underlying materials; the latter in terms of the components that contribute to the car’s functioning.

Your mind is in this way like your car. We can describe your mind’s makeup at a lower level, in terms of underlying matter (squishy stuff (brain matter)), or at a higher level, in terms of functional components such as mental states (like beliefs, desires, and hopes) and mental processes (like perception, deliberation, and reflection).

Consider beliefs. Just as the engine is that part of your car that makes it go, so your beliefs are, very roughly, those parts of your mind that represent what the world is like and enable you to think about and navigate it effectively.

Earlier, you thought about your mother and so forth by accessing beliefs in your brain. Now, imagine that due to dementia your brain can’t encode such information anymore. Fortunately, you have some technology, say, a smartphone with a special app tailored to your needs, that encodes all sorts of relevant biographical information for you, which you can access whenever you need to. In this scenario, your phone, rather than your brain, contains the information you access to think about your mother and so forth. Your phone plays roughly the same role as certain brain parts do in real life. It seems to have become a functional component, or in other words an integrated part, of your mind. True, it’s outside of your skin. It’s not made of squishy stuff. But it’s doing the same basic thing that the squishy stuff usually does. And that’s what makes it part of your mind.

Think of it this way. If you take the engine out of your ‘67 Camaro and strap a functional electric motor to the roof, you’ve got something weird. But you don’t have a motorless car. True, the motor is outside of your car. But it’s doing basically the same things that an engine under the hood would do (we’re assuming it’s hooked up correctly). And that’s what makes it the car’s motor.
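
The functional picture sketched above lends itself to a short illustration. In the sketch below (the class and function names are invented for the purpose of the example, not taken from any real system), the thinking process depends only on something playing the memory-store role; whether that role is filled by squishy stuff or silicon is irrelevant:

```python
# A minimal functionalist sketch: what makes something a memory store is the
# role it plays, not what it is made of. All names here are invented for
# illustration.

from typing import Protocol

class MemoryStore(Protocol):
    def recall(self, topic: str) -> str: ...

class BiologicalMemory:
    """Encodes biographical information in squishy stuff (the brain)."""
    def __init__(self) -> None:
        self.traces = {"mother": "her face, her voice, her birthday"}

    def recall(self, topic: str) -> str:
        return self.traces[topic]

class PhoneApp:
    """Encodes the same information in silicon, outside the skin."""
    def __init__(self) -> None:
        self.records = {"mother": "her face, her voice, her birthday"}

    def recall(self, topic: str) -> str:
        return self.records[topic]

def think_about(store: MemoryStore, topic: str) -> str:
    # Thinking accesses whichever store reliably plays the role.
    return f"thinking about: {store.recall(topic)}"

print(think_about(BiologicalMemory(), "mother"))  # brain-based recall
print(think_about(PhoneApp(), "mother"))          # phone-based recall, same role
```

Swapping one implementation for the other changes nothing about how the rest of the "mind" operates, which is exactly the sense in which the phone can count as a functional component of it.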

The idea that parts of your mind might be made up of things located outside of your skin is called the extended mind thesis. As the philosophers who formulated it point out, the thesis suggests that when people like Edith, Henry, and Maya utilize external technology to make up for deficiencies in endogenous cognitive functioning, they thereby incorporate that technology (or processes involving that technology) into themselves. The technology literally becomes part of them by reliably playing a role in their cognition.

It’s not quite as dramatic as our fantasies. But it’s something, which, if looked at in the right light, appears extraordinary. These people’s minds are made, in part, of technology.

*****

The extended mind thesis would seem to have some rather profound ethical implications. Suppose you steal Henry’s phone, which contains unbacked biographical data. What have you done? Well, you haven’t simply stolen something expensive from Henry. You’ve deprived him of part of his mind, much as if you had excised part of his brain. If you look through his phone, you are looking through his mind. You’ve done something qualitatively different than stealing some other possession, like a fancy hat.

Now, the extended mind thesis is controversial for various reasons. You might reasonably be skeptical of the claim that the phone is literally part of Henry’s mind. But it’s not obvious this matters from an ethical point of view. What’s most important is that the phone is on some level functioning as if it’s part of his mind.

This is especially clear in extreme cases, like the imaginary case where many of your own important biographical details are encoded into your phone. If your grip on who you are, your access to your past and your uniqueness, is significantly mediated by a piece of technology, then that technology is as integral to your mind and identity as many parts of your brain are. And this should be reflected in our judgments about what other people can do to that technology without your permission. It’s more sacrosanct than mere property. Perhaps it should be protected by bodily autonomy rights.

*****

I know a lot of phone numbers. But if you ask me while I’m swimming what they are, I won’t be able to tell you immediately. That’s because they’re stored in my phone, not my brain.

This highlights something you might have been thinking all along. It’s not only people with dementia who offload information and cognitive tasks to their phones. People with impairments might do it more extensively (biographical details rather than just phone numbers, calendar appointments, and recipes). They might have more trouble adjusting if they suddenly couldn’t do it.

Nevertheless, we all extend our minds into these little gadgets we carry around with us. We’re all made up, in part, of silicon and metal and plastic. Of stuff made in a factory.

This suggests something pretty important. The rules about what other people can do to our phones (and other gadgets) without our permission should probably be pretty strict, far stricter than rules governing most other stuff. One might advocate in favor of something like the following (admittedly rough and exception-riddled) principle: if it’s wrong to do such-and-such to someone’s brain, then it’s prima facie wrong to do such-and-such to their phone.

I’ll end with a suggestive example.

Surely we can all agree that it would be wrong for the state to use data from a mind-reading machine designed to scan the brains of females in order to figure out when they believe their last period happened. That’s too invasive; it violates bodily autonomy. Well, our rough principle would seem to suggest that it’s prima facie wrong to use data from a machine designed to scan someone’s phone to get the same information. The fact that the phone happens to be outside the person’s skin is, well, immaterial.

Woke Capitalism and Moral Commodities


This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: “Woke Capitalism.”

Many have started to abandon the usage of the term “woke” since it is more and more used in a pejorative sense by ideological parties – as Charles M. Blow states “‘woke’ is now almost exclusively used by those who seek to deride it, those who chafe at the activism from which it sprang.” What the term refers to has become increasingly ambiguous to the point that it seems useless. As early as 2019, Damon Young was suggesting that “woke floats in the linguistic purgatory of terms coined by us that can no longer be said unironically,” and David Brooks concluded that no small part of wokeism was simply the intellectual elite showing off with “sophisticated” language.

But when the term rose to popularity in 2016, it was referring to a kind of awareness of public issues, and “became the umbrella purpose for movements like #blacklivesmatter (fighting racism), the #MeToo movement (fighting sexism, and sexual misconduct), and the #NoBanNoWall movement (fighting for immigrants and refugees).” And new fronts are always opening up.

Discussions of “Woke Capitalism” tend to focus on corporate and consumer activism. Tyler Cowen has also pointed out the importance of wokeism as a new, uniquely American cultural export that may fundamentally change the world. And, indeed, despite the post-mortems, “woke” remains in the lexicon of both political parties.

Even though the term “woke” has fallen out of favor, I suspect there is a mostly unaddressed aspect of wokeism that needs reconsideration. There may very well be a new mode of consumption just beginning to dominate the market: commodities as moral entities.

How does this happen? Let’s consider what differentiates Woke Capitalism from more familiar moral considerations about market relations and discuss how products have become moral entities through comparison to non-woke products.

It is not just about moral considerations: In any decision-making process, it is natural for some moral considerations to arise. In the case of market relations, any number of factors – the company’s affiliations, its production methods, the status of workers, the trustworthiness of the company, etc. – may prove decisive. Traditionally, as in the case of moral appeal in marketing – “If you are a good parent, you should buy this shoe!” – there seems to be a necessity to link a moral consideration with a company or a product. With Woke Capitalism, this relation is transformed: an explicit link is no longer necessary. All purchasing is activism – one cannot help but make a statement with what they choose to buy and what they choose to sell.

It is not just corporate or consumer activism: The moral debate about Woke Capitalism mainly revolves around the sincerity of companies and customers in support of social justice causes. And that discussion of corporate responsibility often revisits the Shareholder vs. Stakeholder Capitalism distinction.

Corporate or consumer activism seems to be making use of the market as a way of demonstrating the moral preferences of individuals or a group. It can be seen as a way to support what is essentially important to us. Vote with your dollar. As such, most discussions focus on this positive reinforcement side of Woke Capitalism.

What is lost in this analysis of Woke Capitalism, however, is the production of Woke Products, which forces consumers to take sides with even the most basic day-to-day purchases.

How should we decide between two similarly-priced products according to this framing: a strong stain remover or a mediocre stain remover that helps sick children? A gender-equality-supporting cereal or a racial-equality-supporting cereal? Each of these decisions brings some imponderable trade-off with it: What’s more important – the health of children or stain-removing strength? Which problem deserves more attention – gender inequality or racial inequality?

Negation of Non-Woke: The main problem with these questions is not that some of them are unanswerable, absurd, or impossible to decide in a short time. Instead, the problem is the potential polarizing effect of its relational nature. Dan Ariely suggests, in his book Predictably Irrational: The Hidden Forces That Shape Our Decisions, that we generally decide by focusing on relativity – people often decide between options on the basis of how they relate to one another. He gives the example of three houses which have the same cost. Two of them are in the same style. One is a fixer-upper. In such a situation, he claims, we generally decide between the same style houses since they are easier to compare. In this case, the alternatively-styled house will not be considered at all.
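
A toy sketch of this comparison heuristic may help. The attribute values and scoring rule below are invented; following Ariely's telling, the fixer-upper is one of the two same-style houses:

```python
from collections import defaultdict

# A toy version of the relativity heuristic Ariely describes: we choose by
# comparing what is easy to compare, so the odd one out never gets ranked.
houses = [
    {"style": "colonial", "condition": "good"},
    {"style": "colonial", "condition": "fixer-upper"},
    {"style": "modern", "condition": "good"},  # the alternatively-styled house
]

def choose(options):
    # Group by style: only the largest easily-comparable group gets weighed.
    groups = defaultdict(list)
    for option in options:
        groups[option["style"]].append(option)
    comparable = max(groups.values(), key=len)
    # Within that group, the obviously better option wins the comparison.
    return max(comparable, key=lambda o: o["condition"] == "good")

print(choose(houses))
# {'style': 'colonial', 'condition': 'good'} -- the modern house was never ranked
```

The analogy to wokeness is that once products are sorted by their moral branding, whatever falls outside that comparison set simply drops out of consideration.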

In the case of wokeness, the problem is that it is quite probable that non-woke products will be ignored altogether. With our minds so attuned to the moral issue, all other concerns fade away.

Woke Capitalism creates a marketplace populated entirely by woke, counter-woke, and anti-woke products. Market relations continue to be defined by this dynamic more and more. As such, non-woke products are becoming obsolete. Companies must accommodate this trend and present themselves in particular ways, not necessarily because they want to, but because they are forced to. And this state of affairs feels inescapable; there is no breaking the cycle. Even anti-woke and counter-woke marketing feed that struggle. All consumption becomes a moral statement in a never-ending conflict.

To better see what makes Woke Capitalism unique (and uniquely dangerous), consider this comparison:

Classic moral consideration: Jimmy buys Y because Y conforms to his moral commitments.

Consumer activism: Jimmy buys Y because Y best signals his support for a deeply-held cause.

Woke Capitalism: Jimmy buys Y because purchasing products is necessary for his moral identity.

This is not just consumer activism whereby customers seek representation. Instead, commodities turn into fundamentally moral entities – building blocks for people to construct and communicate who they are. As morality becomes increasingly understood in terms of one’s relationship to commodities, a moral existence depends on buying and selling. Consumption becomes identity. “I buy, therefore – morally – I am.”

Is “Personhood” a Normative or Descriptive Concept?


Many ethical debates are about “getting the facts right” or “trusting the science,” and this sentiment is driven by a presumed difference between political and ethical values and facts. This can be problematic because it naively assumes that facts are never a product of values or that values can’t be a product of fact. This can lead to mistakes like thinking that evidence alone can be sufficient to change our thinking or that the way we value things shouldn’t be affected by the way the world is. Ethical inquiry requires us to consider many questions of both fact and worth to draw conclusions. To demonstrate, we will consider the recent case of Happy the elephant and whether it makes sense to call her a person.

While it is tempting to think of values as being something entirely personal or subjective, in reality most discussion and debate about values is far more nuanced and complex than that. Determining the value of something, whether it’s going for a walk or eating a candy bar, involves considerations of function, worth, and means.

Eating a candy bar has the function of providing sustenance and a pleasant taste. The worth of the bar will be determined by considering the means required to attain it compared to the worth of other things I could eat. If the cost of the candy bar goes up, the means required to attain it becomes dearer. While the candy bar provides necessary energy, it is also harmful to my health, and so I re-evaluate the worth of the bar.

People may differ over the value of the candy bar, but the disagreement will likely hinge on the different functions the candy bar has in life. But notice that function and means – two essential considerations for valuation – are factual in nature. To ask what the candy bar will do is to ask what it is good for. In other words, any thought about worth inevitably involves factual considerations. Often, the reason we want people to avoid misinformation or to trust expertise has to do with the ethical concerns rather than the factual concerns; we expect facts to moderate the way things are valued and thus the way we act.

But what about facts? Aren’t the facts just the facts? Well, no. There is no such thing in science as the “view from nowhere.”

We don’t study every part of the natural world; we study things we are interested in. Our investigations are always partial, infused with what we want to know, why we want to know it, and what means we have available to try to find an acceptable answer.

The risk that we over-generalize our findings – start making pronouncements about the world and forget about our practical aims in research – suggests that facts alone cannot settle ethical debates. Just like values, a fact is defined by function, worth, and means. Indeed, many concepts are “thick” in that they perform a dual function of both describing something while also offering normative content. “Cruel,” for example, is often used both normatively and descriptively. But what about “person?”

Recently a New York court ruled that an Asian elephant named Happy is not a person. The case began after the Nonhuman Rights Project filed a petition against the zoo holding Happy, arguing that Happy’s confinement was a violation of habeas corpus because Happy resides in a solitary enclosure. They demanded recognition of Happy’s legal personhood and her release from the zoo.

Habeas corpus – a person’s legal protection from unlawful detention – has historically been used to push legal boundaries. One of the most famous cases is Somerset v. Stewart, which found that a slave could not be forcibly removed from England and so was ordered to be freed. This suggests that “person” is often a “thick” concept that not only describes something, but also inherently (especially legally) contains normative elements as well. In the end, the court found that Happy was not a person in the legal sense and was thus not entitled to invoke those rights.

Those who supported Happy’s case emphasized that elephants are intelligent, cognitively complex animals. The Nonhuman Rights Project argued that elephants share numerous cognitive abilities with humans such as self-awareness, empathy, awareness of death, and learning. Happy was the first elephant to pass a self-awareness indicator test. In addition, several nations, such as Hungary, Costa Rica, Argentina, and Pakistan, have taken steps to recognize the legal rights of “non-human persons.” The argument is that because these animals are intelligent enough to have a sense of their own selves, they are entitled to the robust liberties and protections afforded by the law.

But the question is not whether Happy meets some cosmic notion of personhood, but an instrumental question of what function we want the concept of “person” to perform.

The question for the court was to determine the worth of competing conceptions of “personhood” which would perform different social functions (one which extends to animals and one which doesn’t), and which involve very different means in operation. For example, a legal person is usually someone who can be held legally accountable. A previous ruling in a similar case held that “the asserted cognitive linguistic capabilities of a chimpanzee do not translate to a chimpanzee’s capacity or ability, like humans, to bear legal duties, or to be held legally accountable for their actions.”

The issue of cognitive complexity in relationship to personhood is not static – simply meeting a given threshold of intelligence is not enough to warrant designation as a “person.” There are practical considerations that bear on the matter. Changing our conception of personhood would, as one justice noted, “have an enormous destabilizing impact on modern society.” It’s difficult to know what legal obligations this might create or how far they could extend. What would happen, for example, if there was a conflict of legal rights between a human and non-human person? The issue is thus not whether Happy should be treated well, but whether the concept of personhood is the right tool for sorting out these difficult ethical problems. Similar controversies crop up in the debate about extending rights to nature.

In other words, when we consider cases like this, it will never be as simple as stating the fact that “elephants are cognitively intelligent” or proclaiming that “elephants should be protected.” As a “thick” concept, the definition of “personhood” is always going to depend on the practicality of the concept’s use in our particular social world. But if extending certain rights to elephants is problematic because of the stress it places on the function of the concept, then perhaps seeking to label elephants as “persons” is unhelpful. It simply isn’t going to be enough to point to evidence of cognitive awareness alone. When we consider what we want the concept “person” to do for us, we may find that by paying attention to the intended function we can achieve it more effectively with another ethical notion, such as the UK potentially granting rights to animals on the basis of “sentience.”

Great Man Syndrome and the Social Bases of Self-Respect


“Am I good enough?” “Was someone else smarter or more talented than me?” “Am I just lazy or incompetent?” You might find these thoughts familiar. It’s an anxiety that I have felt many times in graduate school, but I don’t think it’s a unique experience. It seems to show up in other activities, including applying for college and graduate school, pursuing a career in the arts, vying for a tenure-track academic job, and trying to secure grants for scientific research. This anxiety is a moral problem, because it can perpetuate imposter syndrome – feelings of failure and a sense of worthlessness, when none of these are warranted.

The source of this anxiety is something that I would like to call “great man syndrome.” The “great man” could be a man, woman, or non-binary person. What is important is the idea that there are some extra-capable individuals who can transcend the field through sheer force of innate ability or character. Gender, race, and other social categories matter for understanding social conceptions of who has innate ability and character, which can help to explain who is more likely to suffer from this angst, but “great man syndrome” can target people from any social class.

It functions primarily by conflating innate ability or character with professional success, where that professional success is hard to come by. For those of us whose self-conceptions are built around being academics, artists, scientists, or high achievers and whose professional success is uncertain, “great man syndrome” can generate uncertainty about our basic self-worth and identity. On the other hand, those who achieve professional success can easily start to think that they are inherently superior to others.

What does “great man syndrome” look like, psychologically? First, in order to continue pursuing professional success, it’s almost necessary to be prideful and think that because I’m inherently better than others in my field in some way, I can still achieve one of the few, sought-after positions. Second, the sheer difficulty and lack of control over being professionally recognized creates constant anxiety about not producing enough or not hitting all of the nigh-unattainable markers for “great-man-ness.” Third, these myths tie our sense of well-being to our work and professional success in a way that is antithetical to proper self-respect. This results in feelings of euphoria when we are recognized professionally, but deep shame and failure when we are not. “Great man syndrome” negatively impacts our flourishing.

My concept of “great man syndrome” is closely related to Thomas Carlyle’s 19th century “great man theory” of history, which posits that history is largely explained by the impacts of “great men,” who, by their superior innate qualities, were able to make a great impact on the world. There are several reasons to reject Carlyle’s theory: “great men” achieve success with the help of a large host of people whose contributions often go unrecognized; focusing on innate qualities prevents us from seeing how we can grow and improve; and there are multiple examples of successful individuals who do not have the qualities we would expect of “great men.”

Even if one rejects “great man theory,” it can still be easy to fall into “great man syndrome.” Why is this the case? The answer has to do with structural issues common to the fields and practices listed above. Each example I gave above — scientific enterprises, artistic achievement, higher educational attainment, and the academic job market — has the following features. First, each of these environments is highly competitive. Second, they contain members whose identities are tied up with that field of practice. Third, if one fails to land one of the scarce, sought-after positions, there are few alternative methods of gainful employment that allow one to maintain that social identity.

The underlying problem that generates “great man syndrome” isn’t really the competition or the fact that people’s identities are tied up with these pursuits; the problem is that there are only so many positions within those fields that ensure “the social bases of self-respect.” On John Rawls’s view, “the social bases of self-respect” are aspects of institutions that support individuals by providing adequate material means for personal independence and giving them a secure sense that their aims and pursuits are valuable. To be recognized as equal citizens, people need to be structurally and socially supported in ways that promote self-respect and respect from others.

This explains why “great man syndrome” strikes at our basic self-worth — there are only so many positions that provide “the social bases of self-respect.” So, most of the people involved in those pursuits will never achieve the basic conditions of social respect so long as they stay in their field. This can be especially troubling for members of social classes that are not commonly provided “the social bases of self-respect.” Furthermore, because these areas are intrinsically valuable and tied to identity, it can be very hard to leave. Leaving can feel like failing or giving up, and those who point out the structural problems are often labeled as pessimistic or failing to see the true value of the field.

How do we solve this problem? There are a few things that we as individuals can do, and that many people within these areas are already doing. We can change how we talk about the contributions of individuals to these fields and emphasize that we are first and foremost engaged in a collective enterprise which requires that we learn from and care for each other. We can reaffirm to each other that we are worthy of respect and love as human beings regardless of how well we perform under conditions of scarcity. We can also try to reach the halls of power ourselves to change the structures that fail to provide adequate material support for those pursuing these aims.

The difficulty with these solutions is that they do not fundamentally change the underlying institutional failures to provide “the social bases of self-respect.” Some change may be effected by individuals, especially those who attain positions of power, but it will not solve the core issue. To stably ensure that all members of our society have the institutional prerequisites needed for well-being, we need to collectively reaffirm our commitment to respecting each other and providing for each other’s material needs. Only then can we ensure that “the social bases of self-respect” will be preserved over time.

Collective action of this kind itself undermines the core myth of “great man syndrome,” as it shows that change rests in the power of organization and solidarity. In the end, we must build real political and economic power to ensure that everyone has access to “the social bases of self-respect,” and that is something we can only do together.

The Curious Case of Evie Toombes: Alternative Realities and Non-Identity


Evie Toombes just won a lawsuit against her mother’s doctor. She was born with spina bifida, a birth defect affecting her spine, which requires continual medical care. Taking folic acid before and during pregnancy can help reduce the risk of spina bifida, but Toombes says that the doctor told her mother that folic acid supplements weren’t necessary. The judge ruled that, had the doctor advised Toombes’ mother “about the relationship between folic acid supplementation and the prevention of spina bifida/neural tube defects,” she would have “delayed attempts to conceive” until she was sure her folic acid levels were adequate, and that “in the circumstances, there would have been a later conception, which would have resulted in a normal healthy child.” The judge therefore ruled that the doctor was liable for damages because of Toombes’ condition.

Let’s assume that Toombes is right about the facts. If so, the case may seem straightforward. But it actually raises an incredibly difficult philosophical conundrum noted by the philosopher Derek Parfit. Initially, it seems Toombes was harmed by the doctor’s failure to advise her mother about folic acid. But the suggestion is that, if he’d done so, her mother would have “delayed attempts to conceive,” resulting in the “later conception” of a “normal healthy child.” And, presumably, that child would not have been Evie Toombes. Had her mother waited, a different sperm would have fertilized a different egg, producing a different child. So had the doctor advised her mother to take folic acid and delay pregnancy, it’s not as though Toombes would have been born, just without spina bifida. A different child without spina bifida would have been born, and Toombes would not have existed at all.

It may be that some lives are so bad that non-existence would be better. And if your life is worse than non-existence, then it’s easy to see why you’d have a complaint against someone who’s responsible for your life. But Toombes’ life doesn’t seem to be like this: she is a successful equestrian. And anyway, she didn’t make that claim as part of her argument, and the court didn’t rely on it. However, if Toombes’ life is worth living, and if the doctor’s actions are responsible for her existing at all, it might seem puzzling how the doctor’s actions could have wronged her.

The non-identity problem arises in cases like this, where we can affect how well-off future people are, but only by also changing which future people come to exist. It’s a problem because causing future people to be less well-off seems wrong, but it’s also hard to see who is wronged in these cases, provided the people who come to exist have lives worth living. For example, it seems that the doctor should have told Toombes’ mother about folic acid, but, assuming her life is worth living, it’s also hard to see how Toombes is wronged by his not doing so, since that’s why she exists.

The non-identity problem also has implications for many other real-world questions. For instance, if we enact sustainable environmental policies, perhaps future generations will be better off. But those generations will also consist of different people: the butterfly effect of different policies means that different people will meet, marry, and conceive at different times. So if we instead live unsustainably, the people inhabiting the resulting resource-depleted future would never have existed under sustainable policies. Provided those (different) people have lives worth living, it may be hard to see why living unsustainably would be wrong.

(It might be plausible that the doctor wronged Toombes’ mother, whose existence doesn’t depend on his actions. But wrongs against currently-existing people may not be able to explain the wrong of the unsustainable environmental policy, provided the bad effects won’t show up for a long time. Some unsustainable policies might only help current people, by allowing them to live more comfortably. And anyway, the court thought Toombes was also wronged: she’s getting the damages.)

Because it is relevant to important questions like this, it would be very handy to know what the solution to the non-identity problem is. Unfortunately, all solutions have drawbacks.

An obvious possibility is to say that we should make the world as good as possible. Since well-being is good, all else being equal, we would be obligated to make sure that better-off people exist in the future rather than worse-off ones. But the decision of the court was that the doctor wronged Toombes herself, not just that he failed to make the world as good as possible: if that were the problem, he should have been ordered to pay money to some charity that makes the world as good as possible, rather than paying money to Toombes. And anyway, it isn’t obvious that we’re obligated to make sure future generations contain as much well-being as possible. One way to do that is by having happy children. But most people don’t think we’re obligated to have children, even if, in some cases, that would add the most happiness to the world on balance.

Another possibility is to say that we can wrong people without harming them. Perhaps telling comforting lies is like this: here, lying prevents a harm, but can still be wrong if the person has a right to know the painful truth. Perhaps individuals have a right against being caused to exist under certain sorts of difficult conditions. But notice that we can usually waive rights like this. If I have a right to the painful truth, I can waive this right and ask you not to tell me. People who haven’t been born yet can’t waive rights (or do anything else). But when people are not in a position to waive a right, we can permissibly act based on whether we think they would or should waive the right, or something like that. You have a right to refuse to have your legs amputated. But if paramedics find you unconscious and must amputate your legs to save your life, they’ll probably do it, since they figure you would consent if you could. Why not think that, similarly, future people whose lives are worth living generally would or should consent to the only course of action that can bring them into being, even if their lives are difficult in some ways?

A third solution says that Toombes’ doctor didn’t act wrongly after all, and that neither would we act wrongly by being environmentally unsustainable, and so on. But that’s very hard to believe. It’s even harder to believe in other cases. Here’s a case inspired by the philosopher Gregory Kavka. Suppose my spouse and I sign a contract to sell our (not yet conceived) first child into slavery. Because of the deal, we conceive a child under slightly different circumstances than we otherwise would have, resulting in a different child. (Maybe the slaver gives us a special hotel room.) There’s no way to break the contract and keep our child from slavery. Suppose the child’s life is, though difficult, (barely) worth living. This solution appears to suggest that signing the slave contract is permissible: after all, the child has a life worth living, and wouldn’t exist otherwise. But that doesn’t seem right!

I wrote more about this in chapter eight of this book. There are other possible moves, but they have problems, too. So the non-identity problem is a real head-scratcher. Maybe someone reading this can make some progress on it.

Losing Ourselves in Others

The end of the year is a time when people often come together in love and gratitude. Regardless of religion, many gather to share food and drink or perhaps just to enjoy one another’s company. It’s a time to celebrate the fact that, though life is hard and dangerous, we made it through one more year with the help of kindness and support from one another.

Of course, this is why the end of the year can also be really hard. Many people didn’t survive the pandemic and have left enormous voids in their wake. Even for families and friend groups who were lucky enough to avoid death, many relationships didn’t survive.

Deep differences of opinion about the pandemic, race, and government have created chasms of frustration, distrust, and misunderstanding. If this accurately describes relationships between people who care deeply for one another, such rifts are even less likely to be resolved among casual acquaintances, or among members of our communities whom we come to know only through our attempts to create social policy. This time of year can amplify our already significant sense of grief, loss, and loneliness: the comfort of community is gone, and we feel what is missing acutely. How ought we to deal with these differences? Can we deal with them without incurring significant changes to our identities?

Moral philosophy throughout the course of human history has consistently advised us to love our neighbors. Utilitarianism tells us to treat both the suffering and the happiness of others impartially — to recognize that each sentient being’s suffering and happiness deserves to be taken seriously. Deontology advises us to recognize the inherent worth and dignity of other people. Care ethics teaches us that our moral obligations to others are grounded in care and in the care relationships into which we enter with them. Enlightenment moral philosophers like Adam Smith have argued that our moral judgments are grounded in sympathy and empathy toward others. We are capable of imaginatively projecting ourselves into the lives and experiences of other beings, and that provides the grounding for our sense of concern for them.

Moral philosophers have made fellow-feeling a key component in their discussions of how to live our moral lives, yet we struggle (and have always struggled) to actually empathize with fellow creatures. At least one challenge is that there can be no imaginative projection into someone else’s experiences and worldview if doing so is in conflict with everything a person cares about and with the most fundamental things with which they identify.

“Ought implies can” is a contentious but common expression in moral philosophy. It suggests that any binding moral obligation must be achievable; if we ought to do something, then we realistically can do the thing in question. If you tell me that I ought to have done more to end world hunger, for instance, that implies that it was possible for me to have done more to end world hunger (or, at least, that you believe that it was possible for me to have done so).

But there are different senses of “can.” One sense is that I “can” do something only if it is logically possible. Or, perhaps, I “can” do something only if it is metaphysically possible. Or, in many of the instances that I have in mind here, a person “can” do something only if it is psychologically possible. It may be the case that empathizing with one’s neighbor, even in light of all of the advice offered by wise people, may be psychologically impossible to do, or close to it. The explanation for this has to do with the ways in which we construct and maintain our identities over time.

Fundamental commitments make us who we are and make life worth living (when it is). In fact, the fragility of those commitments, and thus the fragility of our very identities, leads some philosophers to argue that immortality is undesirable. In his now-famous paper “The Makropulos Case: Reflections on the Tedium of Immortality,” Bernard Williams describes a scene from The Makropulos Affair, an opera by the Czech composer Leoš Janáček. The main character, Elina, is given the opportunity to live forever: she just needs to keep taking a potion to extend her life. After many, many years of living, she decides to stop taking the potion, even though she knows that if she does so she will cease to exist. Williams argues that anyone who takes such a potion, anyone who chooses to extend their life indefinitely, would either inevitably become bored or would change so much that they lose their identity: though they continue to live, they would cease to be who they once were.

One of the linchpins of Williams’ view is that, if a person puts themselves in countless different circumstances, they will take on desires, preferences, and characteristics that are so unlike the “self” that started out on the path that they would become someone they no longer recognize. One doesn’t need to be offered a vial of magical elixir to take on the potential for radical change — one has simply to take a chance on opening oneself up to new ideas and possibilities. To do so, however, is to risk becoming unmoored from one’s own identity — to become someone that an earlier version of you wouldn’t recognize. While it may frustrate us when our friends and loved ones are not willing to entertain the evidence that we think should change their minds, perhaps this shouldn’t come as a surprise — we sometimes see change as an existential threat.

Consider the case of a person who takes being patriotic as a fundamental part of their identity. They view people who go into professions that they deem protective of the country, such as police officers and military members, as heroes. If they belong to a family that has long held the same values, they may have been habituated into these beliefs from an early age. Many of their family members may work in such professions. If this person were asked to entertain the idea that racism is endemic in the police force, even in the face of significant evidence, they may be unwilling, and perhaps actually unable, to do so. Merely considering such evidence might be experienced, consciously or not, as a threat to their very identity.

The challenge we face here is more significant than the word “bias” might suggest. Many of these beliefs reflect people’s categorical commitments, the kind they would sooner die than give up.

None of this is to say that significant changes to fundamental beliefs are impossible — such occurrences are often what philosophers call transformative experiences. That language is telling. When we are able to entertain new beliefs and attitudes, we express a willingness to become new people. This is a rare enough experience to count as a major plot point in a person’s life.

This leaves us with room for hope, but not, perhaps, for optimism. Events of recent years have laid bare the fundamental, identity-marking commitments of friends, family, and members of our community. Reconciling these disparate commitments, beliefs, and worldviews will require nothing less than transformation.

Resurrecting James Dean: The Ethics of CGI Casting

James Dean, iconic star of Rebel Without a Cause, East of Eden, and Giant, died in a tragic car accident in 1955 at the age of 24. Nevertheless, Dean fans may soon see him in a new role: as a supporting character in the upcoming Vietnam-era film Finding Jack.

Many people came out against the casting decision. Among the most noteworthy were Chris Evans and Elijah Wood. Evans tweeted, “This is awful. Maybe we can get a computer to paint us a new Picasso, or write a couple new John Lennon tunes. The complete lack of understanding here is shameful.” Wood tweeted, “NOPE. this shouldn’t be a thing.”

The producers of the film explained their decision. Anton Ernst, who is co-directing the film, told The Hollywood Reporter they “searched high and low for the perfect character to portray the role of Rogan, which has some extreme complex character arcs, and after months of research, we decided on James Dean.”

Supporters of the casting decision argue that the use of Dean’s image is a form of artistic expression. The filmmakers have the right to create the art that they want to create. No one has a right to appear in any particular film, and artists can use whatever medium they like to create the work that they want to create. Though it is true that some people are upset about the decision, there are others who are thrilled. Even many years after his death, there are many James Dean fans, and this casting decision appeals to them. The filmmakers are making a film for this audience, and it is not reasonable to say that they can’t do so.

Many think that the casting of a CGI Dean is a publicity stunt. That said, not all publicity stunts are morally wrong; some are perfectly acceptable, even clever. Those who are concerned with the tactic as a stunt may feel that the filmmakers are being inauthentic: the filmmakers claim that their motivation is to unpack the narrative in the most effective way possible, but they are really just trying to sell movie tickets. The filmmakers may rightly respond: what’s wrong with trying to sell movie tickets? That’s the business they are in. Some people might value authenticity for its own sake. Again, however, the filmmakers can make the art that they want to make. They aren’t required to value authenticity.

Those opposed to the casting decision would be quick to point out that an ethical objection to the practice need not also be a legal objection. It may well be true that filmmakers should be free to express themselves through their art in whatever way they see fit. However, the fact that artists can express themselves in a particular way doesn’t entail that they should engage in that kind of expression. CGI casting, and the casting of a deceased person in particular, poses a variety of ethical problems.

One metaethical question posed by this case is whether it is possible to harm a person after they are dead. One potential harm concerns consent. If Dean were alive today, he could decide whether he wanted to appear in the film or not. His estate gave permission to the production company to use Dean’s likeness, but it is far from clear that it should be able to do so. It is one thing for an estate to retain ownership of the work that an artist made while living; it is reasonable to believe that the fruits of that artist’s labor can be used to benefit their family and loved ones after the artist is dead. The idea that an artist’s family is in a position to agree to new art being created using the artist’s likeness requires further ethical defense.

A related argument has to do with artistic expression as a form of speech. Often, the choices that an actor makes when it comes to the projects they take on are expressions of their values. Dean may not have wanted to participate in a movie about the Vietnam War. Some claim that Dean was a pacifist, so the message conveyed by the film may not be one that Dean would endorse. Bringing back James Dean through the use of CGI forces Dean to express a message he may not have wanted to express. On the other hand, if Dean no longer exists, it may make little sense to say that he is being forced to express a message.

Another set of arguments has to do with harms to others. There are many talented actors in the world, and most of them can’t find work. Ernst’s claim that they simply couldn’t find a living actor with the range to play this character is extremely difficult to believe. Filmmaking as an art form is a social enterprise. It doesn’t happen in a vacuum—there are social and political consequences to making certain kinds of artistic choices. Some argue that if filmmakers can cast living actors, they should.

There is also reason for concern that this casting choice sets a dangerous precedent, one that threatens to destroy some of the things that are good about art. Among other things, art is a way for us to understand ourselves and to relate to one another. This happens at multiple levels, including the creation of the art and the interpretation of its message. Good stories about human beings should, arguably, be told by human beings. When a character is computer generated, it might sever that important human connection. Some argue that art is not art at all if the intentions of an artist do not drive it. Even if the person creating the CGI makes artistic decisions, an actor isn’t making those decisions. Some argue that acting requires actors.

The ethical questions posed here are just another set that falls under a more general ethical umbrella. As technology continues to improve in remarkable and unexpected ways, we need to ask ourselves: which jobs should continue to be performed by living human beings?

The Ethics of Human Head Transplants Explored: Part Two

In a previous post, I explored several ethical questions arising out of the work of renegade surgeons pushing to conduct the first “human head transplant.” One remaining but intriguing conundrum concerns the identity of the person who would emerge from the transplant, should it prove successful. The radical nature of the surgery casts some doubt on who is legitimately the “donor” and who the “recipient.” The surgery is commonly referred to as a “human head” transplant, possibly because we are used to seeing small, discrete organs as the objects of donation. However, this moniker seems to get things backwards, at least in terms of how the surgeons and potential participants understand the surgery: it is the original owner of the head who is understood to be receiving a new body as a donated organ. Thus, the surgery should go by the name “whole body” transplant.

Ethnic Identity in America: Remembering the Ni’ihau Incident

The Island of Ni’ihau is a recluse. Only the island’s inhabitants, along with a few fortunate individuals from outside Ni’ihau, are allowed to leave and return as they please. This 70-square-mile plot of land near the center of the Pacific Ocean is Hawaii’s westernmost island, and it lacks roads, Internet, and even indoor plumbing. Ni’ihau hosts approximately 130 permanent residents, all of whom live in isolation and without modern conveniences in an effort to preserve the native culture of Hawaii. The island was sold by King Kamehameha V in 1864 to the Scottish plantation owner Elizabeth Sinclair, who promised to keep Hawaiians “as strong in Hawaii as they are now.” Despite the residents’ conversion to Christianity, the introduction of a few modern technologies, and some of the younger islanders learning English, the local culture, along with the native Hawaiian language, has successfully persisted.

All this was jeopardized, however, in the days following the Japanese attack on Pearl Harbor in 1941.

Reckoning with the Legacy of Derek Parfit

Philosopher Derek Parfit died on January 1st. Let us hope he will go to heaven. Will he? Parfit, who was an agnostic, was not much concerned with the existence of heaven or hell. But he did famously argue that, even if such places exist, the person going there would not be the same person who previously died. Thus, someone would be punished or rewarded for the deeds of another person. This is deeply unjust, as unfair as sending someone to prison for crimes committed by his identical twin brother.

Masculinity Across Sports

Asked to conjure up the perfect image of masculinity, most people imagine the typical high school jock. He plays football, basketball, ice hockey, or a similarly hypermasculine sport. Rarely does a runner, a swimmer, or any athlete from this “second tier” of masculinity in sports come to mind. By assigning masculinized predispositions to certain sports, could the conversation surrounding masculinity become skewed from a young age? If so, this would certainly create a problematic discourse around certain sports and limit the space for LGBTQ+ communities to have a voice within this realm.

Identity and Pluralism in Merkel’s Call to Ban the Veil

Germany would be far from the first country to ban the veil. France became the first Western European country to do so in 2011, with the administration justifying the fines imposed on women who leave the house with their faces covered by arguing that the veil is a vehicle for the oppression of women.

Despite the fact that a number of countries in Europe, including the Netherlands, Italy, Belgium, and Switzerland, have some sort of legal restriction on the wearing of headscarves, this is the first time the prospect of a federal ban on the full veil has been raised in Germany (though half of Germany’s states have banned teachers from wearing headscarves since a Constitutional Court case in 2003). That Germany’s leadership has joined the movement against Muslim headscarves speaks to a shift in approach to what has long been a thorny question: how can a nation balance a liberal respect for pluralism and the autonomy of its citizens while at the same time preserving a national identity?

Merkel said in her speech defending the idea of the ban: “The full veil is not appropriate here, it should be forbidden wherever that is legally possible. It does not belong to us.” Her move towards banning the veil has widely been taken as a tactic to mitigate the negative response to her allowing hundreds of thousands of migrants to enter Germany in the wake of the migrant crises of recent years. What it means to be German is implicated in the discussion, and the public display of practices that are interpreted as “foreign” is less than welcome in the current climate. Thus, the proposal of a ban on headscarves will likely help Merkel gain the support of constituents who have been less than pleased with her handling of the migrant crisis and its effects on the economy and other aspects of German life.

Bans like these bring out the tension between the formal and substantive values underlying modern liberal societies. On the one hand, there is a commitment to allowing people to live as they wish: the value of protecting civil liberties and individual autonomy, which is a formal value (it does not implicate any particular value systems or commitments to promote). On the other hand, there is a commitment to promoting something resembling a national identity, values which would be German, or French, or British, or American, which would be substantive.

This formal commitment we can call a commitment to respecting autonomy, or the value of pluralism. It is the value of respecting an individual’s autonomy in shaping her life according to her values, especially in significant areas of life. Practices and choices regarding child-rearing, partner selection, educational strategies, meals and dietary customs, burial and worship, and so on shape the meaning and significance of our lives. Crucially, at the root of this commitment is the notion that there are multiple reasonable value systems that could shape a good life; therefore, a society that respects all individuals must acknowledge that they can arrive at different ideas about how to live. Given a full set of human competences, well-informed people can disagree as to what will constitute a life well-lived.

It can be important to one family to raise children in an authoritarian, achievement-focused manner. In another household, particular eating practices may be highly significant. Expressions of religious belief and worship vary from diet to clothing to structural family choices. These practices are ways in which the world and life make sense to us and become meaningful, and in the last few hundred years especially, societies have trended towards more pluralist approaches to governance, in which citizens can hold a variety of value systems and still fully participate in government and society.

France has embraced a further value: a deep separation of church and state, in other words, a commitment to secularism. Public spaces are meant to be free from “conspicuous” displays of religious expression. For instance, displays of religious expression in public schools have been banned since 2004, and this restriction has been met with wide public support: BBC reports, “Most of the population – including most Muslims – agree with the government when it describes the face-covering veil as an affront to society’s values.” The justification for these restrictions is largely framed as an appeal to what it means to be “French,” and the substantive values that come with citizenship. This is a move away from pluralism, a move towards the promotion of particular nationalist values.

The commitments here are distinct from a commitment to pluralism and a respect for autonomy, for expressions of differing values (specifically, values that arise from religious commitment) in certain public places are outlawed. It is telling that, along with a fine, the sanction for wearing the full veil includes taking a class on citizenship. Although France is home to the largest Muslim minority in Western Europe, to be a proper citizen of France one is legally prohibited from expressing this religion in particular ways in particular places.

The value of pluralism is underwritten by the notion that there are multiple ways of living that may be equally valid, or at least that are not inherently wrong. With a commitment to secularism, France avoids saying that these practices are wrong, and instead says they are not appropriate for the public sphere (though President Sarkozy, who was in power and behind the ban on full veils, cited the oppressive nature of the veil at the time). Attempting to outline appropriate behavior for the public sphere while maintaining that individuals can live according to their values in private gives priority to the commitment to secularism over pluralism, which is relegated to a particular sphere of life.

A commitment to secularism could be underwriting the current move in Germany, with the foreign minister citing the veil’s inherent conflict with Germany’s “open society,” and Merkel claiming that the veil “does not belong to us.” The foreign minister seems to appeal to formal values of German society (values that wouldn’t favor one religious or secular value system over another) when he mentions the importance of communication: “Showing the face is a constituent element for our communication, the way we live, our social cohesion. That is why we call on everyone to show their face.” This would suggest that the issue isn’t the expression of a religious faith that is foreign to Germany, or an oppression inherent in wearing the garment, but rather that this specific instance of religious expression conflicts with a formal commitment Germans hold: if someone is covering their face, the suggestion is, communication is undermined. (Our ability to communicate successfully via a plethora of digital media would seem to be a counterexample to this appeal to the necessity of seeing the face of our interlocutor.)

The restriction of public expression of personal adherence to a value system has been justified on a variety of grounds recently in Western Europe. Whatever is taken to justify it, the case must be weighed against the nation’s commitment to pluralism, and the extent to which the nation wishes to preserve that value as part of its national identity.

Evaluating Climate Change’s Post-Election Importance

Former Vice President Al Gore is making headlines after his meeting with President-elect Donald Trump on December 5th. After the meeting, he sat for an interview with the Guardian about their conversation and the election in general. In the interview, Gore stated that, for the sake of the environment, we do not have “time to despair” over the results of the election and that “despair is just another form of denial.” Gore is known for his environmental activism, most notably his documentary “An Inconvenient Truth,” which highlighted the urgency of climate change in 2006. Gore even won the 2007 Nobel Peace Prize for his efforts to combat climate change. Though many environmentalists might agree with Gore, are his statements acceptable? Is it fair to compare grief over the election to denial? And is Gore failing to recognize the marginalized identities that are at stake as a result of the election?

Cannons in a Quiet Park

On any other day, Belgrade’s Kalemegdan Park would have been relatively peaceful, filled with people taking walks, groups of tourists, and teenagers meeting their friends. Yet today a large crowd had gathered at the edge of the park, at an overlook above the Sava river. Just finishing a political tour of the city, my group and I joined them. In the middle of the crowd stood a cluster of soldiers, some in ornamental dress, others in camouflage, with a brass band to their left. To their right stood a group of politicians in dark suits, and in the middle of it all, half a dozen cannon barrels silhouetted against the sunset.
