
Great Man Syndrome and the Social Bases of Self-Respect

black and white photograph of David statue with shadow on wall

“Am I good enough?” “Was someone else smarter or more talented than me?” “Am I just lazy or incompetent?” You might find these thoughts familiar. It’s an anxiety that I have felt many times in graduate school, but I don’t think it’s a unique experience. It seems to show up in other activities, including applying for college and graduate school, pursuing a career in the arts, vying for a tenure-track academic job, and trying to secure grants for scientific research. This anxiety is a moral problem, because it can perpetuate imposter syndrome – feelings of failure and a sense of worthlessness when neither is warranted.

The source of this anxiety is something that I would like to call “great man syndrome.” The “great man” could be a man, woman, or non-binary person. What is important is the idea that there are some extra-capable individuals who can transcend the field through sheer force of innate ability or character. Gender, race, and other social categories matter for understanding social conceptions of who has innate ability and character, which can help to explain who is more likely to suffer from this angst, but “great man syndrome” can target people from any social class.

It functions primarily by conflating innate ability or character with professional success, where that professional success is hard to come by. For those of us whose self-conceptions are built around being academics, artists, scientists, or high achievers and whose professional success is uncertain, “great man syndrome” can generate uncertainty about our basic self-worth and identity. On the other hand, those who achieve professional success can easily start to think that they are inherently superior to others.

What does “great man syndrome” look like, psychologically? First, in order to continue pursuing professional success, it’s almost necessary to be prideful and think that because I’m inherently better than others in my field in some way, I can still achieve one of the few, sought-after positions. Second, the sheer difficulty and lack of control over being professionally recognized creates constant anxiety about not producing enough or not hitting all of the nigh-unattainable markers for “great-man-ness.” Third, these myths tie our sense of well-being to our work and professional success in a way that is antithetical to proper self-respect. This results in feelings of euphoria when we are recognized professionally, but deep shame and failure when we are not. “Great man syndrome” negatively impacts our flourishing.

My concept of “great man syndrome” is closely related to Thomas Carlyle’s 19th century “great man theory” of history, which posits that history is largely explained by the impacts of “great men,” who, by their superior innate qualities, were able to make a great impact on the world. There are several reasons to reject Carlyle’s theory: “great men” achieve success with the help of a large host of people whose contributions often go unrecognized; focusing on innate qualities prevents us from seeing how we can grow and improve; and there are multiple examples of successful individuals who do not have the qualities we would expect of “great men.”

Even if one rejects “great man theory,” it can still be easy to fall into “great man syndrome.” Why is this the case? The answer has to do with structural issues common to the fields and practices listed above. Each example I gave above — scientific enterprises, artistic achievement, higher educational attainment, and the academic job market — has the following features. First, each of these environments is highly competitive. Second, they contain members whose identities are tied up with that field of practice. Third, if one fails to land one of the scarce, sought-after positions, there are few alternative methods of gainful employment that allow one to maintain that social identity.

The underlying problem that generates “great man syndrome” isn’t really the competition or the fact that people’s identities are tied up with these pursuits; the problem is that there are only so many positions within those fields that ensure “the social bases of self-respect.” On John Rawls’s view, “the social bases of self-respect” are aspects of institutions that support individuals by providing adequate material means for personal independence and giving them a secure sense that their aims and pursuits are valuable. To be recognized as equal citizens, people need to be structurally and socially supported in ways that promote self-respect and respect from others.

This explains why “great man syndrome” strikes at our basic self-worth — there are only so many positions that provide “the social bases of self-respect.” So, most of the people involved in those pursuits will never achieve the basic conditions of social respect so long as they stay in their field. This can be especially troubling for members of social classes that are not commonly provided “the social bases of self-respect.” Furthermore, because these areas are intrinsically valuable and tied to identity, it can be very hard to leave. Leaving can feel like failing or giving up, and those who point out the structural problems are often labeled as pessimistic or failing to see the true value of the field.

How do we solve this problem? There are a few things that we as individuals can do, and that many people within these areas are already doing. We can change how we talk about the contributions of individuals to these fields and emphasize that we are first and foremost engaged in a collective enterprise which requires that we learn from and care for each other. We can reaffirm to each other that we are worthy of respect and love as human beings regardless of how well we perform under conditions of scarcity. We can also try to reach the halls of power ourselves to change the structures that fail to provide adequate material support for those pursuing these aims.

The difficulty with these solutions is that they do not fundamentally change the underlying institutional failures to provide “the social bases of self-respect.” Some change may be effected by individuals, especially those who attain positions of power, but it will not solve the core issue. To stably ensure that all members of our society have the institutional prerequisites needed for well-being, we need to collectively reaffirm our commitment to respecting each other and providing for each other’s material needs. Only then can we ensure that “the social bases of self-respect” will be preserved over time.

Collective action of this kind itself undermines the core myth of “great man syndrome,” as it shows that change rests in the power of organization and solidarity. In the end, we must build real political and economic power to ensure that everyone has access to “the social bases of self-respect,” and that is something we can only do together.

Should Clinicians Have Soapboxes?

blurred photograph of busy hospital hallway

Despite the tendency to talk about the pandemic in the past tense, COVID-19 hasn’t gone away. Infection rates in multiple countries are swelling, prompting some – like Kenya, Austria, the Netherlands, and Belgium – to employ increasingly stringent measures. Unsurprisingly, alongside increasing infection rates comes an increase in hospital admissions. Yet, there’s one trait that most of those requiring COVID-19 treatment share – they’re unvaccinated.

This trend isn’t surprising given that one of the points of vaccination is to reduce the seriousness of the infection, thus reducing the need for serious medical interventions. Simply put, vaccinated people aren’t ending up in hospitals as often because they’re vaccinated. The people who haven’t been vaccinated, for whatever reason, are more likely to have severe complications if infected, thus needing clinical care. So far, so simple.

This tendency for hospital beds to be occupied by the unvaccinated invites questions regarding the burden on healthcare systems. After all, emergency care services are better placed to respond to emergencies – like bus crashes, heart attacks, or complicated births – when their wards, ambulances, and hallways aren’t already occupied by patients. If those patients are there because of their choice not to be vaccinated, it’s only natural to wonder whether they are equally deserving of that resource-use.

But is it appropriate for those working in the medical profession to voice such concerns? If you’re in the hospital seriously ill, does it help to know that your nurse, doctor, consultant, or porter may resent your being there?

This question’s been brought to the forefront of the COVID-19 discussion because of a recent Guardian article entitled “ICU is full of the unvaccinated – my patience with them is wearing thin.” In it, an anonymous NHS respiratory consultant writes, “I am now beaten back, exhausted, worn down by the continuous stream of people that we battle to treat when they have consciously passed up the opportunity to save themselves. It does make me angry.” Similar sentiments appear in The New Yorker article “Treating the unvaccinated,” where critical care physician Scott Aberegg recounts:

There’s a big internal conflict… On the one hand, there’s this sense of ‘Play stupid games, win stupid prizes.’ There’s a natural inclination to think not that they got what they deserved, because no one deserves this, but that they have some culpability because of the choices they made… When you have that intuition, you have to try to push it aside. You have to say, [t]hat’s a moral judgment which is outside my role as a doctor. And because it’s a pejorative moral judgment, I need to do everything I can to fight against it. But I’d be lying if I said it didn’t remain somewhere in the recesses of my mind. This sense of, Boy, it doesn’t have to be this way.

It’s not surprising that clinicians feel this way. They’ve seen the very worst this pandemic has to offer. The prospect that any of it was avoidable will undoubtedly stir up feelings of anger, betrayal, or even injustice; clinicians are, after all, only human. While expecting clinicians not to have such opinions seems like an impossible demand, should they be voicing them on platforms with such a broad reach?

On the one hand, the answer is yes. Entering the medical professions in no way invalidates one’s right to free speech, be that in person or print. Much like how any other member of the public can pen an article in an internationally respected newspaper if invited, clinicians have the right to share their views. If that view concerns their increasing inability to accept the preventable loss of life, then, at least in terms of that clinician’s rights, there is very little to stop them ethically. To try would be to revoke a privilege which many of us would likely consider to be fundamental and, without a robust justification, unassailable.

However, those experiencing the pandemic’s horrors may have more than just a right to share their opinions; they might have a duty. Those working on the frontlines in the battle against the pandemic know better than most the state of the healthcare services, the experience of watching people die from the illness, and the frustration of having to cope when so much of it seems preventable. Given that they have this unique knowledge, both from a medical and a personal standpoint, it would seem that clinicians have a responsibility to be as honest with the general public as possible. If that means sharing their woes and frustrations about the reluctance of people to take even the most basic steps to save themselves, then so be it. After all, if they don’t tell us this information, it seems unlikely that anyone else will.

But, such a principled stance may detrimentally affect trust in the healthcare system, and subsequently, that system’s effectiveness.

As The Prindle Post has recently explored, shame is a complex phenomenon. Its use in trying to shape people’s behaviors is far from simple. This complexity has been seen in several previous public health concerns where shame has had the opposite of its intended effect. As both The Wall Street Journal and NPR have recently reported, shame makes for a terrible public health tool, as it deters engagement with clinicians. If you believe that you’re going to be shamed by your doctor, you’re probably less likely to go. For smokers and alcoholics, this chiefly harms a single person’s health. During a global pandemic, however, it means there’s one more potentially infectious person not receiving medical care. Scaled up, this can easily result in countless people refusing to visit hospitals when they need to – increasing infection rates and preventing medical assistance from getting to those that need it.

All this is not to say that doctors, nurses, surgeons, and countless others involved in the care of the vulnerable should be automatons, devoid of emotion and opinion about the unvaccinated. Again, they’re human, and they’re going to have thoughts about what they see during the course of their professional careers. But whether those opinions should be broadcast for the entire world to read and see is an entirely different question.

Fair Shares and COVID-19 Booster Shots

photograph of COVID vaccination in outdoor tent

Arguments abound regarding the moral importance of receiving the COVID-19 vaccine. Beyond the obvious health benefits for the vaccinated individual, herd immunity remains the most effective way to stop the spread of the virus, limit the development of more deadly variants, and – most importantly – save lives. In fact, it may very well be the case that these reasons go so far as to provide us with a moral duty to get vaccinated so as not to treat others unfairly and, therefore, immorally. Given all of this, it would seem then that the morality of receiving a third ‘booster’ dose of the vaccine is simple. Unfortunately, ethics is rarely that straightforward.

Currently, 7.54 billion doses of the COVID-19 vaccine have been administered globally, with 52.2% of the world’s population having now received at least one dose. In the U.S., close to 60% of the population have been fortunate enough to receive two doses of the vaccine, with the CDC now recommending a third dose for certain vulnerable portions of the population. Colorado, California, New Mexico, New York, and Arkansas have gone further than this by approving booster doses for all residents over the age of 18.

Yet, at the same time, only 4.6% of people in low-income countries have received their first dose of the vaccine, with this number dropping to less than one percent in countries such as Chad and Haiti. The reasons for this are many, but one of the largest contributing factors has been affluent countries pre-ordering more doses than they require to fully vaccinate their population. The U.S., for example, has pre-ordered twice as many vaccines as they need, the U.K. has purchased four times as many, and Canada has secured a whopping five times as many doses as would be required to provide a double dose of the vaccine to every one of their residents. These orders are still being filled, and – until they are – many poorer nations are left to wait to receive even their first dose of the vaccine. As a result, the World Health Organization has called on countries to issue a moratorium on providing COVID-19 booster shots until every country is able to vaccinate at least 10% of its population.

Essentially, this matter boils down to the unjust distribution of limited resources – with some countries taking far more than their ‘fair share’ of the vaccine, and leaving others without nearly enough. This has become a fairly common moral issue lately – underpinning problems surrounding everything from toilet paper, to gasoline, to carbon emissions.

There are many reasons why it’s wrong to take more than your fair share of a limited resource. On top of these more general concerns with just allocations, there are ethical issues specific to the case of vaccines. For one, we might claim that we have strong moral reasons to maximize the good. While an initial vaccine dose will grant around 90% immunity to the recipient, using that same dose as a booster will instead grant only a 10% increase in protection. Put simply, a single COVID-19 vaccine dose will do far more good given to an unvaccinated individual than to someone who has already received two previous doses. There are pragmatic concerns too. Unvaccinated populations provide opportunities for the virus to mutate into more virulent strains – strains that undercut vaccination efforts everywhere else in the world.

So let’s suppose that there’s a good case to be made for the fact that countries have done something wrong by taking far more than their fair share of the COVID-19 vaccine, and that the vaccine stock used by affluent nations to provide third booster shots is what we might call an “ill-gotten gain.” What does this mean for us, as individuals? Do we have a moral obligation to refrain from receiving a booster shot until more people – especially those in poorer nations – have managed to at least receive their first dose?

If we think that our resources should go where they’ll do the most good, then the answer may very well be “yes.” This approach is precisely the same as a very famous argument for our moral obligation to donate money to the poor. While buying that Starbucks Double Chocolaty Chip Crème Frappuccino might bring me a modicum of joy, donating that same amount of money could do far more for someone living in absolute destitution. In the same way, while an additional COVID-19 vaccine – used as a booster – will bring me a small benefit, it could do far more for someone else if used as an initial vaccine.

Of course, this argument assumes that by refusing a booster shot, my vaccine dose will instead be sent where it’s more needed. But it turns out it’s notoriously difficult to donate unused COVID vaccines, with some U.S. states already throwing away tens of thousands of unused doses. Suppose, then, that these booster shots are bought-and-paid-for, and that refusing these boosters will not see them go to those who are more in need. What, then, are our obligations regarding these ill-gotten gains?

A thought experiment may help in this situation. Suppose that we were currently suffering through a severe water shortage, and that the government sent out a limited supply of water tankers to alleviate people’s suffering. Your town’s tanker arrives, and everyone receives a reasonable allowance of water. In a shockingly unscrupulous turn of events, however, your town’s local officials hijack and claim the tanker destined for the next town over, parking it on the main street and telling residents to come and help themselves. Whatever water isn’t taken, they claim, will merely be dumped. What should you do? You don’t agree with how this water was obtained, but you also know that if you don’t use it, it’ll only go to waste anyway. You already have enough water to survive, but your plants are looking a little brown and your car could really use a good wash. It seems that, in a circumstance like this, you have every reason to make use of this ill-gotten gain. We have an obligation to maximize the good, and since the harm (depriving others of this vital resource) has already been done, some good might as well come of it, no?

Perhaps. But it is in cases like this that it becomes important to distinguish between maximizing the good in a particular case, and maximizing the good over the long run. While I may have everything to gain from enjoying this stolen water, I don’t stand to benefit from a society in which one town steals vital resources from another. And the same may be true of vaccine booster shots. A global society in which affluent nations overbuy and hoard life-saving resources is one that, in the long-run, will create more harm than good – particularly where this kind of behavior only serves to prolong and worsen a crisis (like the pandemic) for the entire global population. By refraining from taking the COVID-19 booster – at least until those in poorer nations have had the opportunity to receive their initial vaccine – we send a clear message to our governments that we will not partake in ill-gotten gains.

What Toilet Paper Can Teach Us About Climate Change

photograph of empty toilet paper rolls stacked

One of the stranger parts of the COVID-19 pandemic has been people’s sudden obsession with bathroom sanitation. While there was never any pandemic-related disruption to the supply chain, nor any real risk that even the strongest lockdown measures would prevent people from buying essential groceries, many found themselves overcome by a desperate need to panic-buy vast quantities of toilet paper. Ultimately, this created a self-fulfilling prophecy in which paranoid hoarding led to the very shortage that had been feared. A similar scenario played out earlier this year when a cyberattack on Colonial Pipeline led to gasoline shortages throughout the East Coast. Panic-buying ensued once again, with individuals stockpiling vast quantities of fuel and further exacerbating an already struggling supply line.

Many of us might have the intuition that hoarding of this kind is wrong. But why? There are many ways we might try to determine the moral rightness or wrongness of an action. One of the simplest is to see whether it causes harm to others. But that’s not hugely helpful here. Suppose I hold a one-hour exam information session for my class of sixty students. In order to be fair, each student is given one minute in which to ask any questions they might have. Suppose, then, that one student ignores this guideline, and instead monopolizes a total of two minutes for her queries. It seems wrong of her to do this. But why? It’s not clear that her actions harm her fellow classmates. The extra minute she takes only subtracts slightly more than a second from each of their times – hardly enough to make an appreciable difference.
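The arithmetic behind the exam example can be checked with a quick sketch (a minimal illustration using only the numbers given above; the variable names are my own):

```python
# One-hour session shared fairly among 60 students: 60 seconds each.
total_seconds = 60 * 60
students = 60
fair_share = total_seconds / students  # 60 seconds per student

# One student takes two minutes instead of one. Her extra 60 seconds
# must come out of the remaining 59 students' time.
extra_seconds = 60
loss_per_classmate = extra_seconds / (students - 1)

print(fair_share)                      # 60.0
print(round(loss_per_classmate, 2))    # 1.02 -- "slightly more than a second"
```

The point of the sketch is that the per-classmate loss really is trivial, which is why the harm-based explanation struggles here.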

One way of explaining the wrongness of this student’s action is instead to claim that she is taking more than her fair share. We often find ourselves having to divide a finite resource amongst some group of individuals: time in a meeting, pizza amongst friends, holidays between family members. And in each of these scenarios there is, presumably, a fair way of making that division – one that gives full consideration to the interests of all individuals concerned. Once that allocation has been made, exceeding your fair share is wrong, regardless of whether it results in actual harm to others. This is precisely the kind of approach we might take toward food in a famine and water in a drought – and it explains what’s wrong about taking more than your fair share of toilet paper during a pandemic, too.

For many, the fair share approach may be so obvious as to appear trivial. But it can help inform our approach to far more complicated problems – like climate change. In 2011, nearly all countries agreed to limit the global average temperature rise to no more than 2°C compared to preindustrial levels – the maximum global temperature rise we can tolerate while avoiding the most catastrophic effects of climate change. According to the Intergovernmental Panel on Climate Change, achieving this with a probability of >66% would require us to keep our global carbon expenditure below 2900GtCO2. As of the time of writing, only 605GtCO2 remains. Divided equally amongst the 7.9 billion population of earth, this comes out at a lifetime carbon allowance of 76.6 tonnes of CO2 per person – or around 0.9 tonnes per year over an 85-year lifespan.
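The equal-share division above can be reproduced in a few lines (a back-of-the-envelope sketch using only the figures cited in this paragraph):

```python
# Remaining global carbon budget for a >66% chance of staying under 2°C,
# split equally across the world's population.
remaining_budget_gt = 605      # GtCO2 still available (1 Gt = 1e9 tonnes)
population = 7.9e9             # people
lifespan_years = 85

per_person_tonnes = remaining_budget_gt * 1e9 / population
per_year_tonnes = per_person_tonnes / lifespan_years

print(round(per_person_tonnes, 1))  # 76.6 tonnes CO2 lifetime allowance
print(round(per_year_tonnes, 1))    # 0.9 tonnes per year
```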

Of course, it might be the case that a fair share isn’t necessarily an equal share. Another way of dividing up the carbon budget might be to instead require a proportional reduction in carbon emissions by all emitters. Put another way, this requires that everyone’s emissions peak around 2020, drop 50% by 2045, and fall below zero by 2075. The problematic side of this approach is that it allows historically high emitters to continue to emit at a much greater rate than many others around the world. As such, it provides a far more generous carbon budget for those living in a country like the U.S. According to Carbon Brief, a child born in the U.S. in 2017 will – on this approach – have a lifetime carbon budget of 450 tonnes of CO2, or 5.3 tonnes per year over an 85-year lifespan. By contrast, a child born in the same year in Bangladesh will receive only 4 tonnes of CO2, or 0.05 tonnes per year.

Of course, other factors may come into play in determining what a ‘fair share’ of carbon emissions is for each individual. One such factor is need. Suppose, for example, that I live in a part of the country where the only electricity production I have access to is derived from a coal-fired power plant. In such a case, I might necessitate a higher budget than someone who lives in a location with renewable energy options.

But the precise method by which we determine a fair share of carbon emissions is largely academic. This is because – even on the most generous allocation – we are all still horribly over-budget. In 2019 (the most recent year for which data is available), the per capita carbon emissions of a U.S. citizen was around 16 tonnes of CO2. Ultimately, this means that there is a moral imperative on each of us to do all we can to reduce our future emissions in any way possible. Some actions – like recycling and patronizing public transport – may be easy, but other changes (like the one I suggested in a previous article) may require much greater sacrifice. But without these changes, we – like those who hoarded toilet paper and gasoline – will continue to take far more than our fair share, and subsequently treat others unfairly in the process.

Scarce Goods and Rationalization

photograph of crowded waiting room

A friend of mine recently posted on Facebook asking for insight into “the ethics of (1) getting vaccinated as quickly as possible for the common good and (2) not using privilege to be vaccinated ahead of vulnerable people.”

Many responded with arguments along the lines of, “by getting a vaccine you are contributing to herd immunity, so it is a good thing to do.” Others linked to this New York Times ethics column in which Dr. Appiah argues that the advantage of easy management means that people should get vaccines when they can get them (and not worry too much about whether others might need them more), and further that by getting the vaccine “you are contributing not just to your own well-being but to the health of the community.”

Another friend recently mentioned in a group chat how she was able to get a vaccine that, technically, she did not yet legally qualify for (since Florida is only officially vaccinating K-12 educators, and not college instructors). I demurred, saying it’s important as healthy youngish people to wait our turn, and a third friend argued that even if you are not the ideal person to get the vaccine, you should still get it if you can since more vaccines are better than fewer and you can help protect others by getting vaccinated.

Assessing the Arguments

The Herd Immunity Argument — The thing that unites all these replies is the thought that by getting the vaccine you are helping to protect others. But in these cases, that is probably wrong. I want to be clear. I am not denying that more people being vaccinated contributes to herd immunity. What I am denying is that my friends getting a vaccine contributes to more people being vaccinated.

Right now the vaccines are a scarce good. If I do not get a vaccine, someone else will get that particular injection. As such, in getting a vaccine I have not actually done anything to increase the percentage of the population that is vaccinated, I have simply made sure that I, rather than someone else, am part of that vaccinated percentage.

The Waste Rejoinder — Some commenters on Facebook mentioned that some vaccines go to waste. But for the most part, the vaccine distribution process has sorted itself out. While a good number of vaccines were being wasted in January, we are now in mid-March and the number wasted is utterly tiny in comparison to the number used. The odds that, if you do not get a vaccine, that dose will end up in the trash are extraordinarily small.

So sure, if you happen to be in a situation where the alternative to not getting a vaccine is throwing it away, then get the vaccine. But unless you know that to be the alternative, you should not think that in getting the vaccine you are heroically contributing to solving the problem.

Speed of Distribution — While no one in the threads mentioned this argument, there is something that could be said for skipping the line. Even if someone else would have gotten that same vaccine, it’s possible it would have taken longer for the vaccine to get in someone’s arm. Now, it’s true that at this point the states are not sitting on nearly as large a vaccine stockpile as they were originally. But it is still the case that some vaccines, while not being wasted, are taking longer than ideal to end up in someone’s arm. Indeed, this seems to be happening where I am in Tallahassee.

But the problem is, this was not the situation either of my friends was in. Sure, this situation might be more common than the wasted vaccine situation. But it will still be rare (and indeed, markets are such that this waste usually does not last very long; soon after that article about Tallahassee was published, demand at the site increased).

The Lesson

Now, I don’t want to argue that it is wrong to get the vaccine if you have the chance to do so. Probably sometimes it’s right and sometimes it’s wrong. As is often the case, it all depends on the details.

Instead, I want to suggest that we need to be careful not to convince ourselves that our selfish acts serve an altruistic motive. I think it’s probably ok to be somewhat selfish. It’s reasonable to care more about saving your own life than the life of a stranger (even Aquinas agreed as much). But I think when you are prioritizing your own good over the good of others, it’s important to recognize that that is what you are doing.

So if I get the vaccine perhaps that is ok. But I should recognize that if I get the vaccine someone else will not. I should also recognize that since I am young and healthy, that other person probably would have gotten more value from the protection than I did. The question, as far as altruism goes, is how do I compare to the average person getting a vaccine these days? Am I younger than the average person who would get the vaccine instead of me? Then probably it is better that the other person gets it. Am I healthier than the average person who would get the vaccine instead of me? Then probably it is better that the other person gets it.

The thing is, we have strong biases in favor of rationalizing our own selfish acts. Thus, we often look for reasons to think doing the thing we want is also good in general. This is a very dangerous tendency. People often accept really bad arguments if those arguments help them think well of their own selfish activity. This should scare us, and make us all a little more self-critical about our moral reasoning anytime we come up with plausible reasons for thinking the thing we want to do is also the best thing for the world as a whole. Remember, we all have a tendency to think that way, even when the act is merely selfish.

Life-Life Tradeoffs in the Midst of a Pandemic

photograph of patients' feet standing in line waiting to get tested for COVID

Deciding who gets to live and who gets to die is an emotionally strenuous task, especially for those who are responsible for saving lives. Doctors in pandemic-stricken countries have been making decisions of great ethical significance, faced with the scarcity of ventilators, protective equipment, space in intensive medical care, and medical personnel. Ethical guidelines have been issued, in most of the suffering countries, to facilitate decision-making and the provision of effective treatment, with the most prominent principle being “to increase overall benefits” and “maximize life expectancy.” But are these guidelines as uncontroversial as they initially appear to be?

You walk by a pond and you see a child drowning. You can easily save the child without incurring any significant sacrifice. Are you obligated to save the child at no great cost to yourself? Utilitarians argue that we would be blameworthy if we failed to prevent suffering at no great cost to ourselves. Now suppose that you decide to act on the utilitarian premise and rescue the child. As you prepare to undertake this life-saving task, you notice two drowning children on the other side of the pond. You can save them both – still at no cost to yourself – but you cannot save all three. What is the right thing to do? Two lives count more than one, so you ought to save the maximum number of people possible. It seems evident that doctors faced with similar decisions ought to maximize the number of lives saved. What could be wrong with such an ethical prescription?

Does the ‘lonely’ child have reasonable grounds to complain? The answer is yes. If she happened to be on the other side of the pond, she would have a considerably greater chance of survival. And if, by unfortunate coincidence, two more children in need of rescue appeared beside her, she would have an even greater chance of survival, given that three lives count more than two. But that seems entirely unfair. Whether one has a right to be rescued should not be determined by morally arbitrary factors such as one’s location or the number of victims in one’s physical proximity. Rather, one deserves to be rescued simply on the grounds of being a person with inherent moral status. Things beyond your control, for which you are not responsible, should not affect the status of your moral entitlements. As a result, every child in the pond should have an equal chance of rescue. If we cannot save all of them, we should flip a coin to decide which of them will be saved. By the same logic, if doctors owe their patients equal respect and consideration, they should assign each of them, regardless of morally arbitrary factors (such as age, gender, race, or social status), an equal chance to receive sufficient medical care.

What about life expectancy? A doctor faces a choice between prolonging one patient’s life by 20 years and prolonging another’s by 2 months. For many, maximizing life expectancy seems the primary moral factor to take into account. But what if there is a conflict between maximizing lives and maximizing life? Suppose that we can either save one patient with a life expectancy of 20 years or save 20 patients with a life expectancy of 3 months each. Maximizing life expectancy entails saving the former, since 20 years of life count for more than the 5 years the 20 patients would gain between them, while maximizing lives entails saving the latter. It could be argued that the role of medicine is not merely to prolong life but to enhance its quality; this would explain why we may be inclined to save the person with the longest life expectancy. A life span of 3 months is not an adequate amount of time to make plans and engage in valuable projects, and it is accompanied by a constant fear of death. Does that entail that we should maximize quality of life as well? Faced with a choice between giving a ventilator to a patient who is expected to recover and lead a healthy, fulfilling life and giving it to a patient who has an intellectual disability, what should the doctor do? If the role of medicine is to maximize life quality, the doctor ought to give the ventilator to the first patient. However, as US disability groups have argued, such a decision would constitute a “deadly form of discrimination,” given that it deprives the disabled of their right to equal respect and consideration.

All in all, reigning over life and death is not as enviable as we might have thought.

The Ethics of Triage

photograph of empty cots in a medical tent

As the global crisis of the Coronavirus pandemic deepens we are facing a barrage of ethical problems related to the provision of health care.

Equitable access to medical treatment is an issue that will manifest on different levels. It will manifest globally: in areas where health systems are deficient or sections of the population have limited access, the effects stand to be much worse if large-scale infection takes hold.

Populations in countries where poverty or other large public health problems already put stress on health systems will suffer higher mortality rates, and those countries may find it more difficult than wealthier nations to source supplies such as protective gear and medicines.

The statistics stand something like this: of those infected, about 20 in 100 will need hospital care. Of those, about 5 in 100 overall (roughly a quarter of those hospitalized) will need intensive care, including the use of a ventilator for assisted breathing. Mortality rates from COVID-19 differ between places, but the average runs as high as 3-6 percent.

If the pandemic gets away from us and infections spiral, even developed countries with good health care will find services stretched, likely well beyond capacity. As intensive care beds fill, some people will miss out on medical resources. The question of who will miss out, or who will be prioritized, will leave doctors and medical staff facing very tough decisions about how best to distribute scarce resources.

When hospitalizations increase to the point where demand for intensive care outstrips capacity, the process of triage is used to decide which patients to prioritize. I’ll come back to the concept of triage in a moment, but first, it could not be more urgent for people in places facing imminent rises in infection rates and community transmission to understand that the more preventative measures are heeded, the more we reduce the need for doctors to make tough decisions about access to care. Social distancing measures are vital because even those less vulnerable to the worst outcomes of infection have a role to play in helping to curb its spread. Though around 80 percent of cases are mild, the danger lies in the threat of overwhelmed healthcare systems if very high percentages become infected – and this is why experts are telling us that we need stringent measures to contain the spread.

Triage is a treatment policy adopted in wartime, when the number of casualties far outstripped medical resources in terms of access to doctors, medicines, and care facilities.

Wounded patients were divided into three categories: first, those likely to survive without medical assistance; second, those who might survive with assistance but probably not without it; and third, those who would probably not survive even with medical assistance. Of these, only those falling into the second category would receive medical treatment.

How does such a principle look in the time of a global coronavirus pandemic? Hospitals may be forced to adopt such a policy for the use of intensive care staff and equipment, and as health systems reach breaking point, choices about who will get access to life-saving treatment will become a real ethical and practical issue.

How will those decisions be made? If someone needs intensive care, their chances of survival without assistance are already greatly reduced. Patients deemed to have a higher chance of survival based on other factors, such as general health or age, are likely to be prioritized over those with existing health problems or the elderly.

It is possible that the elderly or terminally ill, for example, might be placed in the equivalent third category, so that the resources spent in trying to save them might be deemed better spent on someone whose chances of survival are good with care but poor without.

A raft of other factors could be in the mix. Age would likely be a factor, and if infections rose sharply, age cut-offs could get lower: first those under 70 might be prioritized, then those under 60, then those under 50, and so on. Would profession be a consideration – should healthcare workers, for example, be prioritized? How about parents of young children, or people with other dependents?

These kinds of choices are not unfamiliar in bioethics (they have to be made, for instance, by doctors allocating the few organs available for transplant among the many patients in need of them), but the salient difference here is the sheer number of cases in which such decisions are faced.

By virtue of doctors and medical staff having to confront these tough triage decisions on a large scale, a kind of consequentialist ethics is forced upon them. Triage is inherently utilitarian because it allocates resources according not to need but to best outcome. A patient in poorer health has fundamentally higher care needs, which translates to greater demand on medicine, equipment, and staff; but if those resources can be split between two less critical patients with a reasonable chance of saving both, that is the best (probable) outcome. The decision is based not on individual patients’ needs but on the better overall outcome, according to consequences.

Whatever factors come to play a role in individual decisions made by doctors and healthcare professionals, once the healthcare system has reached this stage there will necessarily have to be a process of ethical weighing-up of costs and benefits, which thrusts a utilitarian framework onto decision-making.

One may, in theory, reject utilitarian reasoning and argue that we have a duty to everyone, and that everyone has a right to equal treatment, access to care, or other necessities such as protective equipment. But rights are powerless when the capacity to uphold or honor them does not exist. In a scenario where infections spiral out of control and health systems collapse, the notion of a universal right to life-saving treatment will be meaningless.

This is an ethical issue in terms of how it affects individual outcomes throughout the pandemic, and it is also an ethical issue by virtue of the awful position it puts doctors, nurses, and medical staff in. Imagine having to choose between two young patients, one with a chronic condition and so somewhat less likely to recover. Imagine having to choose between a healthcare worker and a layperson, or between the mother of an infant and an older child. The point is that it can become a situation where doctors are forced to make ‘ethically impossible’ choices.

Peter Singer, a utilitarian philosopher, claims that ethics is not an ‘ideal’ system, not something which works only in theory; rather, he says, “the whole point of ethical judgements is to guide practice.” In other words, ethics is about practical outcomes, not ideals.

He is right, in the context of triage in the age of COVID-19, only insofar as these particular practical ethical problems arise because better ethical options, like preparedness and mitigation, have been foregone. In other words, if ethics is not an ideal but a practical reality, utilitarian ethics is the reality here not because it was right all along, but because other ethical failures have put us in the position of having no other choice.

I said at the beginning that the more preventative measures are heeded, the more we reduce the need to make tough decisions about access to care. Triage is therefore not an ethical position but the unhappy necessity of applying a kind of moral calculus that would have been better avoided in the first place. We need to mobilize our capacity, as individuals and as a society, using the measures epidemiologists are urging, to mitigate the need for triage. We can think of this as our duty to our families, to our communities, to our nations, and to humanity. Failure at this level would be an ethical failure.

We should, however, take the opportunity to consider what other ethical failures threaten to lead us to disaster in this crisis. Given the general shortage of specialist care facilities, and even of basic protective gear for front-line staff in many parts of the world, the issue of preparedness is also a burning one.

Why are there not enough critical care facilities in so many countries when a deadly global pandemic has been warned of for decades? Many nations spend large percentages of their GDP on defense against threats of invasion or international conflict, yet are completely, tragically unprepared for this predictable event.

The situation of front-line medical staff having to make heart-rending decisions about who will receive life-saving medical treatment and who will miss out is a morally onerous burden that could largely have been prevented, had governments better protected their citizens by being ready for such an event.

Emergency Rationing in Italy

blurred photograph of crowded hospital waiting room

When health care resources must be rationed, how can we make ethical decisions about directing care? In answering, we may turn to a famous thought experiment that brings out the tensions of choosing between lose-lose options: the Trolley Problem. Originally articulated by Philippa Foot in 1967 in order to draw out tensions in utilitarian moral frameworks, this thought experiment has highlighted distinctions in common moral intuitions in domains from bioethics to military ethics. The classic trolley case was posed within a series of cases that press on whether considering the consequences of a presented choice is the correct deliberative path:

“Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man’s life for the lives of five.”

This last case has been taken up as the Trolley Problem: a runaway tram must be directed either towards a track with five workmen on it or a track with one man on it. Each case is presented so that the decision is between an action that results in the death of one and one that results in the deaths of five.

Different morally relevant features of the deliberation favor different schools of thought in ethics. The consideration in favor of minimizing lives lost highlights the importance of the choice’s consequences. Diverting the tram at all may implicate the agent in the resulting deaths (however many there are); on the other hand, we may think that facing the choice implicates the agent whether she acts or not. Or we may think that choosing between the paths is itself vicious or problematic, because it suggests that lives can be reduced to figures and statistics rather than treated with an appropriate respect for the incommensurable value of human lives.

The confounding tension between these (and likely other) morally relevant features makes the trolley problem an ethical puzzle that has stuck with philosophers and non-philosophers for decades. Though it was originally presented to draw out the tension between the morally relevant consequences of an act and its other features, the difficulty of squaring our explanations of the morally permissible response to cases like the trolley problem has led to various interpretations of moral intuitions and ethical principles in their own right.

For instance, Foot uses the case to discuss the Doctrine of Double Effect, which dates back to Thomas Aquinas in the history of “Western” philosophy. The doctrine draws a distinction between what you aim to do and the side effects of your action, and it delineates when an action with a morally permissible aim but morally bad effects is itself permissible. If you foresee negative side effects of your choice, but they are not part of your aim, then your choice is morally permissible. If the morally bad effects are part of your aim, or are a necessary means of achieving your aim, then we attribute those effects to your action and it is not morally permissible.

Thus, if you redirect the tram to collide with one person rather than five, this qualifies as a morally permissible action, because your aim is not to kill the one person but rather to save the five (or to minimize deaths); the death of the one person is a foreseen side effect. If the one person did not die, all the better, from the perspective of your aims and choice.

Between consequentialist reasoning (minimize deaths!) and principles like the Doctrine of Double Effect (bad effects are permissible as long as your aim is good and outweighs the bad!), there are multiple ethical frameworks that can make sense of permissible harm, even deaths, that result from one’s actions.

The healthcare choices facing the medical community in Italy are approaching a selection framework similar to the trolley problem. In the case of triage, or prioritizing some patients’ care over others, the side effects are clearly unfortunate: some people will not receive care that they need. Typically, triage decisions prioritize care roughly on a first-come, first-served basis, mitigated by severity. But when scarcity of resources or conditions of survival become extreme, the stakes change. On battlefields, and in the conditions we are seeing in Italy, the need is such that physicians face incredibly difficult rationing decisions. By giving resources to one patient, they can anticipate others experiencing significant harm, deteriorating health, or even death.

The reality of the effects of COVID-19 in Italy is that resources have become incredibly scarce remarkably quickly. Resources include staff time and attention, materials like masks and respirators, and space like beds and rooms. There are limited amounts of each, and decisions about how to allot them are particularly fraught when lives are at stake.

In an opinion article for the New York Times, medical experts articulated the difficulty facing physicians:

“The goal should be saving as many people as possible, and treating those who are likely to get the greatest benefit from care. This will mean that treatment cannot be allocated on a first-come-first-served basis, as it normally is. Traditionally, patients on ventilators are not displaced for other patients, and later arriving patients can be turned away in a shortage.

But in the coronavirus pandemic, business as usual would make patients with a good prognosis if treated suffer for want of treatment, while patients who arrive earlier but have a grave, or even hopeless, prognosis would receive treatment. Under that standard of care, more lives would be lost.”

The advice here adopts a standard of care that aims to maximize lives saved, similar to the majority of respondents to the Trolley Problem. In crises like the one affecting areas currently hit hardest by this pandemic, the calculation that saves the most lives means an alteration in how we ration care, in how we triage.

A major concern is maintaining the health of the professionals who are treating the ill. Thus, who a patient is may itself figure largely in allocating health care. The perennial pressing questions about the Trolley Problem are ones physicians currently face: “What if the one person on the track opposite the five is the leader of a country?” “What if the one person is the head of their family?” “What if the one person is a doctor?”… Unfortunately, this last question is particularly pertinent. With health care professionals treating the ill in high demand, it is crucial to keep them healthy. Keeping one health care worker able to serve the ill population has ripple effects for the health of the community.

Deciding which principles to adopt in order to protect the health of our communities in the face of this pandemic is going to be difficult. The decision to withhold care is heart-wrenching, and it should put pressure on our global community to increase the resources available to those in need, to reduce the necessity of triage and rationing. Indeed, that is almost always a response when the Trolley Problem is posed: surely there is a way out of making this decision. We have a moral obligation to help one another, and as Italy is part of the EU, perhaps the EU is specially positioned to provide aid and resources (and is, perhaps, failing in this duty).

Debunking the Marshmallow Myth: Rationality in Scarcity

photograph of several marshmallows, the largest in the center standing upright

On May 25th, researchers published findings that altered our understanding of a classic psychological study, the marshmallow test. In the famous test, young children are offered a marshmallow now, or two marshmallows later. Then, researchers follow up with the children, and supposedly those that delayed gratification for more marshmallows did better in terms of standardized tests and other measures of success.

Questions on the Ethics of Triage, Posed by a Sub-Saharan Ant

an image of an anthill


In a new study published in Proceedings of the Royal Society B, behavioral ecologist Erik Frank of the University of Lausanne in Switzerland and his colleagues report that a species of sub-Saharan ant brings its wounded nest-mates back to the colony after a termite hunt. This practice of not leaving wounded ants behind is noteworthy on its own, but Frank and his fellow behavioral ecologists note that the Matabele ants (Megaponera analis) also make triage judgments to determine which injured ants are worth saving, or possible to save – not all living wounded are brought back to the nest for treatment.


Tragedy of the Commons in Cape Town’s Water Crisis

Cape Town, South Africa is running out of water. In less than 90 days, the city’s reservoirs will be so dry that the water in them will be too silty to be usable. This will be the first time that a major city has run out of water, and the world is watching the city’s attempts to delay what has seemed inevitable for months. Cape Town has faced three years of drought that climatologists have called a “once in a millennium” phenomenon, which, paired with rapid population growth, has led to a dire and record-breaking situation.
