
The Perils of Perfectionism in Energy Policy

nuclear power plant tucked in rolling green hills

Last month, Germany closed its three remaining nuclear power plants, eliciting an open letter of protest from two Nobel laureates, senior professors, and climate scientists. Nuclear energy is one of the least carbon-intensive power sources available (perhaps the least), and its compact footprint gives it a smaller environmental impact than some other low-carbon alternatives. However, Germany has struggled to replace its fossil fuel plants with greener options. Consequently, phasing out nuclear energy will require burning more coal and gas, increasing emissions of CO2 and deadly air pollutants.

Ironically, the political movement against German nuclear power was led by ecological activists and the Green Party. According to their election manifesto, nuclear energy is “a high-risk technology.” Steffi Lemke, Federal Minister for the Environment and Nuclear Safety, argued, “The phase-out of nuclear power makes our country safer; ultimately, the risks of nuclear power are uncontrollable.”

While there is some risk associated with nuclear energy, as evidenced by disasters like Chernobyl, the question remains: Are the German Greens justified in shutting down nuclear power plants due to these risks?

Risks, even very deadly ones, can be justified if the benefits are significant and the chance of a bad outcome is sufficiently low. The tradeoff with nuclear power is receiving energy at some level of associated risk, such as a nuclear meltdown or terrorist attack. Despite these risks, having access to energy is crucial for maintaining modern life and its conveniences – lights, computers, the internet. In fact, our lives might be more dangerous without energy, as our society would be much poorer and less capable of caring for its citizens.

It might be argued that another energy source could provide the same benefits without the risks of nuclear power. However, it is essential to gain perspective on the relative risks involved. Despite the fixation on nuclear meltdowns, nuclear power is significantly less risky than alternatives.

For every terawatt hour (TWh) produced, coal energy, still widely used in Germany, causes an estimated 25 deaths through accidents and air pollution. Natural gas, which is growing in German energy production, is safer, causing around three deaths per TWh. In contrast, nuclear power results in only 0.07 deaths/TWh, making it 467 times safer than brown coal and 40 times safer than natural gas. Accounting for deaths linked to climate change would further widen these disparities. A coal plant emits 273 times more CO2 (and 100 times more radiation) than a similar-sized nuclear plant. By eliminating the risks of nuclear energy, Germany inadvertently takes on even greater environmental and health risks.
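To make the relative-risk arithmetic explicit, here is a minimal sketch in Python. The coal, gas, and nuclear figures are the ones cited above; the brown-coal (lignite) rate is an assumption back-calculated from the cited “467 times safer” ratio (0.07 × 467 ≈ 32.7 deaths/TWh), not a number stated in the text.

```python
# Deaths per terawatt-hour (TWh) of electricity, as cited above.
# The brown-coal figure is an assumption implied by the 467x ratio.
deaths_per_twh = {
    "coal": 25.0,
    "brown coal (assumed)": 32.7,
    "natural gas": 3.0,
    "nuclear": 0.07,
}

nuclear_rate = deaths_per_twh["nuclear"]
for source, rate in deaths_per_twh.items():
    if source == "nuclear":
        continue
    print(f"Nuclear is roughly {rate / nuclear_rate:.0f}x safer than {source} "
          f"({rate} vs {nuclear_rate} deaths/TWh)")
```

Run as written, this yields roughly 357×, 467×, and 43× for coal, brown coal, and natural gas respectively, broadly in line with the ratios cited above.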

Germany is in the process of transitioning to renewable energy sources, such as wind and solar. It may be justifiable to shut down nuclear power and eliminate the associated risks assuming that nuclear power is being entirely replaced with renewable sources. However, as of 2021, 75% of German energy came from fossil fuels. Had Germany maintained its nuclear power plants, its growing renewables could be replacing much more fossil fuel energy production. Replacing good with good is not as impactful as replacing bad with good.

The German Greens are correct that nuclear power has some associated environmental and health risks. They chose a strategy of moral perfectionism, doing whatever was necessary to eliminate those risks.

But pushing to eliminate nuclear energy, in the name of safety and environmentalism, has inadvertently led to increased reliance on fossil fuels and heightened environmental and health risks. This demonstrates the potential pitfalls of adhering to our principles and values without considering compromises and trade-offs.

We should, however, be cautious. Just as moral perfectionism can lead us astray, too easily abandoning our principles in the name of pragmatism risks ethical failures of other kinds.

Act consequentialism is probably the most “pragmatic” moral theory. It posits that the right action is whatever creates the best consequences. You should lie, steal, and kill whenever it produces the best outcome (although it rarely does).

Critics of consequentialism argue that it leaves little room for individuals to maintain their integrity or act on their personal values. The philosopher Bernard Williams provided an illustration. Jim, a tourist in a small South American town, faces a terrible choice: either kill one innocent villager himself, in which case the local captain will spare the rest, or refuse and let the captain kill all twenty. The utilitarian answer is clear: Jim should kill one villager to save the others, as it produces the best outcome. However, Williams argued that we could understand if Jim couldn’t bring himself to kill the innocent villager. If Jim refused, we might not blame him, or at least not blame him harshly. Yet utilitarianism implies that, by refusing, Jim would be doing just as much wrong as if he had personally killed the other nineteen villagers, since his refusal results in nineteen more deaths. This example demonstrates the extreme moral pragmatism of consequentialism, which seemingly overlooks the importance of personal integrity and living according to one’s beliefs and values. This is the danger of taking moral pragmatism too far.

But the anti-nuclear Greens may provide an example of moral perfectionism going too far. Morality is not solely about sticking to your principles. Balancing costs and benefits, compromising, and prioritizing are all equally important. We cannot afford to let the pursuit of perfection prevent us from doing the good we can. But neither can we entirely abandon our personal values and principles, as doing so risks devaluing the personal factors that allow us to make sense of our lives. Perhaps there is some room, in some cases, for acting on principle even if it doesn’t result in the best consequences.

Is It Wrong to Say the Pandemic Is Over?

photograph of President Biden at podium

President Biden’s recent statement that the pandemic is “over” sparked a flurry of debate, with many experts arguing that such remarks are premature and unhelpful. Biden’s own officials have attempted to walk back the remarks, with Anthony Fauci suggesting that Biden simply meant that the country is in a better place now compared to when the pandemic first began. Some have even suggested that Biden is simply wrong in his assertion. But was it really wrong to say that the pandemic is over? Does the existence of a pandemic depend on what experts might say? Who should get to say if a pandemic is over? Are there moral risks to either declaring victory too soon or admitting achievements too late?

Following Biden’s statement, many of his own COVID advisors seemed surprised. A spokesperson for the Department of Health and Human Services reiterated that the public health emergency remains in effect and that there would be a 60-day notice before ending it. Fauci suggested that Biden meant that the worst stage of the pandemic is over, but noted, “We are not where we need to be if we are going to quote ‘live with the virus’ because we know we are not going to eradicate it.” He also added, “Four hundred deaths per day is not an acceptable number as far as I’m concerned.” Biden’s Press Secretary Karine Jean-Pierre has conceded that the pandemic isn’t “over,” but that “it is now more manageable” with case numbers down dramatically from when Biden came to office.

The World Health Organization also weighed in on Biden’s assertion with WHO Director-General Tedros Adhanom Ghebreyesus stating that the end “is still a long way off…We all need hope that we can—and we will—get to the end of the tunnel and put the pandemic behind us. But we’re not there yet.” When asked whether there are criteria in place for the WHO to revoke the declaration of a public health emergency, WHO representative Maria Van Kerkhove said that it “is under active discussion.”

With nearly 400 deaths per day from COVID in America, and over one million dead in the U.S. alone, many have been critical of the president’s remarks.

Two million new COVID infections were confirmed last month, and there is still concern among many about the effects of long COVID, which can bring persistent and debilitating symptoms for months after infection. Some estimates suggest that as many as 10 million Americans may suffer from this condition. The virus has also become more infectious as mutations produce new variants, and there is concern that the situation could become worse.

Critics also suggest that saying that the pandemic is over sends the wrong message. As Dr. Emily Langdon of the University of Chicago noted, “The problem with Biden’s message is that it doubles down on this idea that we don’t need to worry about COVID anymore.” Saying that the pandemic is over, they worry, will discourage people from getting vaccinated or getting boosters while less than 70% of Americans are fully vaccinated. Declaring the pandemic over also means an end to the emergency funds provided during the pandemic, perhaps even including the forgiveness of student debt.

On the other hand, there are those who defend the president’s assertion. Dr. Ashwin Vasan notes that “We are no longer in the emergency phase of the pandemic…we haven’t yet defined what endemicity looks like.”

This is an important point because there is no single simple answer to what a pandemic even is.

Classically, a pandemic is defined as “an epidemic occurring worldwide, or over a very wide area, crossing international boundaries and usually affecting a large number of people.” However, this definition does not mention severity or population immunity. In fact, the definition of “pandemic” has been modified several times over the last few decades, and currently the WHO doesn’t even use the concept as an official category. Most definitions are aimed at marking when the problem begins, not when it ends.

This reminds us that while there is an epidemiological definition of “pandemic,” the concept is not purely a scientific term. To the extent that public policy is shaped by pandemic concerns, a pandemic is also a political concept. The declaration that the pandemic is “over” is, therefore, not purely a matter for experts. As I have discussed previously, there needs to be democratic input in areas of science where expert advice affects public policy precisely because there are also many issues involved that require judgments about values.

Some might suggest that the decision should be entirely up to scientists. As Bruce Y. Lee of Forbes writes, “there was the President of the U.S., who’s not a scientist or medical expert, at the Detroit Auto Show, which is not a medical setting, making a statement about something that should have been left for real science and real scientists to decide.” But this is simply wrong.

Yes, people don’t get a say about what the case numbers are, but to whatever extent there is a “pandemic” recognized by governments, with specific government policies to address it, people should get a say. It is not a matter for scientists to decide on their own.

Many experts have suggested that saying the pandemic is over will lead people to think we don’t need to care about COVID anymore. David Dosner from Columbia University’s Mailman School of Public Health has expressed the concern that Biden’s comments will give a “kind of social legitimacy to the idea of going into crowds, and it just makes some people feel awkward not doing that.” But ironically, the same experts who profess the need to follow the science seem to have no problem speculating without evidence. How does anyone know that Biden’s statements would discourage people from getting vaccinated? Is anyone really suggesting that, after all this time, the remaining 30% of the country that isn’t vaccinated is suddenly going to drop their plans to get vaccinated because of what Joe Biden said?

There is no good reason why saying the pandemic is over would mean giving up our efforts to fight COVID. As noted, the term has no official use. The emergency declarations by the WHO and the Department of Health and Human Services would carry on regardless. On the other hand, despite the case rates, people around the world are returning to their lives. Even Canada recently announced the end of border vaccine mandates. While Fauci may not be comfortable with 400 deaths per day, maybe the American people are. As governments and the public lose interest in treating the pandemic as a “pandemic,” scientists risk straining their own credibility by focusing on what is important to them rather than gauging what the public is prepared to entertain policy-wise.

In an age of polarization and climate change, scientists need to be conscious about public reactions to their warnings. There is a risk that if the public construes the experts’ insistence on the pandemic mindset – despite the worst-case scenarios seeming to be increasingly remote – as ridiculous, then they will be less likely to find such voices credible in the future. When the next crisis comes along, the experts may very well be ignored. Yes, there are moral risks to declaring the pandemic over prematurely, but there are also very real moral risks to continuing to insist that it isn’t.

Insurance, Natural Disasters, and the Relevance of Luck

photograph of black smoke and forest fire approaching apartments

Last year, Hurricane Ida caused around $30 billion in damages. This cost was largely borne by insurers, forcing some companies to declare insolvency. The United States is now on the brink of another active hurricane season, and – as a result – insurers in some of the riskiest parts of the country are cancelling home insurance policies. Around 80,000 homes in Louisiana have already lost their coverage, with an additional 80,000 Floridian policy holders set to be affected by the end of this week.

This is nothing new. The worsening climate crisis has seen a marked increase in the number of uninsurable homes. For example, in Australia – a country blighted by recent wildfires and floods – around 720,000 homes are set to be completely uninsurable by the end of the century. When these homes are damaged or destroyed, their occupants are left destitute.

What, then, should we be doing to help these people? More generally, what kind of moral obligations do we have to people who lose their homes as the result of a natural disaster?

One way of approaching this issue is through the concept of “luck.” We experience luck all the time – some of it good, some of it bad. And bad luck comes in many different forms: We might have our car destroyed by an errant bolt of lightning; or we might lose our entire life savings betting on a bad hand of poker. In both cases, we’re left worse off. But the obligation on others to help us may very well differ. Consider the bolt of lightning. This is, perhaps, the purest example of a case of “brute” bad luck. Compare this with that losing hand of poker. Sure, there’s still an element of luck at play: if the random shuffle of the deck had dealt me a better hand, I might’ve won. But I knew what I was signing up for. I made a calculated gamble knowing there was a good chance I might lose all of my money. Unlike the bad luck of being struck by lightning, the losing hand of poker was bad luck I opted in to – “option” luck, if you will.

This distinction between “brute” bad luck and “option” bad luck forms an important part of how Luck Egalitarians see our obligations to help others.

Stated in its simplest form, Luck Egalitarianism says that we have a moral obligation to help those who suffer from bad brute luck, but not those who suffer from bad option luck.

So, how does Luck Egalitarianism help us with disaster-prone homeowners? Well, wildfires and floods are (like bolts of lightning) clear cases of bad brute luck. But this doesn’t necessarily mean we have an automatic obligation to help those who lose their homes to such disasters. Here’s the thing: wildfires and floods aren’t entirely unpredictable. They tend to occur in disaster-prone areas, and such areas are usually well-documented. In fact, those who choose to build their homes in a risky location will usually find themselves paying substantially less for their homes. In this way, these people make a calculated gamble – so when disaster does inevitably strike, it is instead a case of bad option luck, not brute luck.

At least, it used to be this simple. For a long while, disaster-prone areas were largely stable. But the climate crisis has seen a swift end to that. Risky areas are growing, and wildfire and flood seasons are lengthening and worsening. Someone who chose to build in a safe area several decades ago may now find their home regularly in the path of catastrophe.

But what about the fact that these people choose to stay in disaster-prone areas? Sure, there may have initially been no risk in that location, but now that there is, doesn’t their choice to stay make any disaster that befalls them a case of bad option luck?

The Luck Egalitarian would say “yes” – but only if a homeowner actually has a choice in the matter. Sadly, many don’t. Selling a home that’s at risk of imminent destruction is hard. What’s more, selling it for a price that allows the occupants to afford a new home in a safer area is even more difficult. For this reason, many disaster-prone homeowners find themselves stranded – unable to afford to move. So, if someone builds a home in an area that later becomes disaster-prone, and – as a result – cannot afford to move, the Luck Egalitarian will still see the destruction of their home as a case of bad brute luck.

But there’s one final complication – namely, the role played by insurance. According to some Luck Egalitarians, the availability of affordable insurance is sufficient to convert bad brute luck into bad option luck. Why? Consider the lightning example again. Suppose that lightning strikes on cars are frequent in my area, but that full lightning coverage for my car is available for only $1 per month. I, however, opt not to purchase this insurance and instead spend my money on something more frivolous. Suppose, then, that the inevitable happens and my car is destroyed by a random bolt of lightning. While the lightning might be an “act of God,” the loss of my car is not. Why?

Because that loss is a combination of both the lightning and my calculated gamble to not purchase insurance. I, essentially, signed up for the loss of my car.

This is why insurance is so important when considering what we owe to disaster-prone homeowners. If affordable insurance is widely available – and a homeowner refuses to purchase it – any disaster that befalls them will be a case of bad option luck. This will mean that – for the Luck Egalitarian, at least – the rest of us have no specific moral obligations to help those individuals. When such insurance isn’t available, however (as it no longer is for many residents of Louisiana and Florida) the story changes. Those who (1) built their homes in previously safe areas that have now become disaster-prone; (2) subsequently cannot afford to move; and (3) inevitably find themselves the victims of natural disasters are victims of bad brute luck. And this may very well put strong moral obligations on the rest of us to come to their aid – either as individuals, or through our elected government. As the climate crisis worsens – and more and more homes become uninsurable – the need for this kind of assistance will only grow.

Nuclear War and Scope Neglect

photograph of 'Fallout Shelter' sign in the dark

“Are We Facing Nuclear War?” — The New York Times, 3/11/22

“Pope evokes spectre of nuclear war wiping out humanity” — Reuters, 3/17/22

“The fear of nuclear annihilation raises its head once more” — The Independent, 3/18/22

“The threat of nuclear war hangs over the Russia-Ukraine crisis” — NPR, 3/18/22

“Vladimir Putin ‘asks Kremlin staff to perform doomsday nuclear attack drill’” — The Mirror, 3/19/22

“Demand for iodine tablets surge amid fears of nuclear war” — The Telegraph, 3/20/22

“Thinking through the unthinkable” — Vox, 3/20/22

The prospect of nuclear war is suddenly back, leading many of us to ask some profound and troubling questions. Just how terrible would a nuclear war be? How much should I fear the risk? To what extent, if any, should I take preparatory action, such as stockpiling food or moving away from urban areas?

These questions are all, fundamentally, questions of scale and proportion. We want our judgments and actions to fit with the reality of the situation — we don’t want to needlessly over-react, but we also don’t want to under-react and suffer an avoidable catastrophe. The problem is that getting our responses in proportion can prove very difficult. And this difficulty has profound moral implications.

Everyone seems to agree that a nuclear war would be a significant moral catastrophe, resulting in the loss of many innocent lives. But just how bad of a catastrophe would it be? “In risk terms, the distinction between a ‘small’ and a ‘large’ nuclear war is important,” explains Seth Baum, a researcher at a U.S.-based think tank, the Global Catastrophic Risk Institute. “Civilization as a whole can readily withstand a war with a single nuclear weapon or a small number of nuclear weapons, just as it did in WW2. At a larger number, civilization’s ability to withstand the effects would be tested. If global civilization fails, then […] the long-term viability of humanity is at stake.”

Let’s think about this large range of possible outcomes in more detail. Writing at the height of the Cold War, the philosopher Derek Parfit compared the value of:

    1. Peace.
    2. A nuclear war that kills 99% of the world’s existing population.
    3. A nuclear war that kills 100%.

Everyone seems to agree that 2 is worse than 1 and that 3 is worse than 2. “But,” asks Parfit, “which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater.”

Parfit was, it turns out, correct about what most people think. A recent study posing Parfit’s question (lowering the lethality of option 2 to 80% to remove confounders) found that most people thought there is a greater moral difference between 1 and 2 than between 2 and 3. Given the world population is roughly 8 billion, the difference between 1 and 2 is an overwhelming 6.4 billion more lives lost. The difference between 2 and 3 is “only” 1.6 billion more lives lost.

Parfit’s reason for thinking that the difference between 2 and 3 was a greater moral difference was because 3 would result in the total extinction of humanity, while 2 would not. Even after a devastating nuclear war such as that in 2, it is likely that humanity would eventually recover, and we would lead valuable lives once again, potentially for millions or billions of years. All that future potential would be lost with the last 20% (or in Parfit’s original case, the last 1%) of humanity.

If you agree with Parfit’s argument (the study found that most people do, after being reminded of the long-term consequences of total extinction), you probably want an explanation of why most people disagree. Perhaps most people are being irrational or insufficiently imaginative. Perhaps our moral judgments and behavior are systematically faulty. Perhaps humans are victims of a shared psychological bias of some kind. Psychologists have repeatedly found that people aren’t very good at scaling up and down their judgments and responses to fit the size of a problem. They name this cognitive bias “scope neglect.”

The evidence for scope neglect is strong. Another psychological study asked respondents how much they would be willing to donate to prevent migrating birds from drowning in oil ponds — ponds that could, with enough money, be covered by safety nets. Respondents were told that either 2,000, 20,000, or 200,000 birds are affected each year. The results? Respondents were willing to spend $80, $78, and $88 respectively. The scale of the response had no clear connection with the scale of the issue.

Scope neglect can explain many of the most common faults in our moral reasoning. Consider the quote, often attributed to Josef Stalin, “If only one man dies of hunger, that is a tragedy. If millions die, that’s only statistics.” Psychologist Paul Slovic called this tendency to fail to conceptualize the scope of harms suffered by large numbers of people mass numbing. Mass numbing is a form of scope neglect that helps explain ordinary people standing by passively in the face of mass atrocities, such as the Holocaust. The scale of suffering, distributed so widely, is very difficult for us to understand. And this lack of understanding makes it difficult to respond appropriately.

But there is some good news. Knowing that we suffer from scope neglect allows us to “hack” ourselves into making appropriate moral responses. We can exploit our tendency for scope neglect to our moral advantage.

If you have seen Steven Spielberg’s Schindler’s List, then you will remember a particular figure: The girl in the red coat. The rest of the film is in black and white, and the suffering borders continually on the overwhelming. The only color in the film is the red coat of a young Jewish girl. It is in seeing this particular girl, visually plucked out from the crowd by her red coat, that Schindler confronts the horror of the unfolding Holocaust. And it is this girl who Schindler later spots in a pile of dead bodies.

The girl in the red coat is, of course, just one of the thousands of innocents who die in the film, and one of the millions who died in the historical events the film portrays. The scale and diffusion of the horror put the audience members at risk of mass numbing, losing the capacity to have genuine and appropriately strong moral responses. But using that dab of color is enough for Spielberg to make her an identifiable victim. It is much easier to understand the moral calamity that she is a victim of, and then to scale that response up. The girl in the red coat acts as a moral window, allowing us to glimpse the larger tragedy of which she is a part. Spielberg uses our cognitive bias for scope neglect to help us reach a deeper moral insight, a fuller appreciation of the vast scale of suffering.

Charities also exploit our tendency for scope neglect. The donation-raising advertisements they show on TV tend to focus on one or two individuals. In a sense, this extreme focus makes no sense. If we were perfectly rational and wanted to do the most moral good we could, we would presumably be more interested in how many people our donation could help. But charities know that our moral intuitions do not respond to charts and figures. “The reported numbers of deaths represent dry statistics, ‘human beings with the tears dried off,’ that fail to spark emotion or feeling and thus fail to motivate action,” writes Slovic.

When we endeavor to think about morally profound topics, from the possibility of nuclear war to the Holocaust, we often assume that eliminating psychological bias is the key to good moral judgment. It is certainly true that our biases, such as scope neglect, typically lead us to poor moral conclusions. But our biases can also be a source for good. By becoming more aware of them and how they work, we can use our psychological biases to gain greater moral insight and to motivate better moral actions.

What If You Aren’t Sure What’s Moral?

photograph of a fork in the path

Today, I woke up in a soft bed in a heated apartment. I got up and made full use of the miracle of indoor plumbing before moving on to breakfast. Pouring myself a bowl of vitamin-enriched cereal and milk (previously delivered to my doorstep) I had to admit it: modern life is good.

Opening up my laptop, my gratitude for modernity diminished as quickly as my browser tabs multiplied. Our phones and laptops are not just tools. They are portals to another world — a relentless world of news, opinion, and entertainment. We’re living through the age of information overload. On average, we now consume 174 newspapers worth of information each day. “I’ve processed more information in the last 48 hours than a medieval peasant would in a lifetime,” reads a well-liked tweet.

And yet, amid this tsunami of information, we seem to have less certainty than ever. Controversy and discord reign. There is little agreement about basic facts, let alone about what is to be done. Is it time to lift COVID-19 restrictions yet? Is American democracy at risk of failure? Are plastics killing us? Should we allow genetically modified foods? Will climate change be simply bad or disastrous? I have my opinions, and I’m sure you have yours, but do any of us know the answers to any of these questions with certainty?

As well as uncertainty about the facts, we continually find ourselves facing moral uncertainty. Moral theories and views divide both public and philosophical opinions. They defy consensus. Is euthanasia morally permissible? Is abortion? Eating meat? Amid our unprecedented access to a wide range of moral arguments and views, all competing for our allegiance, we are left to come to our own moral conclusions. If we are being brutally honest with ourselves, we probably aren’t absolutely certain about all of our moral views.

In these conditions, moral uncertainty is the norm. But, as the Samuel Beckett line goes, “You must go on.” Even if you don’t know for sure what the right moral view is, reality refuses to stop the clock to let you figure it out. You have to act one way or another, despite your moral uncertainty. Being uncertain doesn’t take you off the hook of moral responsibility. Neither does refusal to act. As climate change illustrates, refraining from taking decisions can be just as disastrous as making the wrong decisions.

So, how can you go on under these conditions of moral uncertainty? Let’s take a concrete example. What if you think eating meat is morally permissible, but you’re not totally sure? If you’re willing to admit there’s some chance you could be wrong about the morality of vegetarianism, what should you do? Keep eating meat? Or give it up?

The philosopher William MacAskill argues that if you are morally uncertain about vegetarianism, you should give up eating meat. In fact, even if you think there’s only a 10% chance that vegetarianism is the right moral view, you should still give up meat.

MacAskill thinks there’s an asymmetry in the moral risks you’re running. “If you eat veggie and eating meat is permissible, well, you’ve only lost out on a bit of pleasure,” says MacAskill, “But if you eat meat and eating meat is impermissible, you’ve done something very wrong.” Maybe you should give up a bit of pleasure to avoid the risk of doing something really morally terrible, even if the probability that you would be doing something really morally terrible is relatively low. “The morally safe option,” claims MacAskill, “is to eat vegetarian.”
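One way to see the asymmetry MacAskill has in mind is as a rough expected-value comparison across the moral views you give some credence to. The sketch below is not MacAskill’s own calculation; the credence and the numerical “moral values” assigned to each outcome are purely illustrative assumptions.

```python
# Illustrative sketch of the moral-risk asymmetry; all numbers are assumptions.
p_veg_right = 0.10  # credence that vegetarianism is the correct moral view

# Assumed "moral value" of each act under each view: eating meat gains a bit
# of pleasure (+1) if permissible, but is seriously wrong (-100) if not.
value = {
    ("eat meat", "meat permissible"): 1,
    ("eat meat", "vegetarianism right"): -100,
    ("eat veggie", "meat permissible"): 0,
    ("eat veggie", "vegetarianism right"): 0,
}

for act in ("eat meat", "eat veggie"):
    expected = ((1 - p_veg_right) * value[(act, "meat permissible")]
                + p_veg_right * value[(act, "vegetarianism right")])
    print(f"{act}: expected moral value = {expected:+.1f}")
```

On these assumed numbers, eating meat comes out at -9.1 in expectation and eating veggie at 0.0, so the “morally safe” option wins even with only 10% credence in vegetarianism. The conclusion, of course, depends entirely on how much worse one takes the potential wrongdoing to be than the lost pleasure.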

We can apply MacAskill’s approach to other problems where we face moral uncertainty. Peter Singer famously argued that failing to donate money to help alleviate suffering in the developing world is just as morally wrong as letting a child drown in front of you. Most of us seem to think that Singer’s moral claims are too strong; we don’t think we are morally obligated to donate to charities, even if we think it is morally good – beyond what we are obligated to do – to donate. However, it seems at least possible that Singer is right. If he is right, then not giving any money would be very wrong, as wrong as letting a child drown. But if Singer is wrong, then all I’d lose by donating is a bit of money. Given the moral risk, the appropriate choice seems to be to donate some money to charity.

These two cases might make MacAskill’s approach look appealing. But it can also get strange. Imagine you really want to have a child. You are near-certain that having a child is morally permissible. In fact, you think having a child, bringing a happy person into the world, would be a serious moral good. You also think there’s a tiny (less than one percent) chance that anti-natalism is true. According to the version of anti-natalism you’re considering, by having a child you’re doing something morally terrible — bringing into existence a chain of human suffering that will continue for millennia. If anti-natalism says that having a child is morally wrong enough, then it would be less morally risky for you to simply not have a child. But should you really not have a child in such a case? Even though you believe with near-certainty that doing so would be a morally good thing? That seems like a strange conclusion.

The ethicists Johan Gustafsson and Olle Torpman give an alternative framework for thinking about how we should act under moral uncertainty. When we think of good, moral people, we generally think they are conscientious; they are typically true to what they believe is right. To put it another way, we think that a moral, conscientious person won’t do what they sincerely believe to be wrong. In the child example, your sincere, near-certain belief is that it is permissible, perhaps even a good thing, to have a child. MacAskill’s approach to dealing with moral uncertainty seems to say you ought not to have a child. But how can a moral theory that you don’t believe in matter more than the one you do believe in? For these reasons, Gustafsson and Torpman propose a much simpler approach: act in accordance with the moral view that you are most confident in. In this case, that would mean you should have the child that you want.

This simpler approach to dealing with moral uncertainty might seem straightforward and convincing. But I invite the reader to go back and apply Gustafsson and Torpman’s approach to the two cases discussed earlier, of charity and vegetarianism. Arguably, their approach gives less convincing advice in these cases.

How we should act given moral uncertainty is an important question for the discordant moment in which we are living. Whether we have the correct answer to this question remains far from clear.

Who Is Accountable for Inductive Risk in AI?

computer image of programming decision trees

Many people are familiar with algorithms and machine learning in applications like social media or advertising, but it can be hard to appreciate the full range of domains to which machine learning has been applied. For example, in addition to regulating all sorts of financial transactions, an algorithm might be used to evaluate teaching performance, or in the medical field to help identify illness or those at risk of disease. With this large array of applications comes a large array of ethical considerations that become relevant as more and more real-world consequences come into play. For example, machine learning has been used to train AI to detect cancer. But what happens when the algorithm is wrong? What are the ethical issues when it isn’t completely clear how the AI is making decisions and there is a very real possibility that it could be wrong?

Consider the use of machine learning to predict whether someone charged with a crime is likely to be a recidivist. Because of massive backlogs in various court systems, many have turned to such tools to get defendants through the courts more efficiently. Criminal risk assessment tools consider a number of details of a defendant’s profile and then produce a recidivism score. Lower scores usually mean a more lenient sentence, while higher scores usually produce harsher ones. The reasoning is that if you can accurately predict criminal behavior, resources can be allocated more efficiently for rehabilitation or for prison sentences. Also, the thinking goes, decisions are better made on the basis of data-driven recommendations than on the personal feelings and biases that a judge may have.

But these tools have significant downsides as well. As Cathy O’Neil discusses in her book Weapons of Math Destruction, statistics show that in certain counties in the U.S. a Black person is three times more likely to receive a death sentence than a white person, and yet the computerized risk models intended to reduce this kind of prejudice are no less prone to bias. As she notes, “The question, however, is whether we’ve eliminated human bias or simply camouflaged it with technology.” She points out that questionnaires used in some models include questions about “the first time you ever were involved with the police,” which is likely to yield very different answers depending on whether the respondent is white or Black. As she explains, “if early ‘involvement’ with the police signals recidivism, poor people and racial minorities look far riskier.” So, the fact that such models are susceptible to bias also means they are not immune to error.

As mentioned, researchers have also applied machine learning in the medical field. Again, the benefits are not difficult to imagine. Cancer-detecting AI has been able to identify cancer that humans could not. Faster detection of a disease like lung cancer allows for quicker treatment and thus the ability to save more lives. Right now, about 70% of lung cancers are detected in late stages, when they are harder to treat.

AI has the potential not only to save lives, but also to increase the efficiency of medical resources. Unfortunately, just like the criminal justice applications, applications in the medical field are also subject to error. For example, hundreds of AI tools were developed to help deal with the COVID-19 pandemic, but a study by the Turing Institute found that these tools had little impact. In a review of 232 algorithms for diagnosing patients, a recent medical journal paper found that none of them were fit for clinical use. Despite the hype, researchers are “concerned that [AI] could be harmful if built in the wrong way because they could miss diagnoses and underestimate the risk for vulnerable patients.”

There are lots of reasons why an algorithm designed to detect or sort things might make errors. Machine learning requires massive amounts of data, so an algorithm’s ability to perform correctly depends on the quality of the data it is trained on. As O’Neil has pointed out, a problematic questionnaire can lead to biased predictions. Similarly, incomplete training data can cause a model to perform poorly in real-world settings. As Koray Karaca’s recent article on inductive risk in machine learning explains, creating a model requires precise methodological choices to be made. But these decisions are often driven by certain background assumptions – plagued by simplification and idealization – which create problematic uncertainties. Different assumptions can create different models and thus different possibilities of error. And there is always a gap between a finite amount of empirical evidence and an inductive generalization, meaning that there is always an inherent risk in using such models.

If an algorithm determines that I have cancer and I don’t, it could dramatically affect my life in all sorts of morally salient ways. On the other hand, if I have cancer and the algorithm says I don’t, it can likewise have a harmful moral impact on my life. So is there a moral responsibility involved and if so, who is responsible? In a 1953 article called “The Scientist Qua Scientist Makes Value Judgments” Richard Rudner argues that “since no scientific hypothesis is completely verified, in accepting a hypothesis the scientist must make the decision that evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis…How sure we need to be before we accept a hypothesis will depend on how serious a mistake would be.”

These considerations regarding the possibility of error and the threshold for sufficient evidence represent calculations of inductive risk. For example, we may judge the consequences of asserting that a patient does not have cancer when they actually do to be far worse than the consequences of asserting that a patient does have cancer when they actually do not. Because of this, and given our susceptibility to error, we may accept a lower standard of evidence for concluding that a patient has cancer and demand a higher standard for concluding that a patient does not, so as to minimize the worst consequences should an error occur. But how do algorithms do this? Machine learning involves optimizing a model by testing it against sample data. Each time an error is made, a learning algorithm updates and adjusts parameters to reduce the total error, which can be calculated in different ways.

Karaca notes that optimization can be carried out either in cost-sensitive or -insensitive ways. Cost-insensitive training assigns the same value to all errors, while cost-sensitive training involves assigning different weights to different errors. But the assignment of these weights is left to the modeler, meaning that the person who creates the model is responsible for making the necessary moral judgments and preference orderings of potential consequences. In addition, Karaca notes that inductive risk concerns arise for both the person making methodological choices about model construction and later for those who must decide whether to accept or reject a given model and apply it.
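To make the cost-sensitive/cost-insensitive distinction concrete, here is a minimal sketch of how a modeler’s judgment that false negatives (missed cancers) are worse than false positives can be encoded as class weights. The data, weights, and library choice are illustrative assumptions, not details drawn from Karaca’s paper or from any deployed diagnostic system.

```python
# Minimal sketch: cost-insensitive vs. cost-sensitive training on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for patient data; label 1 plays the role of "has cancer".
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Cost-insensitive: every misclassification counts the same.
plain = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Cost-sensitive: errors on the "cancer" class are weighted 10x more heavily,
# encoding the judgment that a missed cancer is the worse outcome.
weighted = LogisticRegression(max_iter=1000,
                              class_weight={0: 1, 1: 10}).fit(X_train, y_train)

for name, model in [("cost-insensitive", plain), ("cost-sensitive", weighted)]:
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    print(f"{name}: missed cancers (false negatives) = {fn}, "
          f"false alarms (false positives) = {fp}")
```

Typically the weighted model trades more false alarms for fewer missed cancers. Which trade-off is acceptable is precisely the kind of value judgment that, on Karaca’s account, the modeler cannot avoid making.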

What this tells us is that machine learning inherently involves making moral choices, and that these choices bear on our evaluations of what counts as an acceptable risk of error. The question of what counts as a “successful” model is tied up with our own concerns about risk. But this only raises an additional question: How is there accountability in such a system? Many companies hide the results of their models or even their existence. But, as we have seen, moral accountability in the use of AI is of paramount importance. At each stage of assessment, we encounter an asymmetry of information that forces the victims of such AI to “prove” the algorithm wrong against the available evidence demonstrating how “successful” the model is.

Risk, Regret, and Sport

photograph of two soccer players competing in air for ball

The legendary soccer player Denis Law recently announced that he has been suffering with dementia for several years. Law attributes his dementia to heading soccer balls. We’ve known for decades – in 2002 Jeff Astle’s death from dementia was linked to heading – that there is a link between heading and brain damage.

Other sports face similar issues. American football’s problem with Chronic Traumatic Encephalopathy (CTE) is well documented. CTE can lead to, amongst other things: aggression, depression, and paranoia that can arise in people in their 20s; it also can bring memory loss, dementia, and eventually death. Other sports like rugby and hockey also have links to CTE, and they have their own problems with brain damage.

Broadly, people who partake in sports that involve collisions (including things like headers) are at risk of brain injury. This is true especially when playing at higher levels of competition (as opposed to playing occasional pickup games), where impacts are bigger and players spend more time playing their sport.

How should players think about this risk? Last year, Jamie Carragher, a former top-level player for Liverpool FC and current pundit, said: “If I suffer from dementia in my old age and research suggests that is because of my football career, I will have no regrets.” Carragher recognizes that we are now better informed about the risks and need to make changes to minimize them (here is one: fewer headers in training), but he thinks the risks are still worthwhile, and that we must keep some of the risky elements in football: players should still be able to challenge each other in ways that risk sickening head-clashes.

I think Carragher’s thoughts are widely shared. Playing soccer, or rugby, or football is worth the risk of dementia later in life, so much so that players won’t regret playing their sport. But I think this line of thought rests on some troubling assumptions.

The first is the temptation to make a false comparison between the ordinary risks of sport and brain damage. We should obviously grant that some injuries are acceptable risks. I played rugby for over a decade, and I spent several months with sprained ankles and bad shoulders. It’s no surprise that I now occasionally get the odd ache. Almost every sport carries some risk of injury, and if we grant (as I think we should) that playing sports can be a meaningful part of our lives, these risks should not get in the way of us playing. When Carragher says that “there was a danger of injury every time I played,” he is right, but he misses the point. These brain injuries are not the same as (to take his example) a broken leg. They are highly damaging – far more long-term and life-changing than a broken leg usually is.

This leads to a deeper point. Living with dementia can involve a loss of awareness, a loss of memory, and confusion; CTE can lead to personality changes. We might reasonably think of these as transformative experiences. L. A. Paul developed the notion of a transformative experience. To take one of her examples, it’s impossible to know what it is like to be a parent – what it is to love your offspring, what it is to have such a particular duty of care – before becoming a parent. We can only know what it is like to be a parent by becoming a parent. But that means that choosing to become (or not become) a parent is always shrouded in ignorance. (Her other major example is becoming a vampire: we can’t tell what it will be like to be immortal creatures of the night.)

Perhaps the decision to play a sport that might lead to a serious brain injury involves some element of a transformative experience: you can’t know what your life would be like if you had CTE or dementia – confused, with a ruined memory and a changed personality – so perhaps you shouldn’t be so keen to declare that you won’t regret it. You might not feel that way when dementia takes its grip.

Here is another problem. Carragher’s line of thought also assumes that regret lines up with justification. That is to say, if you won’t regret something, then you were justified in taking that risk – you were right to do it. But, as R. Jay Wallace has argued, this isn’t always the case. In Wallace’s example, a young girl becomes pregnant. She was far too young, and both she and her child would have had a better time of it had she waited several more years. Her decision to have a child was unjustified. Yet she surely cannot regret her decision: after all, she loves this child.

It isn’t surprising that people who have dedicated decades to their sports – sports that make their lives meaningful – won’t regret what they have done. But that doesn’t mean they made the right choice. There are plenty of other meaningful options out there: like taking up sculpting, squash, or chess.

Yet thinking about regret and justification also brings up something in favor of taking these risks: some people will have nothing to regret at all because brain damage is far from guaranteed, even in football. Bernard Williams argued that we might sometimes take a risk and that risk will be justified by the results. If you abandon your wife and children to set off on a career as a painter, you might have made a grave error if you fail in your career – but perhaps it will all have been worth it if you succeed. Likewise, Carragher, if he avoids dementia, might have been perfectly justified in playing soccer. Others might not be so lucky.

Sports play a meaningful role in many of our lives, and we are all happy to live with some level of risk. But we shouldn’t just say: “I won’t regret playing, even if I get dementia.” To note that you wouldn’t regret playing just because of a broken leg is to compare chalk and cheese; we don’t really know what our lives would be like with dementia, so we shouldn’t be confident in such assertions; and even if we end up with no regrets, that doesn’t mean we did the right thing. This discussion requires serious conversations about risk management and the meaningfulness of sport – it shouldn’t be conducted at the level of glib sayings.

The Inherent Conflict in Informed Consent

photograph of doctor's shingle with caduceus image

A recent study has drawn attention to the relatively poor medical reasoning capabilities of terminally-ill patients. When confronted with complicating factors, a group of terminal cancer patients demonstrated decreased appreciation and understanding of their prognosis in comparison to their healthy adult counterparts. More concerning, perhaps, is the study’s finding that attending physicians were not consistent in recognizing these deficiencies in competence. Ultimately, the study supports mounting evidence that the bright line we draw to separate individual autonomy from institutional paternalism is too simplistic. Patient competence is overestimated and physicians’ impact is underappreciated. These findings have important implications for our conceptualization of informed consent.

Informed consent is a process, made up of the many communications between a doctor and a patient (or clinical investigator and research participant). Details regarding the purpose, benefits, and risks of, as well as alternatives to, a given treatment are relayed so as to enable potential clients to deliberate and decide whether the medical intervention offered aligns with their interests. As a patient has all the freedom to decide what should or should not happen to her body prior to undergoing a clinical trial or medical procedure, the decision is to be made free from coercion; the doctor acts so as to facilitate patient decision-making. Achieving this requires adequate, accurate information be provided in terms the patient can easily understand.

Legally, informed consent represents a basic threshold of competency that a patient must be assisted in meeting in order to legally acquiesce to a medical procedure. It exists to safeguard bodily integrity — the right of self-determination over our bodies. It grants legal permission and protects healthcare providers from liability.

Morally, informed consent is a compromise between epistemic merits and welfare interests. Informed consent balances doctors’ medical expertise against patients’ unique knowledge of their preferences. While physicians might know best how to treat injury and combat afflictions, they are less equipped to make determinations about the kind of risks a patient is willing to take or the value she might place on different health outcomes. As patients must live with the consequences of whatever decision is made, we tend to privilege patient autonomy. Once properly informed, we believe that the patient is best-positioned to determine the most suitable course of treatment.

The trouble, as studies like this show, is that patients are not the autonomous healthcare consumers we assume them to be. They are often dependent on the doctor’s expertise and medical advice as many suffer from some combination of informational overload and emotional overwhelm. Patients’ weak grasp of their medical prognosis is offset only by the trust they have in their physician and a general deference to authority.

This means that informed consent is, in many cases, simply not possible. Patients who are very young, very ill, mentally impaired, or even merely confused are not capable of demonstrating sufficient competence or granting meaningful permission. Unfortunately, patient literacy is overestimated, communication barriers go undetected, and patient misunderstanding and noncompliance continues. Findings suggest that thorough assessment of patient competence is rare, and patients’ comprehension is questioned only in those cases where a patient’s decision deviates from the physician’s recommendations.

An increased focus on patient education may not be enough to combat these problems. Efforts to present information in a more accessible manner may bring some improvement, but there are many medical situations where the sheer complexity or volume of the information involved outstrips the decision-making capacity of everyday patients. Some types of medical information, like risk assessments, use probability estimates that would require formal training to fully appreciate and thus overburden patients’ capacity to adequately comprehend and reasonably deliberate. In such cases, no amount of dialogue would allow a patient to attain the understanding necessary for informed decision-making.

In the end, an equitable doctor/patient consultation is rarely a reality. As Oonagh Corrigan explains,

“There needs to be a realisation that the type of illness a patient is suffering from, her anxiety about the likely trajectory of her illness, her expectations about treatment and, in general, her implicit trust in the doctor and medical science mean that ‘informed choices’ based on an adequate understanding of the information and on careful consideration of the potential benefits and risks, are difficult to achieve in practice.”

We cannot maintain our idealistic divide between autonomous decision‐making on the one hand, and autocratic paternalism on the other. From framing effects to geographic bias, a physician is bound to have a greater hand in decision-making than our common conception of the dynamic allows.

Some may say that this liberty is sufficiently curtailed by the Hippocratic Oath. A doctor’s duty to the health of a patient is thought to limit the possibility of abuse. But the physician’s obligation to do no harm offers little guidance on the ground. The duties of nonmaleficence and beneficence share no necessary tie to the particular social and cultural values of patients. They would, for example, recommend the administering of blood transfusions to patients whose deeply-held religious beliefs disallow it.

Finding a suitable middle ground between individual autonomy and institutional paternalism is particularly tricky. The territory of informed consent is already a political battleground. One need look no further than the dispute concerning mandatory pre-abortion counseling or talk therapy for transgender patients. While we may wish physicians to take a larger role in the care of those who genuinely lack capacity, this would inevitably lead to the silencing of legitimate interests. Any acceptable resolution of this tension is bound to be hard-won.