
The Hidden Ethics of Inflation

photograph of hands removing fiver from wallet

A danger of the modern obsession with data, facts, and figures is that it can disguise questions of ethics as questions of facts. Authors here at the Prindle Post, as well as elsewhere, have discussed the slipperiness of the slogan “follow the science.” It is easy to follow the science to belief in the reality of COVID-19 and the effectiveness of vaccination, but far harder to follow the science to a judgment about what level of risk is acceptable.

Our measures and metrics, the ways we describe the world we inhabit, involve more than taking a ruler to the structure of the universe. Science requires reflection and judgment. An awareness of the way our facts and figures are constituted opens up new space for ethical and political deliberation.

Inflation is a good case in point. The naturalization of inflation as a simple descriptive fact about the world, like bad weather, prevents a discussion of the causes of inflation and the choices behind those causes. The reporting of inflation as a single tell-all figure hinders awareness of whom it impacts most.

Inflation as simultaneously fact and decision

It is not uncommon to see inflation referenced as a cause or explanation for higher prices, in the sense that prices are higher because of inflation. For instance, in an op-ed for Newsweek, former congressman Newt Gingrich wrote, “Each day that inflation increases prices, the Democrats lose ground with ordinary Americans.” Similarly, CNBC declared, “inflation has raised the prices of many goods people want for a home revamp.” However, as economists define it, “inflation” is simply the word we use to describe any general increase in the prices of goods and services in a country over some period of time. “Inflation,” then, no more explains a price increase than a “drunk-making power” explains the inebriating effect of alcohol. What matters is the why of inflation.

It is of course likely that businesses are partly raising prices for reasons consumers can appreciate – COVID-tangled supply lines, elevated raw-materials costs, investment in production capacity or workforce, and higher worker pay. However, as critics of current record profits – Elizabeth Warren, economist Paul Krugman, and others – have pointed out, at least some inflation might be the result of large corporations leveraging pricing power due to market dominance or consolidation. Inflation is, if nothing else, a good excuse to raise prices.

But even if one grants the contentious point that corporations are actively doing this, we might still believe that it is perfectly fine for a corporation to increase profits when the consumer demand is there. These are for-profit entities after all. I am, however, not concerned with the ethics of this particular practice at present; rather, my point is that all those important debates about corporate responsibility, pricing power, and anti-trust are being obscured by our insistence on treating inflation like the weather – that is, as a force beyond human control.

Likewise, it is taken as natural that higher product costs should be passed on to consumers. Here again, there can be choice. Corporations could choose to cut executive bonuses or curtail stock buybacks (which are currently surging) rather than exclusively opt to increase prices. Yet further choices lie in the background, about tax levels for very high-income earners and the permissibility of buybacks, which were largely illegal before 1982.

Even for the notionally bloodless topics of supply lines and logistics, choices were made by corporations about prioritizing efficiency over resilience, about offshoring and the use of cheap foreign labor, and about concentration of manufacturing in specific markets.

These choices may or may not be defensible, given one’s values and economic framework, but it is imperative to recognize them as choices, occurring in a specific political and institutional context which facilitated them, and which could be otherwise.

Whosoever hath not, from him shall be taken away even that he hath

All sorts of significant choices and hidden values are buried within the way inflation is measured.

Inflation is typically measured by the Bureau of Labor Statistics’ Consumer Price Index (CPI). The index is, in their own words, “a measure of the average change over time in the prices paid by urban consumers for a representative basket of consumer goods and services.” (See their  FAQ.) The “basket” of goods includes gas, clothes, groceries, healthcare, and other typical purchases.

The Bureau of Labor Statistics collects an enormous amount of data, from across regions and consumer income levels. Economists quibble about the details – about how perfectly it captures overall inflation – but the more foundational concern from an ethical perspective is the move from an inflation measure to how that measure impacts a particular consumer. While national policy decisions may take the details into account, national news will typically only report the overall Consumer Price Index. However, the very act of averaging across the diverse economic landscape of the United States entails that the measure is insensitive to the specifics.

This happens in at least three ways. First, price increases are uneven across the bundle of goods. Inflation of 7% does not mean that gas rose 7% and frozen concentrated orange juice rose 7%. In fact, gas prices have increased several times that, spiking even higher after the invasion of Ukraine. Second, price increases are uneven across the country. Third, even if the bundle of goods is the same, it represents a different proportion of income for different people. (Stocks and other assets are not immune to negative effects from inflation, but those who hold them can potentially wait inflation out and move money into less sensitive assets.)
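To see how a single headline number can conceal this unevenness, here is a minimal illustrative sketch in Python. The category price changes and household spending shares below are invented for the example, not BLS data; the point is only that the same set of price increases yields a different “experienced” inflation rate for households with different budgets.

```python
# Illustrative only: invented price changes and spending shares, not BLS data.

# Hypothetical year-over-year price change for a few broad categories.
price_changes = {"food": 0.09, "gas": 0.35, "rent": 0.04, "electronics": -0.02}

# Hypothetical expenditure shares for two households (each sums to 1.0).
# Household A spends a larger share of its income on food, gas, and rent,
# roughly the situation of a lower-income consumer.
shares_a = {"food": 0.30, "gas": 0.20, "rent": 0.45, "electronics": 0.05}
shares_b = {"food": 0.15, "gas": 0.10, "rent": 0.30, "electronics": 0.45}

def experienced_inflation(shares, changes):
    """Spending-share-weighted average of category price changes."""
    return sum(shares[c] * changes[c] for c in shares)

print(f"Household A: {experienced_inflation(shares_a, price_changes):.1%}")  # roughly 11%
print(f"Household B: {experienced_inflation(shares_b, price_changes):.1%}")  # roughly 5%
```

The single averaged figure sits somewhere between the two, describing neither household especially well.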

The long and the short of this is that inflation hits different people differently. The people it hits hardest include those who must spend a large proportion of their income on consumable goods like food, those with less financial flexibility to modify their habits and assets, those with primarily cash savings, and those poorly positioned to negotiate inflation adjustments to their pay. The savvy reader may notice that these descriptions circle a central one – those who are already poor.

In the U.K., activist Jack Monroe is developing a Vimes Boots Index that she believes more accurately reflects inflation specifically for people with less money. It is named after a character in a Terry Pratchett novel who comments that the poor cannot choose to buy boots that cost five times as much, even if they last ten times as long, because the poor never have the cash on hand to buy the nicer boots in the first place; a riff on the more general idea that it is expensive to be poor.

Again, this is not to dispute that there is value in reporting the Consumer Price Index. It is instead to attend to the fact that how we discuss inflation and the metrics we use are not simply “following the science,” even the dismal science; they are, either more or less knowingly, decisions that express our values.

 

The author would like to acknowledge the valuable feedback of Rashid CJ Marcano-Rivera on economic matters.

An End to Pandemic Precautions?

photograph of masked man amongst blurred crowd

I feel like I have bad luck when it comes to getting sick. Every time there’s a cold going around, I seem to catch it, and before I started regularly getting the flu shot, I would invariably end up spending a couple of weeks a year in abject misery. During the pandemic, however, I have not had a single cold or flu. And I’m far from alone: not only is there plentiful anecdotal evidence, but there is solid scientific evidence that there really was no flu season to speak of this year in many parts of the world. It’s easy to see why: the measures that have been recommended for preventing the spread of the coronavirus – social distancing, wearing masks, sanitizing and washing your hands – turn out to be excellent ways of preventing the spread of cold and flu viruses, as well.

Now, parts of the world are gradually opening up again: in some countries social distancing measures and mask mandates are being relaxed, and people are beginning to congregate again in larger numbers. It is not difficult to imagine a near-future in which pumps of hand sanitizer are abandoned, squirt bottles disappear from stores, and the sight of someone wearing a mask becomes a rarity. A return to normal means resuming our routines of socializing and working (although these may end up looking very different going forward), but it also means a return to getting colds and flus.

Does it have to? While aggressive measures like lockdowns have been necessary to help stop the spread of the coronavirus, few, I think, would say that such practices should be continued indefinitely in order to avoid getting sick a couple times a year. On the other hand, it also doesn’t seem to be overly demanding to ask that people take some new precautions, such as wearing a mask during flu season, or sanitizing and washing their hands on a more regular basis. There are good reasons to continue these practices, at least to some extent: while no one likes being sick with a cold or flu, for some the flu can be more than a minor inconvenience.

So, consider this claim: during the course of the COVID-19 pandemic, we have had a moral obligation to do our part in preventing its spread. This is not an uncontroversial claim: some have argued that personal liberties outweigh any duty one might have towards others when it comes to them getting sick (especially when it comes to wearing masks), and some have argued that the recommended mandates mentioned above are ineffective (despite the scientific evidence to the contrary). I don’t think either of these arguments are very good; that’s not, however, what I want to argue here. Instead, let’s consider a different question: if it is, in fact, the case that we have had (and continue to have) moral obligations to take measures to help prevent the spread of coronavirus, do such obligations extend to the diseases – like colds and flus – that will return after the end of the pandemic? I think the answer is: yes. Kind of.

Here’s what this claim is not: it is not the claim that social distancing must last forever, that you have to wear a mask everywhere forever, or that you can never eat indoors, or have a beer on a patio, or go into a shop with more than a few people at a time, etc. Implementing these restrictions in perpetuity in order to prevent people from getting colds and flus seems far too demanding.

Here’s what the claim is: there are much less-demanding actions that one ought to take in order to help stop the spread of common viruses, in times when the chance of contracting such a virus is high (e.g., cold and flu season). For instance, you have no doubt acquired a good number of masks and a good quantity of hand sanitizer over the past year-and-change, and have likely become accustomed to using them. They are, I think, merely a mild inconvenience: I doubt that anyone actively enjoys wearing a mask when they take the subway, for example, or squirting their hands with sanitizer every time they go in and out of a store, but it’s a small price to pay in order to help prevent the spread of viruses.

In addition, while in pre-corona times there was perhaps a particular social stigma against wearing medical masks in public, it seems likely that we’ve all gotten used to seeing people wearing masks by now. Indeed, in many parts of the world it is already commonplace for people to wear masks during cold and flu season, or when they are sick or are worried that people they spend time with are sick. That such practices have been ubiquitous in some countries is reason to think that they are not a terrible burden.

There is, of course, debate about which practices are most effective at preventing the spread of other kinds of viruses. Some recent data suggest that while masks can be effective at helping reduce the spread of the flu, perhaps the most effective measures have been ones pertaining to basic hygiene, especially washing your hands. Given that we have become much more cognizant of such measures during the pandemic, it is reasonable to think that it would not be too demanding to expect that people continue to be as conscientious going forward.

Again, note that this is a moral claim, and not, say, a claim about what laws or policy should be. Instead, it is a claim that some of the low-cost, easily accomplishable actions that have helped prevent the spread of a very deadly disease should continue when it comes to preventing the spread of less-deadly ones. Ultimately, returning to normal does not mean having to give up on some of the good habits we’ve developed during the course of the pandemic.

The Bigger Problem with “COVID Conga Lines”

photograph of full subway car with half of the passengers unmasked

On December 9th, days before New York would again order the re-closing of bars and restaurants in an attempt to stem the resurgence of COVID-19 cases seemingly spread by holiday travelers, dozens of members of New York’s Whitestone Republican Club gathered together for a holiday party at a restaurant in Queens; weeks later, multiple attendees have tested positive for the coronavirus and at least one partygoer has been hospitalized. Although restaurants were allowed to open at 25% capacity on the day of the party, restaurant visitors were also required to wear face masks while not eating; videos of the event — including one showing a prominent local candidate for city council happily leading a conga line — revealed that the majority of people in attendance neglected many of the public health guidelines designed to mitigate the spread of COVID-19.

In response to media coverage of its party, the Club released a statement that read, in part, “We abided by all precautions. But we are not the mask police, nor are we the social distancing police. Adults have the absolute right to make their own decisions, and clearly many chose to interact like normal humans and not paranoid zombies in hazmat suits. This is for some reason controversial to the people who believe it’s their job to tell us all what to do.”

Evoking something like “liberty” to defend the flouting of public health regulations is, at this point, a common refrain in conversations criticizing official responses to COVID-19. According to such a perspective, the coronavirus pandemic is viewed more as a private threat to individual freedoms than as a public threat to health and well-being. For various reasons (ranging from basic calculations about personal risk to outright denials of the reality of the virus as a whole), the possibility that someone could unintentionally spread the coronavirus to strangers while unmasked in public is ranked as less significant than the possibility that someone could have their personal liberties inhibited by inconvenient regulations. As some anti-mask protestors (including Representative-elect Marjorie Taylor Greene from Georgia’s fourteenth congressional district) have said: “My body, my choice,” co-opting the long-standing pro-abortion slogan to refer instead to their asserted right to keep their faces uncovered in public, without qualification.

Critics of this perspective often call it “reckless” and chastise so-called “anti-maskers” for being cavalier with their neighbors’ health; in at least one case, people have even been arrested and charged with reckless endangerment for knowingly exposing passengers on a plane to COVID-19. Against this, folks might respond by downplaying the overall effect of coronavirus morbidity: as one skeptic explained in August, “I hear all the time, people are like, ‘I’d rather be safe than sorry, I don’t want to be a grandma killer.’ I’m sorry to sound so harsh — I’m laughing because grandmas and grandpas die all the time. It’s sad. But here’s the thing: It’s about blind obedience and compliance.”

At present, the United States has registered more than 20 million cases of COVID-19 and over 340,000 patients have died from the illness; while these numbers are staggering to many, others might do some simple math to conclude that over 19 million people have (or might still potentially) recover from the disease. Those who view a mortality rate of “only” 1.5% as far too low to warrant extensive governmental regulation of daily life might weigh the guarantee of government control against the risk of contracting a disease and measure the former as more personally threatening than the latter. (It is worth reiterating at this point that COVID-19 patients are five times more likely to die than are flu patients — the law of large numbers is particularly unhelpful when trying to think about pandemic statistics.) Even if someone knows that they might unintentionally spread the coronavirus while shopping, boarding a plane, or partying during the holidays, they might also think it’s unlikely that their accidental victim will ultimately suffer more than they might personally suffer from an uncomfortable mask.

To be clear, the risks of contracting COVID-19 are indeed serious and evidence already suggests that even cases with only mild initial symptoms might nevertheless produce drastic long-lasting effects on a patient’s pulmonary, cardiovascular, immune, nervous, or reproductive systems. But let’s imagine for a moment that none of that is true: what if the perspective described above were completely and unequivocally correct and the Whitestone Republican Club’s recommendation to “Make your own calculated decisions, don’t give in to fear or blindly obey the media and politicians, and respect the decisions of others” was really as simple and insulated as they purport it to be?

There would still be a significant problem.

In general, we take for granted that the strangers we meet when we step out of our front door are not threats to our personal well-being. Some philosophers have explained this kind of expectation as being rooted in a kind of “social contract” or agreement to behave in certain ways relative to others such that we are afforded certain protections. On such views, individuals might be thought of as having certain duties to protect the well-being of their fellow citizens in certain ways, even if those duties are personally inconvenient, because those citizens benefit in turn from the protection of others (shirking public health regulations might then be seen, on this view, as a kind of free rider problem).

However, this doesn’t clearly explain the sort of moralizing condemnation directed towards anti-maskers; why, for example, might someone in a city far from Queens care about the choices made at the Whitestone Republican Club’s holiday party? Certainly, it might seem odd for someone in, say, central Texas to expect someone else in southeast New York to uphold a kind of give-and-take contractarian social contract!

But, more than just assuming that strangers are not threats, we often suppose that our civic neighbors are, in some sense, our partners who work in tandem with us to accomplish mutually beneficial goals. Here an insight from John Dewey is helpful: in his 1927 book The Public and Its Problems, Dewey points out that even before we talk about the organization and regulation of states or governments, we first must identify a group of people with shared interests — what Dewey calls a “public.” After considering how any private human action can have both direct and indirect consequences, Dewey explains that “The public consists of all those who are affected by the indirect consequences of transactions to such an extent that it is deemed necessary to have those consequences systematically cared for.” On this definition, many different kinds of “publics” (what others might call “communities” or “social groups”) abound, even if they lack clearly defined behavioral expectations for their members. To be a member of a public in this sense is simply to be affected by the other members of a group that you happen to be in (whether or not you consciously agreed to be a part of that group). As Dewey explains later, “The planets in a constellation would form a community if they were aware of the connection of the activities of each with those of the others and could use this knowledge to direct behavior.”

This might be why negligence in New York of public health regulations bothers people even if they are far elsewhere: that negligence is evidence that partygoers are either not “aware of the connection of the activities of each with those of the others” or they are not “us[ing] this knowledge to direct behavior.” (Given the prevalence of information about COVID-19, the latter certainly seems most likely.) That is to say, people who don’t attend to the indirect consequences of their actions are, in effect, not creating the collective “public” that we take for granted as “Americans” (even apart from any questions of governmental or legal regulations).

So, even if no one physically dies (or even gets sick) from the actions of someone ignoring public health regulations, that ignorance nevertheless damages the social fabric on which we depend for our sense of cultural cohesion that stretches from New York to Texas and beyond. (When such negligence is intentional, the social fabric is only rent deeper and more extensively.) Americans often wax eloquent about unifying ideals like “E Pluribus Unum” that project an air of national solidarity, despite our interstate diversity: one of the many victims of the COVID-19 pandemic might end up being the believability of such a sentiment.

Waiting for a Coronavirus Vaccine? Watch Out for Self-Deception

photograph of happy smiley face on yellow sticky note surrounded by sad unhappy blue faces

In the last few months, as it has become clear that the coronavirus won’t be disappearing anytime soon, there has been a lot of talk about vaccines. The U.S. has already started several trials, and both Canada and Europe have followed suit. The lack of a vaccine has made even more evident how challenging it is to coexist with the current pandemic. Aside from the more extreme consequences that involve hospitalizations, families and couples have been separated for what is a dramatic amount of time, and some visas have been halted. Unemployment rates have hit record numbers, with what is predicted to be a slow recovery. Restaurants, for example, have recently reopened, yet it is unclear what their future will be when patio season soon comes to an end. With this in mind, many (myself included) are hoping that a vaccine will come, the sooner the better.

But strong interest in a vaccine raises the worry of how this influences what we believe, and in particular how we examine evidence that doesn’t fit our hopes. The worry is that one might indulge in self-deception. What do I mean by this? Let me give you an example that clarifies what I have in mind.

Last week, I was talking to my best friend, who is a doctor and, as such, typically defers to experts. When my partner and I told my friend of our intention to get married, she reacted enthusiastically. Unfortunately, the happiness of the moment was interrupted by the realization that, due to the current coronavirus pandemic, the wedding would need to take place after the distribution of a vaccine. Since then, my friend has repeatedly assured me that there will be a vaccine as early as October on the grounds that Donald Trump has guaranteed it will be the case. When I relayed to her information coming instead from Dr. Anthony Fauci, who believes the vaccine will be available only in 2021, my friend embarked on effortful mental gymnastics to justify (or better: rationalize) why Trump was actually right.

There is an expression commonly used in Italian called “mirror climbing.” Climbing a mirror is an obviously effortful activity, and it is also bound to fail because the mirror’s slippery surface makes it easy to fall. Italians use the expression metaphorically to denote the struggle of someone attempting to justify a proposition that by their own lights is not justifiable. My friend was certainly guilty of some mirror climbing, and she is a clear example of someone who, driven by the strong desire to see her best friend getting married, deceives herself into believing that the vaccine will be available in October. This is in fact how self-deception works. People don’t simply believe what they want, for that is psychologically impossible. You couldn’t possibly make yourself believe that the moon was made of cheese, even if you wanted to. Beliefs are just not directly controllable like actions. Rather, it is our wishes, desires, and interests that influence the way we come to believe what we want by shaping how we gather and interpret evidence. We might, for example, give more importance to reading news that aligns with our hopes and scroll past news titles that question what we would like to be true. We might give weight to a teaspoon of evidence coming from a source we wouldn’t normally trust, while withholding credibility from evidence coming from sources that we know are reliable.

You might ask, though, how is my friend’s behavior different from that of someone who is simply wrong rather than self-deceived? Holding a belief that turns out to be false usually happens out of mistake, and as a result, when people correct us, we don’t have problems revising that belief. Self-deception, instead, doesn’t happen out of mere error; it is driven by a precise motivation — desires, hope, fears, worry, and so on — which biases the way we collect and interpret evidence in favor of that belief. Consider my friend again. She is a doctor, and as such she always trusts experts. Now, regardless of political views, Trump, contrary to Dr. Fauci, is not an expert in medicine. Normally, my friend knows better than to trust someone who is not an expert, yet the only instance when she doesn’t is one where there is a real interest at stake. This isn’t a coincidence; the belief that there will be a vaccine in October is fueled by a precise hope. This is a problem because our beliefs should be guided by evidence, not wishes. Beliefs, so to speak, are not designed to make us feel better (contrary to desires, for example). They are supposed to match reality, and as such be a tool that we use to navigate our environment. Deceiving ourselves that something is the case when it’s not inevitably leads to disappointment because reality has a way of intruding on our hopes and catching up with us.

Given this, what can we do to prevent falling into the grip of self-deception? Be vigilant. We are often aware of our wishes and hopes (just like you are probably aware now that you’re hoping a vaccine will be released soon). Once we are aware of our motivational states, we should slow down our thoughts and be extra careful when considering evidence in favor of what we hope is true. This is the first step in protecting ourselves from self-deception.

Causality and the Coronavirus

image of map of US displayed as multi-colored bar graph

“Causality” is a difficult concept, yet beliefs about causes are often consequential. A troubling illustration of this is the claim, which is being widely shared on social media, that the coronavirus is not particularly lethal, as only 6% of the 190,000+ deaths attributed to the virus are “caused” by the disease.

We tend to think of causes in too-simplistic terms

Of all of the biases and limitations of human reasoning, our tendency to simplify causes is arguably one of the most fundamental. Consider the hypothetical case of a plane crash in Somalia in 2018. We might accept as plausible causes things such as the pilot’s lack of experience (say it was her first solo flight), the (old) age of the plane, the (stormy) weather, and/or Somalia’s then-status as a failed state, with poor infrastructure and, perhaps, an inadequate air traffic control system.

For most, if not all, phenomena that unfold at a human scale, a multiplicity of “causes” can be identified. This includes, for example, social stories of love and friendship and political events such as wars and contested elections.1

Causation in medicine

Causal explanations in medicine are similarly complex. Indeed, the CDC explicitly notes that causes of death are medical opinions. These opinions are likely to include not only an immediate cause (“final disease or condition resulting in death”), but also an underlying cause (“disease or injury that initiated the events resulting in death”), as well as other significant conditions which are or are not judged to contribute to the underlying cause of death.

In any given case, the opinions expressed on the death certificate might be called into question. Even though these opinions are typically based on years of clinical experience and medical study, they are limited by medical uncertainty and, like all human judgments, human fallibility.

When should COVID count as a cause?

Although the validity of any individual diagnosis might be called into question, aggregate trends are less equivocal. Consider this graph from the CDC which identifies the number of actual deaths not attributed to COVID-19 (green), additional deaths which have been attributed to COVID-19 (blue), and the upper bound of the expected number of deaths based on historical data (orange trend line). Above the blue lines there are pluses to indicate weeks in which the total number of deaths (including COVID) exceeds the expected number by a statistically significant margin. This has been true for every week since March 28. In addition, there are pluses above the green lines indicating where the number of deaths excluding COVID was significantly greater than expected. This is true for each of the last eight weeks (ignoring correlated error, we would expect such a finding fewer than one in a million times by chance). This indicates that the number of deaths due to COVID in America has been underreported, not overreported.
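As a rough check on that “one in a million” figure, here is a back-of-the-envelope sketch in Python. It assumes, purely for illustration, that each week independently has a 5% chance of exceeding the expected upper bound by chance alone; both the independence assumption and the 5% figure are simplifications (the text above already flags that correlated error is being set aside).

```python
# Back-of-the-envelope only: assumes independent weeks and an illustrative
# 5% per-week chance of exceeding the bound by luck alone.

p_single_week = 0.05   # assumed chance one week exceeds the expected bound by chance
n_weeks = 8            # consecutive weeks of significant non-COVID excess deaths

p_all_by_chance = p_single_week ** n_weeks
print(f"Chance all {n_weeks} weeks exceed the bound by luck: {p_all_by_chance:.1e}")
print("Fewer than one in a million?", p_all_by_chance < 1e-6)  # True
```

Even under these generous assumptions the probability is many orders of magnitude below one in a million.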

Among the likely causes for these ‘non-COVID’ excess deaths, we can point, particularly early in the pandemic, to a lack of familiarity with, and testing for, the virus among medical professionals. As the pandemic unfolded, it is likely that additional deaths can be attributed, in part, to indirect causal relationships such as people delaying needed visits to doctors and hospitals out of fear, and the social, psychological, and economic consequences that have accompanied COVID in America. Regardless, the bottom line is clear: without COVID-19, over two hundred thousand other Americans would still be alive today. The pandemic has illuminated, tragically, our interconnectedness and with it our responsibilities to each other. One part of this responsibility is to deprive the virus of the opportunity to spread by wearing masks and socially distancing. But this is not enough: we need to stop the spread of misinformation as well.

 

1 Some argue that we can think of individual putative causes as “individually unnecessary” but as “jointly sufficient.” In the 2000 US Presidential Election, for example, consider the presence of Ralph Nader on the ballot, delays in counting the vote in some jurisdictions, the Monica Lewinsky scandal, and other phenomena such as the “butterfly ballot” in Palm Beach County, Florida. Each of these might have been unnecessary to lead the election to be called for G.W. Bush, but they were jointly sufficient to do so.

On “Doing Your Own Research”

photograph of army reserve personnel wearing neck gaiter at covid testing site

In early August, American news outlets began to circulate a surprising headline: neck gaiters — a popular form of face covering used by many to help prevent the spread of COVID-19 — could reportedly increase the infection rate. In general, face masks work by catching respiratory droplets that would otherwise contaminate a virus-carrier’s immediate environment (in much the same way that traditional manners have long-prescribed covering your mouth when you sneeze); however, according to the initial report by CBS News, a new study found that the stretchy fabric typically used to make neck gaiters might actually work like a sieve to turn large droplets into smaller, more transmissible ones. Instead of helping to keep people safe from the coronavirus, gaiters might even “be worse than no mask at all.”

The immediate problem with this headline is that it’s not true; but, more generally, the way that this story developed evidences several larger problems for anyone hoping to learn things from the internet.

The neck gaiter story began on August 7th when the journal Science Advances published new research on a measurement test for face mask efficacy. Interested by the widespread use of homemade face-coverings, a team of researchers from Duke University set out to identify an easy, inexpensive method that people could use at home with their cell phones to roughly assess how effective different commonly-available materials might be at blocking respiratory droplets. Importantly, the study was not about the overall efficacy rates of any particular mask, nor was it focused on the length of time that respiratory droplets emitted by mask-wearers stayed in the air (which is why smaller droplets could potentially be more infectious than larger ones); the study was only designed to assess the viability of the cell phone test itself. The observation that the single brand of neck gaiter used in the experiment might be “counterproductive” was an off-hand, untested suggestion in the final paragraph of the study’s “Results” section. Nevertheless, the dramatic-sounding (though misleading) headline exploded across the pages of the internet for weeks; as recently as August 20th, The Today Show was still presenting the untested “result” of the study as if it were a scientific fact.

The ethics of science journalism (and the problems that can arise from sensationalizing and misreporting the results of scientific studies) is a growing concern, but it is particularly salient when the reporting in question pertains to an ongoing global pandemic. While it might be unsurprising that news sites hungry for clicks ran a salacious-though-inaccurate headline, it is far from helpful and, arguably, morally wrong.

Furthermore, the kind of epistemic malpractice entailed by underdeveloped science journalism poses larger concerns for the possibility of credible online investigation more broadly. Although we have surrounded ourselves with technology that allows us to access the internet (and the vast amount of information it contains), it is becoming ever-more difficult to filter out genuinely trustworthy material from the melodramatic noise of websites designed more for attracting attention than disseminating knowledge. As Kenneth Boyd described in an article here last year, the algorithmic underpinnings of internet search engines can lead self-directed researchers into all manner of over-confident mistaken beliefs; this kind of structural issue is only exacerbated when the inputs to those algorithms (the articles and websites themselves) are also problematic.

These sorts of issues cast an important, cautionary light on a growing phenomenon: the credo that one must “Do Your Own Research” in order to be epistemically responsible. Whereas it might initially seem plain that the internet’s easily-accessible informational treasure trove would empower auto-didacts to always (or usually) draw reasonable conclusions about whatever they set their minds to study, the epistemic murkiness of what can actually be found online suggests that reality is more complicated. It is not at all clear that non-expert researchers who are ignorant of a topic can, on their own, justifiably identify trustworthy information (or information sources) about that topic; but, on the other hand, if a researcher does have enough knowledge to judge a claim’s accuracy, then it seems like they don’t need to be researching the topic to begin with!

This is a rough approximation of what philosophers sometimes call “Meno’s Paradox” after its presentation in the Platonic dialogue of that name. The Meno discusses how inquiry works and highlights that uninformed inquirers have no clear way to recognize the correct answer to a question without already knowing something about what they are questioning. While Plato goes on to spin this line of thinking into a creative argument for the innateness of all knowledge (and, by extension, the immortality of the soul!), subsequent thinkers have often taken different approaches to argue that a researcher only needs to have partial knowledge either of the claim they are researching or of the source of the claim they are choosing to trust in order to come to justified conclusions.

Unfortunately, “partial knowledge” solutions have problems of their own. On one hand, human susceptibility to a bevy of psychological biases makes a researcher’s “partial” understanding of a topic a risky foundation for subsequent knowledge claims; it is exceedingly easy, for example, for the person “doing their own research” to be unwittingly led astray by their unconscious prejudices, preconceptions, or the pressures of their social environment. On the other hand, grounding one’s confidence in a testimonial claim on the trustworthiness of the claim’s source seems to (in most cases) simply push the justification problem back a step without really solving much: in much the same way that a non-expert cannot make a reasonable judgment about a proposition, that same non-expert also can’t, all by themselves, determine who can make such a judgment.

So, what can the epistemically responsible person do online?

First, we must cultivate an attitude of epistemic humility (of the sort summarized by the famous remark Plato attributes to Socrates, “I know that I know nothing”) — something which often requires us to admit not only that we don’t know things, but that we often can’t know things without the help of teachers or other subject matter experts doing the important work of filtering the bad sources of information away from the good ones. All too often, “doing your own research” functionally reduces to a triggering of the confirmation bias and lasts only as long as it takes to find a few posts or videos that satisfy what a person was already thinking in the first place (regardless of whether those posts/videos are themselves worthy of being believed). If we instead work to remember our own intellectual limitations, both about specific subjects and the process of inquiry writ large, we can develop a welcoming attitude to the epistemic assistance offered by others.

Secondly, we must maintain an attitude of suspicion about bold claims to knowledge, especially in an environment like the internet. It is a small step from skepticism about our own capacities for inquiry and understanding to skepticism about those of others, particularly when we have plenty of independent evidence that many of the most accessible or popular voices online are motivated by concerns other than the truth. Virtuous researchers have to focus on identifying and cultivating relationships with knowledgeable guides (who can range from individuals to their writings to the institutions they create) on whom they can rely when it comes time to ask questions.

Together, these two points lead to a third: we must be patient researchers. Developing epistemic virtues like humility and cultivating relationships with experts that can overcome rational skepticism — in short, creating an intellectually vibrant community — takes a considerable amount of effort and time. After a while, we can come to recognize trustworthy informational authorities as “the ones who tend to be right, more often than not” even if we ourselves have little understanding of the technical fields of those experts.

It’s worth noting here, too, that experts can sometimes be wrong and nevertheless still be experts! Even specialists continue to learn and grow in their own understanding of their chosen fields; this sometimes produces confident assertions from experts that later turn out to be wrong. So, for example, when the Surgeon General urged people in February to not wear face masks in public (based on then-current assumptions about the purportedly low risk of asymptomatic patients) it made sense at the time; the fact that those assumptions later proved to be false (at which point the medical community, including the epistemically humble Surgeon General, then recommended widespread face mask usage) is simply a demonstration of the learning/research process at work. On the flip side, choosing to still cite the outdated February recommendation simply because you disagree with face mask mandates in August exemplifies a lack of epistemic virtue.

Put differently, briefly using a search engine to find a simple answer to a complex question is not “doing your own research” because it’s not research. Research is somewhere between an academic technique and a vocational aspiration: it’s a practice that can be done with varying degrees of competence and it takes training to develop the skill to do it well. On this view, an “expert” is simply someone who has become particularly good at this art. Education, then, is not simply a matter of “memorizing facts,” but rather a training regimen in performing the project of inquiry within a field. This is not easy, requires practice, and still often goes badly when done in isolation — which is why academic researchers rely so heavily on their peers to review, critique, and verify their discoveries and ideas before assigning them institutional confidence. Unfortunately, this complicated process is far less sexy (and far slower) than a scandalous-sounding daily headline that oversimplifies data into an attractive turn of phrase.

So, poorly-communicated science journalism not only undermines our epistemic community by directly misinforming readers, but also by perpetuating the fiction that anyone is an epistemic island unto themselves. Good reporting must work to contextualize information within broader conversations (and, of course, get the information right in the first place).

Please don’t misunderstand me: this isn’t meant to be some elitist screed about how “only the learned can truly know stuff, therefore smart people with fancy degrees (or something) are best.” If degrees are useful credentials at all (a debatable topic for a different article!) they are so primarily as proof that a person has put in considerable practice to become a good (and trustworthy) researcher. Nevertheless, the Meno Paradox and the dangers of cognitive biases remain problems for all humans, and we need each other to work together to overcome our epistemic limitations. In short: we would all benefit from a flourishing epistemic community.

And if we have to sacrifice a few splashy headlines to get there, so much the better.

To Wear a Mask or Not During the COVID-19 Pandemic

photograph of groups of people walking on busy street wearing protective masks

The COVID-19 pandemic is a worldwide phenomenon that has disrupted people’s lives and the economy. The United States currently leads the world in COVID cases and, as of this writing, has the largest number of confirmed deaths and ranks eighth in deaths per capita due to the virus. There are a number of factors that might explain why the numbers are so high: the United States’ failed leadership in tackling the virus back in December/January, the government’s response to handling the crisis once the virus spread throughout the United States, states’ opening up too early — and too quickly — in May and June, and people’s unwillingness to take the pandemic seriously by not social distancing or wearing face masks. Let us focus on the last point. Why the unseriousness? As soon as the pandemic hit, conspiracy theories regarding the virus spread like — well, like the virus itself. Some are so fully convinced about a conspiracy theory that their beliefs may be incorrigible. Others seem only to doubt mask-wearing as a solution.

Part of the unwillingness to wear face masks is due to the CDC and WHO having changed their positions about wearing masks as a preventative measure. From the beginning, the U.S. Surgeon General claimed that masks were ineffective, but now both the CDC and the WHO recommend wearing them.

Why this reversal? We are facing a novel virus. Science, as an institution, works through confirming and disconfirming hypotheses. Scientists find evidence for a claim, and it supports their hypothesis. As time goes on, they gather new evidence disconfirming the original hypothesis. And as time continues further still, they gather yet more information and evidence and find they were too quick to disconfirm the hypothesis. Because this virus is so new, scientists are working with limited knowledge. There will inevitably be back-and-forth shifts on what works and what doesn’t. Scientists must adapt to new information. Citizens, however, may interpret this as grounds for skepticism about wearing masks, since the CDC and WHO cannot seem to make up their minds. And so people may think: “perhaps wearing masks does prevent the spread of the virus; perhaps it doesn’t. So if we don’t know, then let’s just live our lives as we did.” Indeed, roughly 14% of Americans state they never wear masks. But what if there were a practical argument that might encourage such skeptics to wear a mask, one that didn’t directly rely on the evidence that masks do prevent spreading the virus? What if, despite the skepticism, wearing masks could still be shown to be in one’s best interest? Here, I think using Pascal’s wager can be helpful.

To refamiliarize ourselves, Pascal’s wager comes from Blaise Pascal, a 17th-century French mathematician and philosopher, who wagered that it’s best to believe in God without relying on direct evidence that God exists. To put it succinctly, either God exists or He doesn’t. How shall we decide? Well, we either believe God exists or we believe He doesn’t exist. So then, there are four possibilities:

                     God exists                 God does not exist
Belief in God        1. +∞ (infinite gain)      2. − (finite loss)
Disbelief in God     4. −∞ (infinite loss)      3. + (finite gain)

 

For 1., God exists and we believe God exists. Here we gain the most since we gain an infinitely happy life. If we win, we win everything. For 2., we’ve only lost a little since we simply believed and lost the truth of the matter. In fact, it’s so minimal (compared to infinite) that we lose nothing. For 3., we have gained a little. While we have the truth, there is not infinite happiness. And compared to infinite, we’ve won nothing. And finally, for 4., we have lost everything since we don’t believe in God and it’s an eternity of divine punishment. By looking at the odds, we should bet on God existing because doing so means you win everything and lose nothing. If God exists and you don’t believe, you lose everything and win nothing. If God doesn’t exist, compared to infinite, the gain or loss is insignificant. So through these odds, believing in God is your best bet since it’s your chance of winning, and not believing is your chance of losing.
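The table can also be read as a rough expected-utility comparison. Writing p for whatever nonzero probability one assigns to God’s existence (the notation below is a standard decision-theoretic gloss on the wager, not Pascal’s own):

```latex
\mathbb{E}[\text{believe}]    = p \cdot (+\infty) + (1 - p) \cdot (\text{finite loss}) = +\infty
\qquad
\mathbb{E}[\text{disbelieve}] = p \cdot (-\infty) + (1 - p) \cdot (\text{finite gain}) = -\infty
\qquad \text{for any } p > 0
```

So long as p is not zero, belief has infinitely greater expected value, which is the formal core of cells 1 through 4 above.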

There have been criticisms and responses to Pascal’s wager, but I still find this wager useful as an analogy when applied to mask-wearing. Consider:

                                      Masks Prevent Spreading the Virus           Masks Don’t Prevent Spreading the Virus

Belief in Masks Preventing            (1) Big gain: people’s lives are saved       (2) Finite loss: we wasted some time
Spreading the Virus                   and we can flatten the curve easily.         wearing a piece of cloth over our faces
                                                                                   for a few months.

Disbelief in Masks Preventing         (4) Big loss: we continually spread the      (3) Finite gain: we got the truth of
Spreading the Virus                   virus, hospitals are overloaded with         the matter.
                                      COVID cases, and more deaths.

 

For (1), we have a major gain. If wearing masks prevents the spread of the virus and we do wear masks, then we help flatten the curve, lessen the number of people contracting the virus, and help prevent harms or deaths due to COVID-19. (One model predicts that wearing masks can save up to 33,000 American lives.) This is the best outcome. Suppose (2). If masks do nothing or only minimally prevent the spread of the virus, yet we continue to wear masks, we have wasted very little. Wearing a covering over our faces is simply an inconvenience. Studies show that we don’t lose oxygen by wearing a face mask. And leading experts are hopeful that we may get a vaccine sometime next year; there are promising results from clinical trials. And so wearing masks, a small inconvenience in our lives, is not a major loss. After all, we can still function in our lives with face masks. People who wear masks as part of their profession (e.g. doctors, miners, firefighters, military) still carry out their duties. Indeed, their masks help them fulfill their duties. The inconvenience is a minor loss compared to saving lives and preventing the spread of the virus as stated in (1).

Suppose (3). If (3) is the case, then we’ve avoided an inconvenience, but this advantage is nothing compared to the cost that (4) represents. While we don’t have to wear a mask, being rid of that inconvenience pales in comparison to losing lives unnecessarily and unknowingly spreading the virus. Compared to what we stand to lose in (4), in (3) we’ve won little.

Suppose (4). If we decide (4) is the strategy, we’ve doomed ourselves by making others sicker: we continually spread the virus, and hospitals have to turn away sick people, which leads to more preventable deaths. We’ve lost so many lives and caused the sickness to spread exponentially, all because we didn’t wear a mask.

Note that we haven’t proved that masks work scientifically (although I highly suspect that they do). Rather, we’re doing a rational cost-benefit analysis to determine what the best strategy is. Wearing masks would be in our best interest. If we’re wrong, then it’s a minor inconvenience. But if we’re right, then we’ve prevented contributing to the spread of the COVID-19 virus which has wreaked havoc on many lives all over the globe. Surely, it’s better to bet on wearing masks than not to.
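For readers who like to see the cost-benefit structure spelled out, here is a small Python sketch of the same reasoning. The payoff numbers and the credence that masks work are made-up stand-ins; the argument only needs the stakes in (1) and (4) to dwarf those in (2) and (3).

```python
# Toy payoffs on an arbitrary scale; only their relative sizes matter.
payoffs = {
    (True,  True):  +1000,  # (1) wear masks, masks work: big gain
    (True,  False):    -1,  # (2) wear masks, masks don't work: minor inconvenience
    (False, True):  -1000,  # (4) skip masks, masks work: big loss
    (False, False):    +1,  # (3) skip masks, masks don't work: small gain
}

def expected_value(wear_mask, p_masks_work):
    """Expected payoff of a strategy, given a credence that masks prevent spread."""
    return (p_masks_work * payoffs[(wear_mask, True)]
            + (1 - p_masks_work) * payoffs[(wear_mask, False)])

# Even a skeptic who thinks masks probably don't work (say, 10% credence)
# still does better by wearing one.
for p in (0.5, 0.1):
    print(p, expected_value(True, p), expected_value(False, p))
```

Under these made-up numbers, wearing a mask has the higher expected value for any credence above roughly one in a thousand, which mirrors the conclusion that betting on masks is the safer strategy.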

Individualism in the Time of COVID: The Rights and Wrongs of Face Masks

photograph of large crowd walking through strip mall

There are many ways of understanding individualism. On one understanding, it is equivalent to selfishness or egoism. Those who refuse to wear masks have been labelled, perhaps rightly, as individualists in this sense. Yet some anti-maskers claim to be exercising their rights in refusing to wear a mask. In doing so, they appeal to a more profound understanding of individualism in which we are each owed protection against intrusions by the government or other persons. In the words of philosopher Philippa Foot, rights protect a “kind of moral space, a space which others are not allowed to invade.”  That moral space includes a literal area around our person as a zone of privacy. Philosopher Judith Jarvis Thomson argues “if we have fairly stringent rights over our property, we have very much more stringent rights over our own person.” Here ‘stringent’ means that it would take more to override those rights: what provides sufficient reason to do something that violates your property rights might not provide sufficient reason to override your rights over your person.

If our rights over our person are more stringent than our rights over our property, then it would seem to follow that we should have stringent rights over what we wear. Of course, there are laws regarding what we wear, such as public decency laws, but these are, to most of us, unobtrusive. Most of us aren’t inclined to stroll naked through the local shopping mall, and so laws forbidding that activity don’t strike us as an imposition on our rights. Norms regarding such matters are moreover reasonably stable. So, it should be understandable even to those who disagree with anti-maskers that it could feel like an intrusion for the government to dictate our wearing something like a mask over our faces: as some anti-maskers label it, they feel “muzzled.” But is there a moral right not to wear a mask?

First, we have to ask, in virtue of what do we have any rights at all? This is a difficult philosophical question that has exercised philosophers and legal theorists, resulting in some of the most challenging works in those disciplines. But there are some simple ideas at play that have an intuitive appeal and are easily grasped. The first basic idea is that there is an intrinsic value to each individual. We are not valuable only because of our usefulness to others, but just in virtue of something like our humanity, rationality, or having been created by God. Here accounts diverge, and challenging issues arise. Suffice to say that whatever account is given, at least anyone considering the question of whether to wear a mask certainly has this sort of value.

The second basic idea is that this intrinsic value demands consideration or respect in our thinking and action that takes the form of rights. The individual has a value such that their life cannot be disposed of because it inconveniences me: individuals aren’t fungible and have a certain inviolability. Philosopher Ronald Dworkin put it this way: “rights trump utility.” This means that rights prevent us from doing things to an individual (such as killing them) on the grounds that doing those things would promote overall happiness. There is vagueness to this idea that philosophers and legal theorists work hard to dispel: how much utility does a right trump? Can we kill one to save five? If not, then ten, or one hundred? Again, we need not settle this question to answer the question regarding masks.

Third basic idea: when I violate someone’s rights deliberately, I do wrong. Often this idea will receive a great deal of nuance, delimiting the nature of the wrongdoing and exceptions that may arise in various circumstances. The important point is that rights define a moral space within which I can make choices, even choices that lower my expected utility or that of others, provided that I am not violating their rights in doing so. So the moral space defines a domain of autonomy for individual decision-making and choice but not an unlimited one. To see that it cannot be unlimited requires only a moment’s reflection. If we both claim to have rights in this sense, we must recognize each other’s inviolability at the risk of these claims being meaningless. Instead, we must realize that the assertion of rights imposes obligations on us: we must limit the exercise of our autonomy, taking other bearers of rights into account. Our claim to an unimpeded pursuit of happiness must recognize the claim of others to the same, and so my pursuit of happiness must be framed in a way that takes them into account. The kind of individualism that supports rights claims is grounded in the recognition of the value of the individual and imposes obligations that others take that value into account as they act. This is why anti-maskers are acting inconsistently with the ideas they claim to act on. The notion of rights that they invoke seems to have come uprooted from the moral ideas that ground it and become a merely legal notion. It is possibly a nod to constitutional rights, but one that fails to account for why those constitutional rights were a good idea from a moral point of view.

Those who refuse to wear masks on the grounds of exercising rights seem to have decided that their minor discomfort outweighs the lives of others in wearing a mask. Imagine that I have an exceedingly comfortable shirt, but for whatever reason, it kills one out of every thousand people who look at it. Presumably, I have a moral obligation not to wear that shirt anywhere but away from the view of all onlookers. Others would be within their rights to force me not to wear that shirt or to take on the discomfort of wearing it covered, assuming it won’t have its fatal effect when concealed. I cannot object to these requirements by saying, “but my shirt is so comfortable!” or “covering my shirt makes it less comfortable!” because these questions are put out of consideration by the rights of others. If each of our minor discomforts provides grounds for subjecting others to risk of serious illness and death, then, effectively, none of us have any moral rights. The French theologian and mathematician Blaise Pascal put it concisely: “Respect means: inconvenience yourself.”

Forbidden Knowledge in Scientific Research

closeup photograph of lock on gate with iron chain

It is no secret that science has the potential to have a profound effect on society. This is often why scientific results can be so ethically controversial. For instance, researchers have recently warned of the ethical problems associated with scientists growing lumps of human brain in the laboratory. The blobs of brain tissue, grown from stem cells, developed spontaneous brain waves like those found in premature babies. The hope is that the study offers the potential to better understand neurological disorders like Alzheimer’s, but it also raises a host of ethical worries concerning the possibility that this brain tissue could achieve sentience. In other news, this week a publication in the journal JAMA Pediatrics ignited controversy by reporting a supposed link between fluoride exposure and IQ scores in young children. In addition to several experts questioning the results of the study itself, there is also concern about the potential effect this could have on the debate over the use of fluoride in the water supply; anti-fluoride activists have already jumped on the study to defend their cause. Scientific findings have an enormous potential to dramatically affect our lives. This raises an ethical issue: should certain topics, owing to the ethical concerns they raise, be off-limits for scientific study?

This question is studied in both science and philosophy, and is sometimes referred to as the problem of forbidden knowledge. The problem can include issues of experimental method and whether it follows proper ethical protocols (certain knowledge may be forbidden if obtaining it requires human experimentation), but it can also include the impact that the discovery or dissemination of certain kinds of knowledge could have on society. For example, a recent study found that girls and boys are equally good at mathematics and that children’s brains function similarly regardless of gender. However, there have been several studies going back decades which tried to explain differences in mathematical ability between boys and girls in terms of biological differences. Such studies risk reinforcing gender roles and potentially justifying them as biologically determined, and this can spill over into social life. For instance, Helen Longino notes that such findings could lead to a lower priority being placed on encouraging women to enter math and science.

So, such studies have the potential to impact society, which is an ethical concern, but is that reason enough to make them forbidden? Not necessarily. The bigger problem involves how adequate these findings are, the concern that they could be incorrect, and what society is to do about that until corrected findings are published. For example, in the case of math testing, it is not that difficult to find significant correlations between variables, but the limits of those correlations, and a study’s limited ability to identify causal factors, are often lost on the public. There are also methodological problems: some standardized tests rely on male-centric questions that can skew results, and different kinds of tests and different strategies for preparing for them can also distort our findings. So even if correlations are found, and there are no major flaws in the assumptions of the study, the results may not be very generalizable. In the meantime, such findings, even if they are corrected over time, can create stereotypes in the public that are hard to dislodge.

Because of these concerns, some philosophers argue either that certain kinds of questions should be banned from study, or that studies should avoid trying to explain differences in abilities and outcomes in terms of race or sex. For instance, Janet Kourany argues that scientists have moral responsibilities to the public and should thus conduct themselves according to egalitarian standards. If a scientist wants to investigate differences between racial or gender groups, they should seek to explain those differences in ways that do not assume the difference is biologically determined.

In one of her examples, she discusses studying differences in rates of domestic violence between white and Black communities. A scientist should highlight the similarities in domestic violence within white and Black communities and seek to explain any dissimilarities in terms of social factors like racism or poverty. On a stance like this, research that treats racial difference itself as the explanation for differing rates of domestic violence would constitute forbidden knowledge. Only if these alternative egalitarian explanations empirically fail can a scientist then choose to explore race as a possible explanation of differences between communities. This approach avoids perpetuating a possibly empirically flawed account suggesting that Black people are more prone to violence than members of other groups.

She points out that the alternative risks keeping stereotypes alive even while scientists slowly prove them wrong. Just as in the case of studying mathematical differences, the slow settlement of opinion within the scientific community leaves society free to entertain stereotypes as “scientifically plausible” and to adopt potentially harmful policies in the meantime. In his research on the matter, Philip Kitcher notes that we are susceptible to cognitive asymmetries: it takes far less empirical evidence to maintain stereotypical beliefs than it takes to dislodge them. This is why studying the truth of such stereotypes can be so problematic.

These types of cases seem to offer significant support for labeling particular lines of scientific inquiry forbidden. But the issue is more complicated. First, telling scientists what they should and should not study raises concerns about freedom of speech and freedom of research. We already acknowledge limits on research on the basis of ethical concerns, but this represents a different kind of restriction. One might claim that, so long as science is publicly funded, there are reasonable, democratically justified limits on research, but the precise boundaries of those limits will prove difficult to identify.

Secondly, and perhaps more importantly, such a policy has the potential to exacerbate the problem. According to Kitcher,

“In a world where (for example) research into race differences in I.Q. is banned, the residues of belief in the inferiority of the members of certain races are reinforced by the idea that official ideology has stepped in to conceal an uncomfortable truth. Prejudice can be buttressed as those who opposed the ban proclaim themselves to be the gallant heirs of Galileo.”

In other words, so long as our own cognitive asymmetries are unknown to us, one reaction to such bans on forbidden knowledge will be to push back, insisting that the ban is an undue limitation on free speech for the sake of politics. In the meantime, those who push for such research can become martyrs, and censoring them may only serve to draw more attention to their cause.

This obviously presents us with an ethical dilemma. Given that there are scientific research projects that could have a harmful effect on society, whether the science involved is adequate or not, is it wise to ban such projects as forbidden knowledge? There are reasons to say yes, but implementing such bans may cause more harm or drive more public attention to the very issues they target. Banning research on the development of brain tissue from stem cells, for example, may be wise, but it may also push such research to countries with more relaxed ethical standards, where the potential harms could be much worse. These issues about how science and society relate are likely to be resolved only through greater public education and open discussion about what ethical responsibilities we think scientists should have.

The Ethics of Scientific Advice: Lessons from “Chernobyl”

photograph of Fireman's Monument at Chernobyl

The recently released HBO miniseries Chernobyl highlights several important moral issues that are worth discussing. For example, what should we think about nuclear power in the age of climate change? What can disasters tell us about government accountability and the dangers of keeping unwelcome news from the public? This article will focus on the ethical issues raised by scientists’ potential to influence government policy. How should scientists advise governments, and who holds them accountable for their advice?

In the second episode, the Soviet Union begins dumping thousands of tons of sand and boron onto the burning nuclear plant at the suggestion of physicist Valery Legasov. After consulting fellow scientist Ulana Khomyuk (a fictional character who represents the many other scientists involved), Legasov tells Soviet leader Gorbachev that in order to prevent a potential disaster, drainage pools will need to be emptied from within the plant in an almost certain suicide mission. “We’re asking for your permission to kill three men,” Legasov reports to the Soviet government. It’s hard to imagine a more direct example of a scientist advising a decision with moral implications.

Policy makers often lack the expertise to make informed decisions, and this provides an opportunity for scientists to influence policy. But should scientists consider ethical or policy considerations when offering advice? 

On one side of this debate are those who argue that scientists’ primary responsibility is to ensure the integrity of science. This means that scientists should maintain objectivity and should not allow their personal moral or religious convictions to influence their conclusions. It also means that the public should see science as an objective and non-political affair. In essence, science must be value-free.

This value-free side of the debate is reflected in the mini-series’ first episode. It ends with physicist Legasov getting a phone call from Soviet minister Boris Shcherbina telling him that he will be on the commission investigating the accident. When Legasov begins to suggest an evacuation, Shcherbina tells him, “You’re on this committee to answer direct questions about the function of an RBMK reactor…nothing else. Certainly not policy.”

Those who argue for value-free science often argue that scientists have no business trying to influence policy. In democratic nations this is seen as particularly important since policy makers are accountable to voters while scientists are not. If scientists are using ethical judgments to suggest courses of action, then what mechanism will ensure that those value judgments reflect the public’s values?

In order to maintain the value-free status of science, philosophers such as Ronald N. Giere argue that there is an important distinction between judging the truth of scientific hypotheses and judging the practical uses of science. A scientist can evaluate the evidence for a theory or hypothesis, but they should not evaluate whether one should rely on that theory or hypothesis to make a policy decision. For example, a scientist might tell the government how much radiation is being released and how far it will spread, but they should not advise something like an evacuation. Once the government is informed of the relevant details, the decision of how to respond should be left entirely to elected officials.

Opponents of this view, however, argue that scientists do have a moral responsibility when offering advice to policy makers, and believe it is desirable for scientists to shoulder that responsibility. Philosopher Heather Douglas argues that given that scientists can be wrong, and given that acting on incorrect information can lead to morally important consequences, scientists have a moral duty concerning the advice they offer to policy makers. Scientists are the only ones who can fully appreciate the potential implications of their work.

In the mini-series we see several examples where only the scientists fully appreciate the risks and dangers of radiation, and they are the strongest advocates for evacuation. In reality, Legasov and a number of other scientists offered advice on how to proceed with cleaning up the disaster. According to Adam Higginbotham’s Midnight in Chernobyl: The Untold Story of the World’s Greatest Nuclear Disaster, the politicians were ignorant of nuclear physics, and the scientists and technicians were too paralyzed by indecision to commit to a solution.

In the real-life disaster, the scientists involved were frequently unsure about what was actually happening. They had to estimate how fast various parts of the core might burn and whether different radioactive elements would be released into the air. Reactor specialist Konstantin Fedulenko was worried that the boron drops were having limited effect and that each drop was hurling radioactive particles into the atmosphere. Legasov disagreed and told him that it was too late to change course. Fedulenko believed it was best to let the graphite fire burn itself out, but Legasov retorted, “People won’t understand if we do nothing…We have to be seen to be doing something.” This suggests that the scientists were not simply offering technical advice but were making judgments based on additional value and policy considerations. 

Again, according to Douglas, given the possibility for error and the potential moral consequences at play, scientists should consider these consequences to determine how much evidence is enough to say that a hypothesis is true or to advise a particular course of action. 

In the mini-series, the government relies on monitors showing a low level of radiation to initially conclude that the situation is not bad enough to warrant an evacuation. However, it is pointed out that the radiation monitors being used likely had a limited maximum range, and so the actual radiation could be much higher than the monitors indicated. Given that they may be wrong about the actual amount of radiation and the threat to public health, a morally responsible scientist might conclude that evacuation should be recommended to policy makers.

While some claim that scientists shouldn’t include these considerations, others argue that they should. Certainly, the issue isn’t limited to nuclear disasters either. Cases ranging from climate change to food safety, chemical and drug trials, economic policies, and even the development of weapons, all present a wide array of potential moral consequences that might be considered when offering scientific advice. 

It’s difficult to say a scientist shouldn’t make morally relevant consequences plain to policy makers. It often appears beneficial, and it sometimes seems unavoidable. But this liberty requires scientists to practice judgment in determining what a morally relevant consequence is and is not. Further, if scientists rely on value judgments when advising government policy, how are scientists to be held accountable by the public? Given these benefits and concerns, whether we want scientists to make such judgments and to what extent their advice should reflect those judgments presents an important ethical dilemma for the public at large. Resolving this dilemma will at least require that we be more aware of how experts provide policy advice.