
Pathogenic Research: The Perfect Storm for Moral Blindness?

By Giles Howdle
13 Feb 2023
[Image: microscopic image of virus cells]

In October, scientists at Boston University announced that they had created a COVID-19 variant as contagious as Omicron (very) but significantly more lethal. “In K18-hACE2 mice [engineered mice vulnerable to COVID],” their preprint paper reported, “while Omicron causes mild, non-fatal infection, the Omicron S-carrying virus inflicts severe disease with a mortality rate of 80%.” If this beefed-up Omicron were somehow released, it would have the potential to cause a far more severe pandemic.

The National Science Advisory Board for Biosecurity has now released new guidelines that seek to strike a significantly more cautious balance between the dangers and rewards of risky research involving PPPs (potential pandemic pathogens). The previous standards, under which the Boston University research was allowed to proceed without any safety review, were, according to the NSABB, reliant on definitions of a PPP that were “too narrow” and likely to “result in overlooking… pathogens with enhanced potential to cause a pandemic.” (The researchers at Boston University claimed their enhanced COVID-19 variant was marginally less deadly than the original virus, and hence that they were not conducting risky “gain of function” research requiring oversight. But this argument is flawed: the danger posed by a virus with pandemic potential is a function of both its lethality and its infectiousness. Since the novel variant combined close-to-original-COVID-19 lethality with Omicron’s infectiousness, it is likely significantly more dangerous than the original strain.)

Experiments like these are not merely a question of public policy. Apart from the legal and regulatory issues, we can also ask: is it morally permissible to be personally involved in such research? To fund it, administer it, or conduct it?

On the positive side, research with PPPs, including some forms of the heavily politicized “gain-of-function” research, promises valuable insight into the origins, risks, and potential treatment of dangerous pathogens. We may even prevent or mitigate future natural pandemics. All of this seems to give us strong moral reasons to conduct such research.

However, according to Marc Lipsitch and Alison Galvani, epidemiologists at Harvard and Yale, these benefits are overblown and achievable by safer methods. The risks of such research, on the other hand, are undeniable. Research with dangerous pathogens is restricted to labs with the highest biosafety ratings. But even top-rated BSL-3 and BSL-4 research labs leak viruses with regularity. The COVID-19 lab leak theory remains contentious, but the 1977 Russian flu pandemic was very likely the result of a lab leak. It killed 700,000 people. Anthrax, SARS, smallpox, Zika virus, Ebola, and COVID-19 (in Taiwan) have all leaked from research labs, often with deadly results. A single lab accident could cause hundreds of millions of deaths.

Given the scale of the risk involved, why don’t we see mass refusals to conduct such research? Why don’t the funders of such work refuse outright to contribute to such risk-taking? Why does this research not spark strong moral reactions from those involved?

Perhaps part of the reason is that we seem particularly vulnerable to flawed moral reasoning when it comes to subjects like this. We often struggle to recognize the moral abhorrence of risky research. What might explain our “moral blindness” on this issue?

Stalin supposedly said, “One death is a tragedy. A million deaths is a statistic.” Morally, he was wrong. But psychologically, he was right. Our minds are better suited to the small scale of hunter-gatherer life than to the modern interconnected world where our actions can affect millions. We struggle to scale our moral judgments to the vast numbers involved in a global pandemic. Moral psychologists call this effect “scope neglect,” and I discuss it in more detail here.

When a lab worker, research ethics committee member, or research funder thinks about what might go wrong with PPP research, they may fail to “scale up” their moral judgments to the level needed to grasp the moral significance of causing a worldwide pandemic. More generally, the ethical principles governing research were (understandably) built to address the risks that research poses to the particular individuals involved (subjects and experimenters), rather than the billions of innocent people who could be affected. But this, in effect, institutionalizes scope neglect.

To compound this clouding effect of scope neglect, we tend to mentally round tiny probabilities up to “maybe” (think: lottery) or down to “it will never happen” (think: being hit by a meteorite while sleeping, the unfortunate fate of Ann Hodges of Alabama). Lipsitch and Inglesby’s 2014 analysis estimates that gain-of-function research on virulent flu viruses carries a 0.01–0.6% probability of causing a pandemic per lab worker per year.

But rounding this probability down to “it won’t happen” would be a grave moral error.

Because a severe pandemic could cause tens or even hundreds of millions of deaths, even the lower-bound 0.01% annual risk of causing a global pandemic would mean that a gain-of-function researcher should expect to cause an average of 2,000 deaths per year (the expected-value arithmetic is sketched below). If that math is even remotely close to right, working on the most dangerous PPPs could be the deadliest job in the world.
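Spelled out, that figure is a simple expected-value calculation. The assumed death toll of 20 million is not stated explicitly above; it is the number that makes the lower-bound arithmetic come out at 2,000:

\[
\underbrace{0.0001}_{0.01\%\ \text{annual risk}} \times \underbrace{20{,}000{,}000}_{\text{assumed pandemic deaths}} = 2{,}000\ \text{expected deaths per researcher-year}.
\]

At the other end of the range, a 0.6% annual risk combined with a death toll in the hundreds of millions would push the expectation past a million deaths per researcher-year.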

Of course, we don’t act like it. Psychologically, it is incredibly hard to recognize what is “normal” as morally questionable, or even profoundly wrong. If your respected peers are doing the same kind of work, the prestigious scientific journals are publishing your research, and the tenure board is smiling down from above, it’s almost impossible to come to the disturbing and horrifying conclusion that you’re doing something seriously unethical. But if the risks are as severe as Lipsitch and colleagues claim (and the benefits as mediocre), then it is difficult to see how working with PPPs could be ethically defensible. What benefit to the world would your work have to provide to justify causing an expected 2,000 deaths each year?

Even putting the first-order ethical debate to one side, extreme caution seems warranted when we reason about the morality of lab research on PPPs. It is a topic that could create the “perfect storm” of conditions for flawed moral reasoning.

Giles Howdle is writing up his doctoral thesis at the University of Edinburgh in the UK. His research focuses on the problem of morality’s authority—on questions such as ‘Why should I be moral?’ Besides topics in ethics and metaethics, his philosophical interests include political philosophy and the philosophy of wellbeing.