
Pathogenic Research: The Perfect Storm for Moral Blindness?

microscopic image of virus cells

In October, scientists at Boston University announced that they had created a COVID-19 variant as contagious as Omicron (very) but significantly more lethal. "In K18-hACE2 mice [engineered mice vulnerable to COVID]," their preprint paper reported, "while Omicron causes mild, non-fatal infection, the Omicron S-carrying virus inflicts severe disease with a mortality rate of 80%." If this beefed-up Omicron were somehow released, it would have the potential to cause a much more severe pandemic.

The National Science Advisory Board for Biosecurity has now released new guidelines that seek to strike a significantly more cautious balance between the dangers and rewards of risky research involving potential pandemic pathogens (PPPs). The previous standards, under which the Boston University research was allowed to proceed without any safety review, were, according to the NSABB, reliant on definitions of a PPP that were "too narrow" and likely to "result in overlooking… pathogens with enhanced potential to cause a pandemic." (The researchers at Boston University claimed their enhanced COVID-19 variant was marginally less deadly than the original virus, and hence that they were not conducting risky "gain-of-function" research requiring oversight. But this argument is flawed: the danger of a virus with pandemic potential is a function of both its infectiousness and its deadliness. Since the new variant combined close-to-original-COVID-19 deadliness with Omicron-level infectiousness, it is likely significantly more dangerous than the original strain.)

Experiments like these are not merely a question of public policy. Apart from the legal and regulatory issues, we can also ask: is it morally permissible to be personally involved in such research? To fund it, administer it, or conduct it?

On the positive side, research with PPPs, including some forms of the heavily politicized “gain-of-function” research, promises valuable insight into the origins, risks, and potential treatment of dangerous pathogens. We may even prevent or mitigate future natural pandemics. All of this seems to give us strong moral reasons to conduct such research.

However, according to Marc Lipsitch and Alison Galvani, epidemiologists at Harvard and Yale, these benefits are overblown and achievable by safer methods. The risks of such research, on the other hand, are undeniable. Research with dangerous pathogens is restricted to labs with the highest safety ratings. But even top-rated BSL-3 and BSL-4 research labs leak viruses with regularity. The COVID-19 lab-leak theory remains contentious, but the 1977 Russian flu pandemic was very likely the result of a lab leak. It killed 700,000 people. Anthrax, SARS, smallpox, Zika virus, Ebola, and COVID-19 (in Taiwan) have all leaked from research labs, often with deadly results. One accident in one lab could cause hundreds of millions of deaths.

Given the scale of the risk involved, you might ask why we don't see mass refusals to conduct such research. Why do the funders of such work not refuse outright to contribute to such risk-taking? Why does this research not spark strong moral reactions from those involved?

Perhaps part of the reason is that we seem particularly vulnerable to flawed moral reasoning when it comes to subjects like this. We often struggle to recognize risky research as morally abhorrent. What might explain our "moral blindness" on this issue?

Stalin supposedly said, “One death is a tragedy. A million deaths is a statistic.” Morally, he was wrong. But psychologically, he was right. Our minds are better suited to the small scale of hunter-gatherer life than to the modern interconnected world where our actions can affect millions. We struggle to scale our moral judgments to the vast numbers involved in a global pandemic. Moral psychologists call this effect “scope neglect” and I discuss it in more detail here.

When a lab worker, research ethics committee member, or research funder thinks about what might go wrong with PPP research, they may fail to "scale up" their moral judgments to the level needed to grasp the moral significance of causing a worldwide pandemic. More generally, research ethics principles were (understandably) built to address the risks that research poses to the particular individuals involved (subjects and experimenters), rather than the billions of innocents who could be affected. But this, in effect, institutionalizes scope neglect.

To compound this clouding effect of scope neglect, we tend to mentally round tiny probabilities up to "maybe" (think: lottery) or down to "it will never happen" (think: being hit by a meteorite while sleeping, the unfortunate fate of Ann Hodges of Alabama). Lipsitch and Inglesby's 2014 study estimates that gain-of-function research on virulent flu viruses carries a 0.01-0.6% probability of causing a pandemic per lab worker per year.

But rounding this probability down to “it won’t happen” would be a grave moral error.

Because a severe pandemic could cause hundreds of millions of deaths, even the lower-bound 0.01% risk of causing a global pandemic each year would mean that a gain-of-function researcher should expect to cause an average of 2,000 deaths per year. If that math is even remotely close to right, working on the most dangerous PPPs could be the most deadly job in the world.
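To make the expected-value arithmetic explicit (a minimal sketch: the death toll used below is simply the figure implied by the 2,000-deaths estimate, not a number taken from the cited study):

\[
E[\text{deaths per researcher-year}] = p \times N = 0.0001 \times 20{,}000{,}000 = 2{,}000
\]

At the upper-bound probability of 0.6%, or with a larger assumed pandemic death toll, the expected figure scales up proportionally.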

Of course, we don't act like it. Psychologically, it is incredibly hard to recognize what is "normal" as morally questionable, or even profoundly wrong. If your respected peers are doing the same kind of work, the prestigious scientific journals are publishing your research, and the tenure board is smiling down from above, it's almost impossible to come to the disturbing and horrifying conclusion that you're doing something seriously unethical. But if the risks are as severe as Lipsitch and Co. claim (and the benefits as mediocre), then it is difficult to see how working with PPPs could be ethically defensible. What benefit to the world would your work have to provide to justify causing an expected 2,000 deaths each year?

Even putting the first-order ethical debate to one side, extreme caution seems warranted when we reason about the morality of lab research on PPPs. It is a topic that could create the "perfect storm" for flawed moral reasoning.

Ivermectin, Hydroxychloroquine, and the Dangers of Scientific Preprints

photograph of "In Evidence We Trust" protest sign

There is a new drug of choice among those who have refused to get vaccinated for COVID-19, or are otherwise looking for alternative treatments: ivermectin, an antiparasitic drug that is used primarily in farm animals. The drug recently made headlines in the U.S. after a judge in Ohio ordered a hospital to treat a patient with it, and a number of countries in Latin America and Europe have begun using it, as well. It is not the first time that a drug that was developed for something else entirely was touted as the new miracle cure for COVID-19: hydroxychloroquine, an anti-malarial, was an early favorite for alternative treatments from former president Trump, despite the FDA’s statement that it had no real effect on patients with COVID-19, and indeed could be very dangerous when used improperly. The FDA has recently issued a statement to a similar effect when it comes to ivermectin, warning that the drug can be “highly toxic in humans.”

It is not surprising that there has been continued interest in alternative treatments for COVID-19: given the existence of vaccine skepticism and various surrounding conspiracy theories, people who do not trust the science of vaccination, for one reason or another, will look for other ways of fighting the disease. What is perhaps surprising is why this particular drug was chosen as the new alternative treatment. There is, after all, seemingly no good reason to think that a horse de-wormer would be effective at killing the coronavirus. So where did this idea come from?

Not, it turns out, from nowhere. As was the case with hydroxychloroquine, the U.S.-based health analytics company Surgisphere produced a study purporting to show that ivermectin was effective at treating COVID-19, though the evidence amounted to just "a handful of in vitro and observational studies." The study was not published in any peer-reviewed outlet, but was instead uploaded as a preprint.

A preprint is a “version of a scientific manuscript posted on a public server prior to formal review”: it’s meant to be a way of rapidly disseminating results to the scientific community at large. Preprints can have significant benefits when it comes to getting one’s results out quickly: peer-review can be a lengthy process, and during a global pandemic, time is certainly of the essence. At the same time, there are a number of professional and ethical considerations that surround the use of preprints in the scientific community.

For example, a recent study of preprints released during the pandemic found a "remarkably low publication rate" for sampled papers, with one potential explanation being that "some preprints have lower quality and will not be able to endure peer-reviewing." Others have cautioned that while the use of preprints has had positive effects in the physical sciences, there is potentially more reason for concern in the medical sciences: given that developments in medical science are typically of much more interest to the general public, "Patients may be exposed to early, unsubstantiated claims relevant to their conditions, while lacking the necessary context in which to interpret [them]." Indeed, this seems to be what happened with alternative treatments for COVID-19, claims about which were uploaded amid an explosion of new preprint studies.

Additional problems arise when it comes to the use of medical preprints in the media. Another recent study found that while it was common for online media outlets to link to preprints, those preprints were often framed inconsistently: outlets frequently failed to mention that the preprints had not been peer-reviewed, instead simply referring to them as "research." While the authors of the study were encouraged that discussions of preprints in the media could foster "greater awareness of the scientific uncertainty associated with health research findings," they were nevertheless concerned that failing to appropriately frame preprint studies risked misleading readers into thinking that the relevant results were accepted by the scientific community.

So what should we take away from this? We have seen that there are clearly benefits to the general practice of posting scientific preprints online, and that in health crises in particular the rapid dissemination of scientific results can lead to faster progress. At the same time, preprints making claims that are not adequately supported by the evidence can get picked up by members of the general public, as well as by media outlets that may be primarily concerned with breaking new "scientific discoveries" without properly contextualizing the results or doing their due diligence about the reliability of the source. Certainly, then, there is an obligation on the part of media outlets to do better: given that many preprints do not survive peer review, it is important for the media, when they do refer to preprint studies, to note that the results are provisional.

It's not clear, though, whether highlighting the distinction would make much of a difference in the grand scheme of things. For instance, in response to the FDA's statement that there is no scientific basis for using ivermectin to treat COVID-19, Kentucky senator Rand Paul stated that it was really a "hatred for Trump" that stood in the way of investigating the drug, and not, say, the fact that the preprint study did not stand up to scientific scrutiny. It seems unlikely that, for someone like Paul, the difference between preprints and peer-reviewed science is a relevant one when it comes to pushing a political narrative.

Nevertheless, a better understanding of the difference between preprints and peer-reviewed science could still help people decide what information to believe. While some preprints certainly do go on to pass peer review, if the only basis one has for a seemingly implausible medical claim is a preprint study, it is worth approaching that claim with skepticism.

COVID-19 Vaccines and Drug Patent Laws

photograph of covid vaccine ampules

One of the problems that Canada has had with the COVID vaccines is a lack of domestic production. We are told that there are plans to construct a new facility for this purpose, but that it will not be ready until long after it is needed. In the meantime, it was reported this week that Biolyse, a small pharmaceutical manufacturer in Ontario, has offered to produce millions of doses of vaccine but cannot, because patents prevent it from doing so. This is just one example of a much larger moral issue regarding potential patent reform.

About three-quarters of the vaccine supply has been secured by 10 countries that account for 60 percent of global economic growth. Meanwhile, 130 countries, home to more than 2 billion people, haven't received any doses at all. Companies like Biolyse have offered to produce vaccines for lower-income nations, but they haven't been able to secure a license from companies like Johnson & Johnson to do so. This has led to a growing call to waive intellectual property protections such as patents so that more companies can manufacture vaccines and increase the supply.

The move has been led by South Africa and India, who are seeking support to suspend elements of the WTO Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement concerning intellectual property rights for the duration of the coronavirus pandemic. So far, however, the United States and several other countries have blocked negotiations, and this has led to direct appeals to President Biden arguing that "the full protection of intellectual property and monopolies will only negatively impact efforts to vaccinate the world and be self-defeating."

While efforts such as the UN-supported COVID-19 Vaccines Global Access Facility have helped distribute vaccines to lower-income nations, the starkly unequal distribution shows how limited these efforts have been. Supporters of waiving IP rights in this case argue that we should take advantage of unused production capacity to maximize the supply of vaccines. This is particularly important because current estimates suggest that many nations will be waiting until 2024 to achieve mass immunization. That delay represents a significant drain on the global economy and complicates our ability to deal with variants. Experience also teaches that drug manufacturers in developing nations can produce large quantities of quality drugs inexpensively.

Supporters of a waiver also point to the massive amount of public funding that pharmaceutical companies received to develop coronavirus vaccines, and to the fact that much of the groundwork for those vaccines came from federally-funded research. Thus, they argue that the vaccine should be a "people's vaccine" that is universally available to all at no cost. They also suggest that such a waiver would send a message of commitment to public health over the prioritization of intellectual property rights.

Opponents of the measure, however, argue that waiving patents would dampen scientific innovation by deterring private investment. They argue that a waiver "creates a dangerous precedent of nullifying IP rights" which "destroys the bedrock of what makes medical innovation possible." Vaccine development is expensive and success is never guaranteed, the argument goes, so patent protection is necessary for innovators to continue to fund their efforts. Opponents also argue that existing regulations are already flexible enough to allow vaccine developers to voluntarily enter agreements with generic drug manufacturers.

It is important to note that just because regulations are flexible enough to allow something doesn't mean that it will happen. Nor does one case of a patent waiver constitute a "precedent." Typically, precedents require context, so outside of a pandemic scenario it's difficult to see how this might become a problem. Unless, of course, that larger context concerns how to reform our use of patents in the face of other significant moral problems, in which case such a moral conversation might be more helpful.

To consider how a wider discussion of the morality of drug patents might help with the issue of the COVID vaccine, we can look to moral philosophers who have addressed it. In 2009, philosopher Thomas Pogge argued that developing nations' adoption of globally uniform intellectual property rights under the TRIPS agreement is morally problematic. He maintained that the restrictions patents place on the freedom to produce, sell, and buy medicines impose a huge cost in terms of disease and premature death that cannot be justified. The typical alternatives of government initiatives and partnerships (such as the UN-backed COVID-19 Vaccines Global Access Facility, or COVAX) to deliver medicines to developing nations "are really doing good by improving the situation relative to what it would be under TRIPs unmitigated. Still, these efforts are not nearly sufficient to protect the poor." Indeed, while COVAX has delivered hundreds of thousands of doses, "the disparity between high- and low-income countries remains vast."

Pogge explains how corporate interests and public health outcomes are misaligned: if a pharmaceutical company helps low-income patients benefit from a patented medicine, it undermines its own profitability, both because it forgoes revenue and because the disease will be eliminated more quickly. He suggests several reforms to the patent system, including a government-financed Health Impact Fund (HIF): a vaccine developer, for example, would agree to produce and distribute its drug at the lowest feasible cost and, for ten years, would receive from the HIF's multi-billion-dollar reward pools a share equal to its product's share of the assessed global health impact of all HIF-registered products. Since the fund rewards companies in proportion to their impact on global health, drug companies would have a stronger incentive to focus on treatments and diseases that aren't simply a priority for the affluent.
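As a rough sketch of the payout rule Pogge describes (the notation here is mine: I_i stands for the assessed global health impact of a registered product i, and P for the annual reward pool):

\[
\text{reward}_i = \frac{I_i}{\sum_j I_j} \times P
\]

In other words, a company's revenue tracks its product's measured share of total health impact rather than what wealthy customers are willing to pay.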

This idea makes clear that drug patents are a moral issue, one connected to other major problems involving excessive litigation and marketing. So perhaps it would be a good thing for the waiver to be agreed to if a more substantial and targeted reform can eventually take place. Pogge suggests that patent reform such as his proposed HIF would also be a significant step toward addressing global poverty as a whole. So, while discussing a waiver of patent protections in the case of COVID is morally important, it may be more important still to fit this step into a larger conversation about the morality of the drug patent system as a whole.

Hydroxychloroquine and the Ethical Pitfalls of Private Science

A box of hydroxychloroquine sulphate tablets held by a hand with coronavirus written in background

Last week, news broke that a significant study of hydroxychloroquine as a treatment for COVID-19 relied on data that has now been called into question. The effects of this study, and of other studies that relied on data from the same source, were profound, leading to changes in planned studies and in the treatments being prescribed to COVID-19 patients. The fact that this data came from an unaudited source highlights the ethical concerns that stem from an increased corporate role in science.

In late May, a study published in the elite medical journal The Lancet suggested that COVID-19 patients taking chloroquine or hydroxychloroquine were more likely to die. The study included over 96,000 patients and relied on electronic health data from the company Surgisphere, run by Dr. Sapan Desai, who was also a co-author of the article. It found that, across 671 hospitals, COVID-19 patients who had been prescribed hydroxychloroquine faced a risk of death more than twice as great as that of patients who were not prescribed the drug. An additional study using data from Surgisphere investigated the use of blood pressure medication and was published in The New England Journal of Medicine. A third paper using Surgisphere data was available as a preprint and suggested that ivermectin significantly reduced mortality in COVID-19 patients. All three papers have been retracted.

The retractions occurred after discrepancies were noticed in the data. The reported doses of hydroxychloroquine for American patients were higher than FDA guidelines allow, and the number of Australian deaths was higher than official statistics. There was also a discrepancy between the small number of hospitals included and the vast number of patient records. Following this, independent auditors were asked to review the data provided by Surgisphere; however, the company refused to provide the data, citing confidentiality agreements with the hospitals. Yet investigations could find no hospitals in the US that acknowledged participating with Surgisphere.

Surgisphere itself is also a suspect source. The company was founded in 2007 but has little online presence. Its website does not list partner hospitals or identify a scientific advisory board, and the company claims to have only 11 employees. Its enormous database does not appear to have been used in any peer-reviewed studies before May. Desai himself also has a colorful history, including three outstanding medical malpractice suits against him.

The studies had significant impact worldwide. Following the report that hydroxychloroquine increased mortality in patients, the WHO announced a "temporary" pause of its studies of hydroxychloroquine (it has since resumed them). The studies also played a role in the national conversation about the drug in the United States following President Trump's announcement that he had been taking it to combat the virus. The preprint on ivermectin was never officially published, but it did lead to changes in treatment protocols in South America. In Bolivia, a local government planned to hand out 350,000 doses of the drug after receiving authorization from the Bolivian Ministry of Health. The drug was also cited as a potential treatment in Chile and Peru.

This episode highlights several general moral issues. Retraction scandals are dangerous at a time when the public is looking to, and relying on, medical science. The situation is intensified by the fact that these controversies are tied to the political debate over hydroxychloroquine, which risks undermining trust in science along partisan lines. Polls show that Democrats are far more likely than Republicans to have a great deal of confidence in scientists to act in the best interests of the public, and scandals like this further erode public trust and make science seem more partisan.

The matter also raises ethical issues within the sciences. According to Ivan Oransky of Retraction Watch, the case reflects larger systemic problems: even leading journals rely too heavily on an honor system. The pandemic, for example, has raised warning signs about the use of preprints, which have shifted from being a way to get feedback while a study is finalized to a way to share "breaking data" as fast as possible, despite the lack of peer review.

The Surgisphere episode highlights the ethical pitfalls of science relying on private-sector companies for research. Since the twentieth century, the private sector has been an increasing source of scientific funding; in the United States, private funding accounted for 65% of research and development spending in 2013. There are good reasons for private-sector investment and corporate-university partnerships. The public sector has shown less willingness to supply the needed funding. As Ashutosh Jogalekar points out in an article for Scientific American, investments by private interests have allowed many projects to be funded that might not be funded otherwise. He notes, "For these billionaires a few millions of dollars is not too much, but for a single scientific project hinging on the vicissitudes of government funding it can be a true lifeline." It has also been noted that private funding can ensure cost-effective replication studies are possible, which is especially important since efforts to reproduce results published in peer-reviewed journals have succeeded only about 40% of the time.

On the other hand, according to Sheldon Krimsky, the author of Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research?, numerous problems can occur when scientists partner with private corporations. Krimsky finds that publication practices have been influenced by commercial interests: the commercialization of science has led to a decline in the notion that scientists should work in the public interest, and sharing data becomes more problematic given the use of paywalls and intellectual property protection. This makes it more difficult to verify the data.

There are many ways corporations can complicate data-sharing. By choosing not to release unflattering findings or by claiming data as exclusive intellectual property, companies can make it difficult for others to use research (consider Diamond v. Chakrabarty, which set the precedent for allowing genetically modified organisms to be patented). And, of course, the Surgisphere episode is an example of university researchers collaborating with a private company that retains sole control of the data. Such cases allow for fraud and suffer from a lack of oversight.

One proposed solution is to move towards "open science," making publications, data, and other information open and accessible to everyone. Such a move would allow for greater transparency and accountability as well as more rigorous peer review. Under such a system, falsified data would be more difficult to submit and easier to detect.

While many of these issues have been brewing for years, it is not every day that a single published study can have the kind of global impact that came with these investigations into the effectiveness of hydroxychloroquine, even as other independent studies have also demonstrated its ineffectiveness. The ethical fallout from this scandal is thus far more visible given public interest in the disease. Indeed, there have already been calls to stop private speculation in COVID-19 research; part of this call includes the position that all relevant intellectual property should be made available for free to the international scientific community for fighting the pandemic. The question now is what specific reforms should be implemented to prevent scandals like this from happening again.

 

Rules Versus Results in Vaccine Research

A photo of a person withdrawing medicine from a vial with a syringe


A group called Rational Vaccines, which conducted a clinical trial of a herpes vaccine, has recently brought legal and ethical standards for medical research into the news. The trial, which ran from April to August of 2016, used human subjects and was conducted in the Caribbean in order to avoid FDA oversight and review by an institutional review board (IRB), both of which are required for trials in the US.

Peter Thiel, co-founder of PayPal, has invested $4 million in Rational Vaccines this month. Thiel is an outspoken critic of the FDA's safety regulations and considers the agency's oversight to needlessly delay scientific advancement. In an interview, he claimed that our current system of checks would prevent the polio vaccine from being developed today. However, his contribution to the group was contingent on its future compliance with FDA regulations, so that the findings of its research can help more people.

The group has faced a great deal of controversy over the herpes vaccine study, not only for blatantly avoiding US jurisdiction, but also for failing to meet standards of scientific rigor. However, the head of Rational Vaccines, Augustin Fernandez, stands by the results and is concerned that the import of the study is being lost in the uproar over the conditions of the trial.

Medical research has the aim of advancing our understanding of treatment and developing new ways of preventing ailments and healing the sick. However, this aim sits in tension with a competing concern: the risk imposed on research subjects. Clinical trials are experiments that explore what happens when humans undergo some treatment, and they thus expose human subjects to a risk of harm. Medical research therefore carries a burden of justification that other scientific research (for instance, in physics or chemistry) doesn't necessarily need to meet.

A basic ethical burden for medical research is that of scientific merit: a study must meet the standards of scientific method that peers dictate. Ideally, this includes some possibility of replicating the results, a suitable sample size and distribution, and objectivity in data collection; including a control group and controlling variables is similarly important. Because clinical trials expose humans to risk, a study must have scientific merit to justify that exposure. Unfortunately, the herpes vaccine trial does not clearly meet these standards: it included only 20 subjects, had no clearly established control group, and relied on the vaccinated subjects self-reporting their impression of whether their condition had improved. These shortcomings have undermined the scientific merit of the study, making it unlikely to be published in the US.

Also, because the research is experimental in nature, meaning it is not the standard of care, it is inherently risky, and there are ethical demands on how this risk must be managed. When medical research is conducted on humans, the subjects must give informed consent to taking on the risks of the study.

Other ways of managing the risk of a study go beyond the consent of the participants. The possible benefit must be proportional to the risk assumed by the subjects. Further, the benefit must not be directed towards a different group than the one assuming the risk. This concern ties into how subjects are selected. If the treatment or drug that results from the trial will be prohibitively expensive, for instance, then it is ethically fraught to test the drug on subjects from economically disadvantaged groups. It may save a significant amount of money to conduct research on new treatments in less developed areas or nations, but doing so often means concentrating the risk of the trials on groups that will not be able to benefit from the results.

This ethical constraint on medical trials is especially pertinent to studies conducted transnationally. In Rational Vaccines' trial of this herpes vaccine, subjects from the US and UK were flown to the Caribbean to take part in the study, so this concern doesn't overtly arise. However, moving its subjects outside the country underscores the company's effort to avoid government regulation.

The criticism that Thiel and other libertarians lob against the FDA is that its regulations impede medical progress. That's true: regulation does slow things down. But regulations are meant to promote the aims discussed above and to prevent harm to, and exploitation of, human subjects. In conducting research, the tension between developing a life-saving treatment and running the trial in a respectful and ethical manner can be acute.

Unfortunately, the history of clinical trials is riddled with cases that highlight how important it is to attend to the ethical implications of medical research. It is entirely possible to conduct research in countries with fewer regulations, on underprivileged citizens willing to take on high risk while receiving no benefit in return. From 1946 to 1948, US scientists infected patients in Guatemala with STDs. Within the US, too, patients have had their rights disrespected and been exploited, most egregiously in the well-known case of the Tuskegee experiments, in which a group of African American patients were denied treatment for their syphilis in order to see how the disease progressed over their lives.

There is, of course, value in pursuing scientific advancement through clinical trials. The results of such trials could help a great many people, increasing the quality of life for many. But this benefit must be weighed against the possible exploitation of research subjects and against the risk that the experiment will harm them.