
Good To Be Skeptical? Evidentialism and Climate Change

photograph of tree in the desert

When it comes to climate change, defining the limits of reasonable skepticism is not only a matter of intellectual curiosity but also one of moral and political urgency. In contemporary scientific circles, skepticism is generally celebrated as a virtue. However, those who reject the near-consensus about anthropogenic climate change also claim the “skeptic” title. This raises an important question: What does it mean to be skeptical, and when is skepticism no longer praiseworthy?

Philosophers have often pondered the extent of human knowledge. Skeptics argue that our understanding is more limited than we tend to believe. Some skeptics even claim that we can never know anything, or that none of our beliefs are justified and we ought to suspend all judgment on all issues.

Many climate scientists claim the title “skeptic” for themselves and attach the label “denier” to their opponents. The Committee for Skeptical Inquiry, for example, has called on the media to stop using the term “skepticism” to refer to those who reject the prevailing climate consensus and to instead use the term “denial.” We can, according to Washington and Cook, authors of Climate Change Denial: Heads in the Sand, think of the difference like this: “Skepticism is healthy both in science and society; denial is not.” However, when it comes to climate change, the term “skeptic” continues to be associated with those who reject the prevailing scientific consensus, blurring the line between skepticism and denial.

To better understand the differences between skepticism and denial, let’s consider a concrete example: the existence of ghosts. A ghost skeptic denies that we are justified in believing that ghosts exist. They neither believe nor disbelieve in ghosts, as they think there isn’t enough evidence to justify a belief in ghosts. A ghost denier, conversely, decidedly believes that ghosts do not exist. They disbelieve in ghosts, arguing that ghosts are incompatible with our best understanding of how the laws of the universe work, and that, absent good evidence for ghosts, we should conclude they do not exist. In general, it is not necessarily better to be a skeptic than a denier. Whether we ought to disbelieve something or merely suspend judgment depends on the particular issue and the strength of the evidence we have.

So why do Washington and Cook think that denial is always a bad thing? Ultimately, they are referring to a very specific sense of “denial.” They mean someone who clings “to an idea or belief despite the presence of overwhelming evidence to the contrary.” This is a sense of denial that draws on Freudian psychoanalysis, which characterizes denial as a pathological defense mechanism that involves denying something because one wishes it weren’t true. Denial in this sense is the result of some kind of emotional or psychological incapacity to accept reality.

It is clearly bad to be a climate change denier, or any kind of denier, in the pathological sense Washington and Cook have in mind. However, we can’t assume everyone who denies the scientific consensus on climate change is suffering from a psychological disorder. Some genuinely believe the evidence they have seen does not justify a belief in anthropogenic climate change. Whether it is a mistake to disbelieve in man-made climate change depends entirely on the strength of the scientific evidence. In my own view, the scientific evidence of anthropogenic climate change is very strong and this, rather than some psychological defect, is what makes denial inappropriate.

However, it is worth noting that most of those who reject the consensus on climate change identify as “skeptics” rather than “deniers,” claiming that they have not yet formed a conclusion on the matter. But plenty of scientists who defend the prevailing view on climate change also think of themselves as still embracing skepticism. This raises the question: who is the real skeptic?

To answer that question, we first need to understand a distinction between philosophical skepticism and the scientific skepticism advocated by figures like Michael Shermer, publisher of Skeptic magazine. Shermer defines skepticism as striking the “right balance between doubt and certainty.” As William James notes, this contrasts with a philosophical skeptic who says, “Better go without belief forever rather than believe a lie!” Philosophical skeptics think we should believe only things that are absolutely certain. Scientific skeptics try to believe whatever the evidence suggests has a greater than 50% chance of being true. These are very different standards. To philosophers, scientific skepticism is just “evidentialism” – the principle that our beliefs should be based solely on available evidence.

So who are the real skeptics? Perhaps some climate skeptics are philosophical skeptics. Perhaps they think it is more likely than not that anthropogenic climate change is real, but that we still aren’t justified in believing it. In this case, climate skeptics might be the “real skeptics,” but only on an interpretation of skepticism that most scientists would think is deeply objectionable.

But most climate skeptics are not philosophical skeptics. As the philosophers Coady and Corry observe, the debate between climate change proponents and climate skeptics is not a dispute between two groups of skeptics, one scientific and one philosophical. Instead, it is a disagreement between two groups of evidentialists, who differ in their interpretations and evaluations of the evidence and hence in their beliefs. Of course, one side must be wrong and the other must be right. But both sides appeal to the evidence, as they see it, to justify their respective views.

Proponents of anthropogenic climate change often accuse climate skeptics of disregarding the wealth of evidence supporting their stance. Conversely, climate skeptics argue that climate change advocates are swayed by personal desires, emotions, or political ideologies. But, at bottom, both criticisms reveal a shared commitment to evidentialism. These are accusations of forming beliefs based on things other than the best available evidence – of violating evidentialism. Neither side of the climate debate adopts the extreme skeptical position of suspending all judgment and belief, regardless of the evidence at hand.

Acknowledging that most people on both sides of this issue are committed to an evidentialist approach is crucial, because it encourages both sides to engage in a constructive dialogue that focuses on the merits of the evidence, rather than resorting to ad hominem attacks or accusations of bias. By emphasizing the importance of evaluating the strength and reliability of the evidence, it becomes possible to move beyond the polarizing and confusing labels of “skeptic” and “denier” and engage in a more fruitful discussion. Perhaps this could help reverse the current trend in public opinion toward climate skepticism.

Given that both sides of the climate change debate are committed to evidentialism, instead of squabbling over the label “skeptic,” which neither side should want to claim given its philosophical meaning, our focus should return to simply assessing the facts.

Pathogenic Research: The Perfect Storm for Moral Blindness?

microscopic image of virus cells

In October, scientists at Boston University announced that they had created a COVID-19 variant as contagious as omicron (very) but significantly more lethal. “In K18-hACE2 mice [engineered mice vulnerable to COVID],” their preprint paper reported, “while Omicron causes mild, non-fatal infection, the Omicron S-carrying virus inflicts severe disease with a mortality rate of 80%.” If this beefed-up Omicron were somehow released, it would have the potential to cause a much more severe pandemic.

The National Science Advisory Board for Biosecurity has now released new guidelines which seek to strike a significantly more cautious balance between the dangers and rewards of risky research involving PPPs — potential pandemic pathogens. The previous standards, under which the Boston University research was allowed to be conducted without any safety review, were, according to the NSABB, reliant on definitions of a PPP that were “too narrow” and likely to “result in overlooking… pathogens with enhanced potential to cause a pandemic.” (The researchers at Boston University claimed their enhanced COVID-19 variant was marginally less deadly than the original virus, and hence that they were not conducting risky “gain of function” research requiring oversight. But this argument is flawed: how dangerous a virus with pandemic potential is depends on the combination of its infectiousness and its deadliness. Since the novel variant combined close-to-original-COVID-19 deadliness with omicron infectiousness, it is likely significantly more dangerous than the original strain.)

Experiments like these are not merely a question of public policy. Apart from the legal and regulatory issues, we can also ask: is it morally permissible to be personally involved in such research? To fund it, administer it, or conduct it?

On the positive side, research with PPPs, including some forms of the heavily politicized “gain-of-function” research, promises valuable insight into the origins, risks, and potential treatment of dangerous pathogens. We may even prevent or mitigate future natural pandemics. All of this seems to give us strong moral reasons to conduct such research.

However, according to Marc Lipsitch and Alison Galvani, epidemiologists at Harvard and Yale, these benefits are overblown and achievable by safer methods. The risks of such research, on the other hand, are undeniable. Research with dangerous pathogens is restricted to labs with the highest safety ratings. But even top-rated BSL-3 and BSL-4 research labs leak viruses with regularity. The COVID-19 lab leak theory remains contentious, but the 1977 Russian flu pandemic was very likely the result of a lab leak. It killed 700,000 people. Anthrax, SARS, smallpox, zika virus, ebola, and COVID-19 (in Taiwan) have all leaked from research labs, often with deadly results. One accident in a lab could cause hundreds of millions of deaths.

Given the scale of risk involved, why don’t we see mass refusals to conduct such research? Why do the funders of such work not outright reject contributing to such risk-taking? Why does this research not spark strong moral reactions from those involved?

Perhaps part of the reason is that we seem particularly vulnerable to flawed moral reasoning when it comes to subjects like this. We often struggle to recognize the moral abhorrence of risky research. What might explain our “moral blindness” on this issue?

Stalin supposedly said, “One death is a tragedy. A million deaths is a statistic.” Morally, he was wrong. But psychologically, he was right. Our minds are better suited to the small scale of hunter-gatherer life than to the modern interconnected world where our actions can affect millions. We struggle to scale our moral judgments to the vast numbers involved in a global pandemic. Moral psychologists call this effect “scope neglect” and I discuss it in more detail here.

When a lab worker, research ethics committee member, or research funder thinks about what might go wrong with PPP research, they may fail to “scale up” their moral judgments to the level needed to consider the moral significance of causing a worldwide pandemic. More generally, research ethics principles were (understandably) built to consider the risks that research poses to the particular individuals involved in the research (subjects and experimenters), rather than the billions of innocents who could be affected. But this, in effect, institutionalizes scope neglect.

To compound this clouding effect of scope neglect, we tend to mentally round up tiny probabilities to “maybe” (think: lottery) or round them down to “it will never happen” (think: being hit by a meteorite while sleeping, the unfortunate fate of Ann Hodges of Alabama). Lipsitch and Inglesby’s 2014 study attributes to gain-of-function research on virulent flu viruses a 0.01-0.6% probability of causing a pandemic per lab worker per year.

But rounding this probability down to “it won’t happen” would be a grave moral error.

Because a severe pandemic could cause hundreds of millions of deaths, even the lower-bound 0.01% risk of causing a global pandemic each year would mean that a gain-of-function researcher should expect to cause an average of 2,000 deaths per year. If that math is even remotely close to right, working on the most dangerous PPPs could be the most deadly job in the world.
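
To make that arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. The assumed pandemic death toll is purely illustrative (it is not a figure from Lipsitch and Inglesby); the point is only to show how a tiny annual probability multiplied by an enormous harm yields a large expected harm.

```python
# Back-of-the-envelope expected-harm calculation.
# The assumed pandemic death toll is an illustrative assumption, not a cited figure.

annual_risk_low = 0.0001     # lower-bound estimate: 0.01% per lab worker per year
annual_risk_high = 0.006     # upper-bound estimate: 0.6% per lab worker per year
assumed_pandemic_deaths = 20_000_000  # assumed toll of a severe engineered pandemic

expected_low = annual_risk_low * assumed_pandemic_deaths
expected_high = annual_risk_high * assumed_pandemic_deaths

print(f"Expected deaths per researcher-year (lower bound): {expected_low:,.0f}")
print(f"Expected deaths per researcher-year (upper bound): {expected_high:,.0f}")
# With these assumptions, the lower bound alone works out to 2,000 expected deaths per year.
```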

Of course, we don’t act like it. Psychologically, it is incredibly hard to recognize what is “normal” as morally questionable, or even profoundly wrong. If your respected peers are doing the same kind of work, the prestigious scientific journals are publishing your research, and the tenure board is smiling down from above, it’s almost impossible to come to the disturbing and horrifying conclusion that you’re doing something seriously unethical. But if the risks are as severe as Lipsitch and colleagues claim (and the benefits as mediocre), then it is difficult to see how working with PPPs could be ethically defensible. What benefit to the world would your work have to provide to justify causing an expected 2,000 deaths each year?

Even setting the first-order ethical debate to one side, extreme caution seems warranted when reasoning about the morality of lab research on PPPs. It is a topic that could create the “perfect storm” of flawed moral reasoning.

Ivermectin, Hydroxychloroquine, and the Dangers of Scientific Preprints

photograph of "In Evidence We Trust" protest sign

There is a new drug of choice among those who have refused to get vaccinated for COVID-19, or are otherwise looking for alternative treatments: ivermectin, an antiparasitic drug that is used primarily in farm animals. The drug recently made headlines in the U.S. after a judge in Ohio ordered a hospital to treat a patient with it, and a number of countries in Latin America and Europe have begun using it as well. It is not the first time that a drug developed for something else entirely has been touted as the new miracle cure for COVID-19: hydroxychloroquine, an anti-malarial, was an early favorite alternative treatment of former president Trump, despite the FDA’s statement that it had no real effect on patients with COVID-19, and indeed could be very dangerous when used improperly. The FDA has recently issued a statement to a similar effect when it comes to ivermectin, warning that the drug can be “highly toxic in humans.”

It is not surprising that there has been continued interest in alternative treatments for COVID-19: given the existence of vaccine skepticism and various surrounding conspiracy theories, people who do not trust the science of vaccinations, for one reason or another, will look for other ways of fighting the disease. What is perhaps surprising is why this particular drug was chosen as the new alternative treatment. There is, after all, seemingly no good reason to think that a horse de-wormer would be effective at killing the coronavirus. So where did this idea come from?

Not, it turns out, from nowhere. As was the case with hydroxychloroquine, the U.S.-based health analytics company Surgisphere produced a study that purported to show that ivermectin was effective at treating COVID-19, albeit in just “a handful of in vitro and observational studies.” The study was not published in any peer-reviewed outlet, but was instead uploaded as a preprint.

A preprint is a “version of a scientific manuscript posted on a public server prior to formal review”: it’s meant to be a way of rapidly disseminating results to the scientific community at large. Preprints can have significant benefits when it comes to getting one’s results out quickly: peer-review can be a lengthy process, and during a global pandemic, time is certainly of the essence. At the same time, there are a number of professional and ethical considerations that surround the use of preprints in the scientific community.

For example, a recent study on preprints released during the pandemic found a “remarkably low publication rate” for sampled papers, with one potential explanation being that “some preprints have lower quality and will not be able to endure peer-reviewing.” Others have cautioned that while the use of preprints has had positive effects in the physical sciences, there is potentially more reason for concern in the medical sciences: given that developments in medical science are typically of much more interest to the general public, “Patients may be exposed to early, unsubstantiated claims relevant to their conditions, while lacking the necessary context in which to interpret [them].” Indeed, this seems to be what happened with regard to alternative treatments for COVID-19, which have been uploaded online amongst an explosion of new preprint studies.

Additional problems arise when it comes to the use of medical preprints in the media. Another recent study found that while it was common practice for online media outlets to link to preprints, those preprints were often framed inconsistently: media outlets often failed to mention that the preprints had not been peer-reviewed, instead simply referring to them as “research.” While the authors of the study were encouraged that discussions of preprints in the media could foster “greater awareness of the scientific uncertainty associated with health research findings,” they were again concerned that failing to appropriately frame preprint studies risked misleading readers into thinking that the relevant results were accepted in the scientific community.

So what should we take away from this? We have seen that there are clearly benefits to the general practice of publishing scientific preprints online, and that in health crises in particular the rapid dissemination of scientific results can result in faster progress. At the same time, preprints making claims that are not adequately supported by the evidence can get picked up by members of the general public, as well as the media, who may be primarily concerned with breaking new “scientific discoveries” without properly contextualizing the results or doing their due diligence in terms of the reliability of the source. Certainly, then, there is an obligation on the part of media outlets to do better: given that many preprints do not survive peer review, it is important for the media to note, when they do refer to preprint studies, that the results are provisional.

It’s not clear, though, whether highlighting the distinction would make much of a difference in the grand scheme of things. For instance, in response to the FDA’s statement that there is no scientific basis for studying the effects of ivermectin on COVID-19, Kentucky senator Rand Paul stated that it was really a “hatred for Trump” that stood in the way of investigating the drug, and not, say, the fact that the preprint study did not stand up to scientific scrutiny. It seems unlikely that, for someone like Paul, the difference between preprints and peer-reviewed science is a relevant one when it comes to pushing a political narrative.

Nevertheless, a better understanding of the difference between preprints and peer-reviewed science could still be beneficial when helping people make decisions about what information to believe. While some preprints certainly do go on to pass peer review, if the only basis that one has for some seemingly implausible medical claims is a preprint study, it is worth approaching those claims with skepticism.

Medical Challenge Trials: Time to Embrace the Challenge?

photograph of military personnel receiving shot

The development of the COVID-19 vaccines is worthy of celebration. Never has a vaccine for a novel virus been so quickly developed, tested, and rolled out. Despite this success, we could have done much better. In particular, a recent study estimates that by allowing “challenge trials” in the early months of the pandemic, we would have completed the vaccine licensing process between one and eight months faster than we did using streamlined conventional trials. The study also provides a conservative estimate of the years of life that an earlier vaccine rollout would have saved: between 720,000 and 5,760,000. However, whether we should have used challenge trials depends on a number of ethical considerations.

Here is an extraordinary fact: we first genetically sequenced the virus in January 2020. Moderna then developed their RNA vaccine in just two days. But the F.D.A. could only grant the vaccine emergency authorization in late December — almost a year later. Over this period the virus killed approximately 320,000 U.S. citizens. The vast majority of the delay between development and approval was due to the time needed to run the necessary medical trials. Enough data needed to be collected to show the vaccines were effective and, even more importantly, safe.

Here’s how those trials worked. Volunteers from a large pool (for example, 30,420 volunteers in Moderna’s phase three trial) were randomly provided either a vaccine or a placebo. They then went about their lives. Some caught the virus, others didn’t. Researchers, meanwhile, were forced to wait until enough volunteers caught the illness for the results to be statistically valid. The fact that the virus spread so quickly was a blessing in this one respect; it sped up their research considerably.

So-called “challenge trials” are an alternative way to run medical trials. The difference is that in a challenge trial healthy (and informed) volunteers are intentionally infected with the pathogen responsible for the illness researchers want to study. The advantages are that statistically significant results can be found with far fewer volunteers far more quickly. If we vaccinate volunteers and then expose them to the virus, we’ll have a good idea of the vaccine’s effectiveness within days. This means faster licensing, faster deployment of the vaccine, and, therefore, thousands of saved lives.
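
A toy calculation can make the speed advantage vivid. All of the numbers below are made-up assumptions (the attack rate and the required number of infections), not figures from Moderna’s trial; the point is only that a field trial must wait for infections to accumulate by chance, while a challenge trial produces them directly.

```python
# Toy comparison of how infection "events" accumulate in the two trial designs.
# Every number here is an illustrative assumption, not data from any real trial.

target_infections = 150        # assumed number of infections needed for a statistical readout

# Conventional field trial: volunteers go about their lives; only some are ever exposed.
field_volunteers = 30_000
monthly_attack_rate = 0.001    # assumed chance a given volunteer is infected in a month
infections_per_month = field_volunteers * monthly_attack_rate
months_needed = target_infections / infections_per_month

# Challenge trial: every volunteer is deliberately exposed, so events accrue immediately.
challenge_volunteers = target_infections

print(f"Field trial: roughly {months_needed:.0f} months to reach {target_infections} infections")
print(f"Challenge trial: about {challenge_volunteers} deliberately exposed volunteers, results within weeks")
```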

Challenge trials are generally blocked from proceeding on ethical grounds. Infecting healthy people with a pathogen they might never otherwise be exposed to — a pathogen which might cause them serious or permanent harm or even death — might seem difficult to justify. Some medical practitioners consider it a violation of the Hippocratic oath they have sworn to uphold — “First, do no harm.” Advocates of challenge trials point out that slow, traditional medical trials can cause even greater harm. Hundreds of thousands of lives could likely have been saved had COVID-19 challenge trials been permitted and the various vaccines’ emergency approval occurred months earlier.

Admittedly, challenge trials effectively shift some risk of harm from the public at large to a small group of medical volunteers. Can we really accept greater risk of harm and death in a small group in order to protect society as a whole? Or are there moral limits to what we can do for the ‘greater good’? Perhaps it is this unequal distribution of burdens and benefits that critics object to as unethical or unjust.

Advocates of challenge trials point out that volunteers consent to these risks. Hence, permitting challenge trials is, fundamentally, simply permitting fully consenting adults to put themselves at risk to save others. We don’t ban healthy adults from running into dangerous water to save drowning swimmers (even though these adults would be risking harm or death). So, the reasoning goes, nor should we ban healthy adults from volunteering in medical trials to save others’ lives.

Of course, if a volunteer is lied to or otherwise misinformed about the risks of a medical trial, their consent to the trial does not make participation ethically permissible. For consent to be ethically meaningful, it must be informed. Volunteers must understand the risks they face and judge them to be acceptable. But making sure that volunteers fully understand the risks involved (including the ‘unknown’ risks) can be difficult. For example, a well-replicated finding from psychology is that people are not very good at understanding the likelihood of very low- (or high-) probability events occurring. We tend to “round down” low probability events to “won’t happen” and “round up” high probability events to “will happen”. A 0.2% probability of death doesn’t seem very different from a 0.1% probability to most of us, even though it’s double the risk.

Informed consent also cannot be obtained from children or those who are mentally incapable of providing it, perhaps due to extreme old age, disability, or illness. So members of these groups cannot participate in challenge trials. This limitation, combined with the fact that younger, healthier people may be more likely to volunteer for challenge trials than their more vulnerable elders, means that the insights we gain from the trial data may not translate well to the broader population. This could weaken the cost-benefit ratio of conducting challenge trials, at least in certain cases.

A further ethical worry about challenge trials is that the poor and the disadvantaged, those with no other options, might be indirectly coerced to take part. If individuals are desperate enough for financial resources, for example to pay for food or shelter, they might take on incredible personal risk to obtain them. This dynamic is called “desperate exchange,” and it must be avoided if challenge trials are to be ethically permissible.

One way to prevent desperate exchanges is to place limits on the financial compensation provided to volunteers, for example merely covering travel and inconvenience costs. But this solution threatens to undermine the possibility of running challenge trials at all. Who is going to volunteer to put their life at risk for nothing?

There’s some evidence that people would be willing to volunteer even without serious financial compensation. In the case of blood donation, unpaid voluntary systems see high donation rates and higher donor quality than market-based, paid-donation systems such as the U.S.’s. As I write this, 38,659 volunteers from 166 countries have already signed up to be challenge trial volunteers with “1 Day Sooner,” a pro-challenge-trial organization focusing on COVID-19 trials. These volunteers expect no monetary compensation, and are primarily motivated by ethical considerations.

The advocates of challenge trials consistently failed to win the argument as COVID-19 spread across the globe in 2020. Medical regulators deemed the ethical concerns too great. But the tide may now be changing. This February, British regulators approved a COVID-19 challenge trial. When time-in-trial equates with lives lost, the promise of challenge trials may prove too strong to ignore.

Ethical Considerations in the Lab-Leak Theory

3D image of Covid-19 virus cells

President Biden announced recently that he would be launching an investigation into the origin of the coronavirus. While the standard narrative over much of the course of the pandemic has been that it was initially transmitted to humans via contact with animals in Wuhan, China – thought by many to be bats, although there have also been theories that pangolins could have been involved – a second possibility has also been entertained, namely that the virus originated in a virology lab. Indeed, this was one of the favorite theories of Donald Trump, who, on several occasions, simply stated that the virus originated in a lab, although he failed to provide any evidence for his assertions. The so-called “lab-leak” theory soon took on the status of a conspiracy theory: it was explicitly rejected by numerous scientists, and its association with Trump and other members of the alt-right undermined whatever credibility the theory may have had within the scientific community. With Trump out of office, however, questions about the plausibility of the theory have resurfaced, and there has been enough pressure for Biden to open the investigation.

Should Biden have opened his investigation into the lab-leak theory? While it might seem like a question that can be answered by considering the science – i.e., by looking at whether there is good evidence for the theory, whether expert scientific opinion considers it a plausible hypothesis, etc. – there are other ethical factors that we should consider, as well.

Here’s one sense in which it seems that such an investigation is worthwhile: it is always worthwhile to try to learn the truth. Now, there are a lot of truths that we might think really don’t add that much value to our lives – I can spend a lot of time counting the number of blades of grass on my lawn, for example, and at the end of a very long day will possess a shiny new true belief, but hardly anyone would think that I had spent my time wisely. The COVID-19 pandemic, however, is of substantial importance, and so learning about where it came from may seem like an investigation that is worth pursuing for its own sake.

At the same time, there are also potential practical benefits to learning the truth of the matter about the origin of COVID-19. The pandemic has raised many questions about how we should react to the next one, and what we can do to prevent it. Making sure that we have the correct theory of the origin of the virus would then no doubt be useful when thinking about responses to future outbreaks. So here are two points in favor of conducting the investigation: we can learn the truth of something important, and we might be able to become better prepared for similar events in the future.

However, there are also some potential drawbacks. Specifically, there have been concerns that, especially during the previous administration, the impetus for discussing the lab-leak theory was not an attempt to make sure that one’s science was correct, but to find a scapegoat. The theory comes in two different forms. According to one version, the virus was intentionally released from the lab, for whatever reason. If this were to be the case, then there would be a definitive place to direct one’s blame. This version of the theory, however, falls predominantly within the realm of conspiracy theory. The other, more popular version states that while the virus originated in a lab, its transmission into the surrounding population was an accident. Even if this is the case, though, it would seem to represent an act of negligence, and thus the lab, the scientists, and the government would be blameworthy for it.

One of the early criticisms of Trump’s endorsement of the lab-leak theory was that, because it was driven by the search for someone to blame rather than by the evidence, he was fanning the flames of anti-Asian racism. Indeed, as Trump insisted on the truth of the theory without evidence and consistently referred to the coronavirus as the “China virus,” incidents of anti-Asian racism increased over the course of the pandemic in the U.S.

Here, then, is a concern with Biden’s investigation: opening an official investigation into the lab-leak theory gives legitimacy to a view that has been considered by many to be little more than a conspiracy theory, which may again result in an increase in incidents of anti-Asian racism. Given the potential ethically problematic results of the inquiry, we can then ask: is it worth it?

What is perhaps encouraging is that Biden’s investigation seems to be motivated more by dissent within parts of the scientific community than by the political search for a scapegoat. We might still be concerned, however, that people will not be good at distinguishing the versions of the theory under consideration. As noted above, there are two versions of the lab-leak theory, one more distinctly conspiratorial than the other. By lending credence to the view that the virus accidentally leaked from the lab, however, the investigation may be interpreted as lending more credence to the conspiratorial version as well.

This is not to say that the investigation is a bad idea. Instead, it should remind us that inquiry is never conducted in a vacuum, and that which questions are worth investigating may depend not solely on the evidence, but on the ethical consequences of doing so.

Biden, Trump, and the Dangers of Value-Free Science

President Joe Biden observes dosage preparations during a tour of a vaccination center

I don’t think it’s controversial to say that the Trump administration lived in tension with scientific advisors. Because of concerns that Trump politicized science in ways that put life at risk and undermined public trust, the Biden administration is launching a 46-person federal scientific integrity task force to investigate areas where partisanship interfered with scientific decision-making and to come up with ways to keep politics out of science in the future. While risk to scientific integrity is an important concern, the thinking behind this task force risks covering up a problem rather than resolving it.

Critics seeking “evidence-based policy-making” have accused the Trump administration of letting politics interfere with issues including, but not limited to, coronavirus, climate change, and whether Hurricane Dorian threatened Alabama. They also argue that this interference made the response to COVID-19 worse and led to a higher death toll. Jane Lubchenco, deputy director for climate and environment at the White House Office of Science and Technology Policy, noted, “What we have seen in the last administration is that the suppression of science, the reassignment of scientists, the distortion of scientific information around climate change was not only destructive but counterproductive and really problematic.”

But it isn’t clear scientific integrity can be defined in a way that is free from political interference or that it should be. Consider the memo from Biden on the subject which states that “scientific findings should never be distorted and influenced by political considerations.” While this might mean making sure that findings and data are not suppressed or distorted in ad hoc and arbitrary ways, this approach also sounds like an attempt to enforce a value-free ideal of science, which, according to many philosophers of science and scientists themselves, is neither possible nor desirable.

For starters, it isn’t clear that we can completely separate politics from science even if we wanted to. According to philosopher Helen Longino, what we take as evidence for something requires assumptions that are informed by our values. These assumptions often cannot be (and are not) empirically measured, and so “there are no formal rules, guidelines, or processes that can guarantee that social values will not permeate evidential relations.” Such assumptions can dramatically affect the methods taken by scientists including what protocols to follow, what sorts of things to measure, and for how long.

For example, in his book A Tapestry of Values, Kevin Elliott provides an example of Woburn, Massachusetts in the 1970s, when several people became ill and it was noted that the local water had taken on a strange color and taste. Eventually, barrels of industrial chemicals were discovered buried near the city’s wells. Proving a direct link between these chemicals and the many cancers and illnesses in the city proved difficult. A department of public health report on the connection between the two was inconclusive. Later, citizens of the community managed to get a separate study commissioned with significantly more input from the community, and that study found a significant correlation between consumption of water from the contaminated wells and the health problems people experienced. As Elliott notes,

“assumptions about the appropriate boundaries of the geographical area to be studied can be very important to scrutinize; if a study incorporates some heavily polluted areas and other areas that are not very polluted, it can make pollution threats appear less serious than they would otherwise be. Similarly, analyzing health effects together for two neighboring towns might yield statistically significant evidence for health problems, whereas analyzing health effects in the two towns separately might not yield statistically significant results.”

In other words, there are many cases where values are needed to inform the choice of research methods.
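
Elliott’s point about study boundaries can be made concrete with a purely hypothetical example. In the sketch below (all case counts invented), a standard two-proportion z-test fails to reach the conventional 0.05 significance level for each town analyzed on its own, but clears it once the two neighboring towns are pooled.

```python
# Hypothetical illustration of how pooling neighboring towns can change "significance."
# All counts are invented for the example; only the statistical point is real.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(cases_a, n_a, cases_b, n_b):
    """Two-sided p-value from a pooled two-proportion z-test."""
    p_a, p_b = cases_a / n_a, cases_b / n_b
    p_pool = (cases_a + cases_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(z))

# Each town: 1,000 exposed residents with 20 cases vs. 1,000 unexposed controls with 10.
p_single_town = two_proportion_p_value(20, 1000, 10, 1000)

# Pooled: 2,000 exposed residents with 40 cases vs. 2,000 controls with 20.
p_pooled = two_proportion_p_value(40, 2000, 20, 2000)

print(f"Each town analyzed separately: p = {p_single_town:.3f}")  # ~0.066, not significant at 0.05
print(f"Both towns analyzed together:  p = {p_pooled:.3f}")       # ~0.009, significant at 0.05
```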

Consider an example from the headlines this week. On Monday it was reported that less than 3% of all land on Earth is fully ecologically intact. Philosophers Kristen Intemann and Inmaculada de Melo-Martin have argued that measuring climate impacts requires values because “impact” depends on judgments about what is worth protecting. As the paper that inspired this week’s headline makes clear, “there is no clear definition of what is meant by intactness and the term is used loosely in the scientific literature.” For some scientists, the intactness of an ecosystem is measured by the degree of anthropogenic influence, whereas the authors of the paper assess habitat intactness, faunal intactness, and functional intactness. Depending on which measure is used, the amount of land that counts as intact varies from 3% to 25%. The decision regarding which of these measures to use is quite significant and will inevitably depend on our values. Whatever we decide, the findings will have an enormous impact on our policies.

Philip Kitcher has argued that science is not just about finding truth, but about finding truths we deem significant, which makes democratically-informed values highly desirable. The decision of whether agricultural science should focus on efficiency and maximizing crop yields or on sustainability and maintaining future output is something we might want to be politically informed. Another area where values are desirable involves cases of inductive risk. As I’ve previously explained it, inductive risk involves weighing the real-world consequences of being wrong against the uncertainty you have in your current conclusion.

A really good example of this thinking at play is the public health advice when it comes to COVID. From social distancing, to mask-wearing, to vaccine use, the guidance has always been a matter of weighing what is known relative to the risks of being wrong. This has been pretty blatant. Experts need to weigh the risks of, for example, using the AstraZeneca vaccine despite not knowing a lot about its connection to blood clots, because the alternatives are worse. In a case like this, regardless of how you may feel about the scientific findings, when scientists say the benefits outweigh the risks, this is a value judgment. It is therefore a fair question whether political or ethical values other than those of scientists should be relevant to science in a way that doesn’t damage the integrity of the research.

For these reasons, many philosophers have argued that trying to sweep values under the rug by pursuing a goal like value-free science isn’t helpful. If, in your attempt to banish political interference, values are only made more subtle and difficult to notice, you only make the problem worse. It’s possible that efforts to secure scientific integrity may stop short of the value-free ideal; the aim may not be to weed out all values, but only “improper political influence.” But then the word “improper” takes on huge significance and requires a lot of clarification. Thus, there is a larger moral question about how much influence democratic values should have over science and whether it is possible to provide an account of integrity that is politically informed but not just as politically controversial at the end of the day.

Bad Science, Bad Science Reporting

3d image of human face with several points of interest circled

It tends to be that only the juiciest developments in the sciences become newsworthy: while important scientific advances are made on a daily basis, the general public hears about only a small fraction of them, and the ones we do hear about do not necessarily reflect the best science. Case in point: a recent study that made headlines for having developed an algorithm that could detect perceived trustworthiness in faces. The algorithm used as inputs a series of portraits from the 16th to the 19th centuries, along with participants’ judgments of how trustworthy they found the depicted faces. The authors then claimed that there was a significant increase in trustworthiness over the period of time they investigated, which they attributed to lower levels of societal violence and greater economic development. With an algorithm thus developed, they then applied it to some modern-day faces, comparing Donald Trump to Joe Biden, and Meghan Markle to Queen Elizabeth II, among others.

It is perhaps not surprising, then, that once the media got wind of the study that articles with names like “Meghan Markle looks more trustworthy than the Queen” and “Trust us, it’s the changing face of Britain” began popping up online. Many of these articles read the same: they describe the experiment, show some science-y looking pictures of faces with dots and lines on them, and then marvel at how the paper has been published in Nature Communications, a top journal in the sciences.

However, many have expressed serious worries about the study. For instance, some have noted how the paper’s treatment of its subject matter – in this case, portraits from hundreds of years ago – is uninformed by any kind of art history, and that the belief that there was a marked decrease in violence over that time is uninformed by any history at all. Others note how the inputs into the algorithm are exclusively portraits of white faces, leading some to make the charge that the authors were producing a racist algorithm. Finally, many have noted the very striking similarity between what the authors are doing and the long-debunked studies of phrenology and physiognomy, which purported to show that the shape of one’s skull and the nature of one’s facial features, respectively, were indicative of one’s personality traits.

There are many ethical concerns that this study raises. As some have noted already, developing an algorithm in this manner could be used as a basis for making racist policy decisions, and would seem to lend credence to a form of “scientific racism.” While these problems are all worth discussing, here I want to focus on a different issue, namely how a study lambasted by so many, with so many glaring flaws, made its way to the public eye (of course, there is also the question of how the paper got accepted in such a reputable journal in the first place, but that’s a whole other issue).

Part of the problem comes down to how the results of scientific studies are communicated, with the potential for miscommunications and misinterpretations along the way. Consider again how those numerous websites clamoring for clicks with tales of the trustworthiness of political figures got their information in the first place, which was likely from a newswire service. Here is how ScienceDaily summarized the study:

“Scientists revealed an increase in facial displays of trustworthiness in European painting between the fourteenth and twenty-first centuries. The findings were obtained by applying face-processing software to two groups of portraits, suggesting an increase in trustworthiness in society that closely follows rising living standards over the course of this period.”

Even this brief summary is misleading. First, to say that scientists “revealed” something implies a level of certainty and definitiveness in their results. Of course, all results of scientific studies are qualified: there is never an experiment that will say that it is 100% certain of its results, or that, when measuring different variables, there is a definitive cause-and-effect relationship between them. The summary does qualify this a little bit – in saying that the study “suggests” an increase in trustworthiness. But it is misleading for another reason, namely that the study does not purport to measure actual trustworthiness, but perceptions of trustworthiness.

Of course, a study about an algorithm measuring what people think trustworthiness looks like is not nearly as exciting as a trustworthiness detection machine. And perhaps because the difference can be easily overlooked, or because the latter is likely to garner much more attention than the former, the mistake shows up in several of the outlets reporting it. For example:

Meghan was one and a half times more trustworthy than the Queen, according to researchers.

Consultants from PSL Analysis College created an algorithm that scans faces in painted portraits and pictures to find out the trustworthiness of the individual.

Meghan Markle has a more “trustworthy” face than the Queen, a new study claims.

From Boris Johnson to Meghan Markle – the algorithm that rates trustworthiness.

Again, the problem here is that the study never made the claim that certain individuals were, in fact, more trustworthy than others. But that news outlets and other sites report it as such compounds worries that one might employ the results of the study to reach unfounded conclusions about who is trustworthy and who isn’t.

So there are problems here at three different levels: first, with the nature and design of the study itself; second, with the way that newswire services summarized the results, making them seem more certain than they really were; and third, with the way that sites that used those summaries presented the results in order to make it look more interesting and legitimate than it really was, without raising any of the many concerns expressed by other scientists. All of these problems compound to produce the worries that the results of the study could be misinterpreted and misused.

While there are well-founded ethical concerns about how the study itself was conducted, it is important not to ignore what happens after the studies are finished and their results disseminated to the public. The moral onus is not only on the scientists themselves, but also on those reporting on the results of scientific studies.

Causality and the Coronavirus

image of map of US displayed as multi-colored bar graph

“Causality” is a difficult concept, yet beliefs about causes are often consequential. A troubling illustration of this is the claim, which is being widely shared on social media, that the coronavirus is not particularly lethal, as only 6% of the 190,000+ deaths attributed to the virus are “caused” by the disease.

We tend to think of causes in too-simplistic terms

Of all of the biases and limitations of human reasoning, our tendency to simplify causes is arguably one of the most fundamental. Consider the hypothetical case of a plane crash in Somalia in 2018. We might accept as plausible causes things such as the pilot’s lack of experience (say it was her first solo flight), the (old) age of the plane, the (stormy) weather, and/or Somalia’s then-status as a failed state, with poor infrastructure and, perhaps, an inadequate air traffic control system.

For most, if not all, phenomena that unfold at a human scale, a multiplicity of “causes” can be identified. This includes, for example, social stories of love and friendship and political events such as wars and contested elections.1

Causation in medicine

Causal explanations in medicine are similarly complex. Indeed, the CDC explicitly notes that causes of death are medical opinions. These opinions are likely to include not only an immediate cause (“final disease or condition resulting in death”), but also an underlying cause (“disease or injury that initiated the events resulting in death”), as well as other significant conditions which are or are not judged to contribute to the underlying cause of death.

In any given case, the opinions expressed on the death certificate might be called into question. Even though these opinions are typically based on years of clinical experience and medical study, they are limited by medical uncertainty and, like all human judgments, human fallibility.

When should COVID count as a cause?

Although the validity of any individual diagnosis might be called into question, aggregate trends are less equivocal. Consider this graph from the CDC which identifies the number of actual deaths not attributed to COVID-19 (green), additional deaths which have been attributed to COVID-19 (blue), and the upper bound of the expected number of deaths based on historical data (orange trend line). Above the blue lines there are pluses to indicate weeks in which the total number of deaths (including COVID) exceeds the expected number by a statistically significant margin. This has been true for every week since March 28. In addition, there are pluses above the green lines indicating weeks where the number of deaths excluding COVID was significantly greater than expected. This is true for each of the last eight weeks (ignoring correlated error, we would expect such a finding fewer than one in a million times by chance). This indicates that the number of deaths due to COVID in America has been underreported, not overreported.
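
A rough sketch shows where a figure like “fewer than one in a million” could come from: if the orange upper bound is the kind of threshold a normal week would exceed only about 2.5% of the time by chance, and weeks are treated as independent (the parenthetical above already flags that correlated error is being ignored), then eight such weeks in a row is vanishingly unlikely. The 2.5% threshold below is an assumption for illustration, not the CDC’s actual model.

```python
# Back-of-the-envelope version of the "fewer than one in a million" claim.
# Assumption: a normal week exceeds the upper bound only ~2.5% of the time by chance,
# and weeks are treated as independent (correlated error ignored, as the text notes).

per_week_chance = 0.025
consecutive_weeks = 8

chance_by_luck = per_week_chance ** consecutive_weeks
print(f"Chance of {consecutive_weeks} such weeks in a row by luck alone: {chance_by_luck:.1e}")
# Roughly 1.5e-13 under these assumptions, far below one in a million.
```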

Among the likely causes for these ‘non-COVID’ excess deaths, we can point, particularly early in the pandemic, to a lack of familiarity with, and testing for, the virus among medical professionals. As the pandemic unfolded, it is likely that additional deaths can be attributed, in part, to indirect causal relationships such as people delaying needed visits to doctors and hospitals out of fear, and the social, psychological, and economic consequences that have accompanied COVID in America. Regardless, the bottom line is clear: without COVID-19, over two hundred thousand other Americans would still be alive today. The pandemic has illuminated, tragically, our interconnectedness and with it our responsibilities to each other. One part of this responsibility is to deprive the virus of the opportunity to spread by wearing masks and socially distancing. But this is not enough: we need to stop the spread of misinformation as well.

 

1 Some argue that we can think of individual putative causes as “individually unnecessary” but as “jointly sufficient.” In the 2000 US Presidential Election, for example, consider the presence of Ralph Nader on the ballot, delays in counting the vote in some jurisdictions, the Monica Lewinsky scandal, and other phenomena such as the “butterfly ballot” in Palm Beach County, Florida. Each of these might have been unnecessary to lead the election to be called for G.W. Bush, but they were jointly sufficient to do so.

On “Doing Your Own Research”

photograph of army reserve personnel wearing neck gaiter at covid testing site

In early August, American news outlets began to circulate a surprising headline: neck gaiters — a popular form of face covering used by many to help prevent the spread of COVID-19 — could reportedly increase the infection rate. In general, face masks work by catching respiratory droplets that would otherwise contaminate a virus-carrier’s immediate environment (in much the same way that traditional manners have long prescribed covering your mouth when you sneeze); however, according to the initial report by CBS News, a new study found that the stretchy fabric typically used to make neck gaiters might actually work like a sieve to turn large droplets into smaller, more transmissible ones. Instead of helping to keep people safe from the coronavirus, gaiters might even “be worse than no mask at all.”

The immediate problem with this headline is that it’s not true; but, more generally, the way that this story developed evidences several larger problems for anyone hoping to learn things from the internet.

The neck gaiter story began on August 7th when the journal Science Advances published new research on a measurement test for face mask efficacy. Interested by the widespread use of homemade face-coverings, a team of researchers from Duke University set out to identify an easy, inexpensive method that people could use at home with their cell phones to roughly assess how effective different commonly-available materials might be at blocking respiratory droplets. Importantly, the study was not about the overall efficacy rates of any particular mask, nor was it focused on the length of time that respiratory droplets emitted by mask-wearers stayed in the air (which is why smaller droplets could potentially be more infectious than larger ones); the study was only designed to assess the viability of the cell phone test itself. The observation that the single brand of neck gaiter used in the experiment might be “counterproductive” was an off-hand, untested suggestion in the final paragraph of the study’s “Results” section. Nevertheless, the dramatic-sounding (though misleading) headline exploded across the pages of the internet for weeks; as recently as August 20th, The Today Show was still presenting the untested “result” of the study as if it were a scientific fact.

The ethics of science journalism (and the problems that can arise from sensationalizing and misreporting the results of scientific studies) is a growing concern, but it is particularly salient when the reporting in question pertains to an ongoing global pandemic. While it might be unsurprising that news sites hungry for clicks ran a salacious-though-inaccurate headline, it is far from helpful and, arguably, morally wrong.

Furthermore, the kind of epistemic malpractice entailed by underdeveloped science journalism poses larger concerns for the possibility of credible online investigation more broadly. Although we have surrounded ourselves with technology that allows us to access the internet (and the vast amount of information it contains), it is becoming ever-more difficult to filter out genuinely trustworthy material from the melodramatic noise of websites designed more for attracting attention than disseminating knowledge. As Kenneth Boyd described in an article here last year, the algorithmic underpinnings of internet search engines can lead self-directed researchers into all manner of over-confident mistaken beliefs; this kind of structural issue is only exacerbated when the inputs to those algorithms (the articles and websites themselves) are also problematic.

These sorts of issues cast an important, cautionary light on a growing phenomenon: the credo that one must “Do Your Own Research” in order to be epistemically responsible. Whereas it might initially seem plain that the internet’s easily-accessible informational treasure trove would empower auto-didacts to always (or usually) draw reasonable conclusions about whatever they set their minds to study, the epistemic murkiness of what can actually be found online suggests that reality is more complicated. It is not at all clear that non-expert researchers who are ignorant of a topic can, on their own, justifiably identify trustworthy information (or information sources) about that topic; but, on the other hand, if a researcher does have enough knowledge to judge a claim’s accuracy, then it seems like they don’t need to be researching the topic to begin with!

This is a rough approximation of what philosophers sometimes call “Meno’s Paradox” after its presentation in the Platonic dialogue of that name. The Meno discusses how inquiry works and highlights that uninformed inquirers have no clear way to recognize the correct answer to a question without already knowing something about what they are questioning. While Plato goes on to spin this line of thinking into a creative argument for the innateness of all knowledge (and, by extension, the immortality of the soul!), subsequent thinkers have often taken different approaches to argue that a researcher only needs to have partial knowledge either of the claim they are researching or of the source of the claim they are choosing to trust in order to come to justified conclusions.

Unfortunately, “partial knowledge” solutions have problems of their own. On one hand, human susceptibility to a bevy of psychological biases makes a researcher’s “partial” understanding of a topic a risky foundation for subsequent knowledge claims; it is exceedingly easy, for example, for the person “doing their own research” to be unwittingly led astray by their unconscious prejudices, preconceptions, or the pressures of their social environment. On the other hand, grounding one’s confidence in a testimonial claim on the trustworthiness of the claim’s source seems to (in most cases) simply push the justification problem back a step without really solving much: in much the same way that a non-expert cannot make a reasonable judgment about a proposition, that same non-expert also can’t, all by themselves, determine who can make such a judgment.

So, what can the epistemically responsible person do online?

First, we must cultivate an attitude of epistemic humility (of the sort summarized by Socrates’ famous remark “I know that I know nothing”) — something which often requires us to admit not only that we don’t know things, but that we often can’t know things without the help of teachers or other subject matter experts doing the important work of filtering the bad sources of information away from the good ones. All too often, “doing your own research” functionally reduces to a triggering of confirmation bias and lasts only as long as it takes to find a few posts or videos that confirm what a person was already thinking in the first place (regardless of whether those posts/videos are themselves worthy of being believed). If we instead work to remember our own intellectual limitations, both about specific subjects and about the process of inquiry writ large, we can develop a welcoming attitude toward the epistemic assistance offered by others.

Secondly, we must maintain an attitude of suspicion about bold claims to knowledge, especially in an environment like the internet. It is a small step from skepticism about our own capacities for inquiry and understanding to skepticism about those of others, particularly when we have plenty of independent evidence that many of the most accessible or popular voices online are motivated by concerns other than the truth. Virtuous researchers have to focus on identifying and cultivating relationships with knowledgeable guides (who can range from individuals to their writings to the institutions they create) on whom they can rely when it comes time to ask questions.

Together, these two points lead to a third: we must be patient researchers. Developing epistemic virtues like humility and cultivating relationships with experts that can overcome rational skepticism — in short, creating an intellectually vibrant community — takes a considerable amount of effort and time. After a while, we can come to recognize trustworthy informational authorities as “the ones who tend to be right, more often than not” even if we ourselves have little understanding of the technical fields of those experts.

It’s worth noting here, too, that experts can sometimes be wrong and nevertheless still be experts! Even specialists continue to learn and grow in their own understanding of their chosen fields; this sometimes produces confident assertions from experts that later turn out to be wrong. So, for example, when the Surgeon General urged people in February to not wear face masks in public (based on then-current assumptions about the purportedly low risk of asymptomatic patients) it made sense at the time; the fact that those assumptions later proved to be false (at which point the medical community, including the epistemically humble Surgeon General, then recommended widespread face mask usage) is simply a demonstration of the learning/research process at work. On the flip side, choosing to still cite the outdated February recommendation simply because you disagree with face mask mandates in August exemplifies a lack of epistemic virtue.

Put differently, briefly using a search engine to find a simple answer to a complex question is not “doing your own research” because it’s not research. Research is somewhere between an academic technique and a vocational aspiration: it’s a practice that can be done with varying degrees of competence, and it takes training to develop the skill to do it well. On this view, an “expert” is simply someone who has become particularly good at this practice. Education, then, is not simply a matter of “memorizing facts,” but rather a training regimen in performing the project of inquiry within a field. This is not easy, requires practice, and still often goes badly when done in isolation — which is why academic researchers rely so heavily on their peers to review, critique, and verify their discoveries and ideas before assigning them institutional confidence. Unfortunately, this complicated process is far less sexy (and far slower) than a scandalous-sounding daily headline that oversimplifies data into an attractive turn of phrase.

So, poorly communicated science journalism undermines our epistemic community not only by directly misinforming readers, but also by perpetuating the fiction that anyone is an epistemic island unto themselves. Good reporting must work to contextualize information within broader conversations (and, of course, get the information right in the first place).

Please don’t misunderstand me: this isn’t meant to be some elitist screed about how “only the learned can truly know stuff, therefore smart people with fancy degrees (or something) are best.” If degrees are useful credentials at all (a debatable topic for a different article!), they are so primarily as proof that a person has put in considerable practice to become a good (and trustworthy) researcher. Nevertheless, Meno’s Paradox and the dangers of cognitive biases remain problems for all humans, and we need to work together to overcome our epistemic limitations. In short: we would all benefit from a flourishing epistemic community.

And if we have to sacrifice a few splashy headlines to get there, so much the better.

Hydroxychloroquine and the Ethical Pitfalls of Private Science

A box of hydroxychloroquine sulphate tablets held by a hand with coronavirus written in background

Last week, news broke that a significant study into the effects of hydroxychloroquine for treating COVID-19 relied on data that has now been called into question. The effects of this study, and of other studies that relied on data from the same source, were profound, leading to changes in planned trials and in the COVID-19 treatments being prescribed to patients. The fact that this data came from an unaudited source highlights the ethical concerns that stem from an increased corporate role in science.

In late May, a study published in the elite medical journal The Lancet suggested that COVID-19 patients taking chloroquine or hydroxychloroquine were more likely to die. The study included over 96,000 patients, relying on electronic health data from the company Surgisphere, run by Dr. Sapan Desai, who was also included as a co-author of the article. It found that at 671 hospitals where COVID-19 patients had been prescribed hydroxychloroquine, the risk of death was over twice as great as for patients who were not prescribed the drug. An additional study using Surgisphere data, investigating blood pressure medications, was published in The New England Journal of Medicine. A third paper using Surgisphere data, available as a preprint, suggested that ivermectin significantly reduced mortality in COVID-19 patients. All three papers have been retracted.

The retractions occurred after discrepancies were noticed in the data. The reported doses of hydroxychloroquine for American patients were higher than FDA guidelines allow, and the number of Australian deaths was higher than official statistics. There was also a discrepancy between the small number of hospitals included and the vast number of patient records. Following this, independent auditors were asked to review the data provided by Surgisphere; however, the company refused to provide the data, citing confidentiality agreements with the hospitals. Investigations also found that no US hospitals admitted to participating with Surgisphere.

Surgisphere itself is also a suspect source. The company was founded in 2007 but has little online presence. Its website does not list partner hospitals or identify its scientific advisory board, and the company claims to have only 11 employees. Its enormous database does not seem to have been used in peer-reviewed studies until May. Desai himself also has a colorful history, including three outstanding medical malpractice suits against him.

The studies had significant impact worldwide. Following the report that hydroxychloroquine increased mortality rates in patients, the WHO announced a “temporary” pause in its studies of hydroxychloroquine (it has since resumed those efforts). The studies also played a role in the national conversation about the drug in the United States following President Trump’s announcement that he had been taking it to combat the virus. The preprint on ivermectin was never officially published, but it did lead to changes in treatment protocols in South America. In Bolivia, a local government planned to hand out 350,000 doses of the drug after receiving authorization from the Bolivian Ministry of Health. The drug was also cited as a potential treatment in Chile and Peru.

This episode highlights several general moral issues. Retraction scandals at a time when the public is looking to, and relying on, medical science are dangerous. The situation is intensified by the fact that these controversies are tied to the political debate over hydroxychloroquine, as they may undermine science along partisan lines. Polls show that Democrats are far more likely than Republicans to have a great deal of confidence in scientists to act in the best interests of the public, and such scandals further erode public trust and make science seem more partisan.

The matter also raises ethical issues within the sciences. According to Ivan Oransky of Retraction Watch, the case reflects larger systemic issues within the sciences: even leading journals rely too heavily on an honor system. The pandemic has also raised warning signs about the use of preprints, which have shifted from a way of getting feedback while studies are being finalized to a way of sharing “breaking data” as fast as possible, despite the lack of peer review.

The Surgisphere episode highlights the ethical pitfalls of science relying on private sector companies for research. Since the twentieth century, the private sector has been an increasing source of scientific funding; in the United States, private funding accounted for 65% of research and development spending in 2013. There are good reasons for private sector investment and corporate-university partnerships. The public sector has shown less willingness to supply the needed funding. As Ashutosh Jogalekar points out in an article for Scientific American, investments by private interests have allowed many projects to be funded which might not be funded otherwise. He notes, “For these billionaires a few millions of dollars is not too much, but for a single scientific project hinging on the vicissitudes of government funding it can be a true lifeline.” It has also been noted that private funding can make cost-effective replication studies possible, which is especially important given that efforts to reproduce results have succeeded in only about 40% of experiments published in peer-reviewed journals.

On the other hand, according to Sheldon Krimsky, the author of Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research?, numerous problems can occur when scientists partner with private corporations. Krimsky finds that publication practices have been influenced by commercial interests: the commercialization of science has led to a decline in the notion that scientists should work in the public interest, and sharing data becomes more problematic given the use of paywalls and intellectual property protection. This makes it more difficult to verify the data.

There are many ways corporations can complicate data-sharing. By choosing not to release unflattering findings or claiming data as exclusive intellectual property, companies can make it difficult for others to use research (consider Diamond v. Chakrabarty, which set the precedent for allowing genetically modified organisms to be patented). And, of course, the Surgisphere episode is an example of university researchers collaborating with a private company that retained sole control of the data. Such arrangements allow for fraud and suffer from a lack of oversight.

One proposed solution is to move towards “open science,” making publications, data, and other information open and accessible to everyone. Such a move would allow for both increased transparency and accountability as well as more rigorous peer review. Under such a system, falsified data would be harder to pass off and easier to detect.

While many of these issues have been brewing for years, it is not every day that a single published study can have the kind of global impact that came with the investigations into the effectiveness of hydroxychloroquine, even as other independent studies have also demonstrated its ineffectiveness. The ethical fallout from this scandal is thus far more obvious given public interest in the disease. Indeed, there have already been calls to stop private speculation into COVID-19 research; part of this call includes the position that all intellectual property should be made available for free to the international scientific community for fighting the pandemic. The question now is what specific reforms should be implemented to prevent scandals like this from happening again.

 

The Small but Unsettling Voice of the Expert Skeptic

photograph of someone casting off face mask

Experts and politicians worldwide have come to grips with the magnitude of the COVID-19 pandemic. Even Donald Trump, once skeptical that COVID-19 would affect the US in a significant way, now admits that the virus will likely take many more thousands of lives.

Despite this agreement, some are still not convinced. Skeptics claim that deaths that are reported as being caused by COVID-19 are really deaths that would have happened anyway, thereby artificially inflating the death toll. They claim that the CDC is complicit, telling doctors to document a death as “COVID-related” even when they aren’t sure. They highlight failures of world leaders like the Director-General of the World Health Organization and political corruption in China. They claim that talk of hospitals being “war zones” is media hype, and they share videos of “peaceful” local hospitals from places that aren’t hot spots, like Louisville or Tallahassee. They point to elaborate conspiracies about the nefarious origins of the novel coronavirus.

What’s the aim of this strikingly implausible, multi-national conspiracy, according to these “COVID-truthers”? Billions of dollars for pharmaceutical companies and votes for tyrannical politicians who want to look like benevolent saviors.

Expert skeptics like COVID-truthers are concerning because they are more likely to put themselves, their families, and their communities at risk by not physically distancing or wearing masks. They are more likely to violate stay-at-home orders and press politicians to re-open commerce before it is safe. And they pass this faulty reasoning on to their children.

While expert skepticism is not new, it is unsettling because expert skepticism often has a kernel of truth. Experts regularly disagree, especially in high-impact domains like medicine. Some experts give advice outside their fields (what Nathan Ballantyne calls “epistemic trespassing”). Some experts have conflicts of interest that lead to research fraud. And some people—seemingly miraculously—defy expert prediction, for example, by surviving a life-threatening illness.

If all this is right, shouldn’t everyone be skeptical of experts?

In reality, most non-experts do okay deciding who is trustworthy and when. This is because we understand—at least in broad strokes—how expertise works. Experts disagree over some issues, but, in time, their judgments tend to converge. Some people do defy expert expectations, but these usually fall within the scope of uncertainty. For example, about 1 in 100,000 cancers go into spontaneous remission. Further, we can often tell who is in a good position to help us. In the case of lawyers, contractors, and accountants, we can find out their credentials, how long they’ve been practicing, and their specialties. We can even learn about their work from online reviews or friends who have used them.

Of course, in these cases, the stakes are usually low. If it turns out that we trusted the wrong person, we might be able to sue for damages or accept the consequences and try harder next time. But as our need for experts gets more complicated, figuring out who is trustworthy is harder. For instance, questions about COVID-19 are:

  • New (Experts struggle to get good information.)
  • Time-sensitive (We need answers more quickly than we have time to evaluate experts.)
  • Value-charged (Our interests in the information bias whom we trust.)
  • Politicized (Information is emotionally charged or distorted, and there are more epistemic trespassers.)

Where does this leave those of us who aren’t infectious disease experts? Should we shrug our shoulders with the COVID-truthers and start looking for ulterior motives?

Not obviously. Here are four strategies to help distill reality from fantasy.

  1. Keep in mind what experts should (and should not) be able to do.

Experts spend years studying a topic, but they cannot see the future. They should be able to explain a problem and suggest ways of solving it. But models that predict the future are educated guesses. In the case of infectious diseases, those guesses depend on assumptions about how people act. If people act differently, the guesses will be inaccurate. But that’s how models work.

  2. Look for consensus, but be realistic.

When experts agree on something, that’s usually a sign they’re all thinking about the evidence the same way. But when they face a new problem, their evidence will change continually, and experts will have little time to make sense of it. In the case of COVID-19, there’s wide consensus about the virus that causes it and how it spreads. There is little consensus on why it hurts some people more than others and whether a vaccine is the right solution. But just because there isn’t consensus doesn’t mean there are ulterior motives.

  3. Look for “meta-expert consensus.”

When experts agree, it is sometimes because they need to look like they agree, whether due to worries about public opinion or because they want to convince politicians to act. These are not good reasons to trust experts. But on any complex issue, there’s more than one kind of expert. And not all experts have conflicts of interest. In the case of COVID-19, independent epidemiologists, infectious disease doctors, and public health experts agree that SARS-CoV-2 is a new, dangerous, contagious threat and that social distancing is the main weapon against that threat. That kind of “meta-expert consensus” is a good check on expertise and good news for novices when deciding what to believe.

  4. Don’t double down.

When experts get new evidence, they update their beliefs, even if they were wrong. They don’t force that evidence to fit old beliefs. When prediction models for COVID-related deaths were not borne out, experts updated their predictions. They recognized that predictions can be confounded by many variables, and they used the new evidence to update their models. This is good advice for novices, too.

These strategies are not foolproof. The world is messy, experts are fallible, and we won’t always trust the right people. But while expert skepticism is grounded in real limitations of expertise, we don’t have to join the ranks of the COVID-truthers. With hard work and a little caution, we can make responsible choices about whom we trust.

In Search of an AI Research Code of Conduct

image of divided brain; fluid on one side, circuitry on the other

The evolution of an entire industry devoted to artificial intelligence has presented a need to develop ethical codes of conduct. Ethical concerns about privacy, transparency, and the political and social effects of AI abound. But a recent study from the University of Oxford suggests that borrowing from other fields like medical ethics to refine an AI code of conduct is problematic. Developing an AI ethics means being prepared to address and predict ethical problems and concerns that are entirely new, and this makes it a significant ethical project. How we should proceed in this field is itself a dilemma: should we proceed with a top-down, principled approach or a bottom-up, experimental approach?

AI ethics can concern itself with everything from the development of intelligent robots to machine learning, predictive analytics, and the algorithms behind social media websites. This is why it is such an expansive area: some focus on the ethics of how we should treat artificial intelligence, others on how we can protect privacy, and still others on how the AI behind social media platforms, and AI capable of generating and distributing ‘fake news,’ can influence the political process. In response, many have focused on generating a particular set of principles to guide AI researchers, in many cases borrowing from codes governing other fields, like medical ethics.

The four core principles of medical ethics are respect for patient autonomy, beneficence, non-maleficence, and justice. Essentially, these principles hold that one should act in the best interests of a patient while avoiding harm and ensuring the fair distribution of medical services. But the recent Oxford study by Brent Mittelstadt argues that the analogical reasoning relating the medical field to the AI field is flawed: there are significant differences between medicine and AI research which make these principles unhelpful or irrelevant.

The field of medicine is more centrally focused on promoting health and has a long history of emphasizing the fiduciary duties of those in the profession towards patients. AI research, by contrast, is less homogeneous, with different researchers in both the public and private sectors working toward different goals and owing duties to different bodies. AI developers, for instance, do not commit to public service in the way that a doctor does, as they may be responsible only to shareholders. As the study notes, “The fundamental aims of developers, users, and affected parties do not necessarily align.”

In her book Towards a Code of Ethics for Artificial Intelligence, Paula Boddington highlights some of the challenges of establishing a code of ethics for the field. For instance, those working with AI are not required to receive accreditation from any professional body. In fact,

“some self-taught, technically competent person, or a few members of a small scale start up, could be sitting in their mother’s basement right now dreaming up all sorts of powerful AI…Combatting any ethical problems with such ‘wild’ AI is one of the major challenges.”

Additionally, there are mixed attitudes towards AI and its future potential. Boddington notes a divide in opinion: the West is more alarmist compared to nations like Japan and Korea, which are more likely to be open and accepting.

Given these challenges, some have questioned whether an abstract ethical code is the best response. High-level principles abstract enough to cover the entire field will be too vague to be action-guiding, and because of the field’s varied sub-disciplines and interests, oversight will be difficult. According to Edd Gent,

“AI systems are…created by large interdisciplinary teams in multiple stages of development and deployment, which makes tracking the ethical implications of an individual’s decisions almost impossible, hampering our ability to create standards to guide those choices.”

The situation is not that different from work done in the sciences. Philosopher of science Heather Douglas has argued, for instance, that while ethical codes and ethical review boards can be helpful, constant oversight is impractical, and only scientists can fully appreciate the potential implications of their work. The same could be true of AI researchers. A code of ethical principles will not replace ethical decision-making; in fact, such codes can be morally problematic. As Boddington argues, “The very idea of parceling ethics into a formal ‘code’ can be dangerous.” This is because many ethical problems are going to be new and unique, so ethical choice cannot be a matter of mere compliance. Following ethical codes can lead to complacency as one seeks to check certain boxes and avoid certain penalties without taking the time to critically examine what may be new and unprecedented ethical issues.

What this suggests is that any code of ethics can only be suggestive: it offers abstract principles that can guide AI researchers, but ultimately the researchers themselves will have to make individual ethical judgments. Thus, part of the moral project of developing an AI ethics is going to be the development of good moral judgment by those in the field. Philosopher John Dewey noted this relationship between principles and individual judgment, arguing:

“Principles exist as hypotheses with which to experiment…There is a long record of past experimentation in conduct, and there are cumulative verifications which give many principles a well earned prestige…But social situations alter; and it is also foolish not to observe how old principles actually work under new conditions, and not to modify them so that they will be more effectual instruments in judging new cases.”

This may mirror the thinking of Brent Mittelstadt, who argues for a bottom-up approach to AI ethics in which sub-fields develop ethical principles in response to challenging novel cases. Boddington, for instance, notes the importance of equipping researchers and professionals with the ethical skills to make nuanced decisions in context; they must be able to make contextualized interpretations of rules and to judge when rules are no longer appropriate. Still, such an approach has its challenges: researchers must be aware of the ethical implications of their work, and there still needs to be some oversight.

Part of the solution to this is public input. We as a public need to make sure that corporations, researchers, and governments are aware of the public’s ethical concerns. Boddington recommends that such input include a diversity of opinion, thinking style, and experience. This means involving not only those who may be affected by AI, but also professional experts outside of the AI field, like lawyers, economists, and social scientists, and even those who have no interest in the world of AI, in order to maintain an outside perspective.

Codes of ethics in AI research will continue to develop. The dilemma we face as a society is what such a code should mean, particularly whether or not it will be institutionalized and enforced. If we adopt a bottom-up approach, then such codes will likely serve only as guidance, or multiple codes will need to be adopted for different areas. If a more principled, top-down approach is adopted, then there will be additional challenges in dealing with novel cases and with oversight. Either way, the public will have a role to play in ensuring that its concerns are being heard.

Forbidden Knowledge in Scientific Research

closeup photograph of lock on gate with iron chain

It is no secret that science has the potential to have a profound effect on society. This is often why scientific results can be so ethically controversial. For instance, researchers have recently warned of the ethical problems associated with scientists growing lumps of human brain in the laboratory. The blobs of brain tissue grown from stem cells developed spontaneous brain waves like those found in premature babies. The hope is that the study offers the potential to better understand neurological disorders like Alzheimer’s, but it also raises a host of ethical worries concerning the possibility that this brain tissue could reach sentience. In other news, this week a publication in the journal JAMA Pediatrics ignited controversy by reporting a supposed link between fluoride exposure and IQ scores in young children. In addition to several experts questioning the results of the study itself, there is also concern about the potential effect this could have on the debate over the use of fluoride in the water supply; anti-fluoride activists have already jumped on the study to defend their cause. Scientific findings have an enormous potential to dramatically affect our lives. This raises an ethical issue: should certain topics, owing to the ethical concerns they raise, be off-limits for scientific study?

This question is studied in both science and philosophy, and is sometimes referred to as the problem of forbidden knowledge. The problem can include issues of experimental methods and whether they follow proper ethical protocols (certain knowledge may be forbidden if it uses human experimentation), but it can also include the impact that the discovery or dissemination of certain kinds of knowledge could have on society. For example, a recent study found that girls and boys are equally good at mathematics and that children’s brains function similarly regardless of gender. However, there have been several studies going back decades which tried to explain differences between mathematical abilities in boys and girls in terms of biological differences. Such studies have the possibility of reinforcing gender roles and potentially justifying them as biologically determined. This has the potential to spill over into social interactions. For instance, Helen Longino notes that such findings could lead to a lower priority being placed on encouraging women to enter math and science.

So, such studies have the potential to impact society, which is an ethical concern, but is this reason enough to make them forbidden? Not necessarily. The bigger problem involves how adequate these findings are, the concern that they could be incorrect, and what society is to do about that until correct findings are published. For example, in the case of math testing, it is not that difficult to find significant correlations between variables, but the limits of this correlation and the study’s potential to identify causal factors are often lost on the public. There are also methodological problems; some standardized tests rely on male-centric questions that can skew results, and different kinds of tests and different strategies for preparing for them can also distort our findings. So even when correlations are found and there are no major flaws in the assumptions of the study, the results may not be very generalizable. In the meantime, such findings, even if they are corrected over time, can create stereotypes in the public that are hard to get rid of.

Because of these concerns, some philosophers argue either that certain kinds of questions be banned from study, or that studies should avoid trying to explain differences in abilities and outcomes according to race or sex. For instance, Janet Kourany argues that scientists have moral responsibilities to the public and should thus conduct themselves according to egalitarian standards. If a scientist wants to investigate the differences between racial and gender groups, they should seek to explain these in ways that do not assume the differences are biologically determined.

In one of her examples, she discusses studying differences in incidents of domestic violence between white and black communities. A scientist should highlight similarities of domestic violence within white and black communities and seek to explain dissimilarities in terms of social issues like racism or poverty. With a stance like this, research into racial differences as an explanation of differences in rates of domestic violence would constitute forbidden knowledge. Only if these alternative egalitarian explanations empirically fail can a scientist then choose to explore race as a possible explanation of differences between communities. This approach avoids perpetuating a possibly empirically flawed account that suggests that blacks might be more violent than other ethnic groups.

She points out that the alternative risks keeping stereotypes alive even while scientists slowly prove them wrong. Just as in the case of studying mathematical differences, the slow settlement of opinion within the scientific community leaves society free to entertain stereotypes as “scientifically plausible” and adopt potentially harmful policies in the meantime. In his research on the matter, Philip Kitcher notes that we are susceptible to instances of cognitive asymmetry, where it takes far less empirical evidence to maintain stereotypical beliefs than it takes to get rid of them. This is why studying the truth of such stereotypes can be so problematic.

These types of cases seem to offer significant support for labeling particular lines of scientific inquiry forbidden. But the issue is more complicated. First, telling scientists what they should and should not study raises concerns over freedom of speech and freedom of research. We already acknowledge limits on research on the basis of ethical concerns, but this represents a different kind of restriction. One might claim that so long as science is publicly funded, there are reasonable, democratically justified limits on research, but the precise boundaries of this restriction will prove difficult to identify.

Secondly, and perhaps more importantly, such a policy has the potential to exacerbate the problem. According to Kitcher,

“In a world where (for example) research into race differences in I.Q. is banned, the residues of belief in the inferiority of the members of certain races are reinforced by the idea that official ideology has stepped in to conceal an uncomfortable truth. Prejudice can be buttressed as those who opposed the ban proclaim themselves to be the gallant heirs of Galileo.”

In other words, one reaction to such bans on forbidden knowledge, so long as our own cognitive asymmetries are unknown to us, will be to object that this is an undue limitation on free speech for the sake of politics. In the meantime, those who push for such research can become martyrs, and censoring them may only serve to draw more attention to their cause.

This obviously presents us with an ethical dilemma. Given that there are scientific research projects that could have a potentially harmful effect on society, whether the science involved is adequate or not, is it wise to ban such projects as forbidden knowledge? There are reasons to say yes, but implementing such bans may cause more harm or drive more public attention to such issues. Even banning research on the development of brain tissue from stem cells may be wise, but it may also cause such research to move to another country with more relaxed ethical standards, meaning that potential harms could be much worse. These issues surrounding how science and society relate are likely only going to be solved with greater public education and open discussion about what ethical responsibilities we think scientists should have.

The Moral Quandary of Testing on Animals

Photo of three rats in a cage with a little red house and food and water available

The topic of testing on animals as a form of scientific research has been contentious for quite some time. In most cases, the discussion tends to focus on whether it is morally permissible to test various products and procedures on animals in order to determine whether they would be safe and beneficial for human use. Animal experimentation is not always conducted simply for the benefit of human beings—sometimes the parties that stand to benefit from the research are other non-human animals, often including other members of the same species as the animals being tested.

Defenders of the practice of testing on animals for the benefit of humans argue that the benefits for humans substantially outweigh the harms incurred by animals. Some argue that our moral obligations extend only to other members of the moral community. Among other things, members of the moral community can recognize the nature of rights and obligations and are capable of being motivated to act on the basis of moral reasons. Non-human animals, because they are not capable of these kinds of reflections, are not members of the moral community. As such, defenders of animal testing argue, they don’t have rights. In response, critics argue that if we only have obligations to beings that can recognize the nature of moral obligations, then we don’t have obligations to very young children or to permanently mentally disabled humans, and this idea is morally indefensible.

Other defenders of animal testing argue that it is both natural and proper for human beings to exercise dominion over animals. These arguments take more than one form. Some people who make this argument are motivated by passages from the Bible. Genesis 1:26 reads, “And God said, Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.” Some argue that this passage suggests that humans have divine permission to use animals as they see fit. The use of animals for the benefit of humans seems morally defensible to these people for this reason.

Others argue human dominion over other animals is appropriate because human beings have demonstrated their superiority over non-human animals. We are no different from other animals in the sense that we use our natural skills to climb as high on the food chain as our circumstances permit. As rational creatures, our needs extend further than the needs of non-human animals. As a result, we can use non-human animals to solve a wider range of problems. We can use them not only for protein, but to make our lives longer, better, more beautiful, and more convenient. Critics of such a view argue that might doesn’t make right. What’s more, our enhanced rational capacities also give us the ability to make moral judgments, and these moral judgments should extend to compassion for the suffering of all living creatures.

Arguments against research on animals also come in a variety of forms. One approach focuses on suffering. Famously, Peter Singer argued that what makes a being deserving of moral consideration is their capacity to suffer. If we treat equal amounts of suffering unequally simply because of the species to which the animal happens to belong, our behavior is speciesist—we are taking seriously considerations that are morally irrelevant. Rights-based approaches, like the one defended by Tom Regan, point out that non-human animals are subjects of a life. There is something it is like for them to experience the world in the unique way that they do. In light of this, we should recognize that non-human animals have intrinsic value and they should not be used as objects to be manipulated for the benefit of human beings.

How should we assess the situation when the research done on non-human animals is done, not for the benefit of human beings, but for the benefit of other non-human animals? In these cases, one major criticism of testing disappears—researchers can’t be accused of failing to take the interests of non-human animals seriously. After all, concern for the interests of non-human animals is what motivates this research to begin with. Vaccines for rabies, canine parvovirus, distemper, and feline leukemia virus have been developed through the use of animal research. These critical procedures improve and even save the lives of non-human animals. When we engage in a consequentialist assessment of the practice, testing on non-human animals for the benefit of other non-human animals seems justified.

On the other hand, it may be that speciesism is rearing its ugly head again in this case. Consider a parallel case in which research was being conducted for the good of human beings. Imagine that a tremendous amount of good could be done for human beings at large if we tested a particular product on a human being. The testing of this product would cause tremendous physical pain to the human being, and may even cause their death. Presumably, we would not think that it is justified to experiment on the human. The ends do not justify the means.  

One might think that one major difference between the case of testing on humans and the case of testing on animals is that humans are capable of giving consent and animals are not. So, on this view, if we kidnap a human for the purposes of experimenting on her to achieve some greater good, what we have done wrong is, in part, to violate the autonomy of the individual. Animals aren’t capable of giving consent, so it is not possible to violate their autonomy in this way.

Under the microscope, this way of carving up the situation doesn’t track our ordinary discourse about consent. It is, of course, true that humans are free to use (within limits) certain things that are incapable of giving consent. For example, humans can use grain and stone and so on without fear of violating any important moral principle. In other cases in which consent is not possible, we tend to have very different intuitions. Very young children, for example, aren’t capable of consent, and for that very reason we tend to think it is not morally permissible to use them as mere means to our own ends. Beings that are conscious but incapable of giving consent seem worthy of special protection, so it seems wrong to test on them even if it is for the good of their own species. Is it speciesist to think that the ends can’t justify the means in the case of the unwilling human subject but not in the case of the unwilling non-human animal?

Testing on non-human animals for the sake of other non-human animals also raises other sets of unique moral concerns and questions. What is the proper rank ordering of moral obligations when the stakeholders are abstractions? Imagine that we are considering doing an experiment on Coco the chimpanzee. The experiment that we do on Coco might have implications for future chimpanzees with Coco’s condition. The research might, then, have a beneficial impact for Coco’s species—the species “chimpanzee.” Can the moral obligations that we have to concrete, suffering beings ever be outweighed by obligations that we have to abstractions like “future generations” or “survival of the species”?