Black-Box Expertise and AI Discourse

It has recently been estimated that new generative AI technology could add up to $4.4 trillion to the global economy. This figure was reported by The New York Times, Bloomberg, Yahoo Finance, The Globe and Mail, and dozens of other news outlets and websites. It's a big, impressive number that has been interpreted by some as even more reason to get excited about AI, and by others as one more entry on a growing list of concerns.

The estimate itself came from a report recently released by the consulting firm McKinsey & Company. As the authors of the report prognosticate, AI will make its most significant impact on tasks that it can perform instead of humans: some of these tasks are relatively simple, such as creating "personalized emails," while others are more complex, such as "communicating with others about operational plans or activities." Mileage may vary depending on the business, but overall those productivity savings can add up to huge contributions to the economy.

While it’s one thing to speculate, extraordinary claims require extraordinary evidence. Where one would expect to see a rigorous methodology in the McKinsey report, however, we are instead told that the authors referenced a “proprietary database” and “drew on the experience of more than 100 experts,” none of whom are mentioned. In other words, while it certainly seems plausible that generative AI could add a lot of value to the global economy, when it comes to specific numbers, we’re just being asked to take McKinsey’s word for it. McKinsey are perceived by many to be experts, after all.

In general, it is often perfectly rational to take an expert's word for it without having to examine their evidence in detail. Of course, whether McKinsey & Company really are experts when it comes to AI and financial predictions (or, really, anything else for that matter) is up for debate. Regardless, something is troubling about presenting one's expert opinion in such a way that one could not investigate it even if one wanted to. Call this phenomenon black-box expertise.

Black-box expertise seems to be common and even welcomed in the discourse surrounding new developments in AI, perhaps due to an immense amount of hype and appetite for new information. The result is an arms race of increasingly hyperbolic articles, studies, and statements from legitimate (and purportedly legitimate) experts, ones that are often presented without much in the way of supporting evidence. A discourse that encourages black-box expertise is problematic, however, in that it can make the identification of experts more difficult, and perhaps lead to misplaced trust.

We can consider black-box expertise in a few forms. For instance, an expert may present a conclusion but not make available their methodology, either in whole or in part – this seems to be what’s happening in the McKinsey report. We can also think of cases in which experts might not make available the evidence they used in reaching a conclusion, or the reasoning they used to get there. Expressions of black-box expertise of these kinds have plagued other parts of the AI discourse recently, as well.

For instance, another expert opinion that has been frequently quoted comes from AI expert Paul Christiano, who, when asked about the existential risk posed by AI, claimed: “Overall, maybe we’re talking about a 50/50 chance of catastrophe shortly after we have systems at the human level.” It’s a potentially terrifying prospect, but Christiano is not forthcoming with his reasoning for landing on that number in particular. While his credentials would lead many to consider him a legitimate expert, the basis of his opinions on AI is completely opaque.

Why is black-box expertise a problem, though? One of the benefits of relying on expert opinion is that the experts have done the hard work in figuring things out so that we don't have to. This is especially helpful when the matter at hand is complex, and when we don't have the skills or knowledge to figure it out ourselves. It would be odd, for instance, to demand to see all of the evidence, or to scrutinize the methodology, of an expert who works in a field of which we are largely ignorant, since we wouldn't really know what we were looking at or how to evaluate it. Lest we be skeptics about everything we're not personally well-versed in, reliance on expertise necessarily requires some amount of trust. So why should it matter how transparent an expert is about the way they reached their opinion?

The first problem is one of identification. As we've seen, a fundamental challenge in evaluating whether someone is an expert from the point of view of a non-expert is that non-experts tend to be unable to fully evaluate claims made in that area of expertise. Instead, non-experts rely on different markers of expertise, such as one's credentials, professional accomplishments, and engagement with others in their respective areas. Crucially, however, non-experts also tend to evaluate expertise on the basis of factors like one's ability to respond to criticism, the provision of reasons for one's beliefs, and one's ability to explain one's views to others. These factors are directly at odds with black-box expertise: an expert who does not make their methodology or reasoning apparent deprives non-experts of the very markers they rely on to identify expertise.

A second and related problem with black-box expertise is that it becomes more difficult for others to identify epistemic trespassers: those who have specialized knowledge or expertise in one area but make judgments on matters in areas where they lack expertise. Epistemic trespassers are, arguably, rampant in AI discourse. Consider, for example, a recent and widely-reported interview with James Cameron, the director of the original Terminator series of movies. When asked whether he considered artificial intelligence to be an existential risk, he remarked, "I warned you guys in 1984, and you didn't listen" (referring to the plot of the Terminator movies, in which the existential threat of AI was very tangible). Cameron's comment makes for a fun headline (one which was featured in an exhausting number of publications), but he is by no measure an expert in artificial intelligence in the year 2023. He may be an accomplished filmmaker, but when it comes to contemporary discussions of AI, he is very much an epistemic trespasser.

Here, then, is a central problem with relying on black-box expertise in AI discourse: expert opinion presented without transparent evidence, methodology, or reasoning can be difficult to distinguish from opinions of non-experts and epistemic trespassers. This can make it difficult for non-experts to navigate an already complex and crowded discourse to identify who should be trusted, and whose word should be taken with a grain of salt.

Given the potential of AI and its tendency to produce headlines that tout it both as a possible savior of the economy and destroyer of the world, being able to identify experts is an important part of creating a discourse that is productive and not simply motivated by fear-mongering and hype. Black-box expertise, like that on display in the McKinsey report and many other commentaries from AI researchers, presents a significant barrier to creating that kind of discourse.

"Don't Look Up" and "Trust the Science"

A fairly typical review of "Don't Look Up" reads as follows: "The true power of this film, though, is in its ferocious, unrelenting lampooning of science deniers." I disagree. This film exposes the unfortunate limits of the oft-repeated imperative of the coronavirus and climate-change era: "Trust the Science." McKay and Co. probe a kind of epistemic dysfunction, one that underlies many of our fiercest moral and political disagreements. Contrary to how it's been received, the film speaks to the lack of a generally agreed-upon method for arriving at our beliefs about how the world is and who we should trust.

As the film opens, we are treated to a warm introduction to our two astronomers and shown a montage of the scientific and mathematical processes they use to arrive at their horrific conclusion that a deadly comet will collide with Earth in six months. Surely, you might be thinking, this film tells us exactly whom to believe and trust from the outset! It tells us to “Trust the Scientists,” to “Trust the Science!”

Here’s a preliminary problem with trying to follow that advice. It’s not like we’re all doing scientific experiments ourselves whenever we accept scientific facts. Practically, we have to rely on the testimony of others to tell us what the science says — so who do we believe? Which scientists and which science?

In the film, this decision is straightforward for us. In fact, we're not given much of a choice. But in real life, things are harder. Brilliantly, the complexity of real life is (perhaps unintentionally) reflected in the film itself.

Imagine you’re a sensible person, a Science-Truster. You go to the CDC to get your coronavirus data, to the IPCC to get your climate change facts. If you’re worried about a comet smashing into Earth, you might think to yourself something like, “I’m going to go straight to the organization whose job it is to look at the scientific evidence, study it, and come to conclusions; I’ll trust what NASA says. The head of NASA certainly sounds like a reliable, expert source in such a scenario.” What does the head of NASA tell the public in “Don’t Look Up”? She reports that the comet is nothing to worry about.

Admittedly, McKay provides the audience a clear reason to ignore the NASA head's misleading scientific testimony about the comet. She is revealed to be a political hire and an anesthesiologist rather than an astronomer. "Trust the Science" has a friend, "Trust the Experts," and the head of NASA doesn't qualify as an expert on this topic. So far, so good, for the interpretation of the film as endorsing "Trust the Science" as an epistemic doctrine. It's clear why so many critics misinterpret the film this way.

But, while it's easy enough to miss amid the increasingly frantic plot, the plausibility of "Trust the Science" falls apart as the film progresses. Several Nobel-Prize-winning, Ivy-League scientists throw their support behind the (doomsday-causing) plan of a tech billionaire to bring the wealth of the comet safely to Earth in manageable chunks. They assure the public that the plan is safe. Even one of our two scientific heroes repeats the false but reassuring line on a talk show, to the hosts' delight.

Instead of being a member of the audience with privileged information about whom you should trust, imagine being an average Joe in the film's world at this point. All you could possibly know is that some well-respected scientists claim we need to destroy or divert the comet at all costs. Meanwhile, other scientists, equally if not more well-respected, claim we can safely bring the mineral-rich comet to Earth in small chunks. What does "Trust the Science" advise the average Joe of "Don't Look Up"? Nothing. The advice simply can't be followed. It offers no guidance on what to believe or whom to listen to.

How could you decide what to believe in such a scenario? Assuming you, like most of us, lack the expertise to adjudicate the topic on the scientific merits, you might start investigating the incentives of the scientists on both sides of the debate. You might study who is getting paid by whom, who stands to gain from saying what. And this might even lead you to the truth — that the pro-comet-impact scientists are bought and paid for by the tech-billionaire and are incentivized to ignore, or at least minimize, the risk of mission failure. But this approach to belief-formation certainly doesn’t sound like Trusting the Science anymore. It sounds closer to conspiracy theorizing.

Speaking of conspiracy theories, in a particularly fascinating scene, rioters confront one of our two astronomers with the conspiracy theory that the elites have built bunkers because they don't really believe the comet is going to be survivable (at least, not without a bunker). Our astronomer dismissively tells the mob this theory is false, that the elites are "not that competent." This retort nicely captures the standard rationalistic, scientific response to conspiracy theories: everything can be explained by incompetence, so there's no need to invoke conspiracy. But, as another reviewer has noticed, later on in the film "we learn that Tech CEO literally built a 2,000 person starship in less than six months so he and the other elites could escape." It turns out the conspiracy theory was more or less correct, if not in the exact details. The rationalistic, scientific debunking and dismissal of conspiracy is proven entirely wrong. We would have done better trusting the conspiracy theorist than trusting the scientist.

Ultimately, the demand that we “Trust the Science” turns out to be both un-followable (as soon as scientific consensus breaks down, since we don’t know which science or scientists to listen to), and unreliable (as shown when the conspiracy theorist turns out to be correct). The message this film actually delivers about “Trust the Science” is this: it’s not good enough!

The Moral and Political Importance of “Trust the Science”

Let’s now look at why any of this matters, morally speaking.

Cultures have epistemologies. They have established ways for their members to form beliefs that are widely accepted as the right ways within those cultures. That might mean that people generally accept, for example, a holy text as the ultimate source of authority about what to believe. But in our own society, currently, we lack this. We don’t have a dominant, shared authority or a commonly accepted way to get the right beliefs. We don’t have a universally respected holy book to appeal to, not even a Walter Cronkite telling us “That’s the way it is.” We can’t seem to agree on what to believe or whom to listen to, or even what kinds of claims have weight. Enter “Trust the Science”: a candidate heuristic that just might be acceptable to members of a technologically developed, scientifically advanced, and (largely) secularized society like ours. If our society could collectively agree that, in cases of controversy, everyone should Trust the Science, we might expect the emergence of more of a consensus on the basic facts. And that consensus, in turn, may resolve many of our moral and political disagreements.

This final hope isn't a crazy one. Many of our moral and political disagreements are based on disagreements about the basic facts. Why do Democrats tend to support mandatory masks, vaccines, and other coronavirus-related restrictions, while Republicans tend to oppose them? Much of it is probably explained by the fact that, as a survey of 35,000 Americans found, "Republicans consistently underestimate risks [of coronavirus], while Democrats consistently overestimate them." In other words, the fact that both sides have false beliefs partly explains their moral and political disagreements. Clearly, none of us are doing well at figuring out whom we can trust to give truthful, undistorted information on our own. But perhaps, if we all just followed the "Trust the Science" heuristic, then we would reach enough agreement about the basic facts to make some progress on these moral and political questions.

Perhaps unintentionally, “Don’t Look Up” presents a powerful case against this hopeful, utopian answer to the deep divisions in our society. Trusting the Science can’t play the unifying role we might want it to; it can’t form the basis of a new, generally agreed upon secular epistemic heuristic for our society. “Don’t Look Up” is not the simple “pro-science,” “anti-science-denier” film many have taken it to be. It’s far more complicated, ambivalent, and interesting.

Expertise and the “Building Distrust” of Public Health Agencies

If you want to know something about science, and you don’t know much about science, it seems that the best course of action would be to ask the experts. It’s not always obvious who these experts are, but there are often some pretty easy ways to identify them: if they have a lot of experience, are recognized in their field, do things like publish important papers and win grant money, etc., then there’s a good chance they know what they’re talking about. Listening to the experts requires a certain amount of trust on our part: if I’m relying on someone to give me true information then I have to trust that they’re not going to mislead me, or be incompetent, or have ulterior motives. At a time like this it seems that listening to the scientific experts is more important than ever, given that people need to stay informed about the latest developments with the COVID-19 pandemic.

However, there continues to be a significant number of people who appear to be distrustful of the experts, at least when it comes to matters concerning the coronavirus in the US. Recently, Dr. Anthony Fauci stated that he believed that there was a “building distrust” in public health agencies, especially when it comes to said agencies being transparent with developments in fighting the pandemic. While Dr. Fauci did not put forth specific reasons for thinking this, it is certainly not surprising he might feel this way.

That being said, we might ask: if we know that the experts are the best people to turn to for information about scientific and other complex issues, and if it's well known that Dr. Fauci is an expert, then why is there a growing distrust of him among Americans?

One reason is no doubt political. Indeed, those distrustful of Dr. Fauci have claimed that he is merely "playing politics" when providing information about the coronavirus: some on the political right in the US have expressed skepticism about the severity of the pandemic and the necessity of face masks specifically, and have interpreted Dr. Fauci's messages as an attack on their political views, motivated by differing political interests. Of course, this is an extremely unlikely explanation for Dr. Fauci's recommendations: someone simply disagreeing with you or giving you advice that you don't like is not a good reason to distrust them, especially when they are much more knowledgeable on the subject than you are.

But here we have another dimension to the problem, and something that might contribute to a building distrust: people who disagree with the experts might develop resentment toward said experts because they feel as though their own views are not being taken seriously.

Consider, for instance, an essay titled "How Expert Worship is Ruining Science," recently written by a member of a right-wing think tank. The author, clearly skeptical of Dr. Fauci's recommendations, laments what he takes to be a dismissal of the views of laypersons. While the article itself is chock-a-block with fallacious reasoning, we can identify a few key points that help explain why some are distrustful of the scientific experts in the current climate.

First, there is the concern that the line between experts and "non-experts" is not so sharp. With so much information available to anyone with an internet connection, one might think that, given our ability to do research for ourselves, we should not assume we can so easily separate the experts from the laypersons. Not taking the views of the non-expert seriously, then, means that one might miss out on getting at the truth from an unlikely source.

Second, recent efforts by social media sites like Twitter and Facebook to prevent the spread of misinformation are being interpreted as acts of censorship. Again, the thought is that if I try to express my views on social media, and my post is flagged as false or misleading, then I will feel that my views are not being taken seriously. The reasoning continues: scientific inquiry is meant to be open to objection and criticism, so failing to engage with that criticism, or even to allow it to be expressed, represents bad scientific practice on the part of the experts. As such, we have reason to distrust them.

While this reasoning isn't particularly good, it might help explain the apparent distrust of experts in the US. Indeed, while it is perhaps correct to say that there is not a very sharp distinction between those who are experts and those who are not, it is nevertheless still important to recognize that if an expert as credentialed and experienced as Dr. Fauci disagrees with you, then it is likely your views need to be more closely examined. The thought that scientific progress is incompatible with some views being fact-checked or prevented from being disseminated on social media is also hyperbolic: progress in any field would slow to a halt if it stopped to consider every possible view, and the fact that one specific set of views is not being considered as much as one would like is not an indication that productive debate is not being conducted by the experts.

At the same time, it is perhaps more understandable why those who present information that is flagged as false or misleading may feel a growing sense of distrust of experts, especially when views on the relevant issues are divided along the political spectrum. While Dr. Fauci himself has said that he takes transparency to be a key component of maintaining the public's trust, this is perhaps not the full explanation. There may instead be a fundamental tension between informing the public as well as possible and maintaining its trust, since doing the former will inevitably require not taking seriously everyone who disagrees with the experts.

The Small but Unsettling Voice of the Expert Skeptic

Experts and politicians worldwide have come to grips with the magnitude of the COVID-19 pandemic. Even Donald Trump, once skeptical that COVID-19 would affect the US in a significant way, now admits that the virus will likely take many more thousands of lives.

Despite this agreement, some are still not convinced. Skeptics claim that deaths that are reported as being caused by COVID-19 are really deaths that would have happened anyway, thereby artificially inflating the death toll. They claim that the CDC is complicit, telling doctors to document a death as “COVID-related” even when they aren’t sure. They highlight failures of world leaders like the Director-General of the World Health Organization and political corruption in China. They claim that talk of hospitals being “war zones” is media hype, and they share videos of “peaceful” local hospitals from places that aren’t hot spots, like Louisville or Tallahassee. They point to elaborate conspiracies about the nefarious origins of the novel coronavirus.

What’s the aim of this strikingly implausible, multi-national conspiracy, according to these “COVID-truthers”? Billions of dollars for pharmaceutical companies and votes for tyrannical politicians who want to look like benevolent saviors.

Expert skeptics like COVID-truthers are concerning because they are more likely to put themselves, their families, and their communities at risk by not physically distancing or wearing masks. They are more likely to violate stay-at-home orders and press politicians to re-open commerce before it is safe. And they pass this faulty reasoning on to their children.

While expert skepticism is not new, it is unsettling because expert skepticism often has a kernel of truth. Experts regularly disagree, especially in high-impact domains like medicine. Some experts give advice outside their fields (what Nathan Ballantyne calls “epistemic trespassing”). Some experts have conflicts of interest that lead to research fraud. And some people—seemingly miraculously—defy expert prediction, for example, by surviving a life-threatening illness.

If all this is right, shouldn’t everyone be skeptical of experts?

In reality, most non-experts do okay deciding who is trustworthy and when. This is because we understand—at least in broad strokes—how expertise works. Experts disagree over some issues, but, in time, their judgments tend to converge. Some people do defy expert expectations, but these usually fall within the scope of uncertainty. For example, about 1 in 100,000 cancers go into spontaneous remission. Further, we can often tell who is in a good position to help us. In the case of lawyers, contractors, and accountants, we can find out their credentials, how long they’ve been practicing, and their specialties. We can even learn about their work from online reviews or friends who have used them.

Of course, in these cases, the stakes are usually low. If it turns out that we trusted the wrong person, we might be able to sue for damages or accept the consequences and try harder next time. But as our need for experts gets more complicated, figuring out who is trustworthy is harder. For instance, questions about COVID-19 are:

  • New (Experts struggle to get good information.)
  • Time-sensitive (We need answers more quickly than we have time to evaluate experts.)
  • Value-charged (Our interest in the information biases whom we trust.)
  • Politicized (Information is emotionally charged or distorted, and there are more epistemic trespassers.)

Where does this leave those of us who aren’t infectious disease experts? Should we shrug our shoulders with the COVID-truthers and start looking for ulterior motives?

Not obviously. Here are four strategies to help distill reality from fantasy.

  1. Keep in mind what experts should (and should not) be able to do.

Experts spend years studying a topic, but they cannot see the future. They should be able to explain a problem and suggest ways of solving it. But models that predict the future are educated guesses. In the case of infectious diseases, those guesses depend on assumptions about how people act. If people act differently, the guesses will be inaccurate. But that's how models work.

  2. Look for consensus, but be realistic.

When experts agree on something, that’s usually a sign they’re all thinking about the evidence the same way. But when they face a new problem, their evidence will change continually, and experts will have little time to make sense of it. In the case of COVID-19, there’s wide consensus about the virus that causes it and how it spreads. There is little consensus on why it hurts some people more than others and whether a vaccine is the right solution. But just because there isn’t consensus doesn’t mean there are ulterior motives.

  3. Look for "meta-expert consensus."

When experts agree, it is sometimes because they need to look like they agree, whether due to worries about public opinion or because they want to convince politicians to act. These are not good reasons to trust experts. But on any complex issue, there's more than one kind of expert. And not all experts have conflicts of interest. In the case of COVID-19, independent epidemiologists, infectious disease doctors, and public health experts agree that SARS-CoV-2 is a new, dangerous, contagious threat and that social distancing is the main weapon against that threat. That kind of "meta-expert consensus" is a good check on expertise and good news for novices when deciding what to believe.

  4. Don't double down.

When experts get new evidence, they update their beliefs, even if they were wrong. They don’t force that evidence to fit old beliefs. When prediction models for COVID-related deaths did not bear out, experts updated their predictions. They recognized that predictions can be confounded by many variables, and they used the new evidence to update their models. This is good advice for novices, too.

These strategies are not foolproof. The world is messy, experts are fallible, and we won't always trust the right people. But while expert skepticism is grounded in real limitations of expertise, we don't have to join the ranks of the COVID-truthers. With hard work and a little caution, we can make responsible choices about whom we trust.

Pseudoscience, Antiscience, and Bad Coronavirus Advice

First, it was hydroxychloroquine, which Trump touted as a supposed miracle cure for the coronavirus. In the weeks since, however, research has suggested that there is little reason to think that the antimalarial drug has any effect on the coronavirus whatsoever, and that it may in fact be actively harmful. Not deterred by a lack of any kind of expertise, knowledge, or common sense, Trump has most recently suggested that ultraviolet light or disinfectants may be a fruitful area of research in combatting the pandemic. With regard to the potential for ultraviolet light, Trump stated:

“And then I said, supposing you brought the light inside of the body, which you can do either through the skin or in some other way. And I think you said you’re going to test that too. Sounds interesting.”

And with regard to disinfectants:

“And then I see the disinfectant where it knocks it out in a minute. One minute. And is there a way we can do something like that, by injection inside or almost a cleaning?”

As many news outlets, scientists, and people who have thought about the implications of Trump’s proposals for more than one second have reported, injecting disinfectant is a terrible idea, and while it may cure you of coronavirus, it will only do so by way of having killed you.

Presuming that Trump’s motivations are not, in fact, to recommend to Americans that they commit involuntary suicide, why on earth would he suggest treatment options that are so obviously and blatantly harmful?

If advice to consume something potentially harmful as a miracle cure sounds familiar, it is because it has been a staple in various pseudoscientific communities for a long time. As a rough characterization, we can call a set of beliefs or a practice pseudoscientific if they purport to be scientific, but are not, in fact, supported by any kind of scientific justification or evidence. You have no doubt come across pseudoscience in various forms: homeopathy, for example, is considered by a considerable number of people to be “good science,” despite there being overwhelming evidence that it is a potentially dangerous approach to illness that has zero empirical support.

While pseudoscience at least purports to be scientific, other recent proposed miracle cures for coronavirus would be better categorized as antiscience. Antiscience has been defined as "the outright rejection of the time-tested methods of science as a means of producing valid and useful knowledge," and can be found in many different approaches to treatments of mental or physical conditions. For example, consuming some quantity of bleach has been proposed by some groups as a treatment for conditions including autism, cancer, HIV/AIDS, and malaria. Described as a "miracle mineral solution" or "MMS," it has also been proposed by these groups as a cure for coronavirus.

That such views are not simply pseudoscientific but distinctly antiscientific is evidenced by the justification that those proposing them provide. For instance, when asked to defend the claim that consuming MMS would cure coronavirus despite lacking even a tangential relationship with science, the head of one group proposing the treatment stated that "the FDA has a financial interest in this problem because it's run by people in the pharmaceutical industry." The basis for the claims, then, is not that they are supported by science, but rather that science itself is not to be trusted, and so we need to look elsewhere.

Should we classify Trump's remarks as pseudoscience, or antiscience? While he has a lengthy track record of ignoring, attacking, or contradicting scientific experts, Trump's suggestion that scientists look into the possibility of injecting disinfectant as a treatment appears not to be a rejection of science as a trustworthy enterprise, but rather an amateurish attempt at science itself. The reasoning is perhaps something like the following: if there is evidence that UV light and disinfectants will kill the coronavirus on an external surface, it stands to reason that they will work internally as well.

Part of what makes these remarks so dangerous is not only that they come from a source that many find trustworthy, but that their similarity to the kinds of treatments proposed by antiscientific groups will no doubt make them appealing to members of those groups as well. Bad, pseudoscientific reasoning that also appeals to those who are skeptical of science generally is likely a recipe for disaster.

Although a federal court has recently issued an injunction against the group most prominently marketing MMS, it is much more difficult to stem the tide of misinformation when it is pouring, unfiltered, out of the mouth of the president. Indeed, those who adhere to antiscientific views will no doubt fail to be convinced by good scientific evidence that their views are incorrect. After all, if one takes the scientific enterprise to be corrupt, then there is little reason to change your view on the basis of the word of scientific experts.

That Trump has been, and continues to be, an active cause of the spread of disinformation is well-documented, and so it is hardly surprising that he is a detriment in the battle against coronavirus. And while in an ideal world his views on scientific issues would be completely ignored, that his views are both pseudoscientific and appeal to the antiscientific community means that it is undoubtedly only a matter of time until someone is seriously hurt because of Trump’s advice.

COVID-19 and the Ethics of Belief

The current COVID-19 pandemic will likely have long-term effects that will be difficult to predict. This has certainly been the case with past pandemics. For example, the Black Death may have left a lasting mark on the human genome. Because of variations in human genetics, some people have genes which provide an immunological advantage against certain kinds of diseases. During the Black Death, those who had such genes were more likely to live and those without were more likely to die. For example, a study of Rroma people, whose ancestors migrated to Europe from India one thousand years ago, revealed that those who migrated to Europe possessed genetic differences from their Indian ancestors that were relevant to the immune system response to Yersinia pestis, the bacterium that causes the Black Death. It's possible that COVID-19 could lead to similar kinds of long-term effects. Are there moral conclusions that we can draw from this?

By itself, not really. Despite this being an example of natural selection at work, the fact that certain people are more likely than others to survive certain selection pressures does not indicate any kind of moral superiority. However, one moral lesson that we could take away is a willingness to make sure that our beliefs are well adapted to our environment. For example, a certain gene is neither good nor bad in itself but becomes good or bad through the biochemical interactions within the organism in its environment. Genes that promote survival demonstrate their value to us by being put to (or being capable of being put to) the test of environmental conditions. In the time of COVID-19, one moral lesson the public at large should learn is to avoid wishful thinking and to demonstrate the fitness of our beliefs by putting them to empirical testing. The beliefs that are empirically successful are the beliefs that should carry on and be adopted.

For example, despite the complaints and resistance to social distancing, the idea has begun to demonstrate its value by being put to the test. This week the U.S. revised its model of projected deaths down from a minimum of 100,000 to 60,000 with the changes being credited to social distancing. In Canada, similar signs suggest that social distancing is “flattening the curve” and reducing the number of infections and thus reducing the strain on the healthcare system. On the other hand, stress, fear, and panic may lead us to accept ideas that are encouraging but not tested.

This is why it isn’t a good idea to look to “easy” solutions like hydroxychloroquine as a treatment for COVID-19. As Dr. Fauci has noted, there is no empirical evidence that the drug is effective at treating it. While there are reports of some success, these are merely anecdotal. He notes, “There have been cases that show there may be an effect and there are others to show there’s no effect.” Any benefits the drug may possess are mitigated by a number of factors that are not known. Variations among the population may exist and so need to be controlled for in a clinical study. Just as certain genes may only be beneficial under certain environing conditions, the same may be true of beliefs. An idea may seem positive or beneficial, but that may only be under certain conditions. Ideas and beliefs need to be tested under different conditions to see whether they hold up. While studies are being conducted on hydroxychloroquine, they are not finished.

Relying on wishful thinking instead can be dangerous. The president has claimed that he downplayed the virus at first because he wanted to be "America's cheerleader," but being optimistic or hopeful without seriously considering what one is up against, or while ignoring the warning signs, is a recipe for failure. The optimism that an outbreak wouldn't occur delayed government action on social distancing measures in Italy and in the U.S., and as a result thousands may die who might not have, had the matter been treated more seriously sooner.

As a corollary to the last point, we need to get better at relying on experts. But we need to be clear about who has expertise and why. Experts are people who possess years of experience studying, researching, and investigating ideas in their field to determine which ones hold up to scrutiny and which ones fail. They may not always agree, but this is often owing to disagreements over the assumptions that go into a model, or because different models may not be measuring exactly the same thing. This kind of disagreement is okay, however, because anyone is theoretically capable of examining those assumptions and holding them up to critical scrutiny.

But why do the projections keep changing? Haven't they been wrong? How can we rely on them? The answer is that the projections change as we learn more data. But this is far preferable to believing the same thing regardless of changing findings. It may not be as comforting as getting a single, specific, unchanging answer, but these are still the only ideas that have been informed by empirical testing. Even if an expert is proven wrong, the field can still learn from those mistakes and improve its conclusions.

But it is also important to recognize that non-medical experts cannot give expert medical advice. Even having a Ph.D. in economics does not qualify Peter Navarro to give advice relating to medicine, biochemistry, virology, epidemiology, or public health policy. Only having years of experience in that field will allow you to consider the relevant information necessary for solving technical problems and putting forward solutions best suited to survive the empirical test.

Perhaps we have seen evidence that a broad shift in thinking has already occurred. There are estimates that a vaccine could be six months to a year away. Polling has shown a decrease in the number of people who would question the safety of vaccines. So perhaps the relative success of ending the pandemic will inspire new trust in expert opinion. Or, maybe people are just scared and will later rationalize it.

Adopting the habit of putting our beliefs to the empirical test, the moral consequences of which are very serious right now, is going to be needed sooner rather than later. If and when a vaccine comes along for COVID-19, the anti-vaccination debate may magnify. And, once the COVID-19 situation settles, climate change is still an ongoing issue that could cause future pandemics. Trusting empirically-tested theories and expert testimony more, and relying less on hearsay, rumor, and fake news could be one of the most important moral decisions we make moving forward.

Hydroxychloroquine and the Problem of Expert Disagreement

On April 5th, after promoting the use of an anti-malarial drug to (possibly) help stem the tide of the coronavirus outbreak, President Trump commented, “What do I know? I’m not a doctor, but I have common sense.” According to Trump, even though we still lack conclusive evidence that hydroxychloroquine is an effective treatment for COVID-19, there is no reason not to try using it: the medication has been prescribed for other reasons for years and some preliminary results suggest it might also help diminish the effects of the novel coronavirus.

In contrast, Dr. Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases and member of the Coronavirus Task Force assembled by the White House to combat the outbreak, has repeatedly cautioned against counting on a treatment regimen that, based on what we know at this point, may not actually work; speaking to Fox and Friends on April 3rd, Fauci warned “We’ve got to be careful that we don’t make that majestic leap to assume that this is a knockout drug. We still need to do the kinds of studies that definitely prove whether any intervention is truly safe and effective.”

What should the average American (who, presumably, knows next to nothing about hydroxychloroquine) make of this disagreement? In most cases, we have reason to believe that the President of the United States – whoever that person happens to be – is in a position to be well-informed and trustworthy. Similarly, we have good reasons to think that doctors who have been appointed to lead federal research institutes (like the NIAID) – not to mention medical doctors in general – are believable experts about medications and prescription practices, as well as other matters of healthcare. How is a non-expert supposed to know who should be believed when purported experts disagree?

This is what philosophers sometimes call the “problem of expert disagreement” – if a layperson needs the insight of an expert to make a reasonable judgment about a claim, but two potential experts disagree, how can the layperson decide which expert to believe? Although the answer here might initially seem trivially easy – the layperson should just listen to whichever expert has more relevant knowledge about the claim – things aren’t so simple: how can the layperson know what counts as “relevant knowledge” if they are, in fact, just a layperson?

So, instead, we might look to the credentials of the two experts to see what sort of education or experience they might be employing when making their recommendations. If we know that one expert graduated from a well-respected university that specializes in the relevant field while the other received a degree from a university that does not train experts in the specific domain, then we have some reason to trust the first over the second. Ultimately, though, this test might not be much better than the first option: it requires the layperson to be able to judge the relative merit of credentialing institutions rather than credentialed individuals and this also seems unrealistic to expect actual laypersons to be capable of doing.

It's worth noting, though, that this is exactly what laypersons think they're doing when they simply assert that someone "went to Cornell" or "is the President" – they're citing some person as an authority in virtue of credentials that person holds, regardless of whether those credentials are actually relevant to the question up for debate. In the worst cases, this isn't just some misleading effect of celebrity; it's the fallacious "argument from authority" (or an argumentum ad verecundiam, if you prefer): this fallacy occurs whenever an argument is grounded on someone's authority in an irrelevant area of expertise.

Finally, laypersons might judge between two disagreeing experts by investigating which expert agrees with the standard consensus of other experts in their field. By increasing the sample size of experts beyond just the original two, the layperson can feasibly judge whether or not a particular person is an outlier among their peers. Presumably, a majority of experts will hold the most credibly supported position in the field (indeed, it’s not clear what else would constitute such a position). Of course, there are problems with this method too (experts in a field might agree with each other for all sorts of reasons other than a concern for the truth, for example), but it’s worth noting that this technique can be used even by the most ignorant of laypersons: all we need to know to judge between two experts is which expert’s peer group is bigger.

Typically, the problem of expert disagreement is debated among philosophers interested in social epistemology – the study of how knowledge works in group contexts – but when expert testimony bears on moral matters then ethicists should be concerned with it as well. It’s general epistemological doctrine that thinkers should believe what’s true, but (even if you deny this) it’s straightforwardly (or at least pragmatically) clear that people interested in protecting themselves and their loved ones from a pandemic should listen to the best medical experts available.

All of this is to say that, in the case of hydroxychloroquine and its purported role in fighting COVID-19, Fauci’s expertise (if you’ll forgive me for putting it this way) clearly trumps Trump’s. In the case of the first test, Dr. Fauci’s position as a medical expert gives his opinion immediate priority for medical questions over that of President Trump (whose job often entails seeking the expert advice of specialists like Fauci). For the second test, Dr. Fauci’s educational and professional career are clearly more relevant to medical questions than President Trump’s history of making real estate and television deals – and no amount of “common sense” matters here, either. Finally, although Trump has repeatedly referenced a survey of medical professionals in support of his position, Fauci’s insistence on controlled testing is simply the standard vetting process scientists seek to ensure that new treatment regimens are safe; the group Trump appeals to (based on that survey) numbers around 2300 individuals, whereas Fauci’s is something on the order of “most every medical researcher who has practiced in the last century.” United States presidents command many things, but the scientific method is not one of them.

Which might also be why Trump now appears to be actively censoring Fauci during press briefings, but that’s a topic for a different article.

Knowing What You Don’t Know

It's inevitable that there will be some things that you think you know that you don't actually know: everyone gets overconfident and makes mistakes sometimes, and every one of us has had to occasionally eat crow. However, a recent study reports that a significant number of people in the United States face this problem of thinking that they know more than they do about a number of key scientific issues. One of these beliefs is not terribly surprising: while the existence of human-made climate change is overwhelmingly supported by scientists, beliefs about climate change diverge from the scientific consensus largely along partisan lines.

Another issue that sees a significant amount of divergence between laypeople and scientists, however, is the safety of genetically modified foods, or GM foods for short. The study reports that while there is significant scientific consensus that GM foods are "safe to consume" and "have the potential to provide substantial benefits to humankind," the predominant view amongst the general population in the US is precisely the opposite: while 88% of surveyed scientists said that GM foods were safe, only 37% of laypeople said they thought the same. Participants in the study were asked to rate the strength of their opposition to GM foods, as well as the extent of their concern with such foods. They were then asked to rate how confident they were in their understanding of various issues about GM foods, and were also asked a series of questions testing their general scientific knowledge. The crucial result from the study was that those who expressed the most extreme opposition to GM foods "knew the least" when it came to general scientific knowledge, but thought that "they knew the most." In other words, extreme opponents of GM foods were seriously bad at knowing what they knew and what they didn't know.

The consequences of having extreme attitudes toward issues that one is also overconfident about can be significant. As the Nature study reports, the benefits of GM foods are potentially substantial, being able to provide "increased nutritional content, higher yield per acre, better shelf life and crop disease resistance." Other scientists report numerous other benefits, including aiding those in developing countries in the production of food. However, a number of groups, including Greenpeace, have voiced various objections to the use of GM foods and GMOs (genetically modified organisms) in general, despite the backlash from numerous scientists. While there are certainly many open questions about GM foods and GMOs in general, maintaining one's beliefs in opposition to the consensus of experts seems like an irresponsible thing to do.

Apart from the potential negative consequences of holding such views, failing to properly take account of evidence seems to point to a more personal flaw in one’s character. Indeed, a number of philosophers have argued that humility, i.e. a proper recognition of one’s own strengths and limitations, is a virtue generally worth pursuing. People who lack intellectual humility – those who are overly boastful, or who refuse to acknowledge their own shortcomings regarding what they do not know – often seem to be suffering from a defect in character.

As the authors of the Nature study identify, a “traditional view in the public understanding of scientific literature is that public attitudes that run counter to scientific consensus reflect a knowledge deficit.” As such, a focus of those working in scientific communication has been on the education of the public. However, the authors also note that such initiatives “have met with limited success,” and their study might suggest why: because those with the most extreme viewpoints also tend to believe that they know much more than they do, they will likely prove unreceptive to attempts at education, since they think they know well enough already. Instead, the authors suggest that a “prerequisite to changing people’s views through education may be getting them to first appreciate gaps in their knowledge.”

It’s not clear, though, what it would take to get someone who greatly overestimates how well they understand something to appreciate the actual gaps in their knowledge. Indeed, it seems that it might be just as difficult to try to tell someone who is overly confident that they are lacking information as it is to try to teach them about something they already take themselves to know. There is also a question of whether such people will trust the experts who are trying to point out those gaps: if I take myself to be extremely knowledgeable about a topic then presumably I will consider myself to possess a degree of expertise, in which case it seems unlikely that I will listen to anyone else who calls themselves an authority.

As The Guardian reports, compounding the problem are two cognitive biases that can stand in the way of those with extreme viewpoints from changing their minds: “active information avoidance,” in which information is rejected because it conflicts with one’s beliefs, and the “backfire effect,” in which being presented with information that conflicts with one’s beliefs actually results in one becoming more confident in one’s beliefs, rather than less. All of these factors together make it very difficult to determine how, exactly, people with extreme viewpoints can be convinced that they should change their beliefs in the face of conflicting evidence.

Perhaps, then, part of the problem with those who take an extreme stance on an issue while greatly overestimating their understanding of it is again a problem of character: such individuals might lack a degree of humility, at least when it comes to a specific topic. In addition to attempting to address specific gaps in one's knowledge, then, we might also look toward having people attend to their own intellectual limitations more generally. We are all, after all, subject to biases, false beliefs, and general limitations in our knowledge and understanding, although it is sometimes easy to lose sight of this fact.