
Black-Box Expertise and AI Discourse

image of black box highlighted on stage

It has recently been estimated that new generative AI technology could add up to $4.4 trillion to the global economy. This figure was reported by The New York Times, Bloomberg, Yahoo Finance, The Globe and Mail, and dozens of other news outlets and websites. It’s a big, impressive number, one that some have interpreted as even more reason to get excited about AI, and others as one more entry on a growing list of concerns.

The estimate itself came from a report recently released by consulting firm McKinsey & Company. As the authors of the report prognosticate, AI’s impact will be felt in the range of tasks that can be performed by AI instead of humans: some of these tasks are relatively simple, such as creating “personalized emails,” while others are more complex, such as “communicating with others about operational plans or activities.” Mileage may vary depending on the business, but overall those productivity savings can add up to huge contributions to the economy.

While it’s one thing to speculate, extraordinary claims require extraordinary evidence. Where one would expect to see a rigorous methodology in the McKinsey report, however, we are instead told that the authors referenced a “proprietary database” and “drew on the experience of more than 100 experts,” none of whom are named. In other words, while it certainly seems plausible that generative AI could add a lot of value to the global economy, when it comes to specific numbers, we’re just being asked to take McKinsey’s word for it. McKinsey is perceived by many to be an expert, after all.

It is often perfectly rational to take an expert’s word for it without examining their evidence in detail. Of course, whether McKinsey & Company really are experts when it comes to AI and financial predictions (or, really, anything else for that matter) is up for debate. Regardless, there is something troubling about presenting one’s expert opinion in such a way that one could not investigate it even if one wanted to. Call this phenomenon black-box expertise.

Black-box expertise seems to be common and even welcomed in the discourse surrounding new developments in AI, perhaps due to an immense amount of hype and appetite for new information. The result is an arms race of increasingly hyperbolic articles, studies, and statements from legitimate (and purportedly legitimate) experts, ones that are often presented without much in the way of supporting evidence. A discourse that encourages black-box expertise is problematic, however, in that it can make the identification of experts more difficult, and perhaps lead to misplaced trust.

We can consider black-box expertise in a few forms. For instance, an expert may present a conclusion but not make available their methodology, either in whole or in part – this seems to be what’s happening in the McKinsey report. We can also think of cases in which experts might not make available the evidence they used in reaching a conclusion, or the reasoning they used to get there. Expressions of black-box expertise of these kinds have plagued other parts of the AI discourse recently, as well.

For instance, another expert opinion that has been frequently quoted comes from AI expert Paul Christiano, who, when asked about the existential risk posed by AI, claimed: “Overall, maybe we’re talking about a 50/50 chance of catastrophe shortly after we have systems at the human level.” It’s a potentially terrifying prospect, but Christiano is not forthcoming with his reasoning for landing on that number in particular. While his credentials would lead many to consider him a legitimate expert, the basis of his opinions on AI is completely opaque.

Why is black-box expertise a problem, though? One of the benefits of relying on expert opinion is that the experts have done the hard work of figuring things out so that we don’t have to. This is especially helpful when the matter at hand is complex, and when we don’t have the skills or knowledge to figure it out ourselves. It would be odd, for instance, to demand to see all of the evidence, or to scrutinize the methodology, of an expert who works in a field of which we are largely ignorant, since we wouldn’t really know what we were looking at or how to evaluate it. Lest we be skeptics about everything we’re not personally well-versed in, reliance on expertise necessarily requires some amount of trust. So why should it matter how transparent an expert is about the way they reached their opinion?

The first problem is one of identification. As we’ve seen, a fundamental challenge in evaluating whether someone is an expert from the point of view of a non-expert is that non-experts tend to be unable to fully evaluate claims made in that area of expertise. Instead, non-experts rely on different markers of expertise, such as one’s credentials, professional accomplishments, and engagement with others in their respective areas. Crucially, however, non-experts also tend to evaluate expertise on the basis of factors like one’s ability to respond to criticism, the provision of reasons for one’s beliefs, and one’s ability to explain one’s views to others. These factors are directly at odds with black-box expertise: an expert who does not make their methodology or reasoning apparent makes it difficult for non-experts to identify them as an expert.

A second and related problem with black-box expertise is that it becomes more difficult for others to identify epistemic trespassers: those who have specialized knowledge or expertise in one area but who make judgments on matters in areas where they lack expertise. Epistemic trespassers are, arguably, rampant in AI discourse. Consider, for example, a recent and widely-reported interview with James Cameron, the director of the original Terminator movies. When asked whether he considered artificial intelligence to be an existential risk, he remarked, “I warned you guys in 1984, and you didn’t listen” (referring to the plot of the Terminator movies, in which the existential threat of AI was very tangible). Cameron’s comment makes for a fun headline (one which was featured in an exhausting number of publications), but he is by no measure an expert in artificial intelligence in the year 2023. He may be an accomplished filmmaker, but when it comes to contemporary discussions of AI, he is very much an epistemic trespasser.

Here, then, is a central problem with relying on black-box expertise in AI discourse: expert opinion presented without transparent evidence, methodology, or reasoning can be difficult to distinguish from opinions of non-experts and epistemic trespassers. This can make it difficult for non-experts to navigate an already complex and crowded discourse to identify who should be trusted, and whose word should be taken with a grain of salt.

Given the potential of AI and its tendency to produce headlines that tout it both as a possible savior of the economy and destroyer of the world, being able to identify experts is an important part of creating a discourse that is productive and not simply motivated by fear-mongering and hype. Black-box expertise, like that on display in the McKinsey report and many other commentaries from AI researchers, presents a significant barrier to creating that kind of discourse.

The Democratic Limits of Public Trust in Science

photograph of Freedom Convoy trucks

It isn’t every day that Canada makes international headlines for civil unrest and disruptive protests. But the protests begun last month in Ottawa by the “Freedom Convoy” have inspired similar protests around the world and led the Canadian government to declare a national emergency and seek special powers to handle the crisis. But what exactly is the crisis the nation faces? Is it a far-right, conspiratorial, anti-vaccination movement threatening to overthrow the government? Or is it the government’s infringement on rights in the name of “trusting the experts”?

It is easy to take the view that the protests are wrong. First, we must acknowledge that the position the truckers are taking in protesting the mandate is fairly silly. For starters, even if they succeeded in getting the Canadian federal government to change its position, the United States also requires that truckers be vaccinated to cross the border, so the point is largely moot. I also won’t defend the tactics used in the protests, including the noise, blocking bridges, and so on. However, several people in Canada have pinned part of the blame for the protests on the government, and Justin Trudeau in particular, for politicizing the issue of vaccines and creating a divisive political atmosphere.

First, it is worth noting that Canada has lately relied on restrictive lockdown measures more than other countries have, and much of this is driven by the need to keep hospitals from being overrun. However, this is owing to long-term systemic fragility in the healthcare sector, particularly a lack of ICU beds, prompting many – including one of Trudeau’s own MPs – to call for reform to healthcare funding to expand capacity instead of relying so much on lockdown measures. One would think that this would be a topic of national conversation, with the public wondering why the government hasn’t done anything about this situation since the beginning of the pandemic. But instead, the Trudeau government has chosen to focus only on a policy of increasing vaccination rates, claiming that it is following “the best science” and “the best public health advice.”

Is there, however, a possibility that the government is hoping that, with enough people vaccinated and enough lockdown measures, it can avoid having the healthcare system collapse, wait for the pandemic to blow over, and escape without having to address such long-term problems? Maybe, maybe not. But it certainly casts any advice offered or decisions made by the government in a very different light. Indeed, one of the problems with expert advice (as I’ve previously discussed here, here, and here) is that it is subject to inductive risk concerns, and so the use of expert advice must be democratically informed.

For example, if we look at a model used by Canada’s federal government, we will note how often its projections are based on different assumptions about what could happen. The model itself may be driven by a number of unstated assumptions which may or may not be reasonable, as the sketch below illustrates. It is up to politicians to weigh the risks of getting it wrong, and not simply treat experts as if they are infallible. This is important because the value judgments inherent in risk assessment – about the reasonableness of our assumptions as well as the consequences of getting it wrong and potentially overrunning the healthcare system – are what ultimately determine what restriction measures the government will enact. But this requires democratic debate and discussion. This is where failure of democratic leadership breeds long-term mistrust in expert advice.
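
To see how heavily such projections lean on assumptions, consider a toy example. The following is a bare-bones SIR (susceptible-infected-recovered) simulation of my own devising, not the federal government’s actual model, and the parameter values are illustrative assumptions rather than estimates:

```python
# A toy SIR model, included only to illustrate how projections hinge
# on assumed parameter values. This is NOT the federal model; the
# numbers below are illustrative assumptions, not estimates.

def peak_infected(beta, gamma=0.1, population=1_000_000, days=365):
    """Run a simple daily-step SIR simulation; return the peak number infected."""
    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population  # assumed transmission rate
        new_recoveries = gamma * i                  # assumed recovery rate
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Three modest tweaks to the assumed transmission rate yield wildly
# different projected peaks -- the buried judgment call in the model:
for beta in (0.20, 0.25, 0.30):
    print(f"assumed beta={beta:.2f}: projected peak ~ {peak_infected(beta):,.0f} infected")
```

Whether a given transmission rate is “reasonable” is exactly the kind of judgment that cannot be read off the model itself, which is why treating its output as infallible short-circuits the democratic deliberation the projections are supposed to inform.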

It is reasonable to ask what clear metrics a government might use before ending a lockdown, or whether there is strong evidence for the effectiveness of a vaccine mandate. But for the public, not all of whom enjoy the benefit of an education in science, it is not so clear what is and is not a reasonable question. The natural place for such a discussion would be the elected Parliament, where representatives might press the government for answers. Unfortunately, defense of the protest in any form in Parliament is vilified, with the opposition being told they stand with “people who wave swastikas.” Prime Minister Trudeau has denounced the entire group as a “small fringe minority” with “unacceptable views,” even as “Nazis.” However, some MPs have voiced concern about the tone and rhetoric involved in lumping together everyone who has a doubt about the mandate or vaccine.

This divisive attitude has been called out by one of Trudeau’s own MPs who said that people who question existing policies should not be demonized by their Prime Minister, noting “It’s becoming harder and harder to know when public health stops and where politics begins,” adding, “It’s time to stop dividing Canadians and pitting one part of the population against another.” He also called on the Federal government to establish clear and measurable targets.

Unfortunately, if you ask the federal government a direct question like “Is there a federal plan being discussed to ease out mandates?” you will be told that:

there have been moments throughout the pandemic where we have eased restrictions and those decisions have always been made guided by the best available advice that we’re getting from public health experts. And of course, going forward we will continue to listen to the advice that we get from our public health officials.

This is not democratic accountability (and it is not scientific accountability either). “We’re following the science” or “We’re following the experts” is not good enough. Anyone who actually understands the science will know that this is more a slogan than a meaningful claim.

There is also a bit of history at play. In 1970, Trudeau’s father Pierre invoked the War Measures Act during a crisis that resulted in the kidnapping and murder of a cabinet minister. It resulted in the arrest of hundreds of people without warrant or charge. This week the Prime Minister has invoked the successor to that legislation for the first time in Canadian history because…trucks. The police were having trouble moving the trucks because they couldn’t get tow trucks to help clear blocked border crossings. Now, while we can grant that the convoy has been a nuisance and has illegally blocked bridges, we’ve also seen the convoy complying with court-ordered injunctions on honking and its organizers opposing violence, with no major acts of violence taking place. While there was a rather odd proposal that the convoys could form a “coalition” with the parliamentary opposition to form a new government, I suspect that this is more owing to a failure to understand how Canada’s system of government works than a serious attempt to, as some Canadian politicians would claim, “overthrow the government.”

The point is that this is an issue that started with a government not being transparent and accountable, abusing the democratic process in the name of science, and taking advantage of the situation to demonize and delegitimize the opposition. It is in the face of this, and in the face of uncertainty about the intentions of the convoy, and after weeks of not acting sooner to ameliorate the situation, that the government claims that a situation has arisen that, according to the Emergencies Act, is a “threat to the security of Canada…that is so serious as to be a national emergency.” Not only is there room for serious doubt as to whether the convoy situation has reached such a level, but this is taking place in a context of high tension where the government and the media have demonstrated a willingness to overgeneralize and demonize a minority by lobbing as many poisoning-the-well fallacies as possible and misrepresenting the nature of science. The fact that in this political moment the government seeks greater power is a recipe for abuse of power.

In a democracy, where not everyone enjoys the chance to understand what a model is, how models are made, or how reliable (and unreliable) they can be, citizens have a right to know more about how their government is making use of expert advice in limiting individual freedom. The politicization of the issue using the rhetoric of “following the science,” as well as the government’s slow response and opaque reasoning, have only served to make it more difficult for the public to understand the nature of the problem we face. Our public discourse has been stunted by transforming our policy conversations into a narrow one about vaccination and the risk posed by the “alt right.” But there is a much bigger, much more real problem here: the call to “trust the experts” can be used just as easily as a rallying cry for rationality as it can be a political tool for demonizing entire groups of people to justify taking away their rights.

An End to Pandemic Precautions?

photograph of masked man amongst blurred crowd

I feel like I have bad luck when it comes to getting sick. Every time there’s a cold going around, I seem to catch it, and before I started regularly getting the flu shot, I would invariably end up spending a couple of weeks a year in abject misery. During the pandemic, however, I have not had a single cold or flu. And I’m far from alone: not only is there plentiful anecdotal evidence, but there is solid scientific evidence that there really was no flu season to speak of this year in many parts of the world. It’s easy to see why: the measures that have been recommended for preventing the spread of the coronavirus – social distancing, wearing masks, sanitizing and washing your hands – turn out to be excellent ways of preventing the spread of cold and flu viruses, as well.

Now, parts of the world are gradually opening up again: in some countries social distancing measures and mask mandates are being relaxed, and people are beginning to congregate again in larger numbers. It is not difficult to imagine a near-future in which pumps of hand sanitizer are abandoned, squirt bottles disappear from stores, and the sight of someone wearing a mask becomes a rarity. A return to normal means resuming our routines of socializing and working (although these may end up looking very different going forward), but it also means a return to getting colds and flus.

Does it have to? While aggressive measures like lockdowns have been necessary to help stop the spread of the coronavirus, few, I think, would say that such practices should be continued indefinitely in order to avoid getting sick a couple times a year. On the other hand, it also doesn’t seem overly demanding to ask that people take some new precautions, such as wearing a mask during flu season, or sanitizing and washing their hands on a more regular basis. There are good reasons to continue these practices, at least to some extent: while no one likes being sick with a cold or flu, for some the flu can be more than a minor inconvenience.

So, consider this claim: during the course of the COVID-19 pandemic, we have had a moral obligation to do our part in preventing its spread. This is not an uncontroversial claim: some have argued that personal liberties outweigh any duty one might have towards others when it comes to them getting sick (especially when it comes to wearing masks), and some have argued that the recommended measures mentioned above are ineffective (despite the scientific evidence to the contrary). I don’t think either of these arguments is very good; that’s not, however, what I want to argue here. Instead, let’s consider a different question: if it is, in fact, the case that we have had (and continue to have) moral obligations to take measures to help prevent the spread of coronavirus, do such obligations extend to the diseases – like colds and flus – that will return after the end of the pandemic? I think the answer is: yes. Kind of.

Here’s what this claim is not: it is not the claim that social distancing must last forever, that you have to wear a mask everywhere forever, or that you can never eat indoors, or have a beer on a patio, or go into a shop with more than a few people at a time, etc. Implementing these restrictions in perpetuity in order to prevent people from getting colds and flus seems far too demanding.

Here’s what the claim is: there are much less-demanding actions that one ought to take in order to help stop the spread of common viruses in times when the chance of contracting such a virus is high (e.g., cold and flu season). For instance, you have no doubt acquired a good number of masks and a good quantity of hand sanitizer over the past year-and-change, and have likely become accustomed to using them. They are, I think, merely a mild inconvenience: I doubt that anyone actively enjoys wearing a mask on the subway, for example, or squirting their hands with sanitizer every time they go in and out of a store, but it’s a small price to pay to help prevent the spread of viruses.

In addition, while in pre-corona times there was perhaps a social stigma against wearing medical masks in public, it seems likely that we’ve all gotten used to seeing people wearing masks by now. Indeed, in many parts of the world it is already commonplace for people to wear masks during cold and flu season, or when they are sick or worried that people they spend time with are sick. That such practices have been ubiquitous in some countries is reason to think that they are not a terrible burden.

There is, of course, debate about which practices are most effective at preventing the spread of other kinds of viruses. Some recent data suggest that while masks can be effective at helping reduce the spread of the flu, perhaps the most effective measures have been ones pertaining to basic hygiene, especially washing your hands. Given that we have become much more cognizant of such measures during the pandemic, it is reasonable to think that it would not be too demanding to expect that people continue to be as conscientious going forward.

Again, note that this is a moral claim, and not, say, a claim about what laws or policy should be. Instead, it is a claim that some of the low-cost, easily accomplishable actions that have helped prevent the spread of a very deadly disease should continue when it comes to preventing the spread of less-deadly ones. Ultimately, returning to normal does not mean having to give up on some of the good habits we’ve developed during the course of the pandemic.

Expertise and the “Building Distrust” of Public Health Agencies

photograph of Dr. Fauci speaking on panel with American flag in background

If you want to know something about science, and you don’t know much about science, it seems that the best course of action would be to ask the experts. It’s not always obvious who these experts are, but there are often some pretty easy ways to identify them: if they have a lot of experience, are recognized in their field, do things like publish important papers and win grant money, etc., then there’s a good chance they know what they’re talking about. Listening to the experts requires a certain amount of trust on our part: if I’m relying on someone to give me true information then I have to trust that they’re not going to mislead me, or be incompetent, or have ulterior motives. At a time like this it seems that listening to the scientific experts is more important than ever, given that people need to stay informed about the latest developments with the COVID-19 pandemic.

However, there continues to be a significant number of people who appear to be distrustful of the experts, at least when it comes to matters concerning the coronavirus in the US. Recently, Dr. Anthony Fauci stated that he believed that there was a “building distrust” in public health agencies, especially when it comes to said agencies being transparent with developments in fighting the pandemic. While Dr. Fauci did not put forth specific reasons for thinking this, it is certainly not surprising he might feel this way.

That being said, we might ask: if we know that experts are the best people to turn to for information about scientific and other complex issues, and if it’s well known that Dr. Fauci is an expert, then why is there a growing distrust of him among Americans?

One reason is no doubt political. Indeed, those distrustful of Dr. Fauci have claimed that he is merely “playing politics” when providing information about the coronavirus: some on the political right in the US have expressed skepticism about the severity of the pandemic and the necessity of face masks specifically, and have interpreted the messages from Dr. Fauci as an attack on their political views, motivated by differing political interests. Of course, this is an extremely unlikely explanation for Dr. Fauci’s recommendations: someone simply disagreeing with you or giving you advice that you don’t like is not a good reason to find them untrustworthy, especially when they are much more knowledgeable on the subject than you are.

But here we have another dimension to the problem, and something that might contribute to a building distrust: people who disagree with the experts might develop resentment toward said experts because they feel as though their own views are not being taken seriously.

Consider, for instance, an essay called “How Expert Worship is Ruining Science,” recently written by a member of a right-wing think tank. The author, clearly skeptical of the recommendations of Dr. Fauci, laments what he takes to be a dismissal of the views of laypersons. While the article itself is chock-a-block with fallacious reasoning, we can identify a few key points that can help explain why some are distrustful of the scientific experts in the current climate.

First, there is the concern that the line between experts and “non-experts” is not so sharp. For instance, with so much information available to anyone with an internet connection, one might think that the ability to do research for oneself means that we cannot so easily separate the experts from the laypersons. Not taking the views of the non-expert seriously, then, means that one might miss out on getting at truth from an unlikely source.

Second, recent efforts by social media sites like Twitter and Facebook to prevent the spread of misinformation are being interpreted as acts of censorship. Again, the thought is that if I try to express my views on social media, and my post is flagged as false or misleading, then I will feel that my views are not being taken seriously. The reasoning continues: scientific inquiry is meant to be open to objection and criticism, and so failing to engage with that criticism, or even to allow it to be expressed, represents bad scientific practice on the part of the experts. As such, we have reason to distrust them.

While this reasoning isn’t particularly good, it might help explain the apparent distrust of experts in the US. Indeed, while it is perhaps correct to say that there is not a very sharp distinction between those who are experts and those who are not, it is still important to recognize that if an expert as credentialed and experienced as Dr. Fauci disagrees with you, then your views likely need to be more closely examined. The thought that scientific progress is incompatible with some views being fact-checked or prevented from being disseminated on social media is also hyperbolic: progress in any field would slow to a halt if it stopped to consider every possible view, and the fact that one specific set of views is not being considered as much as one would like is not an indication that productive debate is not being conducted by the experts.

At the same time, it is perhaps more understandable why those who are presenting information that is being flagged as false or misleading may feel a growing sense of distrust of experts, especially when views on the relevant issues are divided along the political spectrum. While Dr. Fauci himself has expressed that he takes transparency to be a key component in maintaining the trust of the public, this is perhaps not the full explanation. There may instead be a fundamental tension between trying to best inform the public while simultaneously maintaining their trust, since doing so will inevitably require not taking seriously everyone who disagrees with the experts.

Waiting for a Coronavirus Vaccine? Watch Out for Self-Deception

photograph of happy smiley face on yellow sticky note surrounded by sad unhappy blue faces

In the last few months, as it has become clear that the coronavirus won’t be disappearing anytime soon, there has been a lot of talk about vaccines. The U.S. has already started several trials, and both Canada and Europe have followed suit. The lack of a vaccine has made even more evident how challenging it is to coexist with the pandemic. Aside from the more extreme consequences that involve hospitalizations, families and couples have been separated for what is a dramatic amount of time, and some visas have been halted. Unemployment rates have hit record numbers, with what is predicted to be a slow recovery. Restaurants, for example, have recently reopened, yet it is unclear what their future will be when the patio season soon comes to an end. With this in mind, many (myself included) are hoping that a vaccine will come, the sooner the better.

But strong interest in a vaccine raises the worry of how this interest influences what we believe, and in particular how we examine evidence that doesn’t fit our hopes. The worry is that one might indulge in self-deception. What do I mean by this? Let me give you an example that clarifies what I have in mind.

Last week, I was talking to my best friend, who is a doctor and, as such, typically defers to experts. When my partner and I told my friend of our intention to get married, she reacted enthusiastically. Unfortunately, the happiness of the moment was interrupted by the realization that, due to the current coronavirus pandemic, the wedding would need to take place after the distribution of a vaccine. Since then, my friend has repeatedly assured me that there will be a vaccine as early as October, on the grounds that Donald Trump has guaranteed it will be the case. When I relayed to her information coming instead from Dr. Anthony Fauci, who believes the vaccine will be available only in 2021, my friend embarked on effortful mental gymnastics to justify (or better: rationalize) why Trump was actually right.

There is an expression commonly used in Italian called “mirror climbing.” Climbing a mirror is an obviously effortful activity, and it is also bound to fail, because the mirror’s slippery surface makes it easy to fall. Italians use the expression metaphorically to denote the struggle of someone attempting to justify a proposition that by their own lights is not justifiable. My friend was certainly guilty of some mirror climbing, and she is a clear example of someone who, driven by the strong desire to see her best friend getting married, self-deceives that the vaccine will be available in October. This is in fact how self-deception works. People don’t simply believe what they want, for that is psychologically impossible. You couldn’t possibly make yourself believe that the moon was made of cheese, even if you wanted to. Beliefs are just not directly controllable the way actions are. Rather, it is our wishes, desires, and interests that influence the way we come to believe what we want by shaping how we gather and interpret evidence. We might, for example, give more importance to news that aligns with our hopes and scroll past headlines that question what we would like to be true. We might give weight to a teaspoon of evidence coming from a source we wouldn’t normally trust, and withhold credibility from evidence coming from sources we know to be reliable.

You might ask, though: how is my friend’s behavior different from that of someone who is simply wrong rather than self-deceived? Holding a belief that turns out to be false usually happens by mistake, and as a result, when people correct us, we don’t have problems revising that belief. Self-deception, instead, doesn’t happen out of mere error; it is driven by a precise motivation — desires, hopes, fears, worries, and so on — which biases the way we collect and interpret evidence in favor of that belief. Consider my friend again. She is a doctor, and as such she always trusts experts. Now, regardless of political views, Trump, contrary to Dr. Fauci, is not an expert in medicine. Normally, my friend knows better than to trust someone who is not an expert; the only instance where she doesn’t is one where there is a real interest at stake. This isn’t a coincidence; the belief that there will be a vaccine in October is fueled by a precise hope. This is a problem because our beliefs should be guided by evidence, not wishes. Beliefs, so to speak, are not designed to make us feel better (contrary to desires, for example). They are supposed to match reality, and as such be a tool that we use to navigate our environment. Deceiving ourselves that something is the case when it’s not inevitably leads to disappointment, because reality has a way of intruding on our hopes and catching up with us.

Given this, what can we do to prevent falling into the grip of self-deception? Be vigilant. We are often aware of our wishes and hopes (just as you are probably aware now that you’re hoping a vaccine will be released soon). Once we are aware of our motivational states, we should slow down our thoughts and be extra careful when considering evidence in favor of what we hope is true. This is the first step in protecting ourselves from self-deception.

Anti-Maskers and the Dangers of Collective Endorsement

photograph of group of hands raised

Tensions surrounding the coronavirus pandemic continue to run high, especially in parts of America in which discussions over measures to control the spread of the virus have become something of a political issue. Recently, some of these tensions erupted in the form of protests by “anti-maskers”: in Florida, for example, a group of such individuals marched through a Target, telling people to take off their masks, and playing the song “We’re Not Gonna Take It.” Presumably the “it” that they were no longer interested in taking pertained to what they perceived to be a violation of personal liberties, as they felt they were being forced to wear masks against their wills. While evidence regarding the effectiveness of masks at keeping oneself and others safe continues to grow, there nevertheless remains a vocal minority that believes otherwise.

A lot of thought has been put into the problem of why people continually ignore good scientific evidence, especially when the consequences of doing so are potentially dire. There is almost certainly no singular, easy answer to the problem. However, there is one potential reason that I think is worth focusing on, namely that anti-maskers, like many others who reject the best available scientific evidence on a number of issues, tend to trust sources they find on social media instead of more reputable outlets. For instance, one investigation into why anti-maskers hold their beliefs pointed to the effects of Facebook groups in which such beliefs are discussed and shared. Indeed, despite Facebook’s efforts to contain the spread of such misinformation, anti-masker Facebook groups remain easy to find.

However, the question remains: why would anyone believe a group of random Facebook users over scientific experts? The answer to this is no doubt multifaceted as well. But one reason may come down to a matter of trust, and the fact that the ways we determine who is trustworthy work differently online than they do in other contexts.

As frequent internet users will no doubt already know, it can often be difficult to identify trustworthy sources of information online. One reason is that the internet offers varying degrees of anonymity: the consequence is that one will potentially not have much information about the person one is talking with, especially given the possibility that people can fabricate aspects of their identities in online environments. Furthermore, interacting with others through text boxes on a computer screen is a very different kind of interaction than one that occurs face-to-face. For instance, researchers have shown that there are different “communication cues” that we pick up on when interacting with each other, including verbal cues like tone of voice, volume of speech, and rate of speaking, and visual cues like facial expressions and body language. These kinds of cues are important when we make judgments about whether we should believe what the other person is saying, and they are largely absent in a lot of online communication.

With less information about each other to go on when interacting online, we tend to look to other sources of information when determining whom to trust. One thing internet users tend to appeal to is endorsement. For instance, when reading things on social media or message board sites we tend to put more trust in those posts that have the most hearts, or likes, or upvotes. This is perhaps most apparent when you’re trying to decide what product to buy: we gravitate towards those with not only the highest ratings, but the most high ratings (something with one 5-star review doesn’t mean much, but a product with hundreds of high reviews means a lot more; the sketch below makes this intuition concrete). The same can be the case when it comes to determining which information to believe: if your post has thousands of endorsements then I’m probably going to at least give it a look, whereas if it has very few, I’ll probably pass it by.
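
To see why the size of the endorsement pool matters, here is a small illustration. The smoothing formula is a generic Laplace-style average chosen for this example, not any platform’s actual ranking algorithm:

```python
# Why hundreds of good reviews outweigh one perfect review: pull each
# product's average toward a neutral prior, so that tiny samples
# cannot dominate. (A generic illustration, not a real platform's formula.)

def smoothed_rating(total_stars, num_reviews, prior_mean=3.0, prior_weight=10):
    """Average rating pulled toward a neutral prior; the pull fades as reviews accumulate."""
    return (total_stars + prior_mean * prior_weight) / (num_reviews + prior_weight)

one_perfect_review = smoothed_rating(total_stars=5, num_reviews=1)
many_good_reviews = smoothed_rating(total_stars=4.6 * 300, num_reviews=300)

print(f"one 5-star review:         {one_perfect_review:.2f}")  # about 3.2
print(f"300 reviews averaging 4.6: {many_good_reviews:.2f}")   # about 4.5
```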

There is good reason to trust information that is highly endorsed. As noted above, it can be hard to determine who to trust online because it’s not clear whether someone is really who they say they are. It’s easy for me to join a Facebook group and tell everyone that I’m an epidemiologist, for example, and without having access to any more information about me you’ve got little other than my word to go on. Something that’s much harder to fake, though, is a whole bunch of likes, or hearts, or upvotes. So the thought is that if enough other people endorse something, that’s good reason to trust it. So here’s one reason why people getting their information off social media might trust that information more than that coming from the experts, namely because it is highly endorsed by many other members of their group.

At the same time, people might be more willing to believe those with whom they interact online in virtue of the fact that they are interacting with them. For instance, when a scientific body like the CDC tells you that you should be wearing a mask, information is traveling in only one direction. When interacting with groups online, though, it can be much easier to trust those you are engaging with, rather than merely deferring to. Again, this is one of the problems raised by online communication: while there is lots of good information available, it can be easier to trust those with whom one can engage, as opposed to those from whom one simply takes orders.

Again, the fact that the problem is complex and multifaceted means that there will not be a one-size-fits-all solution. That said, it is worthwhile to think about how it might be possible for those with the good information to establish relationships of trust with those who need it, given the unique qualities of online environments.

On “Doing Your Own Research”

photograph of army reserve personnel wearing neck gaiter at covid testing site

In early August, American news outlets began to circulate a surprising headline: neck gaiters — a popular form of face covering used by many to help prevent the spread of COVID-19 — could reportedly increase the infection rate. In general, face masks work by catching respiratory droplets that would otherwise contaminate a virus-carrier’s immediate environment (in much the same way that traditional manners have long-prescribed covering your mouth when you sneeze); however, according to the initial report by CBS News, a new study found that the stretchy fabric typically used to make neck gaiters might actually work like a sieve to turn large droplets into smaller, more transmissible ones. Instead of helping to keep people safe from the coronavirus, gaiters might even “be worse than no mask at all.”

The immediate problem with this headline is that it’s not true; but, more generally, the way that this story developed evidences several larger problems for anyone hoping to learn things from the internet.

The neck gaiter story began on August 7th when the journal Science Advances published new research on a measurement test for face mask efficacy. Interested by the widespread use of homemade face-coverings, a team of researchers from Duke University set out to identify an easy, inexpensive method that people could use at home with their cell phones to roughly assess how effective different commonly-available materials might be at blocking respiratory droplets. Importantly, the study was not about the overall efficacy rates of any particular mask, nor was it focused on the length of time that respiratory droplets emitted by mask-wearers stayed in the air (which is why smaller droplets could potentially be more infectious than larger ones); the study was only designed to assess the viability of the cell phone test itself. The observation that the single brand of neck gaiter used in the experiment might be “counterproductive” was an off-hand, untested suggestion in the final paragraph of the study’s “Results” section. Nevertheless, the dramatic-sounding (though misleading) headline exploded across the pages of the internet for weeks; as recently as August 20th, The Today Show was still presenting the untested “result” of the study as if it were a scientific fact.

The ethics of science journalism (and the problems that can arise from sensationalizing and misreporting the results of scientific studies) is a growing concern, but it is particularly salient when the reporting in question pertains to an ongoing global pandemic. While it might be unsurprising that news sites hungry for clicks ran a salacious-though-inaccurate headline, it is far from helpful and, arguably, morally wrong.

Furthermore, the kind of epistemic malpractice entailed by underdeveloped science journalism poses larger concerns for the possibility of credible online investigation more broadly. Although we have surrounded ourselves with technology that allows us to access the internet (and the vast amount of information it contains), it is becoming ever-more difficult to filter out genuinely trustworthy material from the melodramatic noise of websites designed more for attracting attention than disseminating knowledge. As Kenneth Boyd described in an article here last year, the algorithmic underpinnings of internet search engines can lead self-directed researchers into all manner of over-confident mistaken beliefs; this kind of structural issue is only exacerbated when the inputs to those algorithms (the articles and websites themselves) are also problematic.

These sorts of issues cast an important, cautionary light on a growing phenomenon: the credo that one must “Do Your Own Research” in order to be epistemically responsible. Whereas it might initially seem plain that the internet’s easily-accessible informational treasure trove would empower auto-didacts to always (or usually) draw reasonable conclusions about whatever they set their minds to study, the epistemic murkiness of what can actually be found online suggests that reality is more complicated. It is not at all clear that non-expert researchers who are ignorant of a topic can, on their own, justifiably identify trustworthy information (or information sources) about that topic; but, on the other hand, if a researcher does have enough knowledge to judge a claim’s accuracy, then it seems like they don’t need to be researching the topic to begin with!

This is a rough approximation of what philosophers sometimes call “Meno’s Paradox” after its presentation in the Platonic dialogue of that name. The Meno discusses how inquiry works and highlights that uninformed inquirers have no clear way to recognize the correct answer to a question without already knowing something about what they are questioning. While Plato goes on to spin this line of thinking into a creative argument for the innateness of all knowledge (and, by extension, the immortality of the soul!), subsequent thinkers have often taken different approaches to argue that a researcher only needs to have partial knowledge either of the claim they are researching or of the source of the claim they are choosing to trust in order to come to justified conclusions.

Unfortunately, “partial knowledge” solutions have problems of their own. On one hand, human susceptibility to a bevy of psychological biases makes a researcher’s “partial” understanding of a topic a risky foundation for subsequent knowledge claims; it is exceedingly easy, for example, for the person “doing their own research” to be unwittingly led astray by their unconscious prejudices, preconceptions, or the pressures of their social environment. On the other hand, grounding one’s confidence in a testimonial claim on the trustworthiness of the claim’s source seems to (in most cases) simply push the justification problem back a step without really solving much: in much the same way that a non-expert cannot make a reasonable judgment about a proposition, that same non-expert also can’t, all by themselves, determine who can make such a judgment.

So, what can the epistemically responsible person do online?

First, we must cultivate an attitude of epistemic humility (of the sort summarized by Plato’s infamous comment “I know that I know nothing”) — something which often requires us to admit not only that we don’t know things, but that we often can’t know things without the help of teachers or other subject matter experts doing the important work of filtering the bad sources of information away from the good ones. All too often, “doing your own research” functionally reduces to a triggering of the confirmation bias and lasts only as long as it takes to find a few posts or videos that satisfy what a person was already thinking in the first place (regardless of whether those posts/videos are themselves worthy of being believed). If we instead work to remember our own intellectual limitations, both about specific subjects and the process of inquiry writ large, we can develop a welcoming attitude to the epistemic assistance offered by others.

Secondly, we must maintain an attitude of suspicion about bold claims to knowledge, especially in an environment like the internet. It is a small step from skepticism about our own capacities for inquiry and understanding to skepticism about that of others, particularly when we have plenty of independent evidence that many of the most accessible or popular voices online are motivated by concerns other than the truth. Virtuous researchers have to focus on identifying and cultivating relationships with knowledgeable guides (who can range from individuals to their writings to the institutions they create) on whom they can rely when it comes time to ask questions.

Together, these two points lead to a third: we must be patient researchers. Developing epistemic virtues like humility and cultivating relationships with experts that can overcome rational skepticism — in short, creating an intellectually vibrant community — takes a considerable amount of effort and time. After a while, we can come to recognize trustworthy informational authorities as “the ones who tend to be right, more often than not” even if we ourselves have little understanding of the technical fields of those experts.

It’s worth noting here, too, that experts can sometimes be wrong and nevertheless still be experts! Even specialists continue to learn and grow in their own understanding of their chosen fields; this sometimes produces confident assertions from experts that later turn out to be wrong. So, for example, when the Surgeon General urged people in February to not wear face masks in public (based on then-current assumptions about the purportedly low risk of asymptomatic patients) it made sense at the time; the fact that those assumptions later proved to be false (at which point the medical community, including the epistemically humble Surgeon General, then recommended widespread face mask usage) is simply a demonstration of the learning/research process at work. On the flip side, choosing to still cite the outdated February recommendation simply because you disagree with face mask mandates in August exemplifies a lack of epistemic virtue.

Put differently, briefly using a search engine to find a simple answer to a complex question is not “doing your own research” because it’s not research. Research is somewhere between an academic technique and a vocational aspiration: it’s a practice that can be done with varying degrees of competence and it takes training to develop the skill to do it well. On this view, an “expert” is simply someone who has become particularly good at this art. Education, then, is not simply a matter of “memorizing facts,” but rather a training regimen in performing the project of inquiry within a field. This is not easy, requires practice, and still often goes badly when done in isolation — which is why academic researchers rely so heavily on their peers to review, critique, and verify their discoveries and ideas before assigning them institutional confidence. Unfortunately, this complicated process is far less sexy (and far slower) than a scandalous-sounding daily headline that oversimplifies data into an attractive turn of phrase.

So, poorly-communicated science journalism not only undermines our epistemic community by directly misinforming readers, but also by perpetuating the fiction that anyone is an epistemic island unto themselves. Good reporting must work to contextualize information within broader conversations (and, of course, get the information right in the first place).

Please don’t misunderstand me: this isn’t meant to be some elitist screed about how “only the learned can truly know stuff, therefore smart people with fancy degrees (or something) are best.” If degrees are useful credentials at all (a debatable topic for a different article!) they are so primarily as proof that a person has put in considerable practice to become a good (and trustworthy) researcher. Nevertheless, the Meno Paradox and the dangers of cognitive biases remain problems for all humans, and we need each other to work together to overcome our epistemic limitations. In short: we would all benefit from a flourishing epistemic community.

And if we have to sacrifice a few splashy headlines to get there, so much the better.

To Wear a Mask or Not During the COVID-19 Pandemic

photograph of groups of people walking on busy street wearing protective masks

The COVID-19 pandemic is a worldwide phenomenon that has disrupted people’s lives and the economy. Currently, the United States leads the world in COVID cases and, as of this writing, has the largest number of confirmed deaths and ranks eighth in deaths per capita due to the virus. There are a number of factors that might explain why the numbers are so high: the United States’ failed leadership in tackling the virus back in December/January, the government’s response to handling the crisis once the virus spread throughout the United States, states’ opening up too early — and too quickly — in May and June, and people’s unwillingness to take the pandemic seriously by not social distancing or wearing face masks. Let us focus on the last point. Why the unseriousness? As soon as the pandemic hit, conspiracy theories regarding the virus spread like — well, like the virus itself. Some are so fully convinced of a conspiracy theory that their beliefs may be incorrigible. Others seem only to doubt mask-wearing as a solution.

Part of the unwillingness to wear face masks is due to the CDC and WHO having changed their positions on masks as a preventative measure. Early in the pandemic, the U.S. Surgeon General claimed that masks were ineffective, but now both the CDC and the WHO recommend wearing them.

Why this reversal? We are facing a novel virus. Science, as an institution, works through confirming and disconfirming hypotheses. Scientists find evidence that supports a hypothesis. As time goes on, they gather new evidence disconfirming the original hypothesis. And as time continues further, they gather still more information and discover they were too quick to disconfirm the hypothesis. Because this virus is so new, scientists are working with limited knowledge. There will inevitably be back-and-forth shifts on what works and what doesn’t. Scientists must adapt to new information. Citizens, however, may interpret this as grounds for skepticism about wearing masks, since the CDC and WHO cannot seem to make up their minds. And so people may think: “perhaps wearing masks does prevent the spread of the virus; perhaps it doesn’t. So if we don’t know, then let’s just live our lives as we did.” Indeed, roughly 14% of Americans state they never wear masks. But what if there were a practical argument that might encourage such skeptics to wear a mask, one that didn’t directly rely on the evidence that masks do prevent spreading the virus? What if, despite the skepticism, wearing masks could still be shown to be in one’s best interest? Here, I think using Pascal’s wager can be helpful.

To refamiliarize ourselves, Pascal’s wager comes from Blaise Pascal, a 17th-century French mathematician and philosopher, who wagered that it’s best to believe in God without relying on direct evidence that God exists. To put it succinctly, either God exists or He doesn’t. How shall we decide? Well, we either believe God exists or we believe He doesn’t exist. So then, there are four possibilities:

                      God exists                 God does not exist
Belief in God         1. +∞ (infinite gain)      2. − (finite loss)
Disbelief in God      4. −∞ (infinite loss)      3. + (finite gain)

For 1., God exists and we believe God exists. Here we gain the most, since we gain an infinitely happy life. If we win, we win everything. For 2., we’ve only lost a little, since we simply believed and lost the truth of the matter. In fact, the loss is so minimal (compared to the infinite) that we lose nothing. For 3., we have gained a little. While we have the truth, there is no infinite happiness. And compared to the infinite, we’ve won nothing. And finally, for 4., we have lost everything, since we don’t believe in God and face an eternity of divine punishment. By looking at the odds, we should bet on God existing, because doing so means we win everything and lose nothing. If God exists and we don’t believe, we lose everything and win nothing. If God doesn’t exist, then, compared to the infinite, the gain or loss is insignificant. So by these odds, believing in God is our best bet, since it gives us our chance of winning, and not believing gives us our chance of losing.
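
To make the arithmetic explicit, here is a minimal expected-value sketch of the wager. The symbols are my own shorthand rather than Pascal’s: p is whatever nonzero probability one assigns to God’s existence, c the finite cost of mistaken belief, and g the finite gain of correct disbelief:

\[
E[\text{believe}] = p \cdot (+\infty) + (1 - p) \cdot (-c) = +\infty
\]
\[
E[\text{disbelieve}] = p \cdot (-\infty) + (1 - p) \cdot (+g) = -\infty
\]

So long as p > 0, belief dominates: this is just the “win everything, lose nothing” reasoning above put in symbols.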

There have been criticisms and responses to Pascal’s wager, but I still find this wager useful as an analogy when applied to mask-wearing. Consider:

                        Masks prevent the           Masks don’t prevent the
                        virus’s spread              virus’s spread
Belief in masks         (1) Big gain: people’s      (2) Finite loss: we wasted
preventing spread       lives are saved and we      some time wearing a piece
                        can flatten the curve       of cloth over our faces
                        easily.                     for a few months.
Disbelief in masks      (4) Big loss: we keep       (3) Finite gain: we got
preventing spread       spreading the virus,        the truth of the matter.
                        hospitals are overloaded
                        with COVID cases, and
                        more people die.

For (1), we have a major gain. If wearing masks prevents the spread of the virus and we do wear masks, then we help flatten the curve, lessen the number of people contracting the virus, and help prevent harms and deaths due to COVID-19. (One model predicts that wearing masks could save up to 33,000 American lives.) This is the best outcome. Suppose (2). If masks do nothing, or only minimally prevent the spread of the virus, yet we continue to wear them, we have wasted very little: wearing a covering over our faces is simply an inconvenience. Studies show that we don’t lose oxygen by wearing a face mask. And leading experts are hopeful that we may get a vaccine sometime next year; there are promising results from clinical phase trials. And so wearing masks, a small inconvenience in our lives, is not a major loss. After all, we can still function in our lives with face masks. People who wear masks as part of their profession (e.g., doctors, miners, firefighters, military) still carry out their duties. Indeed, their masks help them fulfill their duties. The inconvenience is a minor loss compared to saving lives and preventing the spread of the virus as stated in (1).

Suppose (3). If (3) is the case, then we’ve avoided an inconvenience, but this advantage is nothing compared to the cost that (4) represents. While we don’t have to wear a mask, celebrating being rid of an inconvenience pales in comparison to losing lives unnecessarily and unknowingly spreading the virus. Compared to what we stand to lose in (4), in (3) we’ve won little.

Suppose (4). If we adopt (4) as our strategy, we’ve doomed ourselves: we make others sicker, we continually spread the virus, and hospitals must turn away sick people, leading to more preventable deaths. We’ve lost many lives and allowed the sickness to spread exponentially, all because we didn’t wear a mask.

Note that we haven’t scientifically proved that masks work (although I strongly suspect that they do). Rather, we’re doing a rational cost-benefit analysis to determine the best strategy. Wearing masks would be in our best interest. If we’re wrong, we’ve suffered a minor inconvenience. But if we’re right, we’ve avoided contributing to the spread of COVID-19, which has wreaked havoc on lives all over the globe. Surely, it’s better to bet on wearing masks than not to.
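
For readers who want the cost-benefit version made concrete, here is the same calculation with finite stakes. Every number below is invented for illustration; the argument needs only that the loss in (4) dwarfs the inconvenience in (2).

    # The mask wager with finite stakes; all numbers are illustrative.
    def expected_value(p_masks_work, value_if_they_work, value_if_they_dont):
        return (p_masks_work * value_if_they_work
                + (1 - p_masks_work) * value_if_they_dont)

    p = 0.5  # the skeptic's "we just don't know" stance
    ev_wear = expected_value(p, 1000, -1)   # (1) lives saved vs. (2) inconvenience
    ev_skip = expected_value(p, -1000, 1)   # (4) mass harm vs. (3) being right
    print(ev_wear, ev_skip)  # 499.5 -499.5: wearing wins under uncertainty

On these, or almost any, numbers, wearing masks has the higher expected value even for someone who treats the evidence as a coin flip.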

The Small but Unsettling Voice of the Expert Skeptic

photograph of someone casting off face mask

Experts and politicians worldwide have come to grips with the magnitude of the COVID-19 pandemic. Even Donald Trump, once skeptical that COVID-19 would affect the US in a significant way, now admits that the virus will likely take many more thousands of lives.

Despite this agreement, some are still not convinced. Skeptics claim that deaths that are reported as being caused by COVID-19 are really deaths that would have happened anyway, thereby artificially inflating the death toll. They claim that the CDC is complicit, telling doctors to document a death as “COVID-related” even when they aren’t sure. They highlight failures of world leaders like the Director-General of the World Health Organization and political corruption in China. They claim that talk of hospitals being “war zones” is media hype, and they share videos of “peaceful” local hospitals from places that aren’t hot spots, like Louisville or Tallahassee. They point to elaborate conspiracies about the nefarious origins of the novel coronavirus.

What’s the aim of this strikingly implausible, multi-national conspiracy, according to these “COVID-truthers”? Billions of dollars for pharmaceutical companies and votes for tyrannical politicians who want to look like benevolent saviors.

Expert skeptics like COVID-truthers are concerning because they are more likely to put themselves, their families, and their communities at risk by not physically distancing or wearing masks. They are more likely to violate stay-at-home orders and press politicians to re-open commerce before it is safe. And they pass this faulty reasoning on to their children.

While expert skepticism is not new, it is unsettling because it often contains a kernel of truth. Experts regularly disagree, especially in high-impact domains like medicine. Some experts give advice outside their fields (what Nathan Ballantyne calls “epistemic trespassing”). Some experts have conflicts of interest that lead to research fraud. And some people—seemingly miraculously—defy expert prediction, for example, by surviving a life-threatening illness.

If all this is right, shouldn’t everyone be skeptical of experts?

In reality, most non-experts do okay deciding who is trustworthy and when. This is because we understand—at least in broad strokes—how expertise works. Experts disagree over some issues, but, in time, their judgments tend to converge. Some people do defy expert expectations, but these usually fall within the scope of uncertainty. For example, about 1 in 100,000 cancers go into spontaneous remission. Further, we can often tell who is in a good position to help us. In the case of lawyers, contractors, and accountants, we can find out their credentials, how long they’ve been practicing, and their specialties. We can even learn about their work from online reviews or friends who have used them.

Of course, in these cases, the stakes are usually low. If it turns out that we trusted the wrong person, we might be able to sue for damages or accept the consequences and try harder next time. But as our need for experts gets more complicated, figuring out who is trustworthy is harder. For instance, questions about COVID-19 are:

  • New (Experts struggle to get good information.)
  • Time-sensitive (We need answers more quickly than we have time to evaluate experts.)
  • Value-charged (Our interests in the information bias whom we trust.)
  • Politicized (Information is emotionally charged or distorted, and there are more epistemic trespassers.)

Where does this leave those of us who aren’t infectious disease experts? Should we shrug our shoulders with the COVID-truthers and start looking for ulterior motives?

Not obviously. Here are four strategies to help distill reality from fantasy.

  1. Keep in mind what experts should (and should not) be able to do.

Experts spend years studying a topic, but they cannot see the future. They should be able to explain a problem and suggest ways of solving it. But models that predict the future are educated guesses. In the case of infectious diseases, those guesses depend on assumptions about how people act. If people act differently, the guesses will be inaccurate. But that’s how models work. (A toy illustration of this point follows this list.)

  2. Look for consensus, but be realistic.

When experts agree on something, that’s usually a sign they’re all thinking about the evidence the same way. But when they face a new problem, their evidence will change continually, and experts will have little time to make sense of it. In the case of COVID-19, there’s wide consensus about the virus that causes it and how it spreads. There is little consensus on why it hurts some people more than others and whether a vaccine is the right solution. But just because there isn’t consensus doesn’t mean there are ulterior motives.

  3. Look for “meta-expert consensus.”

When experts agree, it is sometimes because they need to look like they agree, whether due to worries about public opinion or because they want to convince politicians to act. These are not good reasons to trust experts. But on any complex issue, there’s more than one kind of expert. And not all experts have conflicts of interest. In the case of COVID-19, independent epidemiologists, infectious disease doctors, and public health experts agree that SARS-CoV-2 is a new, dangerous, contagious threat and that social distancing is the main weapon against that threat. That kind of “meta-expert consensus” is a good check on expertise and good news for novices when deciding what to believe.

  4. Don’t double-down.

When experts get new evidence, they update their beliefs, even if they were wrong. They don’t force that evidence to fit old beliefs. When prediction models for COVID-related deaths did not bear out, experts updated their predictions. They recognized that predictions can be confounded by many variables, and they used the new evidence to update their models. This is good advice for novices, too.
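
As promised under the first strategy above, here is a toy illustration of why predictive models are educated guesses. This bare-bones SIR model is my own invention, far simpler than anything experts actually use, and every parameter is made up; the point is only that the forecast swings with a single assumption about how people behave (the contact rate).

    # A deliberately crude, discrete-time SIR model. All parameters are
    # invented; the forecast hinges on the assumed contact rate (beta).
    def peak_infected_fraction(beta, gamma=0.1, days=300, i0=0.001):
        s, i = 1.0 - i0, i0  # susceptible and infected fractions
        peak = i
        for _ in range(days):
            new_infections = beta * s * i
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            peak = max(peak, i)
        return peak

    print(peak_infected_fraction(beta=0.30))  # if people mix freely
    print(peak_infected_fraction(beta=0.15))  # if distancing halves contacts

Halving the assumed contact rate cuts the predicted peak dramatically. A model that “missed” because people changed their behavior did not fail; it did exactly what models do.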

These strategies are not foolproof. The world is messy, experts are fallible, and we won’t always trust the right people. But while expert skepticism is grounded in real limitations of expertise, we don’t have to join the ranks of the COVID-truthers. With hard work and a little caution, we can make responsible choices about whom we trust.

Expert Suspicion: Arendt and the “Public Space”

photograph of Open Ohio protesters

In the opening days of May, health care workers reported nearly 100,000 new cases of COVID-19 in the United States; in that same time, several thousand patients died of the disease. Nevertheless, as of May 4th, at least nine states have begun to loosen restrictions on movement in public spaces placed in response to the coronavirus outbreak, reopening beaches, restaurants, gyms, and other “nonessential” businesses.

As shouted by protestors from Arizona to Wisconsin to the White House, one explanation for rolling back the pandemic response, despite the spread of the pandemic itself showing no signs of slowing, is that “the cure cannot be worse than the disease.” Since March, more than 30 million Americans have filed for unemployment and these numbers indicate only a fraction of the economic fallout from the enforced quarantines. Thus far, almost no industry — from entertainment, to higher education, to oil production, and more — has escaped unaffected and, particularly with the globe teetering on the edge of a recession, it is far from clear what sort of long-term consequences of the shutdown lie ahead.

Certainly, with their tendency towards ultra-militarized displays of aggression and their often-explicitly racist messaging, there is much that is inexcusable about many lockdown protests, but when CNN’s Don Lemon says that people unhappy with the lockdowns just “want a haircut” or “want to go play golf,” he seems to be unfairly painting all complaints about the shutdowns as if they are as ignorant as those clearly silly concerns. A “nonessential” locally-owned art gallery or specialty construction company forced to close to prevent the spread of the disease might, nevertheless, feel terribly “essential” to the people whose livelihoods depend on those businesses being open.

Of course, medical experts agree that easing “social distancing” restrictions at this point is premature and could very well lead to an even more serious spread of the virus. The moral calculation of “millions going bankrupt” against “tens of thousands dying” is not a problem I – or, indeed, anyone – could hope to easily solve. Both of these outcomes are clearly unacceptable and most policymakers around the world seem to be trying to chart a course between Scylla and Charybdis that keeps both threats as low as possible, simultaneously. But conversations about the pandemic seem to typically prioritize only one of these two political concerns (“saving citizens’ livelihoods” vs. “saving citizens’ lives”) at a time.

Much has been said recently (including by me) about “expertise in the time of COVID-19.” Certainly, spreading pseudo- and anti-scientific information is dangerous (particularly during a pandemic) and we should think carefully about how we think about the coronavirus. Trusting experts in matters of public health and safety is a crucial part of living in a large, well-functioning society like ours — pretending otherwise, even for looming existential concerns, is simply irresponsible. But, particularly for people whose exposure to the pandemic has been primarily economic — such as those citizens in less-populated states where the spread of the virus has thankfully been less severe — it can be understandably off-putting to have your most salient personal political concerns belittled (or even morally condemned) for the sake of other political concerns, no matter how objectively important both sets of concerns may be. To denigrate your perspective for the sake of “listening to the experts” (when the perspective of those experts is simply orthogonal to your concerns) might well only serve to provoke a backfire effect that leads primarily to greater levels of frustration at the experts being touted and suspicion of the information they share.

This, I take it, was one thing that the philosopher Hannah Arendt was concerned to avoid by her treatment of politics not simply as a process of governmental operations, but as “the place and activity of shared communication based on the distinct perspectives of equal human beings.” Rather than treat politics or political decision-making as an activity properly performed by specially-trained experts, Arendt argued that wherever people gather together “in the manner of speech and action,” they create a community with power to accomplish their political purposes. In her book, The Human Condition, Arendt explains how the development and preservation of public spaces wherein we can politically engage with each other is both a fundamental element of the human experience and a necessary precondition for civic freedom. Importantly, by “public space” Arendt does not just mean physical locations, but rather the realm of discourse wherein perspectives and concerns can be expressed, challenged, debated, and legitimated. When governments seek to restrict and restrain these sociological structures, they begin to take what she calls a “totalitarian” form, thereby precipitating all manner of further oppressions and human rights abuses (on this, see her The Origins of Totalitarianism).

Just to be clear: I do not mean to suggest that Arendt would necessarily be opposed to a mandated lockdown to prevent the spread of COVID-19 (and I certainly do not mean that Dr. Fauci or other healthcare workers are totalitarian oppressors!). Of course, Arendt held no principled animus against experts as such; she simply recognized that their expertise would have to be shared within the public space wherein others would be able to respond. Artificially constraining the operation of that public space — even for demonstrably moral purposes — is a necessarily troublesome notion. And from the perspective of people concerned about the dire economic consequences of the lockdown, forcing a conversational shift to a discussion of mortality rates and other healthcare issues might come off as just such a constraining move.

So, I think that Arendt’s realistic analysis of how our perspectives shape our participation in political structures can help to explain some of the curious disagreements about the response to the coronavirus pandemic, as well as the all-too-common tension in conversations about what we should do next. The clash of perspectives over what is “clearly the right thing to do” cannot simply be resolved with reference to a particular statistic (economic or scientific), but requires the sort of free speech and effortful conversation that Arendt considered fundamental to the human condition.

(I also want to note: I sorely wish that Arendt could respond to President Trump’s widely rejected assertion that “When somebody’s the president of the United States, the authority is total,” but that’s a different conversation!)

COVID-19 and the Ethics of Belief

photograph of scientist with mask and gloves looking through microscope

The current COVID-19 pandemic will likely have long-term effects that will be difficult to predict. This has certainly been the case with past pandemics. For example, the Black Death may have left a lasting mark on the human genome. Because of variations in human genetics, some people have genes which provide an immunological advantage against certain kinds of diseases. During the Black Death, those who had such genes were more likely to live, and those without them were more likely to die. A study of Rroma people, whose ancestors migrated to Europe from India one thousand years ago, revealed that they possess genetic differences from their Indian ancestors relevant to the immune response to Yersinia pestis, the bacterium that causes the Black Death. It’s possible that COVID-19 could lead to similar kinds of long-term effects. Are there moral conclusions that we can draw from this?

By itself, not really. Despite this being an example of natural selection at work, the fact that certain people are more likely to survive certain selection pressures than others does not indicate any kind of moral superiority. However, one moral lesson that we could take away is a willingness to make sure that our beliefs are well adapted to our environment. For example, a certain gene is neither good nor bad in itself but becomes good or bad through the biochemical interactions within the organism in its environment. Genes that promote survival demonstrate their value to us by being put to (or being capable of being put to) the test of environmental conditions. In the time of COVID-19, one moral lesson the public at large should learn is to avoid wishful thinking and to demonstrate the fitness of our beliefs by putting them to empirical testing. The beliefs that are empirically successful are the beliefs that should carry on and be adopted.

For example, despite the complaints and resistance to social distancing, the idea has begun to demonstrate its value by being put to the test. This week the U.S. revised its model of projected deaths down from a minimum of 100,000 to 60,000 with the changes being credited to social distancing. In Canada, similar signs suggest that social distancing is “flattening the curve” and reducing the number of infections and thus reducing the strain on the healthcare system. On the other hand, stress, fear, and panic may lead us to accept ideas that are encouraging but not tested.

This is why it isn’t a good idea to look to “easy” solutions like hydroxychloroquine as a treatment for COVID-19. As Dr. Fauci has noted, there is no empirical evidence that the drug is effective at treating it. While there are reports of some success, these are merely anecdotal. He notes, “There have been cases that show there may be an effect and there are others to show there’s no effect.” Any benefits the drug may possess are clouded by a number of unknown factors. Variations among the population may exist and so need to be controlled for in a clinical study. Just as certain genes may only be beneficial under certain environing conditions, the same may be true of beliefs. An idea may seem positive or beneficial, but only under certain conditions. Ideas and beliefs need to be tested under different conditions to see whether they hold up. Studies of hydroxychloroquine are being conducted, but they are not finished.

Relying on wishful thinking instead can be dangerous. The president has claimed that he downplayed the virus at first because he wanted to be “America’s cheerleader,” but being optimistic or hopeful without seriously considering what one is up against, or while ignoring the warning signs, is a recipe for failure. Optimism that an outbreak wouldn’t occur delayed government action on social distancing measures in Italy and in the U.S., and as a result thousands may die who might not have, had the matter been treated more seriously sooner.

As a corollary of the last point, we need to get better at relying on experts. But we need to be clear about who has expertise and why. Experts are people who possess years of experience studying, researching, and investigating ideas in their field to determine which ones hold up to scrutiny and which ones fail. They may not always agree, but this is often owing to disagreements over the assumptions that go into a model, or because different models are not measuring exactly the same thing. This kind of disagreement is okay, however, because anyone is theoretically capable of examining those assumptions and holding them up to critical scrutiny.

But why do the projections keep changing? Haven’t they been wrong? How can we rely on them? The answer is that the projections change as we learn more data. But this is far preferable to believing the same thing regardless of changing findings. It may not be as comforting as getting a single, specific, unchanging answer, but these are still the only ideas that have been informed by empirical testing. Even if an expert is proven wrong, the field can still learn from those mistakes and improve its conclusions.
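
To make the point concrete, here is a small illustration of updating on evidence; it is my own toy example, not anything drawn from actual COVID-19 models. An estimated rate moves as observations accumulate, and that movement is the method working, not failing.

    # A beta-binomial update: the posterior estimate of a rate shifts as
    # data accumulate. The prior and the counts below are invented.
    def posterior_mean(prior_a, prior_b, hits, misses):
        # Mean of the Beta(prior_a + hits, prior_b + misses) posterior.
        return (prior_a + hits) / (prior_a + prior_b + hits + misses)

    a, b = 1, 1  # a weak prior: no strong opinion about the rate
    print(posterior_mean(a, b, hits=2, misses=8))    # sparse early data: 0.25
    print(posterior_mean(a, b, hits=12, misses=88))  # more data: about 0.13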

But it is also important to recognize that non-medical experts cannot give expert medical advice. Even having a Ph.D. in economics does not qualify Peter Navarro to give advice relating to medicine, biochemistry, virology, epidemiology, or public health policy. Only having years of experience in that field will allow you to consider the relevant information necessary for solving technical problems and putting forward solutions best suited to survive the empirical test.

Perhaps we have seen evidence that a broad shift in thinking has already occurred. There are estimates that a vaccine could be six months to a year away. Polling has shown a decrease in the number of people who would question the safety of vaccines. So perhaps the relative success of ending the pandemic will inspire new trust in expert opinion. Or, maybe people are just scared and will later rationalize it.

Adopting the habit of putting our beliefs to the empirical test, the moral consequences of which are very serious right now, is going to be needed sooner rather than later. If and when a vaccine comes along for COVID-19, the anti-vaccination debate may magnify. And, once the COVID-19 situation settles, climate change is still an ongoing issue that could cause future pandemics. Trusting empirically-tested theories and expert testimony more, and relying less on hearsay, rumor, and fake news could be one of the most important moral decisions we make moving forward.

Expertise in the Time of COVID

photograph of child with mask hugging her mother

Admitting that someone has special knowledge that we don’t or can do a job that we aren’t trained for is not very controversial. We rarely hesitate to hire a car mechanic, accountant, carpenter, and so on, when we need them. Even if some of us could do parts of their jobs passably well, these experts have specialized training that gives them an important advantage over us: they can do it faster, and they are less likely to get it wrong. In these everyday cases, figuring out who is an expert and how much we can trust them is straightforward. They have a sign out front, a degree on the wall, a robustly positive Google review, and so on. If we happen to pick the wrong person—someone who turns out to be incompetent or a fraud—we haven’t lost much. We try harder next time.

But as our needs get more complicated (for example, when we need information about a pandemic disease and how best to fight it), and as that kind of scientific information is politicized, figuring out who the experts are and how much to trust them is less clear.

Consider a question as seemingly simple as whether surgical masks help contain COVID-19. At first, experts said everyone should wear masks. Then other experts said masks won’t help against airborne viruses because the masks do not seal well enough to stop the tiny viral particles. Some said that surgical masks won’t help, but N95 masks will. Then some experts said that surgical masks could at least help keep you from getting the disease from others’ spittle, as they talk, cough, and sneeze. Still other experts said that even this won’t do because we touch the masks too often, undermining their protective capacity. Yet still others say that while the masks cannot protect you from the virus, they can protect others from you if you happen to be infected, “contradicting,” as one physician told me, “years of dogma.”

What are we to believe from this cacophony of authorities?

To be sure, some of the confusion stems from the novelty of the novel coronavirus. Months into the global spread, we still don’t know much about it. But a large part of the burden of addressing the public health implications lies not just in expert analysis but in how expert judgments are disseminated. And yet, I have questions: If surgical masks won’t keep me from getting the infection because they don’t seal well enough, then how could they keep me from giving it to others? Is the virus airborne or isn’t it? What does “airborne” mean in this context? How do we pick the experts out of this crowd of voices?

Most experts are happy to admit that the world is messier than they would prefer, that they are often beset by the fickleness of nature. And after decades of research on error and bias, we know that experts, just like the rest of us, struggle with biased assumptions and cognitive limitations, the biases inherent in how those before them framed questions in their fields, and by the influence of competing interests—even if from the purest motives—for personal or financial ends. People who are skeptical of expertise point to these deficiencies as reasons to dismiss experts.

But if expertise exists, really exists, not merely as a political buzzword or as an ideal in the minds of ivory tower elitists, then, it demands something from us.

Experts understand their fields better than novices. They are better at their jobs than people who have not spent years or decades doing their work. And thus, when they speak about what they do, they deserve some degree of trust.

Happily, general skepticism about expertise is not widely championed. Few of us — even in the full throes of, for example, the Dunning-Kruger Effect — would hazard jumping into the cockpit of an airplane without special training. Few of us would refuse medical help for a severe burn or a broken limb. Unfortunately, much of the skepticism worth taking seriously attaches to topics that are likely to do more harm to others than to the skeptic: skepticism about vaccinations, climate change, and the Holocaust. If you happen to fall into one of these groups at some point in your life — I grew up a six-day creationist and evolution-denier — you know how hard it is to break free from that sort of echo chamber.

But even if you have extricated yourself from one distorted worldview, how do you know you’re not trapped in another? That you aren’t inadvertently filtering out or dismissing voices worth listening to? This is a challenge we all face when up against a high degree of risk in a short amount of time from a threat that is new and largely unknown and that is now heavily politicized.

Part of what makes identifying and trusting experts so hard is that not all expertise is alike. Different experts have differing degrees of authority.

Consider someone working in an internship in the first year out of medical school. They are an MD, and thus, an expert of sorts. Unfortunately, they have very little clinical experience. They have technical knowledge but little competence applying it to complex medical situations.

Modern medicine has figured out how to compensate for this lack of experience. New doctors have to train for several years under a licensed physician before they can practice on their own. To acquire sufficient expertise, they have to be immersed in the domain of their medical specialty. The point is that not every doctor has the same authority as every other, and this is true for other expert domains as well.

A further complication is that types of expertise differ in how much background information and training is required to do their jobs well. Some types of expertise are closer to what philosopher Thi Nguyen calls our “cognitive mainland.” This mainland refers to the world that novices are familiar with, the language they can make sense of. For example, most novices understand enough about what landscape designers do to assess their competence. They can usually find reviews of their work online. They can even go look at some of their work for themselves. Even if they don’t know much about horticulture, they know whether a yard looks nice.

But expertise varies in how close to us it is. For example, what mortgage brokers do is not as close to us as what landscapers do. It is farther away from our cognitive mainland, out at sea, as it were. First-time home buyers need a lot of time to learn the language associated with the mortgage industry and what it means for them. The farther an expert domain is from a novice’s mainland, the more likely the novice is on what Nguyen calls a “cognitive island,” isolated from resources that would let them make sense of the expert’s abilities and authority.

Under normal circumstances, novices have some tools for deciding who is an expert and who is not, and for deciding which experts to trust and which to ignore. This is not easy, but it can be done. Looking up someone’s credentials, certifications, years of experience, recommendations, track records, and so on, can give novices a sense of someone’s competence.

As the expertise gets farther from novices’ cognitive mainland, they can turn to experts in closely related fields to help them make sense of it. In the case of mortgages, for example, they might ask a friend who works in real estate, or someone in banking, to translate the relevant bits in a way that meets their needs. In other words, they can use “meta-experts”: experts in a closely related domain who understand enough of the target domain to help them choose experts in it wisely.

Unfortunately, during a public health emergency, uncertainty, time constraints, and politicization mean that all of these typical strategies can easily go awry. Experts who feel pressured by society or threatened by politicians can — even if inadvertently — manufacture a type of consensus. They can double-down on a way of thinking about a problem for the sake of maintaining the authority of their testimony. In some cases, this is a simple matter of groupthink. In other cases, it can seem more intentional, even if it isn’t.

Psychologist Philip Tetlock, in Superforecasting: The Art and Science of Prediction (2015), written with Dan Gardner, explains how to prevent this sort of consensus problem by bringing together diverse experts on the same problem and suspending any hierarchical relationships among them. If everyone feels free to comment and if honest critique is welcomed, better decisions are made. In Are We All Scientific Experts Now? (2014), sociologist Harry Collins contends that this is also how peer review works in academic settings. Not everyone who reviews a scientific paper for publication is an expert in the narrow specialization of the researcher. Rather, reviewers understand how scientific research works, the basic terminology used in that domain, and how new information in domains like it is generated. Not only can experts in related domains challenge groupthink and spur more creative solutions, they can help identify errors in research and reasoning because they understand how expertise works.
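
A toy simulation, in the spirit of (though not taken from) Tetlock’s findings, shows why pooling diverse voices helps: independent errors tend to cancel when estimates are averaged. Every number below is invented for illustration.

    # Averaging many independent, noisy estimates usually beats a typical
    # individual estimate, because uncorrelated errors partially cancel.
    import random

    random.seed(0)
    truth = 100.0
    # Ten "experts," each unbiased on average but noisy in different ways.
    estimates = [truth + random.gauss(0, 15) for _ in range(10)]
    pooled = sum(estimates) / len(estimates)

    avg_individual_error = sum(abs(e - truth) for e in estimates) / len(estimates)
    print(abs(pooled - truth))   # error of the pooled estimate
    print(avg_individual_error)  # typical error of a lone "expert"

The same logic underwrites meta-expert consensus: a mix of independent voices is informative precisely because their mistakes are unlikely to line up.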

These findings are helpful for novices, too. They suggest that our best tool for identifying and evaluating expertise is, rather than pure consensus, consensus among a mix of voices close to the domain in question.

We might call this meta-expert consensus. Novices need not be especially close to a specialized domain to know whether someone working in it is trustworthy. They only have to be close enough to people close to that domain to recognize broad consensus among those who understand the basics in a domain.

Of course, how we spend our energy on experts matters. There are many questions that political and institutional leaders face that the average citizen will not. The average person need not invest energy on highly specialized questions like:

  • How should hospitals fairly allocate scarce resources?
  • How do health care facilities protect health care workers and vulnerable populations from unnecessary risks?
  • How can we stabilize volatile markets?
  • How do we identify people who are immune to the virus quickly so they can return to the workforce?

The payoff is too low and the investment too significant.

On the other hand, there are questions worth everyone’s time and effort:

  • Should I sanitize my groceries before or when I bring them into my living space?
  • How often can I reasonably go out to get groceries and supplies?
  • How can I safely care for my aging parent if I still have to go to work?
  • Should I reallocate my investment portfolio?
  • Can I still exercise outdoors?

Where are we on the mask thing? It turns out, experts at the CDC are still debating their usefulness under different conditions. But here’s an article that helps make sense of what experts are thinking about when they are making recommendations about mask-wearing.

The work required to find and assess experts is not elegant. But neither is the world this pandemic is creating. And understanding how expertise works can help us cultivate a set of beliefs that, if not elegant, is at least more responsible.

Swamping, Epistemic Trespassing, and Coronavirus

photograph of newspapers folded on top of laptop keyboard

Every day the media are awash with new information about coronavirus. And with good reason: people are worried and want to know the latest developments, what they should do, and how bad things could get. News is coming in not only locally, but globally: my current news feed, for example, has been providing me with information about the coronavirus in the US, Canada, Italy, South Korea, Australia, Spain, among other places. And not just about the spread of the virus itself, but about the consequences thereof, specifically the many events that have been canceled globally, as well as the financial ramifications. It is on everyone’s mind, and everyone is talking about it.

We are, of course, no exception here. But with one story dominating the headlines, there are two phenomena that we should watch out for: the first is swamping, in which other important news stories are, well, swamped by one story occupying everyone’s attention; and the second is epistemic trespassing, in which people who aren’t experts chime in on issues as if they were. Let’s look at these both in turn.

Consider the first problem: news of the coronavirus is so prevalent that it’s easy to lose track of anything else going on. For instance, in the US a number of senators are currently trying to pass a bill that would limit the ability of websites and users to encrypt their data, with potentially serious ramifications for data privacy. Reddit users have also compiled a list of news stories that people may have missed because of the deluge of information about coronavirus, some of which might have made the physical or virtual front page if other matters weren’t occupying our attention. Important information can be more easily missed, then, when it is swamped by a singular issue.

This is not to say, of course, that it is a bad thing to get a lot of information about coronavirus. Nor is it to say that it is not an important issue that deserves our attention. We should, however, be vigilant both with regards to other important news stories, as well as the possibility that unscrupulous individuals may be using the pandemic as a distraction.

The second problem is a version of what some philosophers have called “epistemic trespassing,” a phenomenon in which someone weighs in on an issue outside of their area of expertise. For instance, if I, as a trustworthy and trained philosopher, were to write an op-ed about some matter in astrophysics (a topic I know almost nothing about), then I would be epistemically trespassing. It seems a general rule that one ought not epistemically trespass: you should know something about a subject before commenting on it, and you should not rely on unrelated expertise to be taken seriously. That is not to say, however, that one should avoid learning about and engaging in discussions concerning topics one is interested in: even though one cannot be an expert in everything, one is free to learn about unfamiliar subjects. The problem, then, is not with going where one doesn’t belong, so to speak. We don’t want to say that chemists can’t learn about or comment on art, or that physicists can’t learn about or comment on philosophy, nor that one cannot be knowledgeable about multiple fields. The problem is presenting oneself as an expert in a domain where one is not an expert.

You have no doubt come across some forms of epistemic trespassing with respect to coronavirus news in the form of friends and relatives suddenly becoming armchair epidemiologists and weighing in on what people should be doing and how concerned people should be. It is easy to find examples of such instances online, even from otherwise reputable sources.

Consider, for example, a recent article on The Verge. This piece by Tomas Pueyo argues persuasively that the United States is currently seeing exponential growth in the number of people contracting the disease, and that hospitals are likely to be overwhelmed. Pueyo’s background, however, is in growth marketing, not epidemiology. While there may be some similarities between marketing trends and the spread of viruses, there is clearly some amount of epistemic trespassing going on here: it would be better for someone who specializes in epidemiology to comment on the spread of the virus than someone who works in marketing.

We have, then, two potential pitfalls when it comes to staying vigilant about what’s going on in the world right now. The first is that a singular focus on coronavirus reporting could swamp other news, news that could also have important ramifications if missed. The second is that, with so many people weighing in on the pressing issue of the day, we risk running into epistemic trespassers: people who speak with the authority of an expert but don’t really have much idea of what they’re talking about. As has been discussed here before, an epidemic can impact not just our physical lives, but our epistemic lives as well.

The Ethics of Amateur Podcast Sleuthing

In late 2016, Up and Vanished, a podcast produced and hosted by independent filmmaker-turned-podcaster Payne Lindsey, released its first episode. The topic of the podcast is the murder case, until recently cold, of Georgia eleventh-grade history teacher Tara Grinstead. Grinstead went missing, presumably from her home in Ocilla, Georgia, in October 2005.
