What Should We Do About AI Identity Theft?

image of synthetic face and voice

A recent George Carlin comedy special from Dudesy — an AI comedy podcast created by Will Sasso and Chad Kultgen — has sparked substantial controversy. In the special, a voice model emulating the signature delivery and social commentary of Carlin, one of America’s most prominent 20th-century comedians and social critics, discusses contemporary topics ranging from mass shootings to AI itself. The voice model, which was trained on five decades of the comic’s work, sounds eerily similar to Carlin, who died in 2008.

In response to controversy over the AI special, the late comedian’s estate filed a suit in January, accusing Sasso and Kultgen of copyright infringement. As a result, the podcast hosts agreed to take down the hour-long comedy special and refrain from using Carlin’s “image, voice or likeness on any platform without approval from the estate.” This kind of scenario, which is becoming increasingly common, generates more than just legal questions about copyright infringement. It also raises a variety of philosophical questions about the ethics of emerging technology connected to human autonomy and personal identity.

In particular, there are a range of ethical questions concerning what I’ve referred to elsewhere as single-agent models. Single-agent models are a subset of generative artificial intelligence that concentrates on modeling some identifying feature(s) of a single human agent through machine learning.

Most of the public conversation around single-agent models focuses on their impact on individuals’ privacy and property rights. These privacy and property rights violations generally occur because single-agent model outputs neither credit nor compensate the individuals whose data was used in the training process, a process that often relies on the non-consensual scraping of data under fair use doctrine in the United States. Modeled individuals find themselves competing in a marketplace saturated with derivative works that fail to acknowledge their contributory role in supplying the training data, all while being deprived of monetary compensation. Although this is a significant concern that jeopardizes the sustainability of creative careers in a capitalist economy, it is not the only concern.

One particularly worrisome feature of single-agent models is their unique capacity to generate outputs practically indistinguishable from those of the individuals whose intellectual and creative abilities or likeness are being modeled. When an audience with an average level of familiarity with an individual’s creative output cannot tell whether the digital media they engage with is authentic or synthetic, numerous concerns arise. Perhaps most obviously, this indistinguishability raises concerns about which works and depictions of a modeled individual’s behavior become associated with their reputation. If the average individual can’t discern whether an output came from an AI or from the modeled individual themself, unwanted associations between the modeled individual and AI outputs may form.

Although these unwanted associations are most likely to cause harm when the person generating the outputs does so in a deliberate effort to tarnish the modeled individual’s reputation (e.g., defamation), one need not have this sort of intent for harm to occur. Instead, one might use the modeled individual’s likeness to deceive others by spreading disinformation, especially if that individual is perceived as epistemically credible. Recently, scammers have begun using single-agent models in the form of voice cloning to call families in a loved one’s voice and defraud them into transferring money. On a broader scale, a bad actor might flood social media with an emulation of the President of the United States relaying false information about the election. In both cases, the audience is deceived into adopting and acting on false beliefs.

Moreover, some philosophers, such as Regina Rini, have pointed to the disturbing implications of single-agent modeling for our ability to treat digital media and testimony as veridical. If one can never be sure whether the digital media one engages with is authentic, how might this undermine our ability to treat digital media as a reliable means of transmitting knowledge? Put otherwise, how can we continue to trust testimony shared online?

Some, like Keith Raymond Harris, have pushed back against the notion that certain forms of single-agent modeling, especially those that fall under the category of deepfakes (e.g., digitally fabricated videos or audio recordings), pose a substantial risk to our epistemic practices. Skeptics argue that single-agent models like deepfakes do not differ radically from previous methods of media manipulation (e.g., photoshop, CGI). Furthermore, they contend that the evidential worth of digital media also stems from its source. In other words, audiences should exercise discretion when evaluating the source of the digital media rather than relying solely on the digital media itself when considering its credibility.

These attempts to allay concerns about the harms of single-agent modeling overlook several critical differences between previous methods of media manipulation and single-agent modeling. Earlier methods were often costly, time-consuming, and, in many cases, distinguishable from their authentic counterparts. By contrast, single-agent modeling is accessible, affordable, and capable of producing outputs that bypass an audience’s ability to distinguish them from authentic media.

In addition, many individuals lack the media literacy to discern between trustworthy and untrustworthy media sources in the way Harris suggests. Moreover, individuals who primarily receive news from social media platforms tend to engage with the stories and perspectives that reach their feeds rather than with content outside their digitally curated information stream. These concerns are exacerbated by social media algorithms that prioritize engagement, silo users into polarized informational communities, and reward stimulating content by placing it at the top of users’ feeds, irrespective of its truth value. Social science research demonstrates that the more an individual is exposed to false information, the more willing they are to believe it due to familiarity (i.e., the illusory truth effect). Thus, it appears that single-agent models pose genuinely novel challenges that require new solutions.

Given the increasing accessibility, affordability, and indistinguishability of AI modeling, how might we begin to confront its potential for harm? Some have proposed digitally watermarking AI outputs. Proponents argue that this would allow individuals to recognize whether media was generated by AI, perhaps mitigating the concerns I’ve raised relating to credit and compensation. These safeguards could also reduce reputational harm by diminishing the potential for unwanted associations. This approach would integrate blockchain — the same technology used by cryptocurrency — allowing the public to access a shared digital trail of AI outputs. Unfortunately, this cross-platform AI metadata technology has yet to see widespread implementation. Even with cross-platform AI metadata, we remain reliant on the goodwill of big tech to implement it. Moreover, this doesn’t address concerns about the non-consensual sourcing of training data under fair use doctrine.
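To make the proposal concrete, here is a minimal sketch (in Python) of what such a shared digital trail might look like: each AI output is fingerprinted with a cryptographic hash and chained to the previous entry, blockchain-style, so tampering with the record is detectable. The function and field names are illustrative assumptions, not any platform’s actual API.

```python
import hashlib
import json
import time

ledger = []  # stand-in for a shared, append-only public record

def register_ai_output(content: bytes, model_id: str) -> dict:
    """Fingerprint an AI-generated artifact and chain it to the trail."""
    prev = ledger[-1] if ledger else None
    entry = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "model_id": model_id,
        "timestamp": time.time(),
        # hashing the previous entry makes later tampering detectable
        "prev_hash": hashlib.sha256(
            json.dumps(prev, sort_keys=True).encode()
        ).hexdigest() if prev else None,
    }
    ledger.append(entry)
    return entry

def is_registered(content: bytes) -> bool:
    """Check whether a piece of media appears in the trail."""
    digest = hashlib.sha256(content).hexdigest()
    return any(e["sha256"] == digest for e in ledger)

register_ai_output(b"<synthetic audio bytes>", model_id="voice-model-v1")
print(is_registered(b"<synthetic audio bytes>"))  # True: flagged as AI output
```

Note that exact hashes break as soon as media is re-encoded or edited; a workable system would need robust, perceptual watermarks instead, which is part of why cross-platform implementation has proven so difficult.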

Given the potential harms of single-agent modeling, it is imperative that we critically examine and reformulate our epistemic and legal frameworks to accommodate these novel technologies.

The Ethical and Epistemic Consequences of Hiding YouTube Dislikes

photograph of computer screen displaying YouTube icon

YouTube recently announced a major change on their platform: while the “like” and “dislike” buttons would remain, viewers would only be able to see how many likes a video had, with the total number of dislikes being viewable only by the creator. The motivation for the change is explained in a video released by YouTube:

Apparently, groups of viewers are targeting a video’s dislike button to drive up the count. Turning it into something like a game with a visible scoreboard. And it’s usually just because they don’t like the creator or what they stand for. That’s a big problem when half of YouTube’s mission is to give everyone a voice.

YouTube thus seems to be trying to protect its creators from certain kinds of harms: not only can it be demoralizing to see that a lot of people have disliked your video, but it can also be particularly distressing if those dislikes have resulted from targeted discrimination.

Some, however, have questioned YouTube’s motives. One potential motive, addressed in the video, is that YouTube is removing the public dislike count in response to some of their own videos being overwhelmingly disliked (namely, the “YouTube Rewind” videos and, ironically, the video announcing the change itself). Others have proposed that the move aims to increase viewership: after all, videos with many more dislikes than likes are probably going to be viewed less often, which means fewer clicks on the platform. Some creators have even posited that the move was made predominantly to protect large corporations, as opposed to small creators: many of the most disliked videos belong to corporations, and since YouTube has an interest in maintaining a good relationship with them, they would also have an interest in restricting people’s ability to see how disliked their content is.

Let’s say, however, that YouTube’s motivations are pure, and that they really are primarily intending to prevent harms by removing the public dislike count on videos. A second criticism concerns the loss of informational value: the number of dislikes on a video can give the viewer some signal about whether a video’s content is accurate. The dislike count is, of course, far from a perfect indicator of video quality, because one can dislike a video for reasons that have nothing to do with its content: again, in instances in which there have been targeted efforts to dislike a video, dislikes won’t tell you whether it’s really a good video or not. On the other hand, there do seem to be many cases in which looking at the dislike count can let you know if you should stay away: videos that are clickbait, misleading, or generally poor quality can often be quickly and easily identified by an unfavorable ratio of likes to dislikes.

A worry, then, is that without this information one may be more likely not only to waste one’s time watching low-quality or inaccurate videos, but also to be exposed to misinformation. For instance, consider a class of clickbait videos prevalent on YouTube in which people appear to make impressive-looking crafts or food through a series of improbable steps. Seeing that a video of this type has received a lot of dislikes helps the viewer contextualize it as something that’s perhaps just for entertainment value and should not be taken seriously.

Should YouTube continue to hide dislike counts? In addressing this question, we are perhaps facing a conflict in different kinds of values: on the one hand, you have the moral value of protecting small or marginalized creators from targeted dislike campaigns; on the other hand, you have the epistemic disvalue of removing potentially useful information that can help viewers avoid believing misleading information (as well as the practical value of saving people the time and effort of watching unhelpful videos). It can be difficult to try to balance different values: in the case of the removal of public dislike counts, the question becomes whether the moral benefit is strong enough to outweigh the epistemic detriment.

One might think that the epistemic detriments are not, in fact, too significant. In the video released by YouTube, this issue is addressed, if only very briefly: referring to an experiment conducted earlier this year in which public dislike counts were briefly removed from the platform, the spokesperson states that they had considered how dislikes give viewers “a sense of a video’s worth.” He then states that,

[W]hen the teams looked at the data across millions of viewers and videos in the experiment they didn’t see a noticeable difference in viewership regardless of whether they could see the dislike count or not. In other words, it didn’t really matter if a video had a lot of dislikes or not, they still watched.

At the end of the video, they also stated, “Honestly, I think you’re gonna get used to it pretty quickly and keep in mind other platforms don’t even have a Dislike button.”

These responses, however, are non-sequiturs: whether viewership increased or decreased says nothing about whether people are able to judge a video’s worth without a public dislike count. Indeed, if anything, the finding reinforces the concern that people will be more likely to consume content that is misleading or of low informational value. That other platforms lack dislike buttons is also irrelevant: it may very well just mean that it is difficult to evaluate the quality of information present on those platforms. Furthermore, users on platforms such as Twitter have found other ways to express that a given piece of information is of low value, for example by ensuring that a tweet has a high ratio of responses to likes, something that seems much less likely to be effective on a platform like YouTube.

Even if YouTube does, in fact, have the primary motivation of protecting some of its creators from certain kinds of harms, one might wonder whether there are better ways of addressing the issue, given the potential epistemic detriments.

What Good Is Ignorance?

photograph of single person with flashlight standing in pitch darkness

Most of us think knowledge good, and ignorance bad. We justify this by pointing to all the practical goods that knowledge affords us: we want the knowledgeable surgeon and legislator, and not the ignorant ones. The consequences of having the latter are potentially dire. And so, from there, many people blithely assume ignorance is bad: if knowing is good, not knowing should be avoided.

What’s striking, though, is that people’s actions often don’t match their words: they will pay lip service to the value of knowledge, yet choose to remain ignorant despite having relatively easy means to know more or know better. The actions of these folks suggest that there is something they must value about ignorance — or, perhaps, they think gathering knowledge is more trouble than it’s worth. Part of the explanation here is no doubt that people are lazy — they are, to put the point more precisely, cognitive misers. However, we should be suspicious of one-factor explanations of complicated behavior. And knowledge looks like it is subject to the Goldilocks principle: we don’t want too little knowledge, but we don’t want too much knowledge either. Do you really want to know everything there is to know about the house you bought? Of course you don’t. While you want to know, say, whether the roof is in good condition and the foundation is sound, you don’t care exactly how many specks of dust are in the attic. And just as we can often overstate the value of knowledge, we can understate the value of ignorance too: it turns out, there are some benefits to knowing less. We should canvass several of them.

First, consider the value of flow states: flow states are states of intense focus and concentration on the task at hand in the present moment, involving the merging of action and awareness and the loss of self-reflection — what people often describe as ‘being in the zone.’ Flow states allow us to achieve amazing things, whether in the corporate boardroom, the courthouse, or on the basketball court, and in many other tasks in between. We may wonder how flow states are related to ignorance. Here we must understand what is required to be in a flow state: intensive and focused concentration on what one is doing in the present moment, and the loss of awareness that one is engaging in a specific activity, among other things. When we’re in a flow state while writing, say, we focus to the point of immersion in the writing process, inhibiting knowledge of what we’re doing. We do not focus on the keystrokes necessary to produce the words on the page or think too much about the next sentence to come. Athletes often describe how it feels to be in a flow state in similar terms.

Next, consider the value of privacy, where what we value is the ignorance of others. We often value privacy — others’ ignorance of our words and actions — in practice, even if we say things dismissive of privacy. When the issue of state surveillance is broached, some retort that they don’t fear the state knowing their business since they’ve done nothing wrong. The implication here is that only criminals, or folks up to no good, would value their privacy, whereas law-abiding citizens have nothing to fear from the state. Yet their actions belie their words: they password-protect their accounts, use blinds and curtains to prevent snooping into their homes, and so on. They, in other words, intuitively understand that privacy is valuable for leading a normal life having nothing to do with criminality. The fact that they would be reticent to forgo their privacy speaks volumes about what they really value, despite their expressed convictions to the contrary. We can appreciate the value of privacy by imagining a society where privacy is absent. As George Orwell masterfully put the point:

“There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live—did live, from habit that became instinct—on the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.”

And finally, sometimes we (rightly) value our ignorance of other people, even those closest to us. Would you really want to know everything about the people in your life — every thought, word, and deed? I’m guessing for most folks the answer is no. As the philosopher Daniel Dennett nicely explains:

“Speaking for myself, I am sure that I would go to some lengths to prevent myself from learning all the secrets of those around me—whom they found disgusting, whom they secretly adored, what crimes and follies they had committed, or thought I had committed! Learning all these facts would destroy my composure, cripple my attitude towards those around me.”

We thus have a few examples where ignorance — in different forms — is actually quite valuable, and where we wouldn’t want knowledge. This is some confirmation for the Goldilocks principle applied, not just to knowledge, but to ignorance too (stated in reverse): we don’t want too much ignorance, but we don’t want too little ignorance either.

Expertise and the “Building Distrust” of Public Health Agencies

photograph of Dr. Fauci speaking on panel with American flag in background

If you want to know something about science, and you don’t know much about science, it seems that the best course of action would be to ask the experts. It’s not always obvious who these experts are, but there are often some pretty easy ways to identify them: if they have a lot of experience, are recognized in their field, do things like publish important papers and win grant money, etc., then there’s a good chance they know what they’re talking about. Listening to the experts requires a certain amount of trust on our part: if I’m relying on someone to give me true information then I have to trust that they’re not going to mislead me, or be incompetent, or have ulterior motives. At a time like this it seems that listening to the scientific experts is more important than ever, given that people need to stay informed about the latest developments with the COVID-19 pandemic.

However, there continues to be a significant number of people who appear to be distrustful of the experts, at least when it comes to matters concerning the coronavirus in the US. Recently, Dr. Anthony Fauci stated that he believed that there was a “building distrust” in public health agencies, especially when it comes to said agencies being transparent with developments in fighting the pandemic. While Dr. Fauci did not put forth specific reasons for thinking this, it is certainly not surprising he might feel this way.

That being said, we might ask: if we know that experts are the best people to consult for information about scientific and other complex issues, and if it’s well known that Dr. Fauci is an expert, then why is there a growing distrust of him among Americans?

One reason is no doubt political. Indeed, those distrustful of Dr. Fauci have claimed that he is merely “playing politics” when providing information about the coronavirus: some on the political right in the US have expressed skepticism about the severity of the pandemic and the necessity of face masks specifically, and have interpreted the messages from Dr. Fauci as an attack on their political views, motivated by differing political interests. Of course, this is an extremely unlikely explanation of Dr. Fauci’s recommendations: someone simply disagreeing with you or giving you advice that you don’t like is not a good reason to find them distrustful, especially when they are much more knowledgeable on the subject than you are.

But here we have another dimension to the problem, and something that might contribute to a building distrust: people who disagree with the experts might develop resentment toward said experts because they feel as though their own views are not being taken seriously.

Consider, for instance, a recent essay, “How Expert Worship is Ruining Science,” written by a member of a right-wing think tank. The author, clearly skeptical of the recommendations of Dr. Fauci, laments what he takes to be a dismissal of the views of laypersons. While the article itself is chock-a-block with fallacious reasoning, we can identify a few key points that help explain why some are distrustful of the scientific experts in the current climate.

First, there is the concern that the line between experts and “non-experts” is not so sharp. For instance, with so much information available to anyone with an internet connection, one might think that, given one’s ability to do research for oneself, experts cannot be so easily separated from laypersons. Not taking the views of the non-expert seriously, then, means that one might miss out on getting at truth from an unlikely source.

Second, recent efforts by social media sites like Twitter and Facebook to prevent the spread of misinformation are being interpreted as acts of censorship. Again, the thought is that if I try to express my views on social media and my post is flagged as false or misleading, then I will feel that my views are not being taken seriously. The reasoning then continues: scientific inquiry is meant to be open to objection and criticism, so failing to engage with that criticism, or to even allow it to be expressed, represents bad scientific practice on the part of the experts. As such, we have reason to distrust them.

While this reasoning isn’t particularly good, it might help explain the apparent distrust of experts in the US. Indeed, while it is perhaps correct to say that there is not a very sharp distinction between those who are experts and those who are not, it is still important to recognize that if an expert as credentialed and experienced as Dr. Fauci disagrees with you, then your views likely need to be more closely examined. The thought that scientific progress is incompatible with some views being fact-checked or prevented from being disseminated on social media is also hyperbolic: progress in any field would slow to a halt if it stopped to consider every possible view, and the fact that one specific set of views is not being considered as much as one would like is not an indication that productive debate is not being conducted by the experts.

At the same time, it is perhaps more understandable why those who are presenting information that is being flagged as false or misleading may feel a growing sense of distrust of experts, especially when views on the relevant issues are divided along the political spectrum. While Dr. Fauci himself has expressed that he takes transparency to be a key component in maintaining the trust of the public, this is perhaps not the full explanation. There may instead be a fundamental tension between trying to best inform the public while simultaneously maintaining their trust, since doing so will inevitably require not taking seriously everyone who disagrees with the experts.

Truth and Contradiction, Knowledge and Belief, and Trump

photograph of Halloween event at White House with Donald and Melania Trump

At a White House press conference in August, the HuffPost’s White House correspondent, S.V. Dáte, was called on by President Donald Trump for a question. This was the first time Trump had called on Dáte, and the question the reporter asked was the one he had (he said later) been saving for a long time. Here is the exchange:

Dáte: “Mr President, after three and a half years, do you regret at all, all the lying you have done to the American People?”

Trump: “All the what?”

Dáte: “All the lying, all the dishonesties…”

Trump: “That who has done?”

Dáte: “You have done…”

Trump cuts him off, ignoring the question, and calls on someone else. The press conference continues as though nothing has happened. Trump’s reaction to being challenged is familiar and formulaic: he responds by ignoring or denouncing the source of the challenge. In a presidency as tempestuous as this one, which inflicts new wounds on American democracy daily and lurches from madness to scandal at breakneck speed, this reporter’s question may have slipped under the radar for many.

But let’s go back there for a moment. Not only was it a fair question; it is a wonder that it is not a question Trump is asked every day. The daily litany of lies uttered by the president is shocking, though people who support Trump seem not to mind the lies, or at least are not persuaded by them to withdraw their support. This seems extraordinary, but maybe it isn’t. As politics continues to grow more divisive and ideologically driven, versions of events, indeed versions of reality, that serve ideologies are increasingly preferred by those with vested interests over ones supported by facts.

Therefore, the answer to Dáte’s question was already implicit in its having to be asked. Given the sheer volume of lies, and given what we know of Trump’s demeanor, it seems clear that he harbors no such regret. Trump gave his answer in dismissing the question.

So, here we are then. The President of the United States is widely acknowledged as a frequent and mendacious liar. If you want to follow up on the amount, content, or modality (Fox News, Twitter, a rally, etc.) of Trump’s lies, there are the fact-checkers. The Washington Post’s database of Trump’s false and misleading claims had clocked 20,055 of them as of July 9. You can search the database by topic and by source. The Post finds that Trump has made an average of 23 false or misleading claims a day over a 14-month period.

Take the president’s appearance last month at an ABC town hall with undecided voters. In response to questions about his handling of the pandemic, and regarding the taped, on-the-record interviews with Bob Woodward in which Trump discusses his decision to play down the virus to avoid panic, Trump responds that he had in fact “up-played” the virus. He says this while making no attempt to square the lie with what is already, in fact, on the public record. As with all Trump’s tweets, public speeches, rallies, press conferences, etc., Trump tells lies and fact-checkers scramble to confront them.

Of course, Trump should be fact-checked. Fact-checking politicians and other public figures for the veracity of their speech is, and will remain, a vital contribution to public and political discourse. However, it is also important to reflect upon the way the ground has shifted under this activity in the era of Trump; the post-truth era.

The activity of fact-checking, of weighing the President’s claims against known or discoverable truth, presupposes an epistemic relation to the world in which truth and fact are arbiters of – or at least in some way related to – what it is reasonable to believe. Truth and untruth (that is, facts and lies) are, in the conventional sense, at odds with one another – they are mutually exclusive. Two logical laws broadly govern conventional discourse: either “p” or “not-p” is the case (excluded middle), and it cannot be both (non-contradiction). Ordinarily, for a lie to be effective it has to obfuscate or replace the truth. If “p” is in fact true, then the assertion of “not-p” would have to displace the belief in “p” for the lie to work.
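In standard propositional notation (a textbook gloss, not the author’s own formalism), the two principles read:

```latex
\neg (p \land \neg p) \quad \text{(non-contradiction: not both)}
\qquad
p \lor \neg p \quad \text{(excluded middle: at least one)}
```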

But in the Trump Era (the post-truth era) this relation is no longer operative. Trump’s lies often don’t even maintain the pretense of competing with truth in the conventional sense – that is, they don’t really attempt to supersede a fact but rather to shift the reality in which that fact operates as such, or in which it has a claim on belief, decision, and action.

When Trump says he “up-played” the virus without addressing his own on-the-record admission that he downplayed it, he is of course contradicting himself, but more than that he is jettisoning the ordinary sense in which fact and falsehood are at odds with each other. This could be described as a kind of epistemic shift, and is related, I think, to any meaning we might make – now and in the future – of the concept of ‘post-truth,’ and what that means for our political and social lives. The concept of post-truth appears to signal a shift in what people can, within political and social discourse, understand knowledge to be, and what claims they can understand it to have upon them. The consequences of this we can already see playing out – especially, for instance, in the pandemic situation in the US, together with the volatile election atmosphere.

Having a concept of epistemology is important here – a concept of what it would be to ‘know’ and what it would be to act on the basis of knowledge. Such a concept would have to mark an ancient philosophical distinction – between episteme and doxa, that is, between knowledge and mere opinion or belief.

Post-truth is the ascension of doxa over episteme. In the well-known philosophical analysis of knowledge as justified true belief, for a belief to count as knowledge one must be justified in believing it and it must be true. This definition is rudimentary, and somewhat problematic, but nevertheless useful. In the post-truth era, it seems that the conditions of both justification and truth are weakened, if not dispensed with altogether, and so we are left with an epistemology in which belief alone can count as knowledge – which is no epistemology at all.
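As a rough schematization of the analysis just described (standard textbook shorthand, not the author’s own notation, where K_S, B_S, and J_S abbreviate “S knows,” “S believes,” and “S is justified in believing”):

```latex
% Classical justified-true-belief analysis:
K_S(p) \iff p \,\land\, B_S(p) \,\land\, J_S(p)

% The post-truth collapse drops the truth and justification conditions:
K_S(p) \iff B_S(p)
```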

It is easy to see why this is not only an epistemic problem, but a moral and political one as well. What knowledge we have, and what it is reasonable to believe and act upon, are core foundations of our lives in a society. There is an important relationship between epistemology and an ethical, flourishing social and political life. Totalitarianism is built on and enabled by lies, and propaganda replaces discourse when all criticism is silenced.

The coronavirus pandemic has been disastrous for the US. A case can easily be made that the pandemic has been able to wreak such devastation because of Trump’s lies – from his decision to downplay the danger and his efforts to sideline and silence experts, to the specific lies and obfuscations he issues via Twitter and at press conferences or Fox News call-ins.

The US has recorded the highest number of infections, and deaths, of anywhere in the world. So, when Trump says “America is doing great,” the question must be: what could this possibly mean? This is no casual lie; nor is it merely the egoistic paroxysm of a president unable to admit error. Repeating at every possible opportunity that ‘America is doing great, the best in the world’ is a form of gaslighting – and as such is calculated to help Trump disempower and dominate America.

This is in itself quite unsettling, but where is it all going?

In another, particularly bizarre and sinister example of ‘Trumpspeak’ from a couple of weeks ago, the president mentioned a plane that allegedly flew from an unnamed city to Washington, D.C., loaded with “thugs wearing these dark uniforms, black uniforms, with gear.” In the absence of any ensuing clarity from the president or anyone else on what this might have been about, and in light of Trump’s oft-repeated claims about the presence of a ‘radical left’ contingent, of ‘antifa’ and ‘radical democrats,’ etc., it seems to have been an intimation of some threat, direct or indirect, the symbolism of which appeared to be drawn from the ‘law and order’ platform of his campaign. Frankly, it’s hard to say.

But vague lies and unverified claims with dark intimations are the stuff of conspiracy. If you line all that up next to the fact that Trump has strongly hinted that if the election does not resolve in his favor, he will consider the result illegitimate, then you can see how the lies, the false stories, the obfuscations and intimations are the tools Trump is using to try to shift power. He is trying to dislodge power from the elite – which can be read as ‘people who know things.’

One way of characterizing the situation is to say that post-truth is creating an epistemic vacuum where ideology trumps reality, and it is in this vacuum that Trump will attempt to secure his win.

Take the oft-repeated mail-in ballot lie – that mail-in ballots are subject to widespread electoral fraud. This has been firmly refuted, even by Trump’s own investigation following the 2016 election. Yet it is widely recognized that this lie could foment a sense of resentment among Trump supporters should he not get across the line on November 3. Or it could facilitate his (by now fairly transparent) intention to declare victory on election night should the result be inconclusive as counting proceeds. These are the possible, or even likely, outcomes if Trump is able to create, feed, and capitalize on a situation in which truth and fact have no purchase on, or have no meaningful relationship to, people’s reasons for acting or making choices.

Trump’s lying is both a symptom of, and part of, the disease of his presidency – a pathology which has infected pretty well the whole Republican party and which is putting great strain on many of the organs and tissues of American democracy. This really is a time like no other in America’s history, and the stakes are as high as they have ever been.

At this point the ethical dimensions of the question of why truth is important to a healthy and just society seem to be slipping from view as America struggles under Trump to keep an epistemic foundation in political discourse that is broadly governed by principles of veracity. Fact-checking alone cannot win that struggle.

The Ethics of Escapism (Pt. 2): Two Kinds of Escape

photograph of business man with his head buried in the sand

Shortly before Labor Day this year, polling data of the American workforce indicated that a majority (58%) of employees are experiencing some form of burnout. Not only was this an increase from the early days of the pandemic (when the number was around 45%), but over a third of respondents directly referenced COVID-19 as a cause of their increased stress. Reports on everyone from “essential” workers to parents to healthcare professionals indicate that the effects of the coronavirus are not limited to physical symptoms. Ironically, while the steps taken to limit COVID-19’s physical reach have been largely effective (when properly practiced), those same steps (in particular, self-imposed isolation) may simultaneously be contributing to a burgeoning mental health crisis, particularly in light of additional social pressures like widespread financial ruin, state-sanctioned racial injustices, and a vitriolic election season.

Indeed, 2020 has not been an easy year.

Nearly a century ago, J.R.R. Tolkien — creator of Gandalf, Bilbo Baggins, and the whole of Middle-earth — explained how fantasy stories like The Lord of the Rings not only offer an “outrageous” form of “Escape” from the difficulties people encounter in their lives, but that this Escape can be “as a rule very practical, and may even be heroic.” In his essay On Fairy-Stories, Tolkien asks, “Why should a man be scorned if, finding himself in prison, he tries to get out and go home? Or if, when he cannot do so, he thinks and talks about other topics than jailers and prison-walls?” It is true that Escape from reality can sometimes be irresponsible and even immoral (for more on this, see Marko Mavrovic’s recent article), but Tolkien reminds us to avoid confusing “the Escape of the Prisoner with the Flight of the Deserter” — the problems of the latter need not apply to the former.

There are at least two ways we can distinguish between Tolkien’s two kinds of Escape: epistemically (rooted in what someone seeks to escape) and morally (concerning one’s motivations for escaping anything at all). Consider how a person might respond to the NFL’s decision to highlight a message of social justice during its games this season: if they are displeased with such displays because, as Salena Zito explains, they “are tired of politics infecting everything they do” and “just want to enjoy a game without being lectured,” then we might describe their escape as a matter of escaping from information, perspectives, and conversations that others take to be salient. Depending on how commonly someone engages in such a practice, this could encourage the crystallization of their own biases into an “epistemic bubble” where they end up never (or only quite rarely) hearing from someone who doesn’t share their opinions. Not only can this prevent people from learning about the world, but the “excessive self-confidence” that epistemic bubbles engender can lead to a prideful ignorance about reality that threatens a person’s epistemic standing on all sorts of issues.

If, however, someone instead wants to avoid “being lectured at” while watching a football game because they wish to escape from the moral imperatives embedded within the critiques of the lecture (or, more accurately, the slogan, symbol, chant, or the like), then this is not simply an epistemic escape from information, but an escape from moral inquiry and confrontation. Failing to care about a potential moral wrong (and seeking to avoid thinking about it) is, in itself, an additional moral wrong (just imagine your response to someone ignoring their neighbor trapped in a house fire because they “just wanted to watch football”). In its worst forms, this is an escape from the responsibility of caring for the experiences, needs, and rights of others, regardless of how inconvenient it might be to care about such things (in the middle of a football game or elsewhere). Nic Bommarito has argued that being a virtuous person simply is a matter of caring about moral goods in a manner that manifests such caring by instantiating it in particular ways; much like the travelers who passed by the injured man on the road in the parable of the Good Samaritan, escaping from reminders that we should care about others cannot be morally justified simply by selfish desires for entertainment.

Both of these are examples of Tolkien’s Flight of the Deserter: someone who has a responsibility to learn about, participate in, and defend the members of their society is choosing to escape — both epistemically and morally — from reminders of the duties incumbent upon their roles as social agents. But this is different from the Escape of the Prisoner who simply desires a temporary reprieve to unwind after a stressful day. In the absence of immediately pressing issues (like, say, your neighbor trapped in a house fire), it seems perfectly acceptable to take some time to relax, de-stress, and recharge your emotional reserves. Indeed, this seems like the essence of “self-care.”

For example, the early weeks of the first anti-pandemic lockdowns happened to coincide with the release of Animal Crossing: New Horizons, a Nintendo game where players calmly build and tend a small island filled with cartoon animals. For a variety of reasons, quarantined players latched on to the peaceful video game, finding in it a cathartic opportunity to simply relax and relieve the stress mounting from the outside world; months later, the popularity (and profitability) of Animal Crossing has yet to wane. You can imagine the surprise, then, when this gamified Escape of the Prisoner was invaded by Joe Biden’s presidential campaign, which elected to offer virtual signs to players wanting to adorn their island in support of the Democratic candidate for president. Although it would seem an exaggeration to call this a “lecture,” insofar as someone complains about “just wanting to play a game” without being confronted with political ads, there seems to be nothing morally wrong with criticizing (or electing to avoid) Biden’s campaign tactic — probably because there is no inherent obligation to care about a politician’s attempt to get elected (in the same way that there is a duty to care for fellow creatures in need).

So, when thinking about the ethics of escape, it is important to distinguish what kind of escape we mean. Attempts to escape from our proper moral obligations (a Flight of the Deserter) will often amount to ignorant or shameful abdications of our moral responsibilities to care for each other. On the other hand, attempts to (temporarily) escape from the often-difficult burdens we bear, both by doing our duties in public society and simply by quarantining ourselves at home, will amount to taking care of the needs of our own finitude — Tolkien’s Escape of the Prisoner.

In short, just as we should care about others, we should also care for ourselves.

 

Part III – “Searching for the Personal when Everything Is Political” by Meredith McFadden

Part I – “The Ethics of Escapism” by Marko Mavrovic

Waiting for a Coronavirus Vaccine? Watch Out for Self-Deception

photograph of happy smiley face on yellow sticky note surrounded by sad unhappy blue faces

In the last few months, as it has become clear that the coronavirus won’t be disappearing anytime soon, there has been a lot of talk about vaccines. The U.S. has already started several trials, and both Canada and Europe have followed suit. The lack of a vaccine has made even more evident how challenging it is to coexist with the current pandemic. Aside from the more extreme consequences that involve hospitalizations, families and couples have been separated for what is a dramatic amount of time, and some visas have been halted. Unemployment rates have hit record numbers, with what is predicted to be a slow recovery. Restaurants, for example, have recently reopened, yet it is unclear what their future will be when patio season soon comes to an end. With this in mind, many (myself included) are hoping that a vaccine will come, the sooner the better.

But strong interest in a vaccine raises the worry of how this influences what we believe, and in particular how we examine evidence that doesn’t fit our hopes. The worry is that one might indulge in self-deception. What do I mean by this? Let me give you an example that clarifies what I have in mind.

Last week, I was talking to my best friend, who is a doctor and, as such, typically defers to experts. When my partner and I told my friend of our intention to get married, she reacted enthusiastically. Unfortunately, the happiness of the moment was interrupted by the realization that, due to the current coronavirus pandemic, the wedding would need to take place after the distribution of a vaccine. Since then, my friend has repeatedly assured me that there will be a vaccine as early as October on the grounds that Donald Trump has guaranteed it will be the case. When I relayed to her information coming instead from Dr. Anthony Fauci, who believes the vaccine will be available only in 2021, my friend embarked on effortful mental gymnastics to justify (or better: rationalize) why Trump was actually right.

There is an expression commonly used in Italian called “mirror climbing.” Climbing a mirror is an obviously effortful activity, and it is also bound to fail because the mirror’s slippery surface makes it easy to fall. Italians use the expression metaphorically to denote the struggle of someone attempting to justify a proposition that by their own lights is not justifiable. My friend was certainly guilty of some mirror climbing, and she is a clear example of someone who, driven by the strong desire to see her best friend getting married, self-deceives that the vaccine will be available in October. This is in fact how self-deception works. People don’t simply believe what they want, for that is psychologically impossible. You couldn’t possibly make yourself believe that the moon was made of cheese, even if you wanted to. Beliefs are just not directly controllable the way actions are. Rather, it is our wishes, desires, and interests that influence the way we come to believe what we want by shaping how we gather and interpret evidence. We might, for example, give more importance to news that aligns with our hopes and scroll past headlines that question what we would like to be true. We might give weight to a teaspoon of evidence coming from a source we wouldn’t normally trust, and lend credibility to evidence that we know is not relevant.

You might ask, though: how is my friend’s behavior different from that of someone who is simply wrong rather than self-deceived? Holding a belief that turns out to be false usually happens by mistake, and as a result, when people correct us, we don’t have problems revising that belief. Self-deception, instead, doesn’t happen out of mere error; it is driven by a precise motivation — desires, hopes, fears, worries, and so on — which biases the way we collect and interpret evidence in favor of that belief. Consider my friend again. She is a doctor, and as such she always trusts experts. Now, regardless of political views, Trump, unlike Dr. Fauci, is not an expert in medicine. Normally, my friend knows better than to trust someone who is not an expert, yet the one instance where she doesn’t is one where a real interest is at stake. This isn’t a coincidence; the belief that there will be a vaccine in October is fueled by a precise hope. This is a problem because our beliefs should be guided by evidence, not wishes. Beliefs, so to speak, are not designed to make us feel better (contrary to desires, for example). They are supposed to match reality, and as such be a tool that we use to navigate our environment. Deceiving ourselves that something is the case when it’s not inevitably leads to disappointment, because reality has a way of intruding on our hopes and catching up with us.

Given this, what can we do to prevent falling into the grip of self-deception? Be vigilant. We are often aware of our wishes and hopes (just as you are probably aware now that you’re hoping a vaccine will be released soon). Once we are aware of our motivational states, we should slow down our thinking and be extra careful when considering evidence in favor of what we hope is true. This is the first step in protecting ourselves from self-deception.

Causality and the Coronavirus

image of map of US displayed as multi-colored bar graph

“Causality” is a difficult concept, yet beliefs about causes are often consequential. A troubling illustration of this is the claim, which is being widely shared on social media, that the coronavirus is not particularly lethal, as only 6% of the 190,000+ deaths attributed to the virus are “caused” by the disease.

We tend to think of causes in too-simplistic terms

Of all of the biases and limitations of human reasoning, our tendency to simplify causes is arguably one of the most fundamental. Consider the hypothetical case of a plane crash in Somalia in 2018. We might accept as plausible causes things such as the pilot’s lack of experience (say it was her first solo flight), the (old) age of the plane, the (stormy) weather, and/or Somalia’s then-status as a failed state, with poor infrastructure and, perhaps, an inadequate air traffic control system.

For most, if not all, phenomena that unfold at a human scale, a multiplicity of “causes” can be identified. This includes, for example, social stories of love and friendship and political events such as wars and contested elections.1

Causation in medicine

Causal explanations in medicine are similarly complex. Indeed, the CDC explicitly notes that causes of death are medical opinions. These opinions are likely to include not only an immediate cause (“final disease or condition resulting in death”), but also an underlying cause (“disease or injury that initiated the events resulting in death”), as well as other significant conditions which may or may not be judged to have contributed to the death.
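To picture the layered opinion the CDC describes, here is a toy record in Python; the field names and values are illustrative assumptions, not the CDC’s actual certificate schema:

```python
# One hypothetical certificate: the immediate cause is the final condition,
# the underlying cause is the disease that set events in motion, and other
# conditions may be listed as contributing factors.
death_certificate = {
    "immediate_cause": "acute respiratory distress syndrome",
    "underlying_cause": "COVID-19",
    "contributing_conditions": ["type 2 diabetes", "hypertension"],
}
```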

In any given case, the opinions expressed on the death certificate might be called into question. Even though these opinions are typically based on years of clinical experience and medical study, they are limited by medical uncertainty and, like all human judgments, human fallibility.

When should COVID count as a cause?

Although the validity of any individual diagnosis might be called into question, aggregate trends are less equivocal. Consider this graph from the CDC, which identifies the number of actual deaths not attributed to COVID-19 (green), additional deaths which have been attributed to COVID-19 (blue), and the upper bound of the expected number of deaths based on historical data (orange trend line). Above the blue lines there are pluses indicating weeks in which the total number of deaths (including COVID) exceeds the expected number by a statistically significant margin. This has been true for every week since March 28. In addition, there are pluses above the green lines indicating weeks where the number of deaths excluding COVID was significantly greater than expected. This is true for each of the last eight weeks (ignoring correlated error, we would expect such a finding fewer than one in a million times by chance). This indicates that the number of deaths due to COVID in America has been underreported, not overreported.
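To get a feel for that “one in a million” figure, here is a back-of-the-envelope check in Python. It assumes each week’s flag is an independent test at a 5% significance level; both the threshold and the independence are simplifying assumptions (the text itself brackets correlated error), not the CDC’s actual methodology.

```python
alpha = 0.05                  # assumed per-week false-positive rate under the null
weeks = 8                     # consecutive weeks flagged as significant
p_by_chance = alpha ** weeks  # probability all eight flags arise by chance alone
print(f"{p_by_chance:.1e}")   # ~3.9e-11, far below one in a million
```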

Among the likely causes for these ‘non-COVID’ excess deaths, we can point, particularly early in the pandemic, to a lack of familiarity with, and testing for, the virus among medical professionals. As the pandemic unfolded, it is likely that additional deaths can be attributed, in part, to indirect causal relationships such as people delaying needed visits to doctors and hospitals out of fear, and to the social, psychological, and economic consequences that have accompanied COVID in America. Regardless, the bottom line is clear: without COVID-19, over two hundred thousand other Americans would still be alive today. The pandemic has illuminated, tragically, our interconnectedness and with it our responsibilities to each other. One part of this responsibility is to deprive the virus of the opportunity to spread by wearing masks and socially distancing. But this is not enough: we need to stop the spread of misinformation as well.

 

1 Some argue that we can think of individual putative causes as “individually unnecessary” but as “jointly sufficient.” In the 2000 US Presidential Election, for example, consider the presence of Ralph Nader on the ballot, delays in counting the vote in some jurisdictions, the Monica Lewinsky scandal, and other phenomena such as the “butterfly ballot” in Palm Beach County, Florida. Each of these might have been unnecessary to lead the election to be called for G.W. Bush, but they were jointly sufficient to do so.

The Dangerous Allure of Conspiracy Theories

photograph of QAnon sign at rally

Once again, the world is on fire. Every day seems to bring a new catastrophe, another phase of a slowly unfolding apocalypse. We naturally intuit that spontaneous combustion is impossible, so a sinister individual (or a sinister group of individuals) must be responsible for the presence of evil in the world. Some speculate that the most recent bout of wildfires in California was ignited by a giant laser (though no one can agree on who fired the laser in the first place), while others across the globe set 5G towers ablaze out of fear that this frightening new technology was created by a malevolent organization to hasten the spread of coronavirus. Events as disparate as the recent explosion in Beirut and the rise in income inequality have been subsumed into a vast web of conspiracy and intrigue. Conspiracy theorists see themselves as crusaders against the arsonists at the very pinnacle of society, and are taking to internet forums to demand retribution for perceived wrongs.

The conspiracy theorists’ framework for making sense of the world is a dangerously attractive one. Despite mainstream disdain for nutjobs in tinfoil hats, conspiracy theories (and those who unravel them) have been glamorized in pop culture through films like The Matrix and The Da Vinci Code, both of which involve a single individual unraveling the lies perpetuated by a malevolent but often invisible cadre of villains. Real-life conspiracy theorists also model themselves after the archetypal detective of popular crime fiction. This character possesses authority to sort truth from untruth, often in the face of hostility or danger, and acts as an agent for the common good.

But in many ways, the conspiracy theorist is the inverse of the detective; the latter operates within the system of legality, often working directly for the powers-that-be, which requires an implicit trust in authority. They usually hunt down someone who has broken the law, and who is therefore on the fringes of the system. Furthermore, the detective gathers empirical evidence which forms the justification for their pursuit. The conspiracy theorist, on the other hand, is on the outside looking in, and displays a consistent mistrust of both the state and the press as sources of truth. Though conspiracy theorists ostensibly obsess over paper trails and blurry photographs, their evidence (which is almost always misconstrued or fabricated) doesn’t matter nearly as much as the conclusion. As Michael Barkun explains in A Culture of Conspiracy: Apocalyptic Visions in Contemporary America,

the more sweeping a conspiracy theory’s claims, the less relevant evidence becomes …. This paradox occurs because conspiracy theories are at their heart nonfalsifiable. No matter how much evidence their adherents accumulate, belief in a conspiracy theory ultimately becomes a matter of faith rather than proof.

In that sense, most conspiracy theorists are less concerned with uncovering the truth than with confirming what they already believe. This is supported by a 2016 study, which identifies partisanship as a crucial factor in measuring how likely someone is to buy into conspiracy theories. The researchers determined that “political socialization and psychological traits are likely the most important influences” on whether or not someone will find themselves watching documentaries on ancient aliens or writing lengthy Facebook posts about lizard people masquerading as world leaders. For example, “Republicans are the most likely to believe in the media conspiracy followed by Independents and Democrats. This is because Republicans have for decades been told by their elites that the media are biased and potentially corrupt.” The study concludes that people from both ends of the political spectrum can be predisposed to see a conspiracy where there isn’t one, but partisanship is ultimately a more important predictor of whether a person will believe a specific theory than any other factor. In other words, Democrats rarely buy into conspiracy theories about their own party, and vice versa with Republicans. The enemy is never one of us.

It’s no wonder the tinfoil-hat mindset is so addictive. It’s like being in a hall of mirrors, where all you can see is your own flattering image repeated endlessly. Michael J. Wood suggests in another 2016 study that “people who are aware of past malfeasance by powerful actors in society might extrapolate from known abuses of power to more speculative ones,” or that “people with more conspiracist world views might be more likely to seek out information on criminal acts carried out by officials in the past, while those with less conspiracist world views might ignore or reject such information.” It’s a self-fulfilling prophecy, fed by a sense of predetermined mistrust that is only confirmed by every photoshopped UFO. Conspiracy theories can be easily adapted to suit our own personal needs, which further fuels the narcissism. As one recent study on a conspiracy theory involving Bill Gates, coronavirus, and satanic cults points out,

there’s never just one version of a conspiracy theory — and that’s part of their power and reach. Often, there are as many variants on a given conspiracy theory as there are theorists, if not more. Each individual can shape and reshape whatever version of the theory they choose to believe, incorporating some narrative elements and rejecting others.

This mutable quality makes conspiracy theories personal, as easily integrated into our sense of self as any hobby or lifestyle choice. Even worse, the very nature of social media amplifies the potency of conspiracy theories. The study explains that

where conspiracists are the most engaged users on a given niche topic or search term, they both generate content and effectively train recommendation algorithms to recommend the conspiracy theory to other users. This means that, when there’s a rush of interest, as precipitated in this case by the Covid-19 crisis, large numbers of users may be driven towards pre-existing conspiratorial content and narratives.

The more people fear something, the more likely an algorithm will be to offer them palliative conspiracy theories, and the echo chamber grows even more.
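The feedback loop the study describes can be made concrete with a toy simulation. The sketch below is an invented illustration, not any platform’s actual recommender: the item names, engagement weights, and update rule are all assumptions made for the example.

```python
# A toy engagement-driven recommender, sketching the feedback loop the
# study describes. Items, scores, and the update rule are invented.
import random

def recommend(scores):
    """Pick an item with probability proportional to its engagement score."""
    total = sum(scores.values())
    r = random.uniform(0, total)
    for item, score in scores.items():
        r -= score
        if r <= 0:
            return item
    return item  # fallback for floating-point edge cases

def simulate(rounds=10_000, niche_engagement=3.0):
    # All items start equal; assume conspiracy content has the most
    # engaged niche audience, so interactions with it count for more.
    scores = {"gardening tips": 1.0, "local news": 1.0, "conspiracy video": 1.0}
    for _ in range(rounds):
        shown = recommend(scores)
        boost = niche_engagement if shown == "conspiracy video" else 1.0
        scores[shown] += 0.01 * boost  # engagement feeds back into ranking
    return scores

print(simulate())  # the most-engaged-with item comes to dominate the scores
```

The numbers are arbitrary; the structure is the point. Content weighted by past engagement gets recommended more, earns more engagement, and the loop closes, which is how a rush of fearful interest can be steered toward pre-existing conspiratorial content.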

Both of the studies previously mentioned suggest that there is a predisposition to believe in conspiracy theories that transcends political alliance, but where does that predisposition come from? It seems most likely that conspiracy beliefs are driven by anxiety, paranoia, feelings of powerlessness, and a desire for authority. A desire for authority is especially evident at gatherings of flat-earthers, a group that consistently mimics the tone and language of academic conferences. Conspiracies rely on what Barkun called “stigmatized knowledge,” or “claims to truth that the claimants regard as verified despite the marginalization of those claims by the institutions that conventionally distinguish between knowledge and error — universities, communities of scientific researchers, and the like.” People feel cut off from the traditional locus of knowledge, so they create their own alternative epistemology, which restores their sense of authority and control.

Conspiracy theories are also rooted in a basic desire for narrative structure. Faced with a bewildering deluge of competing and fragmentary narratives, conspiracy theories cobble together half-truths and outright lies into a story that is more coherent and exciting than reality. The conspiracy theories that attempt to explain coronavirus provide a good example of this process. The first stirrings of the virus began in the winter of 2019, then rapidly accelerated without warning and altered the global landscape seemingly overnight. Our healthcare system and government failed to respond with any measure of success, and hundreds of thousands of Americans died over the span of a few months. The reality of the situation flies in the face of narrative structure — the familiar rhythm of rising action-climax-falling action, the cast of identifiable good guys and bad guys, the ultimate moral victory that redeems needless suffering by giving it purpose. Faced with this dearth of narrative structure, theorists suggest that Bill Gates planned the virus decades ago, citing his charity work as an elaborate cover-up for nefarious misdeeds. In this telling, the system itself isn’t broken or left unequipped by austerity; the catastrophe is instead the work of a single bad actor.

Terrible events are no longer random, but imbued with moral and narrative significance. Michael Barkun argues that this is a comfort, but also a factor that further drives conspiracy theories:

the conspiracy theorist’s view is both frightening and reassuring. It is frightening because it magnifies the power of evil, leading in some cases to an outright dualism in which light and darkness struggle for cosmic supremacy. At the same time, however, it is reassuring, for it promises a world that is meaningful rather than arbitrary. Not only are events nonrandom, but the clear identification of evil gives the conspiracist a definable enemy against which to struggle, endowing life with purpose.

Groups of outsiders (wealthy Jewish people, the “liberal elite,” the immigrant) are Othered within the discourse of theorists, rendered as villains capable of superhuman feats. The QAnon theory in particular feels more like the Marvel cinematic universe than a coherent ideology, with its bloated cast of heroes teaming up for an Avengers-style takedown of the bad guys. Some of our best impulses — our love of storytelling, a desire to see through the lies of the powerful — are twisted and made ugly in the world of online conspiracy forums.

The prominence of conspiracy theories in political discourse must be addressed. Over 70 self-professed Q supporters have run for Congress as Republicans in the past year, and as Kaitlyn Tiffany points out in an article for The Atlantic, the QAnon movement is becoming gradually more mainstream, borrowing aesthetics from the lifestyle movement and makeup tutorials to make itself more palatable. “Its supporters are so enthusiastic, and so active online, that their participation levels resemble stan Twitter more than they do any typical political movement. QAnon has its own merch, its own microcelebrities, and a spirit of digital evangelism that requires constant posting.” Perhaps the most frightening part of this problem is the impossibility of fully addressing it, because conspiracy theorists are notoriously difficult to hold a good-faith dialogue with. Sartre’s description of anti-Semites written in the 1940s (not coincidentally, the majority of contemporary conspiracy theories are deeply anti-Semitic) is relevant here. He wrote that anti-Semites (and today, conspiracy theorists)

know that their statements are empty and contestable; but it amuses them to make such statements: it is their adversary whose duty it is to choose his words seriously because he believes in words. They have a right to play. They even like to play with speech because by putting forth ridiculous reasons, they discredit the seriousness of their interlocutor; they are enchanted with their unfairness because for them it is not a question of persuading by good arguing but of intimidating or disorienting.

This quote raises the frightening possibility that not all conspiracy theorists truly believe what they say, that their disinterest in evidence is less an intellectual blindspot than a source of amusement. Sartre helps us see why conspiracy theories often operate on a completely different wavelength, one that seems to preclude logic, rationality, and even the good-faith exchange of ideas between equals.

The fragmentation of postmodern culture has created an epistemic conundrum: on what basis do we understand reality? As the operations of governments become increasingly inscrutable to those without education, as the concept of truth itself seems under attack, how do we make sense of the forces that determine the contours of our lives? Furthermore, as Wood points out, mistrust in the government isn’t always baseless, so how do we determine which threats are real and which are imagined?

There aren’t simple answers to these questions. The only thing we can do is address the needs that inspire people to seek out conspiracy theories in the first place. People have always had an impulse to attack their anxieties in the form of a constructed Other, to close themselves off, to distrust difference, to force the world to conform to a single master narrative, so it’s tempting to say that there will never be a way to eradicate insidious conspiracy theories entirely. Maybe the solution is to encourage the pursuit of self-knowledge, of our own biases and desires, before we pursue an understanding of forces beyond our control.

On “Doing Your Own Research”

photograph of army reserve personnel wearing neck gaiter at covid testing site

In early August, American news outlets began to circulate a surprising headline: neck gaiters — a popular form of face covering used by many to help prevent the spread of COVID-19 — could reportedly increase the infection rate. In general, face masks work by catching respiratory droplets that would otherwise contaminate a virus-carrier’s immediate environment (in much the same way that traditional manners have long prescribed covering your mouth when you sneeze); however, according to the initial report by CBS News, a new study found that the stretchy fabric typically used to make neck gaiters might actually work like a sieve to turn large droplets into smaller, more transmissible ones. Instead of helping to keep people safe from the coronavirus, gaiters might even “be worse than no mask at all.”

The immediate problem with this headline is that it’s not true; but, more generally, the way that this story developed evidences several larger problems for anyone hoping to learn things from the internet.

The neck gaiter story began on August 7th when the journal Science Advances published new research on a measurement test for face mask efficacy. Interested by the widespread use of homemade face-coverings, a team of researchers from Duke University set out to identify an easy, inexpensive method that people could use at home with their cell phones to roughly assess how effective different commonly-available materials might be at blocking respiratory droplets. Importantly, the study was not about the overall efficacy rates of any particular mask, nor was it focused on the length of time that respiratory droplets emitted by mask-wearers stayed in the air (which is why smaller droplets could potentially be more infectious than larger ones); the study was only designed to assess the viability of the cell phone test itself. The observation that the single brand of neck gaiter used in the experiment might be “counterproductive” was an off-hand, untested suggestion in the final paragraph of the study’s “Results” section. Nevertheless, the dramatic-sounding (though misleading) headline exploded across the pages of the internet for weeks; as recently as August 20th, The Today Show was still presenting the untested “result” of the study as if it were a scientific fact.

The ethics of science journalism (and the problems that can arise from sensationalizing and misreporting the results of scientific studies) is a growing concern, but it is particularly salient when the reporting in question pertains to an ongoing global pandemic. While it might be unsurprising that news sites hungry for clicks ran a salacious-though-inaccurate headline, it is far from helpful and, arguably, morally wrong.

Furthermore, the kind of epistemic malpractice entailed by underdeveloped science journalism poses larger concerns for the possibility of credible online investigation more broadly. Although we have surrounded ourselves with technology that allows us to access the internet (and the vast amount of information it contains), it is becoming ever-more difficult to filter out genuinely trustworthy material from the melodramatic noise of websites designed more for attracting attention than disseminating knowledge. As Kenneth Boyd described in an article here last year, the algorithmic underpinnings of internet search engines can lead self-directed researchers into all manner of over-confident mistaken beliefs; this kind of structural issue is only exacerbated when the inputs to those algorithms (the articles and websites themselves) are also problematic.

These sorts of issues cast an important, cautionary light on a growing phenomenon: the credo that one must “Do Your Own Research” in order to be epistemically responsible. Whereas it might initially seem plain that the internet’s easily-accessible informational treasure trove would empower auto-didacts to always (or usually) draw reasonable conclusions about whatever they set their minds to study, the epistemic murkiness of what can actually be found online suggests that reality is more complicated. It is not at all clear that non-expert researchers who are ignorant of a topic can, on their own, justifiably identify trustworthy information (or information sources) about that topic; but, on the other hand, if a researcher does have enough knowledge to judge a claim’s accuracy, then it seems like they don’t need to be researching the topic to begin with!

This is a rough approximation of what philosophers sometimes call “Meno’s Paradox” after its presentation in the Platonic dialogue of that name. The Meno discusses how inquiry works and highlights that uninformed inquirers have no clear way to recognize the correct answer to a question without already knowing something about what they are questioning. While Plato goes on to spin this line of thinking into a creative argument for the innateness of all knowledge (and, by extension, the immortality of the soul!), subsequent thinkers have often taken different approaches to argue that a researcher only needs to have partial knowledge either of the claim they are researching or of the source of the claim they are choosing to trust in order to come to justified conclusions.

Unfortunately, “partial knowledge” solutions have problems of their own. On one hand, human susceptibility to a bevy of psychological biases makes a researcher’s “partial” understanding of a topic a risky foundation for subsequent knowledge claims; it is exceedingly easy, for example, for the person “doing their own research” to be unwittingly led astray by their unconscious prejudices, preconceptions, or the pressures of their social environment. On the other hand, grounding one’s confidence in a testimonial claim on the trustworthiness of the claim’s source seems to (in most cases) simply push the justification problem back a step without really solving much: in much the same way that a non-expert cannot make a reasonable judgment about a proposition, that same non-expert also can’t, all by themselves, determine who can make such a judgment.

So, what can the epistemically responsible person do online?

First, we must cultivate an attitude of epistemic humility (of the sort summarized by Socrates’ famous remark “I know that I know nothing”) — something which often requires us to admit not only that we don’t know things, but that we often can’t know things without the help of teachers or other subject matter experts doing the important work of filtering the bad sources of information away from the good ones. All too often, “doing your own research” functionally reduces to a triggering of the confirmation bias and lasts only as long as it takes to find a few posts or videos that satisfy what a person was already thinking in the first place (regardless of whether those posts/videos are themselves worthy of being believed). If we instead work to remember our own intellectual limitations, both about specific subjects and the process of inquiry writ large, we can develop a welcoming attitude to the epistemic assistance offered by others.

Secondly, we must maintain an attitude of suspicion about bold claims to knowledge, especially in an environment like the internet. It is a small step from skepticism about our own capacities for inquiry and understanding to skepticism about those of others, particularly when we have plenty of independent evidence that many of the most accessible or popular voices online are motivated by concerns other than the truth. Virtuous researchers have to focus on identifying and cultivating relationships with knowledgeable guides (who can range from individuals to their writings to the institutions they create) on whom they can rely when it comes time to ask questions.

Together, these two points lead to a third: we must be patient researchers. Developing epistemic virtues like humility and cultivating relationships with experts that can overcome rational skepticism — in short, creating an intellectually vibrant community — takes a considerable amount of effort and time. After a while, we can come to recognize trustworthy informational authorities as “the ones who tend to be right, more often than not” even if we ourselves have little understanding of the technical fields of those experts.

It’s worth noting here, too, that experts can sometimes be wrong and nevertheless still be experts! Even specialists continue to learn and grow in their own understanding of their chosen fields; this sometimes produces confident assertions from experts that later turn out to be wrong. So, for example, when the Surgeon General urged people in February to not wear face masks in public (based on then-current assumptions about the purportedly low risk of asymptomatic patients) it made sense at the time; the fact that those assumptions later proved to be false (at which point the medical community, including the epistemically humble Surgeon General, then recommended widespread face mask usage) is simply a demonstration of the learning/research process at work. On the flip side, choosing to still cite the outdated February recommendation simply because you disagree with face mask mandates in August exemplifies a lack of epistemic virtue.

Put differently, briefly using a search engine to find a simple answer to a complex question is not “doing your own research” because it’s not research. Research is somewhere between an academic technique and a vocational aspiration: it’s a practice that can be done with varying degrees of competence and it takes training to develop the skill to do it well. On this view, an “expert” is simply someone who has become particularly good at this art. Education, then, is not simply a matter of “memorizing facts,” but rather a training regimen in performing the project of inquiry within a field. This is not easy, requires practice, and still often goes badly when done in isolation — which is why academic researchers rely so heavily on their peers to review, critique, and verify their discoveries and ideas before assigning them institutional confidence. Unfortunately, this complicated process is far less sexy (and far slower) than a scandalous-sounding daily headline that oversimplifies data into an attractive turn of phrase.

So, poorly-communicated science journalism undermines our epistemic community not only by directly misinforming readers, but also by perpetuating the fiction that anyone is an epistemic island unto themselves. Good reporting must work to contextualize information within broader conversations (and, of course, get the information right in the first place).

Please don’t misunderstand me: this isn’t meant to be some elitist screed about how “only the learned can truly know stuff, therefore smart people with fancy degrees (or something) are best.” If degrees are useful credentials at all (a debatable topic for a different article!), they are so primarily as proof that a person has put in considerable practice to become a good (and trustworthy) researcher. Nevertheless, the Meno Paradox and the dangers of cognitive biases remain problems for all humans, and we need each other to work together to overcome our epistemic limitations. In short: we would all benefit from a flourishing epistemic community.

And if we have to sacrifice a few splashy headlines to get there, so much the better.

Clifford and the Coronavirus

photograph of empty ship helm

In 1877, mathematician and philosopher W.K. Clifford published a classic essay entitled “The Ethics of Belief.” In it, he asks us to consider a case involving a negligent shipowner:

“A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not overwell built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind, and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such ways he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.”

Clifford then asks: what should we think of the shipowner? The answer, he thinks, is obvious: he is responsible for the death of the passengers. This is because he had all the evidence before him that his ship needed repairs and really wasn’t very safe, and instead of forming his beliefs in accordance with the evidence, he stifled his doubts and believed what he wanted.

As far as philosophical thought experiments go, Clifford’s case is easy to imagine happening in real life. In fact, there have recently been a number of real-life nautical disasters, although instead of ships sinking, they involve coronavirus outbreaks, the most recent being a Norwegian cruise ship that reported a number of coronavirus cases among crew and passengers earlier in August. In response to the incident, the CEO of the company owning the cruise line stated that “We have made mistakes” and that the outbreak was ultimately the product of a failure of several “internal procedures.” Indeed, the cruise line’s website states that they followed all the relevant guidelines from the Norwegian Institute for Public Health, implemented measures to encourage social distancing and good hygiene, and set sail with only 50% capacity. Despite these measures, though, people still got sick. This is not an isolated event: numerous businesses worldwide — that have adhered to government and other reopening guidelines — have seen spikes in cases of coronavirus among staff and customers.

In introducing his case, Clifford argued that what the shipowner did wrong was to form a belief on insufficient evidence. And it is easy enough to agree with Clifford’s diagnosis when it comes to such egregious belief-forming behavior as he describes. However, real-life cases are typically more subtle. Cases like the Norwegian cruise ship and other businesses that have experienced problematic reopenings should then lead us to question how much evidence is good enough when it comes to making the decision to reopen one’s business, and whom we should find deserving of blame when things don’t work out.

To be fair, there are certainly differences between Clifford’s case and the case of the Norwegian cruise ship: there is no reason to think, for instance, that anyone in charge of the latter actively stifled doubts they knew to be significant. But there are also similarities, in that the evidence that cruise ships are generally not safe places to be right now is abundant and readily available. Even if one adheres to relevant health guidelines, we might wonder whether that is really good enough given what other evidence is available.

We might also wonder who is ultimately to blame. For instance, if guidelines concerning the re-opening of businesses that are provided by a relevant health agency turn out to be inadequate, perhaps the blame should fall on those in charge of the guidelines themselves, and not those who followed them. There have, after all, been a number of countries that have reinstated stricter conditions on the operation of businesses after initially relaxing them in response to increases in new infections, Norway recently among them. When cases of coronavirus increased as a result of businesses being allowed to reopen, we might then put the blame on the government as opposed to the business owners themselves.

Clifford also makes an additional, more controversial argument that he illustrates in a second example:

“Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot. When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out. The question of right or wrong has to do with the origin of his belief, not the matter of it; not what it was, but how he got it; not whether it turned out to be true or false, but whether he had a right to believe on such evidence as was before him.”

Using this second case, Clifford argues that whether things turn out okay or not really isn’t important for determining whether someone has done something wrong: even if everyone on the ship made it safely the shipowner would still be guilty, he just got lucky that everyone survived. While we might think that Clifford is being harsh in his judgment, we might also wonder whether other businesses that have re-opened early in the face of some evidence that doing so may still be dangerous should be considered blameworthy, as well, regardless of the consequences.

Principles, Pragmatics, and Pragmatic Principles

close-up photograph of horse with blinders

In this post, I want to talk about a certain sort of neglected moral hypocrisy that I have noticed is common in my own moral thinking and that, I expect, is common in most of yours. And to illustrate this hypocrisy, I want to look carefully at the hypocritical application of democratic principles, and conclude by discussing President Trump’s recent tweet about delaying the election.

First, however, I want to introduce a distinction between two types of hypocrisy: overt and subtle hypocrisy. Overt hypocrisy occurs when you, in full awareness of the double standard, differentially apply a principle to relevantly similar cases. It is easy to find examples. One is Mitch McConnell’s claim that he would confirm a new Supreme Court Justice right before an election after blocking the confirmation of President Obama’s nominee Merrick Garland because of how close the nation was to a presidential election. It is clear that Senator McConnell knows he is applying the democratic principle inconsistently; he simply does not think politics is about principles but about promoting his political agenda.

Subtle hypocrisy, in contrast, occurs when you inconsistently apply your principles but you do not realize you are applying them inconsistently. Names aside, a lot of subtle hypocrisy, while it is hard to recognize in the moment, is pretty clear upon reflection. We tend to only notice our principles are at play in some contexts and not others. We are more likely to notice curtailments of free speech when it happens to those who say similar things to ourselves. We are much more likely to notice when we are harmed by inequitable treatment than when we are benefited by it.

We are especially likely to hypocritically apply our principles when we begin to consider purported reasons given for various policies. If the Supreme Court issues a decision I agree with, chances are good that I won’t actually go and check the majority reasoning to see if I think it’s sound. Rather, I’m content with the win and trust the court’s decision. In contrast, if the Court issues a decision I find dubious, I often do look up the reasoning and, if I think it is inadequate, will openly criticize the decision.

Why is this sort of hypocrisy so common? Because violations of our principles don’t always jump out at us. Often you won’t notice a principle is at stake unless you carefully deliberate about the question. Yet, we don’t just preemptively deliberate about every action in light of every principle we hold. Rather, something needs to incline us to deliberate. Something needs to prompt us to begin to morally reflect on an action, and, according to an awful lot of psychological research, it is our biases and unreflective intuitions that prompt our tendency to reason (see Part I of Jonathan Haidt’s The Righteous Mind). Because we are more likely to go looking for ethical problems in the behavior of our political enemies, we are much more likely to notice when actions we instinctively oppose violate our principles, and are unlikely to notice the same when considering actions we instinctively support.

I can, of course, provide an example of personal hypocrisy in my application of democratic principles against disenfranchisement. When conservative policymakers started trying to pass voter ID laws I was suspicious, I did my research, and I condemned these laws as democratically discriminatory. In contrast, when more liberal states gestured at moving towards mail-only voting to deal with COVID I just assumed it was fine. I never did any research, and it was just by luck that a podcast informed me that mail-only voting can differentially disenfranchise both liberal voting blocs (like Black Americans) and conservative voting blocs (like older rural voters). Thus, but for luck and given my own political proclivities, my commitment to democratic principles would have been applied hypocritically to condemn only policies floated by conservative lawmakers.

This subtle hypocrisy is extraordinarily troubling because, while we can recognize it once it is pointed out, it is incredibly difficult to notice in the moment. This is one of the reasons it is important to hear from ideologically diverse perspectives, and to engage in regular and brutal self-criticism.

But while subtle hypocrisy is difficult to see, I think there is another sort of hypocrisy which is even more difficult to notice. To see it, it will be useful if we take a brief digression and try to figure out what exactly is undemocratic about President Trump’s proposal to delay the election. I, like many of you, find it outrageous that President Donald Trump would even suggest delaying the election due to the COVID crisis. Partly this is because I believe President Trump is acting in bad faith, tweeting not because he wants to delay the election but because he wants to preemptively delegitimize it, or perhaps because he wants to distract the media from otherwise damning stories about COVID-19 and the economy.

But a larger part of me thinks it would be outrageous even if President Trump were acting in good faith, and that is because delaying an election is in tension with core democratic principles. Now, you might think delaying the election is undemocratic because regular elections are the means by which a government is held democratically accountable to its citizens (this is the sort of argument I hear most people making). Thus, if the current government is empowered to delay an election, it might enable the government to, at least for a time, escape democratic accountability. Of course, this is not a real worry in the U.S. context. Even were the U.S. Congress to delay the election, it would not change when President Trump is removed from office. His term ends January 20th whether or not a new President has been elected. If no one has been elected, then either the Speaker of the House or the President pro tempore of the Senate takes over (and I am eagerly awaiting whatever new TV show in the spring decides to run with that counterfactual).

But there is a different principled democratic concern at stake. Suppose a political party, while in control of Congress, were to delay an election whenever polls looked particularly unpromising. This would be troublingly undemocratic because while Congress would have to hold the election at some point before January 3rd, it could also wait until the moment when the party currently in power seems to have the largest comparative advantage. But just as gerrymandering is undemocratic because it allows those currently in power to employ their political power to secure an advantage in upcoming elections, so too is this strategy of delaying elections for partisan reasons.

But what if Congress really were acting in good faith? Would that mean it could be democratic to delay the election? Perhaps. If you were confident you were acting on entirely non-partisan reasons, then delaying in such contexts is just as likely to harm your chances as to help them. And indeed, I could imagine disasters so serious as to justify delaying an election.

However, I think in general there are pragmatic reasons to stick to the democratic principles even when we are acting on entirely non-partisan reasons. First, it can be difficult to verify that reasons are entirely non-partisan. It can be hard to know the intention of Senators, and sometimes it can even be hard to know our own intentions.

Second, and I think more profoundly, there is a concern that we will tend to inequitably notice non-partisan reasons. Take the Brexit referendum. When I first saw some of the chaos that happened following the Brexit vote, I began to seriously consider if the UK should just hold a second referendum. After all, I thought, and still think, there were clear principled democratic issues with the election (for example, there seemed to be a systematic spread of misinformation).

The problem of course is that had the Brexit vote gone the other way, then I almost certainly would never have looked into the election, and so never noticed anything democratically troubling about the result. My partisan thoughts about Brexit influence what non-partisan reasons for redoing the election I ended up noticing. To call for redoing an election is surely at least as undemocratic as calling for delaying an election (indeed, I expect it is quite a bit more undemocratic, since it actually gives one side two chances at winning), and yet I almost instantly condemn the call to delay an election and it took me ages to see the democratic issues with redoing the Brexit vote.

Here, it is not that I was hypocritically applying a democratic principle. Rather, I was missing a democratic principle I should have already had given my tendency to hypocrisy. Because partisan preferences influence what non-partisan reasons I notice, I should have adopted a pragmatic principle against calling for reelections following results with which I disagreed. Not because reelections are themselves undemocratic (just as delaying an election might not itself be undemocratic), but because as a human reasoner, I cannot always trust my own even non-partisan reasoning and so should sometimes blinker it with pragmatic principles.

The Small but Unsettling Voice of the Expert Skeptic

photograph of someone casting off face mask

Experts and politicians worldwide have come to grips with the magnitude of the COVID-19 pandemic. Even Donald Trump, once skeptical that COVID-19 would affect the US in a significant way, now admits that the virus will likely take many more thousands of lives.

Despite this agreement, some are still not convinced. Skeptics claim that deaths that are reported as being caused by COVID-19 are really deaths that would have happened anyway, thereby artificially inflating the death toll. They claim that the CDC is complicit, telling doctors to document a death as “COVID-related” even when they aren’t sure. They highlight failures of world leaders like the Director-General of the World Health Organization and political corruption in China. They claim that talk of hospitals being “war zones” is media hype, and they share videos of “peaceful” local hospitals from places that aren’t hot spots, like Louisville or Tallahassee. They point to elaborate conspiracies about the nefarious origins of the novel coronavirus.

What’s the aim of this strikingly implausible, multi-national conspiracy, according to these “COVID-truthers”? Billions of dollars for pharmaceutical companies and votes for tyrannical politicians who want to look like benevolent saviors.

Expert skeptics like COVID-truthers are concerning because they are more likely to put themselves, their families, and their communities at risk by not physically distancing or wearing masks. They are more likely to violate stay-at-home orders and press politicians to re-open commerce before it is safe. And they pass this faulty reasoning on to their children.

While expert skepticism is not new, it is unsettling because expert skepticism often has a kernel of truth. Experts regularly disagree, especially in high-impact domains like medicine. Some experts give advice outside their fields (what Nathan Ballantyne calls “epistemic trespassing”). Some experts have conflicts of interest that lead to research fraud. And some people—seemingly miraculously—defy expert prediction, for example, by surviving a life-threatening illness.

If all this is right, shouldn’t everyone be skeptical of experts?

In reality, most non-experts do okay deciding who is trustworthy and when. This is because we understand—at least in broad strokes—how expertise works. Experts disagree over some issues, but, in time, their judgments tend to converge. Some people do defy expert expectations, but these usually fall within the scope of uncertainty. For example, about 1 in 100,000 cancers go into spontaneous remission. Further, we can often tell who is in a good position to help us. In the case of lawyers, contractors, and accountants, we can find out their credentials, how long they’ve been practicing, and their specialties. We can even learn about their work from online reviews or friends who have used them.

Of course, in these cases, the stakes are usually low. If it turns out that we trusted the wrong person, we might be able to sue for damages or accept the consequences and try harder next time. But as our need for experts gets more complicated, figuring out who is trustworthy is harder. For instance, questions about COVID-19 are:

  • New (Experts struggle to get good information.)
  • Time-sensitive (We need answers more quickly than we have time to evaluate experts.)
  • Value-charged (Our interests in the information biases who we trust.)
  • Politicized (Information is emotionally charged or distorted, and there are more epistemic trespassers.)

Where does this leave those of us who aren’t infectious disease experts? Should we shrug our shoulders with the COVID-truthers and start looking for ulterior motives?

Not obviously. Here are four strategies to help distill reality from fantasy.

  1. Keep in mind what experts should (and should not) be able to do.

Experts spend years studying a topic, but they cannot see the future. They should be able to explain a problem and suggest ways of solving it. But models that predict the future are educated guesses. In the case of infectious diseases, those guesses depend on assumptions about how people act. If people act differently, the guesses will be inaccurate. But that’s how models work (the toy model sketched after this list makes the point concrete).

  2. Look for consensus, but be realistic.

When experts agree on something, that’s usually a sign they’re all thinking about the evidence the same way. But when they face a new problem, their evidence will change continually, and experts will have little time to make sense of it. In the case of COVID-19, there’s wide consensus about the virus that causes it and how it spreads. There is little consensus on why it hurts some people more than others and whether a vaccine is the right solution. But just because there isn’t consensus doesn’t mean there are ulterior motives.

  3. Look for “meta-expert consensus.”

When experts agree, it is sometimes because they need to look like they agree, whether due to worries about public opinion or because they want to convince politicians to act. These are not good reasons to trust experts. But on any complex issue, there’s more than one kind of expert. And not all experts have conflicts of interest. In the case of COVID-19, independent epidemiologists, infectious disease doctors, and public health experts agree that SARS-CoV-2 is a new, dangerous, contagious threat and that social distancing is the main weapon against that threat. That kind of “meta-expert consensus” is a good check on expertise and good news for novices when deciding what to believe.

  4. Don’t double down.

When experts get new evidence, they update their beliefs, even if they were wrong. They don’t force that evidence to fit old beliefs. When predictions of COVID-related deaths did not bear out, experts updated those predictions. They recognized that predictions can be confounded by many variables, and they used the new evidence to revise their models. This is good advice for novices, too.
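To make points 1 and 4 concrete, here is a deliberately simplified epidemic model in Python. It is a minimal sketch, not any agency’s actual forecasting tool: the contact rates, recovery rate, and time horizon are invented for illustration. Its one lesson is that the same model produces very different projections under different assumptions about how people behave.

```python
# A toy SIR (susceptible-infected-recovered) model. The parameters below
# are invented for illustration; real forecasting models are far richer.

def sir_peak_infected(contact_rate, recovery_rate=0.1, days=300):
    """Run a discrete-time SIR model; return the peak share infected at once."""
    s, i, r = 0.999, 0.001, 0.0  # population shares
    peak = i
    for _ in range(days):
        new_infections = contact_rate * s * i
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Same model, two different assumptions about how much people mix:
print(f"no distancing:   peak {sir_peak_infected(0.35):.1%} infected")
print(f"with distancing: peak {sir_peak_infected(0.15):.1%} infected")
```

Nothing about the model changed between the two runs; only an assumption about behavior did. When a projection “fails” because people changed their behavior, the model worked exactly as intended.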

These strategies are not foolproof. The world is messy, experts are fallible, and we won’t always trust the right people. But while expert skepticism is grounded in real limitations of expertise, we don’t have to join the ranks of the COVID-truthers. With hard work and a little caution, we can make responsible choices about whom we trust.

COVID-19 and the Ethics of Belief

photograph of scientist with mask and gloves looking through microscope

The current COVID-19 pandemic will likely have long-term effects that will be difficult to predict. This has certainly been the case with past pandemics. For example, the Black Death may have left a lasting mark on the human genome. Because of variations in human genetics, some people have genes which provide an immunological advantage to certain kinds of diseases. During the Black Death, those who had such genes were more likely to live and those without were more likely to die. For example, a study of Rroma people, whose ancestors migrated to Europe from India one thousand years ago, revealed that those who migrated to Europe possessed genetic differences from their Indian ancestors that were relevant to the immune system response to Yersinia pestis, the bacterium that causes the Black Death. It’s possible that COVID-19 could lead to similar kinds of long-term effects. Are there moral conclusions that we can draw from this?

By itself, not really. Despite this being an example of natural selection at work, the fact that certain people are more likely to survive certain selection pressures than others does not indicate any kind of moral superiority. However, one moral lesson that we could take away is a willingness to make sure that our beliefs are well adapted to our environment. For example, a certain gene is neither good nor bad in itself but becomes good or bad through the biochemical interactions within the organism in its environment. Genes that promote survival demonstrate their value to us by being put to (or being capable of being put to) the test of environmental conditions. In the time of COVID-19, one moral lesson the public at large should learn is to avoid wishful thinking and to demonstrate the fitness of our beliefs by putting them to empirical testing. The beliefs that are empirically successful are the beliefs that should carry on and be adopted.

For example, despite the complaints and resistance to social distancing, the idea has begun to demonstrate its value by being put to the test. This week the U.S. revised its model of projected deaths down from a minimum of 100,000 to 60,000 with the changes being credited to social distancing. In Canada, similar signs suggest that social distancing is “flattening the curve” and reducing the number of infections and thus reducing the strain on the healthcare system. On the other hand, stress, fear, and panic may lead us to accept ideas that are encouraging but not tested.

This is why it isn’t a good idea to look to “easy” solutions like hydroxychloroquine as a treatment for COVID-19. As Dr. Fauci has noted, there is no empirical evidence that the drug is effective at treating it. While there are reports of some success, these are merely anecdotal. He notes, “There have been cases that show there may be an effect and there are others to show there’s no effect.” Any benefits the drug may possess are mitigated by a number of factors that are not known. Variations among the population may exist and so need to be controlled for in a clinical study. Just as certain genes may only be beneficial under certain environing conditions, the same may be true of beliefs. An idea may seem positive or beneficial, but that may only be under certain conditions. Ideas and beliefs need to be tested under different conditions to see whether they hold up. While studies are being conducted on hydroxychloroquine, they are not finished.

Relying on wishful thinking instead can be dangerous. The president has claimed that he downplayed the virus at first because he wanted to be “America’s cheerleader,” but being optimistic or hopeful without seriously considering what one is up against, or while ignoring the warning signs, is a recipe for failure. The optimism that an outbreak wouldn’t occur delayed government action on social distancing measures in Italy and in the U.S., and as a result thousands may die who might not have, had the matter been treated more seriously sooner.

As a corollary from the last point, we need to get better at relying on experts. But we need to be clear about who has expertise and why. These are people who possess years of experience studying, researching, and investigating ideas in their field to determine which ones hold up to scrutiny and which ones fail. They may not always agree, but this is often owing to disagreements over assumptions that go into a model or because different models may not be measuring exactly the same thing. This kind of disagreement is okay, however, because anyone is theoretically capable of examining those assumptions and holding them up to critical scrutiny.

But why do the projections keep changing? Haven’t they been wrong? How can we rely on them? The answer is that the projections change as we learn more data. But this is far preferable to believing the same thing regardless of changing findings. It may not be as comforting as getting a single specific unchanging answer, but these are still the only ideas that have been informed by empirical testing. Even if an expert is proven wrong, the field can still learn from those mistakes and improve its conclusions. The sketch below shows this kind of updating in miniature.
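To see why a moving estimate is a feature rather than a bug, consider a toy Bayesian update. This is a minimal sketch with invented numbers: the quantity being estimated and the weekly counts are hypothetical, and no actual COVID-19 projection was produced this way.

```python
# A minimal sketch of why changing projections are a feature, not a bug.
# We estimate an unknown rate (say, the share of cases needing hospital
# care) with a Beta prior and fold in observations as they arrive.
# All numbers here are invented for illustration.

def update(a, b, positives, negatives):
    """Beta-Binomial update: add new counts to the Beta parameters."""
    return a + positives, b + negatives

a, b = 1, 1  # uniform prior: we start out knowing nothing
weekly_counts = [(3, 17), (9, 91), (20, 280)]  # hypothetical observations
for week, (pos, neg) in enumerate(weekly_counts, start=1):
    a, b = update(a, b, pos, neg)
    print(f"week {week}: estimated rate {a / (a + b):.1%}")  # posterior mean
```

Each new week of data pulls the estimate toward what was actually observed; clinging to the week-one number forever would be the real failure.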

But it is also important to recognize that non-medical experts cannot give expert medical advice. Even having a Ph.D. in economics does not qualify Peter Navarro to give advice relating to medicine, biochemistry, virology, epidemiology, or public health policy. Only having years of experience in that field will allow you to consider the relevant information necessary for solving technical problems and putting forward solutions best suited to survive the empirical test.

Perhaps we have seen evidence that a broad shift in thinking has already occurred. There are estimates that a vaccine could be six months to a year away. Polling has shown a decrease in the number of people who would question the safety of vaccines. So perhaps the relative success of ending the pandemic will inspire new trust in expert opinion. Or, maybe people are just scared and will later rationalize it.

Adopting the habit of putting our beliefs to the empirical test, the moral consequences of which are very serious right now, is going to be needed sooner rather than later. If and when a vaccine comes along for COVID-19, the anti-vaccination debate may magnify. And, once the COVID-19 situation settles, climate change is still an ongoing issue that could cause future pandemics. Trusting empirically-tested theories and expert testimony more, and relying less on hearsay, rumor, and fake news could be one of the most important moral decisions we make moving forward.

Expertise in the Time of COVID

photograph of child with mask hugging her mother

Admitting that someone has special knowledge that we don’t or can do a job that we aren’t trained for is not very controversial. We rarely hesitate to hire a car mechanic, accountant, carpenter, and so on, when we need them. Even if some of us could do parts of their jobs passably well, these experts have specialized training that gives them an important advantage over us: They can do it faster, and they are less likely to get it wrong. In these everyday cases, figuring out who is an expert and how much we can trust them is straightforward. They have a sign out front, a degree on the wall, a robustly positive Google review, and so on. If we happen to pick the wrong person—someone who turns out to be incompetent or a fraud—we haven’t lost much. We try harder next time.

But as our needs get more complicated, for example, when we need information about a pandemic disease and how best to fight it, as our need for that kind of scientific information is politicized, figuring out who the experts are and how much to trust them is less clear.

Consider a question as seemingly simple as whether surgical masks help contain COVID-19. At first, experts said everyone should wear masks. Then other experts said masks won’t help against airborne viruses because the masks do not seal well enough to stop the tiny viral particles. Some said that surgical masks won’t help, but N95 masks will. Then some experts said that surgical masks could at least help keep you from getting the disease from others’ spittle, as they talk, cough, and sneeze. Still other experts said that even this won’t do because we touch the masks too often, undermining their protective capacity. Yet still others say that while the masks cannot protect you from the virus, they can protect others from you if you happen to be infected, “contradicting,” as one physician told me, “years of dogma.”

What are we to believe from this cacophony of authorities?

To be sure, some of the confusion stems from the novelty of the novel coronavirus. Months into the global spread, we still don’t know much about it. But a large part of the burden of addressing the public health implications lies not just in expert analysis but how expert judgments are disseminated. And yet, I have questions: If surgical masks won’t keep me from getting the infection because they don’t seal well enough, then how could they keep me from giving it to others? Is the virus airborne or isn’t it? What does “airborne” mean in this context? How do we pick the experts out of this crowd of voices?

Most experts are happy to admit that the world is messier than they would prefer, that they are often beset by the fickleness of nature. And after decades of research on error and bias, we know that experts, just like the rest of us, struggle with biased assumptions and cognitive limitations, the biases inherent in how those before them framed questions in their fields, and by the influence of competing interests—even if from the purest motives—for personal or financial ends. People who are skeptical of expertise point to these deficiencies as reasons to dismiss experts.

But if expertise exists, really exists, not merely as a political buzzword or as an ideal in the minds of ivory tower elitists, then, it demands something from us.

Experts understand their fields better than novices. They are better at their jobs than people who have not spent years or decades doing their work. And thus, when they speak about what they do, they deserve some degree of trust.

Happily, general skepticism about expertise is not widely championed. Few of us — even in the full throes of, for example, the Dunning-Kruger Effect — would hazard jumping into the cockpit of an airplane without special training. Few of us would refuse medical help for a severe burn or a broken limb. Unfortunately, much of the skepticism we most need to worry about attaches to topics where it is likely to do more harm to others than to the skeptic: skepticism about vaccinations, climate change, and the Holocaust. If you happen to fall into one of these groups at some point in your life — I grew up a six-day creationist and evolution-denier — you know how hard it is to break free from that sort of echo chamber.

But even if you have extricated yourself from one distorted worldview, how do you know you’re not trapped in another? That you aren’t inadvertently filtering out or dismissing voices worth listening to? This is a challenge we all face when up against a high degree of risk in a short amount of time from a threat that is new and largely unknown and that is now heavily politicized.

Part of what makes identifying and trusting experts so hard is that not all expertise is alike. Different experts have differing degrees of authority.

Consider someone working in an internship in the first year out of medical school. They are an MD, and thus, an expert of sorts. Unfortunately, they have very little clinical experience. They have technical knowledge but little competence applying it to complex medical situations.

Modern medicine has figured out how to compensate for this lack of experience. New doctors have to train for several years under a licensed physician before they can practice on their own. To acquire sufficient expertise, they have to be immersed into the domain of their medical specialty. The point is that not every doctor has the same authority as every other, and this is true for other expert domains, as well.

A further complication is that types of expertise differ in how much background information and training is required to do their jobs well. Some types of expertise are closer to what philosopher Thi Nguyen calls our “cognitive mainland.” This mainland refers to the world that novices are familiar with, the language they can make sense of. For example, most novices understand enough about what landscape designers do to assess their competence. They can usually find reviews of their work online. They can even go look at some of their work for themselves. Even if they don’t know much about horticulture, they know whether a yard looks nice.

But expertise varies in how close to us it is. For example, what mortgage brokers do is not as close to us as what landscapers do. It is further away from our cognitive mainland, out at sea, as it were. First-time home buyers need a lot of time to learn the language associated with the mortgage industry and what it means for them. The farther out an expert domain is from a novice’s mainland, the more likely its experts are to be on what Nguyen calls a “cognitive island,” isolated from resources that would let novices make sense of their abilities and authority.

Under normal circumstances, novices have some tools for deciding who is an expert and who is not, and for deciding which experts to trust and which to ignore. This is not easy, but it can be done. Looking up someone’s credentials, certifications, years of experience, recommendations, track records, and so on, can give novices a sense of someone’s competence.

As the expertise gets farther from novices’ cognitive mainland, they can turn to other experts in closely related fields to help them make sense of it. In the case of mortgages, for example, they might have a friend who works in real estate or someone in banking to help translate the relevant bits in a way that meets their needs. In other words, they can use “meta-experts,” experts in a closely related domain who understand enough of the domain to help them choose experts in that domain wisely.

Unfortunately, during a public health emergency, uncertainty, time constraints, and politicization mean that all of these typical strategies can easily go awry. Experts who feel pressured by society or threatened by politicians can — even if inadvertently — manufacture a type of consensus. They can double-down on a way of thinking about a problem for the sake of maintaining the authority of their testimony. In some cases, this is a simple matter of groupthink. In other cases, it can seem more intentional, even if it isn’t.

Psychologist Philip Tetlock, in Superforecasting: The Art and Science of Prediction (2015), co-authored with Dan Gardner, explains how to prevent this sort of consensus problem by bringing together diverse experts on the same problem and suspending any hierarchical relationships among them. If everyone feels free to comment and honest critique is welcomed, better decisions get made. In Are We All Scientific Experts Now? (2014), sociologist Harry Collins contends that this is also how peer review works in academic settings. Not everyone who reviews a scientific paper for publication is an expert in the researcher’s narrow specialization. Rather, reviewers understand how scientific research works, the basic terminology used in that domain, and how new information in domains like it is generated. Not only can experts in related domains challenge groupthink and spur more creative solutions, they can also help identify errors in research and reasoning because they understand how expertise works.

These findings are helpful for novices, too. They suggest that our best tool for identifying and evaluating expertise is not pure consensus, but consensus among a mix of voices close to the domain in question.

We might call this meta-expert consensus. Novices need not be especially close to a specialized domain to know whether someone working in it is trustworthy. They only have to be close enough to people close to that domain to recognize broad consensus among those who understand its basics.

Of course, how we spend our energy on experts matters. There are many questions that political and institutional leaders face that the average citizen will not. The average person need not invest energy in highly specialized questions like:

  • How should hospitals fairly allocate scarce resources?
  • How do health care facilities protect health care workers and vulnerable populations from unnecessary risks?
  • How can we stabilize volatile markets?
  • How do we identify people who are immune from the virus quickly so they can return to the workforce?

The payoff is too low and the investment too significant.

On the other hand, there are questions worth everyone’s time and effort:

  • Should I sanitize my groceries before or as I bring them into my living space?
  • How often can I reasonably go out to get groceries and supplies?
  • How can I safely care for my aging parent if I still have to go to work?
  • Should I reallocate my investment portfolio?
  • Can I still exercise outdoors?

Where are we on the mask thing? It turns out that experts at the CDC are still debating the usefulness of masks under different conditions. But here’s an article that helps make sense of what experts are thinking about when they make recommendations about mask-wearing.

The work required to find and assess experts is not elegant. But neither is the world this pandemic is creating. And understanding how expertise works can help us cultivate a set of beliefs that, if not elegant, is at least more responsible.

Infodemics and Good Epistemic Hygiene

3d rendering of bacteria under a microscope

There has been a tremendous amount of news lately about the outbreak and spread of COVID-19, better known as the coronavirus. And while having access to up-to-date information about a global health crisis can certainly be a good thing, there have been worries that the amount of information out there has become something of a problem itself. So much so that the World Health Organization (WHO) has stated its concern that the epidemic has led to an “infodemic”: the worry is that with so much information, it will be difficult for people both to process all of it and to determine what they should trust and what they should ignore.

With so much information swirling about, there is already a Wikipedia page dedicated to the various rumors and conspiracy theories surrounding the virus. For instance, some of the more popular conspiracy theories state that the virus is a human-made biological weapon (it isn’t), and that vaccines are already available but are being kept from the public (they aren’t). It shouldn’t be surprising that social media is the most fertile breeding ground for misinformation, with large Facebook groups spreading not only falsehoods but supposed miracle cures, some of which are extremely dangerous.

In response to these problems, sites like Facebook, Google, and Twitter have been urged to help cull the infodemic by employing fact-checking services, providing free advertising for the WHO, and trying to ensure that reputable sources dominate the results when people search for information about the coronavirus online.

While all of this is of course a good thing, what should the individual person do when faced with such an infodemic? It is always a good idea to be vigilant when acquiring information online, especially when that information comes from social media. But perhaps just as we should engage in more conscientious physical hygiene, we should also engage in more substantial epistemic hygiene. After all, the spreading of rumors and misinformation can itself lead to harms, so it seems that we should make extra sure that we aren’t forming and spreading beliefs in a way that can be damaging to others.

What might good epistemic hygiene look like in the face of an infodemic? Perhaps we can draw some parallels from the suggested practices for good physical hygiene from the WHO. Some of the main suggestions from the WHO include:

  • Washing hands frequently
  • Maintaining social distance
  • Practicing good respiratory hygiene (like covering your mouth when you cough or sneeze)
  • Staying informed

These are all good ways to minimize the chances of contracting or spreading diseases. What parallels could we draw when it comes to the infodemic? While the physical act of hand-washing is unlikely to stop the spread of misinformation, a parallel when it comes to forming beliefs would be to be extra careful about which sources we get our information from, and to critically reflect upon our beliefs if we do get information from a less than trustworthy source. Just as taking a little extra time to make sure your hands are clean can help control the spread of disease, taking some extra time to reflect critically can help control the spread of misinformation.

Maintaining a kind of social distance might be a good idea as well: as we saw above, much of the misinformation about the epidemic spreads through social media. If we are prone to looking up the latest gossip and rumors, it might be best to stay out of those Facebook groups altogether. Similarly, just as it’s a good idea to protect others by coughing or sneezing into your arm, it’s a good idea to keep misinformed ideas to yourself. If you feel the urge to pass along gossip or information acquired from some less-than-reputable source, the best thing is to resist posting or commenting on social media and stop the spread there.

Finally, the WHO does suggest that it is a good idea to stay informed. Again, we have seen that there are better and worse ways of doing this. Staying informed does not mean acquiring information from just anywhere, nor does it mean getting as much information as is humanly possible. In light of an infodemic, one needs to be that much more vigilant and responsible when it comes to the potential spread of misinformation.

Twitter Bots and Trust

photograph of chat bot figurine in front of computer

Twitter has once again been in the news lately, which you know can’t be a good thing. The platform recently made two sets of headlines: in the first, news broke that a number of Twitter accounts were making identical tweets in support of Mike Bloomberg and his presidential campaign, and in the second, reports came out of a significant number of bots making tweets denying the reality of human-made climate change.

While these incidents are different in a number of ways, they both illustrate one of the biggest problems with Twitter: given that we might not know anything about who is behind an actual tweet – whether a real person, a paid shill, or a bot – it is difficult to know who or what to trust. This is especially problematic when it comes to the kind of disinformation tweeted out by bots about issues like climate change, where it can be difficult to tell not only whether a tweet comes from a trustworthy source, but also whether its content makes any sense.

Here’s the worry: let’s say that I see a tweet declaring that “anthropogenic climate change will result in sea levels rising 26-55 cm in the 21st century (a 67% confidence interval).” Not being a scientist myself, I don’t have a good sense of whether or not this is true. Furthermore, if I were to look into the matter there’s a good chance that I wouldn’t be able to determine whether the relevant studies were good ones, whether the prediction models were accurate, etc. In other words, I don’t have much to go on when determining whether I should accept what is tweeted at me.

This problem is an example of what epistemologists have referred to as the problem of expert testimony: if someone tells me something that I don’t know anything about, then it’s difficult for me, as a layperson, to be critical of what they’re telling me. After all, I’m not an expert, and I probably don’t have the time to go and do the research myself. Instead, I have to accept or reject the information on the basis of whether I think the person providing me with information is someone I should listen to. One of the problems with receiving such information over Twitter, then, is that it’s very easy to prey on that trust.

Consider, for example, a tweet from a climate-change denier bot that stated “Get real, CNN: ‘Climate Change’ dogma is religion, not science.” While this tweet does not provide any particular reason to think that climate science is “dogma” or “religion,” it can create doubt in other information from trustworthy sources. One of the co-authors of the bot study worries that these kinds of messages can also create an illusion of “a diversity of opinion,” with the result that people “will weaken their support for climate science.”

The problem with the pro-Bloomberg tweets is similar: without a way of determining whether a tweet is actually coming from a real person as opposed to a bot or a paid shill, messages that defend Bloomberg may be ones intended to create doubt in tweets that are critical of him. Of course, in Bloomberg’s case it was a relatively simple matter to determine that the messages were not, in fact, genuine expressions of support for the former mayor, as dozens of tweets were identical in content. But a competently run network of bots could potentially have a much greater impact.

What should one do in this situation? As has been written about before here, it is always a good idea to be extra vigilant when it comes to getting one’s information from Twitter. But our epistemologist friends might be able to help us out with some more specific advice. When dealing with information that we can’t evaluate on the basis of content alone – say, because it’s about something that I don’t really know much about – we can look to some other evidence about the providers of that information in order to determine whether we should accept it.

For instance, philosopher Elizabeth Anderson has argued that there are generally three categories of evidence that we can appeal to when trying to decide whether we should accept some information: someone’s expertise (including their credentials and whether they have published and are recognized in their field), their honesty (including evidence of conflicts of interest, dishonesty and academic fraud, and misleading statements), and the extent to which they display epistemic responsibility (including evidence about how they have engaged with the scientific community in general and their peers specifically). This kind of evidence isn’t a perfect indication of whether someone is trustworthy, and it might not be the easiest to find. When one is trying to get good information from an environment that is potentially infested with bots and other sources of misleading information, though, gathering as much evidence as one can about one’s source may be the most prudent thing to do.
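To make this a bit more concrete, here is a minimal sketch, in Python, of how a reader might organize Anderson’s three categories of evidence about a source. The field names and the rule requiring positive marks in all three categories are illustrative assumptions made for the sketch, not Anderson’s own formulation.

```python
# A rough checklist inspired by Anderson's three categories of evidence.
# The fields and the decision rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SourceEvidence:
    # Expertise: credentials, publication record, recognition in the field
    has_relevant_credentials: bool
    publishes_in_field: bool
    # Honesty: conflicts of interest, fraud, misleading statements
    known_conflict_of_interest: bool
    record_of_misleading_claims: bool
    # Epistemic responsibility: engagement with the relevant community
    engages_with_peers: bool

def looks_trustworthy(src: SourceEvidence) -> bool:
    """Crude rule of thumb: require positive marks in all three categories."""
    expertise = src.has_relevant_credentials or src.publishes_in_field
    honesty = not (src.known_conflict_of_interest or src.record_of_misleading_claims)
    responsibility = src.engages_with_peers
    return expertise and honesty and responsibility

# An anonymous account with no checkable record fails on expertise and
# responsibility, so the rule counsels withholding trust.
anon = SourceEvidence(False, False, False, False, False)
print(looks_trustworthy(anon))  # False
```

One virtue of laying the categories out this way is that it makes vivid why bot and anonymous accounts are so hard to trust: it is not that they score badly, but that most of this evidence cannot be gathered about them at all.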

Johnson’s Mumbling and Top-Down Effects on Perception

photograph of Boris Johnson scratching head

On December 6th, in the midst of his reelection campaign, UK Prime Minister Boris Johnson spoke about regulating immigration to a crowd outside a factory in central England, saying “I’m in favour of having people of talent come to this country, but I think we should have it democratically controlled.” When Channel Four, one of the largest broadcasters in the UK, uploaded video of the event online, their subtitles mistakenly read “I’m in favor of having people of color come to this country,” making it seem as though Johnson was, in this speech, indicating a desire to control immigration on racial grounds. After an uproar from Johnson’s Conservative party, Channel Four deleted the video and issued an apology.

However, despite Tory accusations of slander and media partisanship, at least two facts make it likely that this was, indeed, an honest mistake on the part of a nameless subtitler within Channel Four’s organization:

  1. Poorly-timed background noise and Johnson’s characteristic mumbling make the audio of the speech less-than-perfectly clear at the precise moment in question, and
  2. Johnson has repeatedly voiced racist, sexist, and homophobic attitudes in both official and unofficial speeches, as well as in his writings (again, repeatedly) and his formal policy proposals.

Given the reality of (2), someone familiar with Johnson may well be more inclined to interpret him as uttering something explicitly racist (as opposed to the still-problematic dog whistle “people of talent”), particularly in the presence of the ambiguities (1) describes. Importantly, it may not actually be a matter of judgment (where the subtitler would have to consciously choose between two possible words) – it may genuinely seem to someone hearing Johnson’s speech that he spoke the word “color” rather than “talent.”

Indeed, this has been widely reported to be the case in the days following Johnson’s campaign rally, with debates raging online regarding the various ways people report hearing Johnson’s words.

For philosophers of perception, this could be an example of a so-called “top-down” effect on the phenomenology of perceptual experience, a.k.a. “what it seems like to perceive something.” In most cases, the process of perception converts basic sensory data about your environment into information usable by your cognitive systems; in general, this is thought to occur via a “bottom-up” process whereby sense organs detect basic properties of your environment (like shapes, colors, lighting conditions, and the like) and then your mind collects and processes this information into complex mental representations of the world around you. Put differently, you don’t technically sense a “dog” – you sense a collection of color patches, smells, noises, and other low-level properties which your perceptual systems quickly aggregate into the concept “dog” or the thought “there is a dog in front of me” – this lightning-fast process is what we call “perception.”

A “top-down” effect – also sometimes called the “cognitive penetration of perception” – is when one or more of your high-level mental states (like a concept, thought, belief, desire, or fear) works backwards on that normally-bottom-up process to influence the operation of your low-level perceptual systems. Though controversial, purported examples of this phenomenon abound, such as how patients suffering from severe depression will frequently report that their world is “drained of color” or how devoted fans of opposing sports teams will both genuinely believe that their preferred player won out in an unclear contest. Sometimes, evidence for top-down effects comes from controlled studies, such as a 2006 experiment by Proffitt which found that test subjects wearing heavy backpacks routinely reported hills to be steeper than did unencumbered subjects. But we need not be so academic to find examples of top-down effects on perception: consider the central portion of the “B-13” diagram.

When you focus on the object in the center, you can probably shift your perception of what it is (either “the letter B” or “the number 13”) at-will depending on whether you concentrate on either the horizontal or vertical lines around it. Because letters and numbers are high-level concepts, defenders of cognitive penetrability can take this as proof that your concepts are influencing your perception (instead of just the other way around).

So, when it comes to Johnson’s “talent/color” word choice, much like the Yanny/Laurel debate of 2018 or the infamous white/gold (or blue/black?) Dress of 2015, different audience members may – quite genuinely – perceive the mumbled word in wholly different ways. Obviously, this raises a host of additional questions about the epistemological and ethical consequences of cognitive penetrability (many researchers, for example, are concerned to explore perceptions influenced by implicit biases concerning racism, sexism, and the like), but it does make Channel Four’s mistaken subtitling much easier to understand without needing to invoke any nefarious agenda on the part of sneaky anti-Johnson reporters.

Put more simply: even though Johnson didn’t explicitly assert a racist agenda in Derbyshire, it is wholly unsurprising that people have genuinely perceived him to have done so, given the many other times he has done precisely that.

Life on Mars? Cognitive Biases and the Ethics of Belief

NASA satellite image of Mars surface

In 1877 philosopher and mathematician W.K. Clifford published his now famous essay “The Ethics of Belief,” in which he argued that it is ethically wrong to believe things without sufficient evidence. The paper is noteworthy for its focus on the ethics involved in epistemic questions. An example of the ethics involved in belief became prominent this week as William Romoser, an entomologist at Ohio University, claimed to have found photographic evidence of insect- and reptile-like creatures on the surface of Mars. The response of others was to question whether Romoser had good evidence for his belief. However, the ethics of belief formation is more complicated than Clifford’s account might suggest.

Using photographs sent back by the NASA Mars rover, Romoser observed insect- and reptile-like forms on the Martian surface. This led him to conclude, “There has been and still is life on Mars. There is apparent diversity among the Martian insect-like fauna which display many features similar to Terran insects.” Much of this conclusion is based on careful observation of the photographs, which contain images of objects, some of which appear to have a head, a thorax, and legs. Romoser claims that he used several criteria in his study, noting the differences between an object and its surroundings, clarity of form, body symmetry, segmentation of body parts, skeletal remains, and comparison of forms in close proximity to each other.

It is difficult to imagine just how significant the discovery of life on other planets would be to our species. Despite this, several scientists have spoken out against Romoser’s findings. NASA denies that the photos constitute evidence of alien life, noting that the majority of the scientific community agrees that the Martian surface is not hospitable to liquid water or complex life. Following the backlash against Romoser’s findings, the press release from Ohio University was taken down. This result is hardly surprising; the evidence for Romoser’s claim simply is not definitive and does not fit with the other evidence we have about what the surface of Mars is like.

However, several scientists have offered an alternative explanation for the photos. What Romoser saw can be explained by pareidolia, the tendency to perceive a specific, meaningful image in ambiguous visual patterns. Examples include the tendency of many to see objects in clouds, a man in the moon, and even a face on Mars (as captured by the Viking 1 Orbiter in 1976). Because of this tendency, false positive findings become more likely. If someone’s brain is trained to spot beetles and their characteristics, they may identify visual blobs as beetles and conclude that there are beetles where there are none.
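A bit of base-rate arithmetic helps show why even a trained eye can be systematically misled here. The sketch below is purely illustrative: the prior, sensitivity, and false-positive rate are made-up numbers chosen to show the shape of the problem, not estimates drawn from Romoser’s study.

```python
# Bayes' theorem applied to an entirely hypothetical beetle detector.
def posterior_beetle(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(real beetle | the image looks like a beetle)."""
    p_looks = sensitivity * prior + false_positive_rate * (1 - prior)
    return (sensitivity * prior) / p_looks

# Suppose 1 in 100,000 ambiguous blobs in Mars photos were real beetles,
# and a trained eye flags 95% of real beetles but also 5% of mere rocks.
print(posterior_beetle(prior=1e-5, sensitivity=0.95, false_positive_rate=0.05))
# ~0.0002: virtually every "beetle" sighting would be a false positive.
```

The point generalizes: when the prior probability of what you are primed to see is close to zero, even a small false-positive rate guarantees that most sightings are illusory.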

The fact that we are predisposed to cognitive biases means that it is not simply a matter of having evidence for a belief. Romoser believed he had evidence. But various cognitive biases can lead us to conclude that we have evidence when we don’t, or to dismiss evidence when it conflicts with our preferred conclusions. For instance, in her book Social Empiricism, Miriam Solomon discusses several such biases that can affect our decision making. For example, one may be egocentrically biased toward using one’s own observations and data over others’.

One may also be biased towards a conclusion that resembles a conclusion from another domain. In an example provided by Solomon, Alfred Wegener once postulated that continents move through the ocean like icebergs drift through water, based on the fact that icebergs and continents are both large solid masses. Perhaps in just the same way, Romoser inferred from visual similarities between insect legs and shapes in a Martian image not only that there were insects on Mars, but that the anatomical parts of these creatures functioned like those of similar creatures found on Earth, despite the vastly different Martian environment.

There are several other forms of such cognitive biases. There is the traditional confirmation bias, where one focuses on evidence that confirms one’s existing beliefs and ignores evidence that does not. There is the anchoring bias, where one relies too heavily on the first information one hears. There is also the self-serving bias, where one blames external forces when bad things happen but takes credit when good things happen. All of these biases distort our ability to process information.

Not only can such biases affect whether we pay attention to certain evidence or ignore other evidence, they can even affect what we take to be evidence. For instance, the self-serving bias may lead one to think that they are responsible for a success when in reality their role was a coincidence. In that case, their actions become evidence for a belief when they would not be taken as evidence otherwise. This complicates the notion that it is unethical to believe something without evidence, because our cognitive biases affect what we count as evidence in the first place.

The ethics of coming to a belief based on evidence can be even more complex. When we deliberate over using information as evidence for something else, or over whether we have enough evidence to warrant a conclusion, we are also susceptible to what psychologist Henry Montgomery calls dominance structuring. This is a tendency to create a hierarchy of possible decisions with one dominating the others, which allows us to gain confidence and become more resolute in our decision making. Through this process we are susceptible to trading off the importance of different pieces of information that we use to help make decisions. It can happen that, once we have found a promising option, we emphasize its strengths and de-emphasize its weaknesses. If this is done without proper critical examination, we can become more and more confident in a decision without legitimate warrant.

In other words, even as we become conscious of our biases, we can still decide to use information in improper ways. It is possible that, even in cases like Romoser’s, the decision to settle on a certain conclusion and to publish such findings is the result of dominance structuring. Sure, we have no good reason to think that the Martian atmosphere could support such life, but those images are so striking; perhaps the previous findings were flawed? How can one reject what one sees with one’s own eyes? The photographic evidence must take precedence.

Cognitive biases and dominance structuring are not restricted to science. They impact all forms of reasoning and decision making, so if we have an ethical duty to make sure that we have evidence for our beliefs, then we also have an ethical duty to guard against these tendencies. The importance of such duties is only more apparent in the age of fake news and other efforts to deliberately deceive others on massive scales. Perhaps as a public we should more often ask ourselves questions like “Am I morally obliged to have evidence for my beliefs, and have I done enough to check my own biases to ensure that the evidence is good evidence?”

Impeachment Hearings and Changing Your Mind

image of two heads with distinct collections of colored cubes

The news has been dominated recently by the impeachment hearings against Donald Trump, and as has been the case throughout Trump’s presidency, it seems that almost every day there’s a new piece of information that is presented by some outlets as a bombshell revelation, and by others as really no big deal. While the country at this point is mostly split on whether they think that Trump should be impeached, there is still a lot of evidence left to be uncovered in the ongoing hearings. Who knows, then, how Americans will feel once all the evidence has been presented.

Except that we perhaps already have a good idea of how Americans will feel even after all the evidence has been presented, since a recent poll reports that the majority of Americans say that they would not change their minds on their stance towards impeachment, regardless of what new evidence is uncovered. Most Americans, then, seem to be “locked in” to their views.

What should we make of this situation? Are Americans just being stubborn, or irrational? Can they help themselves?

There is one way in which these results are surprising, namely that the survey question asks whether one could imagine any evidence that would change one’s mind. Surely if, say, God came down and decreed that Trump should or should not be impeached then one should be willing to change one’s mind. So when people are considering the kind of evidence that could come out in the hearings, they are perhaps thinking that they will be presented with evidence of a similar kind to what they’ve seen already.

A lack of imagination aside, why would people say that they could not conceive of any evidence that could sway them? One explanation might be found in the way that people tend to interpret evidence presented by those who disagree with them. Let’s say, for example, that I am already very strongly committed to the belief that Trump ought to be impeached. Couldn’t those who are testifying in his defense present some evidence that would convince me otherwise? Perhaps not: if I think that Trump and those who defend him are untrustworthy and unscrupulous, then I will interpret whatever they have to say as something that is meant to mislead me. So it really doesn’t matter what kind of evidence comes out, since short of divine intervention it will all seem to support my belief. And of course my opposition will think in the same way. So no wonder so many of us can’t imagine being swayed.

While this picture is something of an oversimplification, there’s reason to think that people do generally interpret evidence in this way. Writing at Politico, psychologist Peter Coleman describes what he refers to as “selective perception”:

Essentially, the stronger your views are on an issue like Trump’s impeachment, the more likely you are to attend more carefully to information that supports your views and to ignore or disregard information that contradicts them. Consuming more belief-consistent information will, in turn, increase your original support or disapproval for impeachment, which just fortifies your attitudes.

While Coleman recognizes that those who are most steadfast in their views are unlikely to change their minds over the course of the impeachment hearings, there is perhaps still hope for those who are not so locked-in. He describes a “threshold effect”, where people can change their minds suddenly, sometimes even coming to hold a belief that is equally strong but on the opposite side of an issue, once an amount of evidence they possess passes a certain threshold. What could happen, then, is that over the course of the impeachment procedures people may continue to hold their views until the accumulated evidence simply becomes too overwhelming, and they suddenly change their minds.
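One way to picture Coleman’s threshold effect is as gradual evidence accumulation behind a stable professed stance. The toy model below is my own illustration, not Coleman’s formalism; the evidence weights and the threshold value are arbitrary assumptions.

```python
# Toy model: evidence accumulates smoothly, but the professed stance only
# flips once the running total crosses a threshold, so the change of mind
# looks sudden. All numbers here are arbitrary and purely illustrative.

THRESHOLD = 2.0  # hypothetical amount of net evidence needed to switch sides

def professed_stance(initial: str, net_evidence: float) -> str:
    """Stick with the initial stance until net evidence crosses the bar."""
    if net_evidence >= THRESHOLD:
        return "supports impeachment"
    if net_evidence <= -THRESHOLD:
        return "opposes impeachment"
    return initial

net = 0.0
for weight in [0.4, 0.3, 0.5, 0.4, 0.6]:  # five successive pieces of evidence
    net += weight
    print(f"net evidence {net:.1f}: {professed_stance('opposes impeachment', net)}")
# Prints "opposes impeachment" four times, then flips on the fifth piece
# (cumulative 2.2 >= 2.0): a sudden reversal produced by slow accumulation.
```

On this picture, stubbornness and sudden conversion are two sides of the same process: belief change lags behind the evidence until the backlog becomes overwhelming.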

Whether this is something that will happen given the current state of affairs remains to be seen. What is still odd, though, is that while the kinds of psychological effects that Coleman discusses are ones that describe how we form our beliefs, we certainly don’t think that this is how we should form our beliefs. If these are processes that work in the background, ones that we are subject to but don’t have much control over, then it would be understandable and perhaps (in certain circumstances) even forgivable that we should generally be stubborn when it comes to our political beliefs. But the poll is not simply asking what one’s beliefs are, but what one could even conceivably see oneself believing. Even if it is difficult for us to change our minds about issues that we have such strong views about, surely we should at least aspire to be the kind of people who could conceive of being wrong.

One of the questions that many have asked in response to the poll results is whether the hearings will accomplish anything, given that people seem to have made up their minds already. Coleman’s cautious optimism perhaps gives us reason to think that minds could, in fact, be swayed. At the same time it is worth remembering that being open-minded does not mean that you are necessarily wrong, or that you will not be vindicated as having been right all along. At the end of the day, then, it is difficult not to be pessimistic about the possibility of progress in such a highly polarized climate.

On the Legitimacy of Moral Advice from the Internet

Cropped, black-and-white headshot of a white woman with dark hair, pearl earrings wearing a white blouse and dark blazer.

The subreddit “Am I The Asshole?” describes itself as a “catharsis for the frustrated moral philosopher in all of us.” It is a forum in which users make posts describing the actions they took in the face of sometimes difficult decisions so that members of the community can weigh in on whether they did, in fact, act in a morally reprehensible way. Recent posts describe situations ranging from complex relationships with family – “AITA for despising my mentally handicap sister?” – to more humorous situations with friends – “AITA for wearing the “joke” bikini my friend got me?” – all the way to relatively minor inconveniences – “AITA for not wanting to pick up my wife from the airport at 12:30 a.m.?” Verdicts can include “NTA” (not the asshole), “YTA” (you’re the asshole), “ESH” (everyone sucks here), and claims that there’s not enough information to make an informed judgment. Sometimes there is consensus (in the first case above, that the poster was not the asshole; in the third, that they were) and sometimes there is disagreement (the jury is still out on the second case).

Seeking moral advice from strangers is nothing new: the newspaper advice column “Dear Abby,” for example, has been running since 1956. But it is worth asking whether these are good places to get moral advice from. Can either an anonymous collective like Reddit, or a pseudonymous author like Dear Abby, really give us good answers to our difficult moral questions?

One might have some concerns about appealing to the aforementioned subreddit for moral advice. For instance, one might question the usefulness of soliciting the opinions of a bunch of random strangers on the internet, people about whom one knows nothing, some of whom may very well be moral exemplars, but some of whom will almost certainly be complete creeps. Why think that you could get any good answers from such a random collection of people?

There is, however, one significant benefit that could come from asking such a group, which is that one can reap the benefits of cognitive diversity. Here’s the general idea: you can often solve problems better and more efficiently if you have a lot of different people with different strengths and different viewpoints contributing to a solution than if you had a group of people who all had the same skills and thought in the same kinds of ways. This is why it’s often helpful to get a new set of eyes on a problem that’s been stumping you, or why sometimes the best solutions can come from outsiders who haven’t thought nearly as much about the problem as you have. So we might think that seeking advice from a massive online community like Reddit can offer us the same kind of benefits: there will be a diversity of views, with different people drawing on different life experiences to offer a variety of perspectives, and so any consensus reached will then be a good indication of whether you really are morally culpable for your actions.

Unfortunately, while the community might give off the impression of being diverse, a recent study from Vice suggests that it is a lot more homogeneous than one might like. For instance, the study reported that:

“Over 68 percent of the internet’s asshole-arbiters are in North America and 22 percent are in Europe, while over 80 percent are white. The survey also found that 77 percent of AITA subscribers are aged between 18 to 34 years old, with over 10 percent aged under 18 and only 3.4 percent aged 45 and over.”

These numbers do not exactly represent the kind of variety of life experience that would allow for the full value of diversity. One particularly telling consequence is the subreddit’s reputation for advising that fights in one’s marriage should almost always end in divorce, advice that might be different if it weren’t the case that, according to the Vice study, about 70% of the responding users are unmarried.

This is not to say that you will never get any good moral advice from Reddit. It is, however, to say that perhaps the advice you seek from there should be taken with a grain of salt, and perhaps run by a few different types of people before coming to any conclusions. So if an anonymous mass of online users isn’t good enough, then where should one turn instead?

There are no doubt people we’ve come across who we think are good sources of moral advice – family members, perhaps, or close friends – and we might have reason for seeking out advice from these people rather than others – perhaps they are generally able to provide good reasons to support their advice, or maybe good things tend to happen when we listen to them, or maybe they just seem really wise. We might worry, though, whether a friend or family member is the best source of moral advice. Maybe what we really want is an expert. In the same way that I would prefer to get medical advice from my doctor (a medical expert), or advice about how to fix my car from a mechanic (an expert in car repair), perhaps what I should do when seeking out moral advice is to find a moral expert.

How do we find such an expert? Philosophers have debated extensively about what it would take to be a moral expert (as well as if moral experts exist at all), and while these are still open questions, we might think that, in general, a moral expert has to know a lot about the kinds of difficult situations you find yourself in and be able to convey that knowledge when needed in order to address problems that come up. Often when seeking out moral advice, then, we look to thoughtful people who have been through similar situations before, and have come out well as a result. These people might then display some moral expertise, and might be a good source of moral advice.

That we seek out moral experts can explain why people have been writing into advice columns like Dear Abby for so long: Abby is purportedly an expert, and so we might think that her advice is the best available. Abby has, however, seen her share of criticism in the past, and in some recent cases has offered up some real stinkers in terms of advice. While it would be nice, then, if there were a moral sage who could offer us the perfect advice in all circumstances, something that we might take away from the problems with seeking out advice from Reddit and Dear Abby is that the best moral advice might come from not just one source, but rather a variety of viewpoints.

Could Gender-Blind Casting Limit Epistemic Injustice?

Photograph of Edwin Austin Abbey's painting of a scene from Shakespeare's King Lear

Following on the heels of her 2018 Tony award for her role in the revival of Edward Albee’s Three Tall Women, Glenda Jackson is set to reprise her portrayal of the title role in King Lear when it comes to Broadway next season. Lear’s extreme emotional range has led many to consider the role to be one of Shakespeare’s most difficult characters to portray, but Jackson’s embodiment of the mad king in Deborah Warner’s 2016 production at London’s Old Vic was hailed by audiences and critics alike as an artistic and cultural success. Undoubtedly, Jackson’s talent will once again have an opportunity to shine in New York, but this example of gender-blind casting (Jackson did not play “Queen” Lear) offers an interesting suggestion for addressing a problem within the world of entertainment — one that Miranda Fricker called “hermeneutical marginalization.”

In her 2007 book Epistemic Injustice: Power and the Ethics of Knowing, Fricker outlined various ways that an individual might be wronged when they face a disadvantage in accessing or sharing knowledge that others can access freely. Some kinds of epistemic injustice are preceded by what Fricker called hermeneutical marginalization, which is particularly evident in the case of marginalized groups, whose reports of mistreatment, for example, might be ignored or minimized by audiences with greater social power. This concept, as explained by Dr. Emily McWilliams on the Examining Ethics podcast, is what happens “when members of non-dominant groups don’t get to fully participate in the process of meaning-making as we develop our shared pool of concepts through which we communicate.”

Many examples of attempts at this sort of marginalization can be found in widespread responses to recent productions of shows like Hamilton, comic books like Thor and Spider-Man, and movies like Star Wars, Ocean’s Eight, and the 2016 reboot of Ghostbusters. When John Boyega was named as a primary cast member of the then-unreleased Star Wars: The Force Awakens in 2015, white supremacists called for a boycott of the franchise on the grounds that it should be “kept white.” Donald Glover endured similarly racist criticisms after he was proposed as a possible choice to take over the role of Spider-Man in 2012, as has the cast of Lin-Manuel Miranda’s award-winning Broadway show Hamilton for its re-envisioning of the American founders. When Marvel Comics recast the character of Jane Foster as the new Thor in 2014, detractors criticized the move as “politically correct bullsh**,” a complaint also leveled at the rebooted Ocean’s Eight and Ghostbusters projects. The upcoming season of the BBC’s Doctor Who, which will premiere later this year with Jodie Whittaker at the helm of the T.A.R.D.I.S., has faced the same criticism. In particular, the 2016 Ghostbusters film withstood an organized campaign of sexist attacks specifically designed to damage the movie’s profitability, even before the film was actually released. In each case, the attempt to remove these criticized women and people of color from the meaning-making process of big-budget storytelling means that they have been victimized by what Fricker calls hermeneutical marginalization.

And while endeavors like the Time’s Up campaign and the #MeToo movement have offered opportunities to spread awareness and aid to victims of such marginalization, it seems unlikely that gender-bending reboots hold much promise for changing the landscape of American culture — in fact, as Alexandra Petri has argued, they may actually contribute to the problem of “the male experience being taken as a proxy for the human experience.” Instead of intentional gender-bending, perhaps Glenda Jackson’s gender-blind casting may offer an opportunity to provoke a more widespread “mooreeffoc” moment in the minds of an audience.

Coined by Charles Dickens as reported in his biography by G.K. Chesterton, “mooreeffoc” refers to the sign on the windowed door of a coffee room, read backwards from the inside, to indicate the sudden re-appreciation of something previously taken for granted. Much like how someone might at first be confused, then suddenly pleased to realize that they now understand something obvious in a new light (as when realizing that you can, in fact, read an at-first-confusing sign), the mooreeffoc moment comes uncontrollably when one recovers a “freshness of vision” (to quote J.R.R. Tolkien’s description of the effect) about something previously considered trite.

This is what is needed for representation in Hollywood and beyond: not simply more diverse roles and casts (although that is certainly crucial), but the proper appreciation of those casts on the part of the public at large. Though Fricker promoted a “virtue of hermeneutical justice” wherein sensitivity to “some sort of gap in the collective hermeneutical resources” might function to offset or even prevent the harms done by hermeneutical injustices like marginalization, gender-bending casting decisions do not seem to serve such a purpose. Unfortunately, dominant groups — members of which would do well to reconsider their marginalizing attitudes and actions — will likely continue to raise questions (however unfounded) of political intentions and suspicious concerns over subversive messaging surrounding these roles. Indeed, gender-bending productions may currently be too charged to promote reflective considerations that could precipitate a mooreeffoc.

Yet gender-blind casting might bypass such accusations entirely with its firm foundation on simple actorial merit. Although many may not realize it, gender- or race-blind casting has led to some of the more memorable roles in cinematic history, such as Morgan Freeman’s portrayal of Red in The Shawshank Redemption and Sigourney Weaver’s depiction of Ellen Ripley in the Alien franchise. Certainly, if diverse representation is to truly become more common in Western entertainment, then even resistant audiences must come to have a freshness of vision about the possibilities for the depiction of fictional characters (and, by extension, individuals in general). Particularly in light of research that indicates the empathy-promoting power of literature and immersive storytelling, proving to suspicious members of dominant social groups that members of marginalized groups perform perfectly well in the same roles might offer the very wedge needed to provoke a mooreeffoc moment. If gender-blind casting could bring about this effect even if only for a time — thereby offering an alternative pathway to promote a more equitable entertainment industry — then it seems worth considering more frequently.