Good To Be Skeptical? Evidentialism and Climate Change

photograph of tree in the desert

When it comes to climate change, defining the limits of reasonable skepticism is not only a matter of intellectual curiosity, but also one of moral and political urgency. In contemporary scientific circles, skepticism is generally celebrated as a virtue. However, those who reject the near-consensus about anthropogenic climate change also claim the “skeptic” title. This raises an important question: What does it mean to be skeptical, and when is skepticism no longer praiseworthy?

Philosophers have often pondered the extent of human knowledge. Skeptics argue that our understanding is more limited than we tend to believe. Some skeptics even claim that we can never know anything, or that none of our beliefs are justified and we ought to suspend all judgment on all issues.

Many climate scientists claim the title “skeptic” for themselves and attach the label “denier” to their opponents. The Committee for Skeptical Inquiry, for example, has called on the media to stop using the term “skepticism” to refer to those who reject the prevailing climate consensus and to instead use the term “denial.” We can, according to Washington and Cook, authors of Climate Change Denial: Heads in the Sand, think of the difference like this: “Skepticism is healthy both in science and society; denial is not.” However, when it comes to climate change, the term “skeptic” continues to be associated with those who reject the prevailing scientific consensus, blurring the line between skepticism and denial.

To better understand the differences between skepticism and denial, let’s consider a concrete example: the existence of ghosts. A ghost skeptic denies that we are justified in believing that ghosts exist. They neither believe nor disbelieve in ghosts, as they think there isn’t enough evidence to justify a belief in ghosts. A ghost denier, conversely, decidedly believes that ghosts do not exist. They disbelieve in ghosts, arguing that ghosts are incompatible with our best understanding of how the laws of the universe work, and that, absent good evidence for ghosts, we should conclude they do not exist. In general, it is not necessarily better to be a skeptic than a denier. Whether we ought to disbelieve something or merely suspend judgment depends on the particular issue and the strength of the evidence we have.

So why do Washington and Cook think that denial is always a bad thing? Ultimately, they are referring to a very specific sense of “denial.” They mean someone who clings “to an idea or belief despite the presence of overwhelming evidence to the contrary.” This is a sense of denial that draws on Freudian psychoanalysis, which characterizes denial as a pathological defense mechanism that involves denying something because one wishes it weren’t true. Denial in this sense is the result of some kind of emotional or psychological incapacity to accept reality.

It is clearly bad to be a climate change denier, or any kind of denier, in the pathological sense Washington and Cook have in mind. However, we can’t assume everyone who denies the scientific consensus on climate change is suffering from a psychological disorder. Some genuinely believe the evidence they have seen does not justify a belief in anthropogenic climate change. Whether it is a mistake to disbelieve in man-made climate change depends entirely on the strength of the scientific evidence. In my own view, the scientific evidence of anthropogenic climate change is very strong and this, rather than some psychological defect, is what makes denial inappropriate.

However, it is worth noting that most of those who reject the consensus on climate change identify as “skeptics” rather than “deniers,” claiming that they have not yet formed a conclusion on the matter. But plenty of scientists who defend the prevailing view on climate change also think of themselves as still embracing skepticism. This raises the question: who is the real skeptic?

To answer that question, we first need to understand a distinction between philosophical skepticism and the scientific skepticism advocated by figures like Michael Shermer, publisher of Skeptic magazine. Shermer defines skepticism as striking the “right balance between doubt and certainty.” As William James notes, this contrasts with a philosophical skeptic who says, “Better go without belief forever rather than believe a lie!” Philosophical skeptics think we should believe only things that are absolutely certain. Scientific skeptics try to believe whatever the evidence suggests has a greater than 50% chance of being true. These are very different standards. To philosophers, scientific skepticism is just “evidentialism” – the principle that our beliefs should be based solely on available evidence.
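One rough way to display the contrast (a schematic gloss offered only for illustration, not a formula James or Shermer uses) is in terms of how probable a claim must be, given one’s total evidence E, before belief is warranted:

\[
\text{philosophical skeptic: believe } p \text{ only if } P(p \mid E) = 1
\]
\[
\text{scientific skeptic (evidentialist): believe } p \text{ if } P(p \mid E) > 0.5
\]

The first standard is so demanding that almost nothing meets it; the second simply tracks wherever the balance of evidence points.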

So who are the real skeptics? Perhaps some climate skeptics are philosophical skeptics. Perhaps they think it is more likely than not that anthropogenic climate change is real, but that we still aren’t justified in believing it. In this case, climate skeptics might be the “real skeptics,” but only on an interpretation of skepticism that most scientists would think is deeply objectionable.

But most climate skeptics are not philosophical skeptics. As the philosophers Coady and Corry observe, the debate between climate change proponents and climate skeptics is not a dispute between two groups of skeptics, one scientific and one philosophical. Instead, it is a disagreement between two groups of evidentialists, who differ in their interpretations and evaluations of the evidence and hence in their beliefs. Of course, one side must be wrong and the other must be right. But both sides appeal to the evidence, as they see it, to justify their respective views.

Proponents of anthropogenic climate change often accuse climate skeptics of disregarding the wealth of evidence supporting the consensus view. Conversely, climate skeptics argue that climate change advocates are swayed by personal desires, emotions, or political ideologies. But, at bottom, both criticisms reveal a shared commitment to evidentialism. These are accusations of forming beliefs based on things other than the best available evidence – of violating evidentialism. Neither side of the climate debate adopts the extreme skeptical position of suspending all judgment and belief, regardless of the evidence at hand.

Acknowledging that most people on both sides of this issue are committed to an evidentialist approach is crucial, because it encourages both sides to engage in a constructive dialogue that focuses on the merits of proof, rather than resorting to ad hominem attacks or accusations of bias. By emphasizing the importance of evaluating the strength and reliability of the evidence, it becomes possible to move beyond the polarizing and confusing labels of “skeptic” and “denier” and engage in a more fruitful discussion. Perhaps this could help reverse the current trend in public opinion toward climate skepticism.

Given that both sides of the climate change debate are committed to evidentialism, instead of squabbling over the label “skeptic,” which neither side should want to claim given its philosophical meaning, our focus should return to simply assessing the facts.

The Knowledge Norms of Emotions

simple single-line drawing of person curled up in bed

This post begins with a sad backstory. A little while back my wife and I had a miscarriage of our first child. There was a lot that was terrible in the experience, but in this post I want to address a certain oddity that I noticed about grief.

Due to a range of bureaucratic complications, it took about a week from when we first suspected a miscarriage to when we had final confirmation. During that week, our confidence that we had miscarried grew, but throughout the period it remained a likelihood rather than a certainty.

What surprised me, during that week, was that the uncertainty made it difficult to grieve. Even when I was pretty sure we had lost the child, it felt ‘out of whack’ to grieve the loss, since there was a chance the child was still alive. It was a terrible week, and I was extremely sad, but it felt out of joint to grieve for the child while recognizing the chance that all might be well. There was no obstacle to feeling anxious, there was nothing out of joint about feeling trepidation, but outright grief felt strange. And it continued to feel strange until we received confirmation of the loss.

This, eventually, got me wondering: is grief characterized by a knowledge norm? In philosophy, a knowledge norm is a normative rule which says that knowledge of something is required for an action or mental state to be appropriate. For example, there seems to be a knowledge norm on assertion: you should only tell someone something if you know that thing is true. This explains, for instance, why if I say “it will rain tomorrow” it is appropriate for you to ask “how do you know?” Or why saying “I don’t know” is an appropriate response if someone asks you a question. (For a thorough defense of a knowledge norm of assertion see Timothy Williamson’s “Knowing and Asserting.”)

Many philosophers also argue that there is a knowledge norm of belief: you should only believe X if you know X is true. Thus, Williamson argues in his book Knowledge and its Limits:

“Knowledge sets the standard of appropriateness for belief. That does not imply that all cases of knowing are paradigmatic cases of believing, for one might know p while in a sense treating p as if one did not know p—that is, while treating p in ways untypical of those in which subjects treat what they know. Nevertheless, as a crude generalization, the further one is from knowing p, the less appropriate it is to believe p. Knowing is in that sense the best kind of believing. Mere believing is a kind of botched knowing. In short, belief aims at knowledge (not just truth).”

There also seems to be a knowledge norm of certain actions. For instance, it seems like you should only punish someone if you know they are guilty, and only chastise someone if you know they did wrong. Some philosophers have gone even further and suggested that there is a general knowledge norm on all action: you should only treat X as a reason for action if you know X to be true.

My own experience with grief seems to suggest that there might also be a knowledge norm on various emotions; but as far as I know that topic has not yet been seriously investigated by philosophers.

My experience of the miscarriage suggested there was a knowledge norm on grief because it felt wrong to grieve our child’s death as long as I recognized that the child might still be alive. This parallels the way I couldn’t know that the child had died as long as I recognized that the child might still be alive. In some sense, what is characteristic of knowledge is the elimination of all relevant alternatives. As long as those relevant alternatives remained, we did not know, nor did it feel quite right to grieve.

Here is another reason for thinking that grief is characterized by a knowledge norm: it is hard to fit probabilities with the emotion of grief. It would be weird to think that as I grow more certain, my grief grows proportionally. I do not grieve a small amount at a 5% chance that my spouse has died, nor would my grief double as my confidence grows to 10%. I grieve less for less bad things, not for lower probabilities of equally bad things. But it would be equally weird to think that there is some probabilistic threshold at which grief suddenly becomes appropriate. It is not as though when I go from 94% confident my child died to 96% confident my child died that suddenly grief goes from inappropriate to appropriate.

But if grief neither scales with probability, nor requires a certain probabilistic threshold, then it seems like grief is responsive to a standard other than probabilistic credence, and the natural alternative is that it is responsive to knowledge.

Other emotions also seem to be knowledge normed in this way. It is hard to feel grateful merely because you think it is likely that someone brought you a present. Normally gratitude is a response to the knowledge that someone did something for you. Jonathan Adler makes a point along these lines about resentment: “Mild resentment is never resentment caused by what one judges to be a serious offense directed toward oneself tempered by one’s degree of uncertainty in that judgment.”

Now, some other emotions at first blush seem different. I can be worried about something without knowing that thing will occur. Similarly, I can be hopeful of something without knowledge it will occur. Yet, even here, it seems that there might be some knowledge norm at play. For instance, it seems weird to be worried about or hope for something you know is impossible. Thus, it might be that you must know that something is possible before you can worry about it or hope for it.

If this is right, does it suggest a general pattern? I think it does. Emotions have appropriateness conditions. Resentment is an appropriate response to being wronged. Gratitude is an appropriate response to being given a gift. Hope is an appropriate response to the possibility of certain goods, as worry is an appropriate response to the possibility of certain bads. In each of these cases, what is required to rightly feel the emotion is knowledge.

That, then, is why grieving felt strange. I didn’t yet know if my grief was appropriate since I lacked knowledge of the tragedy to which my grief was a response.

UFOs and Hume on Miracles

photograph of silhouetted figure shining flashlight at light source in the night sky

UFOs appear to be having a cultural moment. A Pentagon report laying out what U.S. intelligence agencies know about UFOs — or, to use the government’s preferred acronym, UAPs (unidentified aerial phenomena) — is expected at the beginning of June. A recent “60 Minutes” segment included interviews with two former Navy pilots who described their encounters with a UFO. The New Yorker ran a long piece about UFOs in its May 10th issue. And last week former Nevada senator Harry Reid penned a long reflection in The New York Times about his interest in the phenomenon.

It is relatively common to see UFOs, such as those tracked by Navy fighters’ infrared weapons cameras, described as “defying the laws of physics”; for example, flying at many times the speed of sound and then coming to an abrupt halt, without any visible means of propulsion. Being woefully ignorant about those laws, I find it difficult to tell whether this is journalistic hyperbole or a claim to be taken literally. But if we do take it literally, then we can call on the great Scottish philosopher David Hume to help us decide what to believe.

Hume famously defined a “miracle” as a violation of a law of nature. For Hume, a law of nature obtains only when we have an extensive, and exceptionless, experience of a certain kind of phenomenon: for example, our extensive experience of human beings dying “establishes” the law that all human beings die. (As this example suggests, violations of laws of nature are not impossible or inconceivable; they are simply counterinstances to our extensive, exceptionless experience.) If UFOs really defy the laws of physics, then they perform miracles in the Humean sense.

Hume argued that no testimony — i.e., a person’s statement that something is true — can establish the existence of miracles. His argument can be summarized as follows:

1. The evidence against the existence of a miracle is as strong as it possibly could be.

A law of nature is established on the basis of experience, which is the only kind of evidence we can have for a causal proposition. And our experience is both extensive and exceptionless, so it furnishes evidence that is as strong as experiential evidence could be.

2. The evidence for the existence of a miracle from testimony, while perhaps very strong, is weaker than the evidence against the existence of a miracle.

Hume avers that it is always more probable that testimony is false — that the person giving the testimony “either deceive[s] or [has been] deceived” — than that a miracle has occurred. Put another way: to constitute stronger evidence than that which we have against the existence of a miracle, testimonial evidence for a miracle must be such that its falsehood would itself be a miracle — in fact, would itself be a greater miracle than that which the testimony is evidence for.  But for any given piece of testimony, there is always a non-miraculous possibility of its falsehood.

3. We ought to proportion our belief according to our evidence, and evidence for contradictory conclusions cancels out.

Hume here appeals to “evidentialism,” the commonsense idea that we ought to proportion our belief in a proposition to the evidence we have for it. In addition, he says that evidence for a proposition and evidence for its negation “destroy” each other.

4. Therefore, whenever our evidence for a miracle is based entirely on testimony, we ought to believe that it did not occur.

Since the evidence against the existence of a miracle is always stronger than testimonial evidence for it, when testimonial evidence is all the evidence we have for a miracle, we ought to believe that the miracle did not exist or did not occur.
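The weighing in premises 1 through 3 is often given a Bayesian gloss in later discussions of Hume (a standard reconstruction, not Hume’s own notation). Writing M for the miracle and T for the testimony, the testimony should move us to believe in the miracle only if

\[
P(M \mid T) > P(\neg M \mid T), \quad \text{which holds just in case} \quad P(T \mid M)\,P(M) > P(T \mid \neg M)\,P(\neg M).
\]

Since premise 1 makes the prior probability P(M) vanishingly small, the inequality can hold only if P(T \mid \neg M), the probability of getting this testimony even though no miracle occurred, is smaller still; that is, only if the falsehood of the testimony would itself be the greater miracle, which premise 2 says is never the case.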

We can now see how the argument can be applied to UFOs. If UFOs really perform miracles, then any testimonial evidence for the existence of UFOs is always weaker than the evidence against their existence. Therefore, we should reject the existence of UFOs if the only evidence we have for them is based on testimony.

It might be objected that we have non-testimonial evidence for UFOs, such as the infrared camera videos. However, these videos are by themselves difficult for most people to interpret or understand, as are most of the alleged photographs of UFOs. The layman must instead rely on the testimony of experts to interpret the videos or photographs for him or her. Thus, even when photographs or videos are held up as evidence of UFOs, it is really the testimony of experts, who provide authoritative interpretations of these materials, that is doing the evidentiary work. And this leads us back to Hume’s problem.

Of course, Hume’s argument is not without its many detractors; objections are legion. One objection revolves around what Hume says about the “Indian prince, who refused to believe the first relations concerning the effects of frost …” The Indian prince had an extensive, exceptionless experience of water in a liquid state. Frost is a counterinstance of the “law” that water is always liquid. Does it follow, then, that the Indian prince could not justifiably believe in frost on the basis of any testimony, no matter how strong? Hume’s response is that solid water is an experience that is not contrary to the prince’s experience, although it is also not conformable to it. The more general problem is that Hume needs to allow for progress in the sciences, including the revision of our understanding of the natural laws. Like many of Hume’s arguments, his argument about miracles set the agenda for much of the succeeding discussion, but left many questions unanswered.

Still, Hume’s argument against miracles is undeniably compelling. As applied to UFOs, the argument shows us the limits of testimony, however well-intentioned or authoritative.

Testimony, Conspiracy Theories, and Hume on Miracles

abstract painting of two faces without eyes facing away from one another

In his Enquiry Concerning Human Understanding, David Hume reports a local rumor from a town in Spain, conveyed with a healthy amount of skepticism by a cardinal. The story was about a man who had undergone a rather miraculous recovery from an ailment. As Hume describes it,

“He had been seen, for so long a time, wanting a leg; but recovered that limb by the rubbing of holy oil upon the stump; and the cardinal assures us that he saw him with two legs.”

The townsfolk were all ardent believers in the miracle, and it was accepted by “all the canons of the church.” The story spread and was believed on the basis of testimony, and was able to pass and be sustained as easily as it was, in part, because of a shared trust among members of the community. Nevertheless, the cardinal himself gave no credence to the story. Despite the fact that many people were willing to testify to its truth, a story about such an event is just not the kind of thing that has any meaningful likelihood of being true. The cardinal “therefore concluded, like a just reasoner, that such an evidence carried falsehood upon the very face of it, and that a miracle, supported by any human testimony, was more properly a subject of derision than of argument.”

Hume relates other stories, common at the time he was writing, of people offering and accepting accounts of miracles. He argues that to adjudicate these matters, our evidence consists in our set of past observations. Miracles are violations of the laws of nature. When we consider whether we ought to believe in miracles on the basis of testimony, we must weigh our past observations of the workings of the laws of nature against our observations regarding the veracity of testimony. The former will always win. We will always have more evidence to support the idea that the laws of nature will remain constant than we will to support the belief in eyewitness testimony which reports that those laws have been broken. As Hume himself states, “the knavery and folly of men are such common phenomena, that I should rather believe the most extraordinary events to arise from their concurrence, than admit of so signal a violation of the laws of nature.”

Hume is not just reporting historical fact but is also prescient (though he might object to that characterization) when he says, “men, in all ages, have been so much imposed on by ridiculous stories of that kind.” More than 250 years later, we’re contending with elaborate conspiracies such as shape-shifting reptilian overlords, 5G towers that transmit coronavirus, vaccines that implant microchips, and wild accusations of widespread voter fraud sufficient to change the outcome of the election. The QAnon conspiracy theory even has representation in the House of Representatives. Believers in this conspiracy think that a powerful cabal of pedophilic, baby-eating Democrats is secretly running the world and operating a child-sex ring. A secret whistleblowing governmental agent — Q — is conveying all of this information to true patriots on internet chat rooms. QAnon spread in much the same way that the story about holy oil being used to grow new limbs spread — through testimony.

Hume points out that the practice of coming to know things on the basis of testimony depends on certain enduring features of human nature.

“Were not the memory tenacious to a certain degree, had not men commonly an inclination to truth and a principle of probity; were they not sensible to shame, when detected in a falsehood: Were not these, I say, discovered by experience to be qualities, inherent in human nature, we should never repose the least confidence in human testimony. A man delirious, or noted for falsehood and villainy, has no manner of authority with us.”

Society couldn’t function if we couldn’t rely on testimonial evidence. The present political climate elicits feelings of impending existential dread — a sense that truth and meaning are bleeding off the page like amateur watercolor, leaving no visible boundaries. The characteristics that Hume describes are being worn down. We’ve been told not to rely on our memories; it is unpatriotic to pay too much attention to the past. There are no behaviors that should make anyone feel shame; to suggest that someone ought to feel ashamed for deceiving and misleading is to “cancel” that person. In an environment immersed in “alternative facts,” there is no inclination toward truth or “principle of probity.” It is little wonder that in this environment people favor the likelihood of the existence of liberal pedophilic cannibals over the likelihood that anthropogenic climate change is occurring.

With the possible exception of the lizard people who can transform into humans, these conspiracy theories aren’t violations of the laws of nature. That said, a similar kind of inductive argument is possible. Most of these conspiracy theories require a level of seamless complicity among many, many people, who then leave behind no compelling evidence. Election fraud conspiracies, for example, require complicity across states, political parties, and branches of government. So, we’re left with two broad options. Either every person played their role in this flawlessly, leaving behind no trace, or the theory is false, and it arose from “the knavery and folly” of human beings as has so often happened throughout human history. There is a much stronger inductive argument for the latter.

All of this has a moral component to it, but it is difficult to know exactly how to identify it. As Hume points out, humans have certain dispositions that incline them toward truth. On the other hand, they also have strong tendencies to believe nonsense, especially if that nonsense is coherent with what they already believed or might otherwise make them feel good. We could say that everyone ought to have higher epistemic standards, but ought implies can — it makes no sense to say that a person ought to use better methods to form their beliefs when their psychologies prevent them from having any control over such things. There may be no ultimate solution, but there might be some chance that things could improve. Making things better might not be a matter of changing individual minds, but, instead, altering the environments in which those minds are formed. Education should be a high priority and well-funded. We must have policies that reward honesty among public officials and there must be serious consequences when our public figures tell lies.

Incentivizing the Vaccine-Hesitant

photograph of covid vaccination ampoules

Since the beginning of the COVID-19 pandemic, vaccine hesitancy has remained a constant concern. Given expectations that a vaccine would be found, experts always anticipated the problem of convincing those who distrust vaccines to actually get inoculated. A great many articles from the major news outlets have aimed at addressing the problem, discussing vaccine hesitancy and, in particular, trying to determine the most promising strategy for changing minds. In The Atlantic, Olga Khazan surveys some of the methods that have been proposed by experts. Attempts to straightforwardly correct misinformation seem to have proven ineffective, as they can cause a backfire effect where individuals cling to their pre-existing beliefs even more strongly. Others instead suggest that a dialectical approach might be more successful. In The Guardian, Will Hanmer-Lloyd argues that we should refrain from blaming or name-calling vaccine-hesitant individuals or “post on social media about how ‘idiotic’ people who don’t take the vaccine are” because “it won’t help.” Similar to this “non-judgmental” approach that Hanmer-Lloyd recommends, Erica Weintraub Austin, Professor and Director of the Edward R. Murrow Center for Media & Health Promotion Research at Washington State University, and Porismita Borah, Associate Professor at Washington State University, propose in The Conversation talking with vaccine-hesitant people and avoiding “scare-tactics.” Among the things that can help is providing “clear, consistent, relevant reasons” in favor of getting vaccinated while at the same time discussing what constitutes a trustworthy source of information in the first place.

In spite of all these good suggestions, to this day, Pew Research reports that only 60% of Americans would probably or definitely get a vaccine against COVID-19. Though confidence has been on the rise since September, this still leaves a concerning 40% unlikely to pursue vaccination. It is perhaps in light of these facts that a recent proposal is beginning to gain traction: incentivizing people by offering prizes. Ben Welsh of the LA Times reports that the rewards proposed include “Canary home security cameras, Google Nest entertainment systems, Aventon fixed-gear bicycles and gift cards for Airbnb and Lyft.”

But is it right to give out prizes to lure the initially unwilling to seek vaccination?

The answer depends on the moral system to which you subscribe. You might think that given the seriousness of the current circumstances it is especially crucial to get as many folks vaccinated as possible, and that the means of accomplishing this task are of secondary importance. This would be a consequentialist view according to which the moral worth of an action depends on the outcomes it produces. One might feel the force of this line of argument even more when considering that the consequences of vaccine hesitancy can carry dangers not only for the individuals refusing to get vaccinated but for the rest of us as well. Just recently, a Wisconsin pharmacist purposefully rendered unusable 57 vials of vaccine, enough to vaccinate up to 500 people, because of a belief they were unsafe. So considering how significant the impact of vaccine-distrust can be, it is understandable that one might employ even unusual methods – such as prizes – to convince those who remain reluctant to join the queue.

On the other hand, if you do not feel the force of this outcome-based argument, you might think that there is something to the idea that changing people’s behavior does not necessarily change people’s beliefs. In this sense, offering a prize might not do much to alleviate the distrust the vaccine-hesitant feel towards vaccination or the government. Consider another example. Suppose you do not believe that exercising is good. Yet your best friend, who does believe in the positive aspects of exercising, convinces you to go running with her because the view from the hill where she runs is stunning. In that sense, you may eventually elect to go running, but you will not do it because you are now a believer in exercising. You will go running just so that you can admire the view from the hill, without having changed your beliefs about exercise.

What is the problem with not changing people’s beliefs? You might be tempted to think that there is no problem, if you believe that the end result is all that matters. But even in that case, it is beliefs that drive our actions, and so as long as individuals still believe that vaccines are not to be trusted, giving out prizes will only be a marginal and temporary solution that fails to address the deeper, underlying issue. The worry is that someone who may opt to get vaccinated upon receiving a gift card is not deciding to get vaccinated for the right kind of reason. This argument picks out a distinction famously known in philosophy between right versus wrong kinds of reasons. The philosophical debate is complex, but, in general, when it comes to believing something, only epistemic, evidence-based reasons count as the right kind of reasons. Should one, instead, come to believe or act on the basis of reasons that have more to do with, say, wishes or desires, those would represent the wrong kind of reasons.

So what is the solution here? Well, there is no solution, as is often the case when it comes to philosophical positions that are fundamentally at odds with one another. But here is the good news: looking at the ways in which real life events connect with philosophical issues can help us figure out what we think. Examining issues in this way can prove useful in isolating the features that may help us understand our own particular commitments and convictions. Thinking through these tensions for ourselves is what allows us to decide whether we think the proposal to encourage vaccination efforts by offering prizes is a legitimate one.

On the Rationality of the Capitol Rioters

photograph of rioters in front of Capitol

In the wake of the Capitol insurrection, there was no shortage of commentary concerning the moral and intellectual failings of the rioters. However, one not infrequent theme of this commentary was that, for all their errors, there was something about their behavior that made a certain sort of sense. After all, if one believed that one’s presidential candidate actually won the election by a landslide, and that this victory was being subverted by shadowy forces that included the Hugo Chávez family, then storming the Capitol can seem like a reasonable response.

Although the word “rationality” was not always used in this commentary, I think this is what these pundits have in mind: that the Capitol rioters were in some sense rational in acting as they did, given their beliefs. They probably didn’t know it, but in making this claim they echoed the view about rationality endorsed by the renowned moral philosopher Derek Parfit. In his magnum opus, On What Matters, Parfit argues that our desires and acts are rational when they causally depend in the right way on beliefs whose truth would give us sufficient reasons to have these desires, or to act in these ways. As applied to the case of the Capitol insurrection, Parfit’s view would seemingly endorse the rioters’ acts as rational, since the content of their beliefs about the election would, if true, give them sufficient reasons to riot. The key point is that on Parfit’s view, it does not matter whether the beliefs upon which the rioters’ actions were based are themselves true, but just that they rationally supported those actions.

By contrast, David Hume famously wrote that the truth of one’s beliefs does make a difference to the rationality of one’s actions and desires. “It is only in two senses,” he wrote, “that any [desire] can be called unreasonable.” One of those senses is when the desire is “founded on the supposition of the existence of objects, which really do not exist.” In other words, desires based on false beliefs are irrational. Yet Hume appears to be mistaken here. One’s desire to run away can be rational even if based on the false belief that there is a rattlesnake about to strike inches from one’s feet, particularly if one’s belief is rational.

But what about the view that our desires and acts are rational just in case they causally depend in the right way on rational beliefs, whether true or not? If we accept this view, then the Capitol rioters’ actions and desires turn out to be irrational, since they are based on beliefs that are arguably irrational. Parfit resists this view using the example of a smoker who has a strange combination of attitudes: on the one hand, the rational belief that smoking will destroy his health, and on the other hand, and because of this belief, the desire to smoke. According to the view we are now considering, the smoker’s desire would be rational, since it depends on a rational belief. That seems false.

Another view about rationality that might support the Capitol rioters’ actions is the view, familiar from social science disciplines like economics, that the rational action is the one whose subjective expected utility — reflecting the utility of the possible outcomes, and the agent’s beliefs about the probability of those outcomes — is the highest. This view of rationality more or less abandons the idea of rationally assessing our non-instrumental desires, and simply evaluates actions in terms of how well they fulfill those desires. So, on this view, we might say that the rioters’ actions were rational because they maximally fulfilled their desires.
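Stated in its standard textbook form (a generic statement of the decision-theoretic view, not a formula drawn from any of the authors discussed here), the maximizing view says the rational action a is the one with the highest subjective expected utility:

\[
EU(a) = \sum_{o} P(o \mid a)\, U(o),
\]

where P(o \mid a) is the agent’s own degree of belief that outcome o will follow from doing a, and U(o) measures how much the agent values o. Notice that the agent’s beliefs and desires enter only as weights; the formula itself supplies no further standard for criticizing them, which is the feature the next paragraphs press on.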

The Parfitian and maximizing views of rationality share a feature that the philosopher Warren Quinn famously highlighted in his article, “Rationality and the Human Good”: according to both views, rationality is at least sometimes indifferent as to the shamelessness, or moral turpitude, of a person’s ends. For example, Parfit’s view implies that someone who believes that the Jews are sub-human and, because of this belief, desires to exploit them in ways that would be immoral if the Jews were full-fledged members of the human race, is practically rational. Similarly, the maximizing view implies that someone who wants to exploit the Jews in such ways is practically rational if they take efficient means to that end. However, Quinn argues, this conception of practical rationality is in tension with the ancient idea that practical rationality is the highest virtue of humans as practical agents. How could practical rationality be morally defective, indifferent to the manifestly forceful practical demands of morality, and yet be the most “authoritative practical excellence”?

If rationality is integrally connected to morality in the way Quinn suggests, then it becomes harder to see how we could say that the Capitol rioters’ actions and desires were rational or in accordance with reason. Even if their beliefs, if true, would have justified their desires and acts, and even if their acts maximize the fulfillment of their desires, the fact is that their beliefs were false, and their actions and desires shameless. And if Quinn is right, that fact should make us reluctant to credit their actions and desires with the label “rational.” For Quinn, you can’t be rational and immoral at the same time. For Parfit or the maximizer, you can.

Thus, it turns out that much of significance hangs on whether we think what the rioters did was in accordance with reason. If we say that it was, either because we adopt Parfit’s conception of rationality or the maximizing conception, then we commit ourselves to the occasional indifference of rationality to moral considerations. If, instead, we adopt Quinn’s view, then we must reject that indifference.

Truth and Contradiction, Knowledge and Belief, and Trump

photograph of Halloween event at White House with Donald and Melania Trump

At a White House press conference in August, the HuffPost’s White House correspondent, S.V. Dáte, was called on by President Donald Trump for a question. This was the first time Trump had called on Dáte, and the question the reporter asked was the one he had (he said later) been saving for a long time. Here is the exchange:

Dáte: “Mr President, after three and a half years, do you regret at all, all the lying you have done to the American People?” Trump: “All the what?” Dáte: “All the lying, all the dishonesties…” Trump: “That who has done?” Dáte: “You have done…”

Trump cuts him off, ignoring the question, and calls on someone else. The press conference continues, as though nothing has happened. Trump’s reaction to being challenged is familiar and formulaic: he responds by ignoring or denouncing those from whom the challenge comes. In a presidency as tempestuous as this one, one that inflicts new wounds on American democracy daily and lurches from madness to scandal at breakneck speed, this reporter’s question may have slipped under the radar for many.

But let’s go back there for a moment. Not only was it a fair question, it is a wonder that it is not a question Trump is asked every day. The daily litany of lies uttered by the president is shocking, though people who support Trump seem not to mind the lies, or at least are not persuaded thereby to withdraw their support. This seems extraordinary, but maybe it isn’t. As politics continues to grow more divisive and ideologically driven, versions of events, indeed versions of reality, that serve ideologies are increasingly preferred by those with vested interests over versions supported by facts.

Therefore, the answer to Dáte’s question was already implicit in its having to be asked. Given the sheer volume of lies, and given what we know of Trump’s demeanor, it seems clear that he harbors no such regret. Trump gave his answer in dismissing the question.

So, here we are then. The President of the United States is widely acknowledged as a frequent and mendacious liar. If you want to follow up on the amount, content, or modality (Fox News, Twitter, a rally etc.) of Trump’s lies, there are the fact checkers. The Washington Post’s database tallying President Trump’s lies had clocked 20,055 of them as of July 9. You can search the database of Trump lies by topic and by source. The Post finds that Trump has made an average of 23 false or misleading claims a day over a 14-month period.

Take the president’s appearance last month at an ABC Town Hall with undecided voters. In response to questions about his handling of the pandemic, and regarding the taped, on-the-record interviews with Bob Woodward in which Trump discusses his decision to play down the virus to avoid panic, Trump responds that he had in fact “up-played” the virus. He says this while making no attempt to square the lie with what is already, in fact, on the public record. As with all Trump’s tweets, public speeches, rallies, press conferences etc., Trump tells lies and fact checkers scramble to confront them.

Of course, Trump should be fact-checked. Fact-checking politicians and other public figures for the veracity of their speech is, and will remain, a vital contribution to public and political discourse. However, it is also important to reflect upon the way the ground has shifted under this activity in the era of Trump; the post-truth era.

The activity of fact-checking, of weighing the President’s claims against known or discoverable truth, presupposes an epistemic relation to the world in which truth and fact are arbiters of – or at least in some way related to – what it is reasonable to believe. Truth and untruth (that is, facts and lies) are, in the conventional sense, at odds with one another – they are mutually exclusive. A logical law of non-contradiction broadly governs conventional discourse. Either “p” or “not-p” is the case; it cannot be both. Ordinarily for a lie to be effective it has to obfuscate or replace the truth. If “p” is in fact true, then the assertion of “not-p” would have to displace the belief in “p” for the lie to work.

But in the Trump Era (the post-truth era) this relation is no longer operative. Trump’s lies often don’t even maintain the pretense of competing with truth in the conventional sense – that is, they don’t really attempt to supersede a fact but rather to shift the reality in which that fact operates as such, or in which it has a claim on belief, decision, and action.

When Trump says he “up-played” the virus without addressing his own on-the-record admission that he downplayed it, he is of course contradicting himself, but more than that he is jettisoning the ordinary sense in which fact and falsehood are at odds with each other. This could be described as a kind of epistemic shift, and is related, I think, to any meaning we might make – now and in the future – of the concept of ‘post-truth’, and what that means for our political and social lives. The concept of post-truth appears to signal a shift in what people can, within political and social discourse, understand knowledge to be, and what claims they can understand it to have upon them. The consequences of this we can already see playing out – especially, for instance, in the pandemic situation in the US, together with the volatile election atmosphere.

Having a concept of epistemology is important here – a concept of what it would be to ‘know’ and what it would be to act on the basis of knowledge. Such a concept would have to demarcate an ancient philosophical distinction between episteme and doxa, which is the distinction between knowledge and mere opinion or belief.

Post-truth is the ascension of doxa over episteme. In the well-known philosophical analysis of knowledge as justified true belief, for a belief to count as knowledge one must be justified in believing it and it must be true. Knowledge, under this rudimentary and somewhat problematic but nevertheless useful definition, is belief that is justified and true. But in the post-truth era it seems that the conditions of both justification and truth are weakened, if not dispensed with altogether, and so we are left with an epistemology in which belief alone can count as knowledge – which is no epistemology at all.
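Put schematically (this is the standard tripartite analysis the paragraph invokes, set out only for illustration and leaving Gettier-style complications aside):

\[
S \text{ knows that } p \iff p \,\wedge\, B_S(p) \,\wedge\, J_S(p)
\]

where B_S(p) says that S believes p and J_S(p) says that S is justified in believing p. On the post-truth picture described above, the truth condition p and the justification condition J_S(p) fall away, leaving bare belief B_S(p) to do the work of knowledge.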

It is easy to see why this is not only an epistemic problem, but a moral and political one as well. What knowledge we have, and what it is reasonable to believe and act upon, are core foundations of our lives in a society. There is an important relationship between epistemology and an ethical, flourishing, social and political life. Totalitarianism is built on and enabled by lies, and propaganda replaces discourse when all criticism is silenced.

The coronavirus pandemic has been disastrous for the US. A case can easily be made that the pandemic has been able to wreak such devastation because of Trump’s lies – from his decision to downplay the danger and his efforts to sideline and silence experts, to the specific lies and obfuscations he issues via Twitter and at press conferences or Fox News call-ins.

The US has recorded the highest number of infections, and deaths, of anywhere in the world. So, when Trump says “America is doing great” the question must be: what could this possibly mean? This is no casual lie; nor is it merely the egoistic paroxysm of a president unable to admit error. Repeating at every possible opportunity that ‘America is doing great, the best in the world’ is a form of gaslighting – and as such is calculated to help Trump disempower and dominate America.

This is in itself quite unsettling, but where is it all going?

In another particularly bizarre and sinister example of ‘Trumpspeak’ from a couple of weeks ago, the president mentioned a plane that allegedly flew from an unnamed city to Washington, D.C., loaded with “thugs wearing these dark uniforms, black uniforms, with gear.” In the absence of any ensuing clarity from the president or anyone else on what this might have been about, and in the light of Trump’s oft-repeated claims of the presence of a ‘radical left’ contingent, of ‘antifa’ and ‘radical democrats’ etc., it seems to have been an intimation of some threat, directly or indirectly, the symbolism of which appeared to be drawn from the ‘law and order’ platform of his campaign. Frankly, it’s hard to say.

But vague lies and unverified claims with dark intimations are the stuff of conspiracy. If you line all that up next to the fact that Trump has generously hinted that if the election does not resolve in his favor, he will consider the result illegitimate, then you can see how the lies, the false stories, the obfuscations and intimations are the tools Trump is using to try to shift power. He is trying to dislodge power from the elite – which can be read as ‘people who know things.’

One way of characterizing the situation is to say that post-truth politics is creating an epistemic vacuum where ideology trumps reality, and it is in this vacuum that Trump will attempt to secure his win.

Take the oft-repeated mail-in ballot lie – that mail-in ballots are subject to widespread electoral fraud. This has been firmly refuted, even by Trump’s own investigation following the 2016 election. Yet it is widely recognized that this lie could foment a sense of resentment among Trump supporters should he not get across the line on November 3. Or it could facilitate his (by now fairly transparent) intention to declare victory on election night should the result be inconclusive as counting proceeds. These are the possible, or even likely, outcomes if Trump is able to create, feed, and capitalize on a situation in which truth and fact have no purchase on, or have no meaningful relationship to, people’s reasons for acting or making choices.

Trump’s lying is both a symptom of and a part of the disease of his presidency – a pathology which has infected pretty well the whole Republican party and which is putting great strain on many of the organs and tissues of American democracy. This really is a time like no other in America’s history, and the stakes are as high as they have ever been.

At this point the ethical dimensions of the question of why truth is important to a healthy and just society seem to be slipping from view as America struggles under Trump to keep an epistemic foundation in political discourse that is broadly governed by principles of veracity. Fact-checking alone cannot win that struggle.

Waiting for a Coronavirus Vaccine? Watch Out for Self-Deception

photograph of happy smiley face on yellow sticky note surrounded by sad unhappy blue faces

In the last few months, as it has become clear that the coronavirus won’t be disappearing anytime soon, there has been a lot of talk about vaccines. The U.S. has already started several trials, and both Canada and Europe have followed suit. The lack of a vaccine has made even more evident how challenging it is to coexist with the current pandemic. Aside from the more extreme consequences that involve hospitalizations, families and couples have been separated for dramatic lengths of time, and some visas have been halted. Unemployment rates have hit record numbers, with what is predicted to be a slow recovery. Restaurants, for example, have recently reopened, yet it is unclear what their future will be once the patio season comes to an end. With this in mind, many (myself included) are hoping that a vaccine will come, the sooner the better.

But strong interest in a vaccine raises the worry of how this hope influences what we believe, and in particular how we examine evidence that doesn’t fit our hopes. The worry is that one might indulge in self-deception. What do I mean by this? Let me give you an example that clarifies what I have in mind.

Last week, I was talking to my best friend, who is a doctor and, as such, typically defers to experts. When my partner and I told my friend of our intention to get married, she reacted enthusiastically. Unfortunately, the happiness of the moment was interrupted by the realization that, due to the current coronavirus pandemic, the wedding would need to take place after the distribution of a vaccine. Since then, my friend has repeatedly assured me that there will be a vaccine as early as October on the grounds that Donald Trump has guaranteed it will be the case. When I relayed to her information coming instead from Dr. Anthony Fauci, who believes the vaccine will be available only in 2021, my friend embarked on effortful mental gymnastics to justify (or better: rationalize) why Trump was actually right.

There is an expression commonly used in Italian called “mirror climbing.” Climbing a mirror is an obviously effortful activity, and it is also bound to fail because the mirror’s slippery surface offers nothing to hold onto. Italians use the expression metaphorically to denote the struggle of someone attempting to justify a proposition that by their own lights is not justifiable. My friend was certainly guilty of some mirror climbing, and she is a clear example of someone who, driven by the strong desire to see her best friend getting married, deceives herself into believing that the vaccine will be available in October. This is in fact how self-deception works. People don’t simply believe what they want, for that is psychologically impossible. You couldn’t possibly make yourself believe that the moon was made of cheese, even if you wanted to. Beliefs are just not directly controllable like actions. Rather, it is our wishes, desires, and interests that influence the way we come to believe what we want by shaping how we gather and interpret evidence. We might, for example, give more importance to reading news that aligns with our hopes and scroll past news titles that question what we would like to be true. We might give weight to a teaspoon of evidence coming from a source we wouldn’t normally trust, and give credibility to evidence that we know is not relevant.

You might ask, though, how is my friend’s behavior different from that of someone who is simply wrong rather than self-deceived? Holding a belief that turns out to be false usually happens by mistake, and as a result, when people correct us, we don’t have problems revising that belief. Self-deception, instead, doesn’t happen out of mere error; it is driven by a precise motivation — desires, hope, fears, worry, and so on — which biases the way we collect and interpret evidence in favor of that belief. Consider my friend again. She is a doctor, and as such she always trusts experts. Now, regardless of political views, Trump, unlike Dr. Fauci, is not an expert in medicine. Normally, my friend knows better than to trust someone who is not an expert, yet the one instance in which she doesn’t is one where a real interest is at stake. This isn’t a coincidence; the belief that there will be a vaccine in October is fueled by a precise hope. This is a problem because our beliefs should be guided by evidence, not wishes. Beliefs, so to speak, are not designed to make us feel better (contrary to desires, for example). They are supposed to match reality, and as such be a tool that we use to navigate our environment. Deceiving ourselves that something is the case when it’s not inevitably leads to disappointment, because reality has a way of intruding on our hopes and catching up with us.

Given this, what can we do to prevent falling into the grips of self-deception? Be vigilant. We are often aware of our wishes and hopes (just like you are probably aware now that you’re hoping a vaccine will be released soon). Once we are aware of our motivational states, we should slow down our thoughts and be extra careful when considering evidence in favor of what we hope is true. This is the first step in protecting ourselves from self-deception.

The Dangerous Allure of Conspiracy Theories

photograph of QAnon sign at rally

Once again, the world is on fire. Every day seems to bring a new catastrophe, another phase of a slowly unfolding apocalypse. We naturally intuit that spontaneous combustion is impossible, so a sinister individual (or a sinister group of individuals) must be responsible for the presence of evil in the world. Some speculate that the most recent bout of wildfires in California was ignited by a giant laser (though no one can agree on who fired the lasers in the first place), while others across the globe set 5G towers ablaze out of fear that this frightening new technology was created by a malevolent organization to hasten the spread of coronavirus. Events as disparate as the recent explosion in Beirut and the rise in income inequality have been subsumed into a vast web of conspiracy and intrigue. Conspiracy theorists see themselves as crusaders against the arsonists at the very pinnacle of society, and are taking to internet forums to demand retribution for perceived wrongs.

The conspiracy theorists’ framework for making sense of the world is a dangerously attractive one. Despite mainstream disdain for nutjobs in tinfoil hats, conspiracy theories (and those who unravel them) have been glamorized in pop culture through films like The Matrix and The Da Vinci Code, both of which involve a single individual unraveling the lies perpetuated by a malevolent but often invisible cadre of villains. Real-life conspiracy theorists also model themselves after the archetypal detective of popular crime fiction. This character possesses authority to sort truth from untruth, often in the face of hostility or danger, and acts as an agent for the common good.

But in many ways, the conspiracy theorist is the inverse of the detective; the latter operates within the system of legality, often working directly for the powers-that-be, which requires an implicit trust in authority. They usually hunt down someone who has broken the law, and who is therefore on the fringes of the system. Furthermore, the detective gathers empirical evidence which forms the justification for their pursuit. The conspiracy theorist, on the other hand, is on the outside looking in, and displays a consistent mistrust of both the state and the press as sources of truth. Though conspiracy theorists ostensibly obsess over paper trails and blurry photographs, their evidence (which is almost always misconstrued or fabricated) doesn’t matter nearly as much as the conclusion. As Michael Barkun explains in A Culture of Conspiracy: Apocalyptic Visions in Contemporary America,

the more sweeping a conspiracy theory’s claims, the less relevant evidence becomes …. This paradox occurs because conspiracy theories are at their heart nonfalsifiable. No matter how much evidence their adherents accumulate, belief in a conspiracy theory ultimately becomes a matter of faith rather than proof.

In that sense, most conspiracy theorists are less concerned with uncovering the truth than with confirming what they already believe. This is supported by a 2016 study, which identifies partisanship as a crucial factor in measuring how likely someone is to buy into conspiracy theories. The researchers determined that "political socialization and psychological traits are likely the most important influences" on whether or not someone will find themselves watching documentaries on ancient aliens or writing lengthy Facebook posts about lizard people masquerading as world leaders. For example, "Republicans are the most likely to believe in the media conspiracy followed by Independents and Democrats. This is because Republicans have for decades been told by their elites that the media are biased and potentially corrupt." The study concludes that people from both ends of the political spectrum can be predisposed to see a conspiracy where there isn't one, but partisanship is ultimately a more important predictor than any other factor of whether a person will believe a specific theory. In other words, Democrats rarely buy into conspiracy theories about their own party, and vice versa with Republicans. The enemy is never one of us.

It’s no wonder the tinfoil-hat mindset is so addictive. It’s like being in a hall of mirrors, where all you can see is your own flattering image repeated endlessly. Michael J. Wood suggests in another 2016 study that “people who are aware of past malfeasance by powerful actors in society might extrapolate from known abuses of power to more speculative ones,” or that “people with more conspiracist world views might be more likely to seek out information on criminal acts carried out by officials in the past, while those with less conspiracist world views might ignore or reject such information.” It’s a self-fulfilling prophecy, fed by a sense of predetermined mistrust that is only confirmed by every photoshopped UFO. Conspiracy theories can be easily adapted to suit our own personal needs, which further fuels the narcissism. As one recent study on a conspiracy theory involving Bill Gates, coronavirus, and satanic cults points out,

there’s never just one version of a conspiracy theory — and that’s part of their power and reach. Often, there are as many variants on a given conspiracy theory as there are theorists, if not more. Each individual can shape and reshape whatever version of the theory they choose to believe, incorporating some narrative elements and rejecting others.

This mutable quality makes conspiracy theories personal, as easily integrated into our sense of self as any hobby or lifestyle choice. Even worse, the very nature of social media amplifies the potency of conspiracy theories. The study explains that

where conspiracists are the most engaged users on a given niche topic or search term, they both generate content and effectively train recommendation algorithms to recommend the conspiracy theory to other users. This means that, when there’s a rush of interest, as precipitated in this case by the Covid-19 crisis, large numbers of users may be driven towards pre-existing conspiratorial content and narratives.

The more people fear something, the more likely an algorithm will be to offer them palliative conspiracy theories, and the echo chamber grows even more.

Both of the studies previously mentioned suggest that there is a predisposition to believe in conspiracy theories that transcends political alliance, but where does that predisposition come from? It seems most likely that conspiracy beliefs are driven by anxiety, paranoia, feelings of powerlessness, and a desire for authority. That desire for authority is especially evident at gatherings of flat-earthers, a group that consistently mimics the tone and language of academic conferences. Conspiracies rely on what Barkun called "stigmatized knowledge," or "claims to truth that the claimants regard as verified despite the marginalization of those claims by the institutions that conventionally distinguish between knowledge and error — universities, communities of scientific researchers, and the like." People feel cut off from the traditional locus of knowledge, so they create their own alternative epistemology, which restores their sense of authority and control.

Conspiracy theories are also rooted in a basic desire for narrative structure. Faced with a bewildering deluge of competing and fragmentary narratives, conspiracy theories cobble together half-truths and outright lies into a story that is more coherent and exciting than reality. The conspiracy theories that attempt to explain coronavirus provide a good example of this process. The first stirrings of the virus began in the winter of 2019, then rapidly accelerated without warning and altered the global landscape seemingly overnight. Our healthcare system and government failed to respond with any measure of success, and hundreds of thousands of Americans died over the span of a few months. The reality of the situation flies in the face of narrative structure — the familiar rhythm of rising action, climax, and falling action, the cast of identifiable good guys and bad guys, the ultimate moral victory that redeems needless suffering by giving it purpose. In the absence of narrative structure, theorists suggest that Bill Gates planned the virus decades ago, citing his charity work as an elaborate cover-up for nefarious misdeeds. On this telling, the system itself isn't broken or left unequipped to handle the pandemic by austerity; rather, the catastrophe is the work of a single bad actor.

Terrible events are no longer random, but imbued with moral and narrative significance. Michael Barkun argues that this is a comfort, but also a factor that further drives conspiracy theories:

the conspiracy theorist’s view is both frightening and reassuring. It is frightening because it magnifies the power of evil, leading in some cases to an outright dualism in which light and darkness struggle for cosmic supremacy. At the same time, however, it is reassuring, for it promises a world that is meaningful rather than arbitrary. Not only are events nonrandom, but the clear identification of evil gives the conspiracist a definable enemy against which to struggle, endowing life with purpose.

Groups of outsiders (wealthy Jewish people, the "liberal elite," immigrants) are Othered within the discourse of theorists, rendered as villains capable of superhuman feats. The QAnon theory in particular feels more like the Marvel cinematic universe than a coherent ideology, with its bloated cast of heroes teaming up for an Avengers-style takedown of the bad guys. Some of our best impulses — our love of storytelling, a desire to see through the lies of the powerful — are twisted and made ugly in the world of online conspiracy forums.

The prominence of conspiracy theories in political discourse must be addressed. Over 70 self-professed Q supporters have run for Congress as Republicans in the past year, and as Kaitlyn Tiffany points out in an article for The Atlantic, the QAnon movement is becoming gradually more mainstream, borrowing aesthetics from the lifestyle movement and makeup tutorials to make itself more palatable. "Its supporters are so enthusiastic, and so active online, that their participation levels resemble stan Twitter more than they do any typical political movement. QAnon has its own merch, its own microcelebrities, and a spirit of digital evangelism that requires constant posting." Perhaps the most frightening part of this problem is the impossibility of fully addressing it, because conspiracy theorists are notoriously difficult to hold a good-faith dialogue with. Sartre's description of anti-Semites, written in the 1940s (not coincidentally, the majority of contemporary conspiracy theories are deeply anti-Semitic), is relevant here. He wrote that anti-Semites (and today, conspiracy theorists)

know that their statements are empty and contestable; but it amuses them to make such statements: it is their adversary whose duty it is to choose his words seriously because he believes in words. They have a right to play. They even like to play with speech because by putting forth ridiculous reasons, they discredit the seriousness of their interlocutor; they are enchanted with their unfairness because for them it is not a question of persuading by good arguing but of intimidating or disorienting.

This quote raises the frightening possibility that not all conspiracy theorists truly believe what they say, and that their disinterest in evidence is less an intellectual blind spot than a source of amusement. Sartre helps us see why conspiracy theorists often operate on a completely different wavelength, one that seems to preclude logic, rationality, and even the good-faith exchange of ideas between equals.

The fragmentation of postmodern culture has created an epistemic conundrum: on what basis do we understand reality? As the operations of governments become increasingly inscrutable to those without education, as the concept of truth itself seems under attack, how do we make sense of the forces that determine the contours of our lives? Furthermore, as Wood points out, mistrust in the government isn’t always baseless, so how do we determine which threats are real and which are imagined?

There aren’t simple answers to these questions. The only thing we can do is address the needs that inspire people to seek out conspiracy theories in the first place. People have always had an impulse to attack their anxieties in the form of a constructed Other, to close themselves off, to distrust difference, to force the world to conform to a single master narrative, so it’s tempting to say that there will probably never be a way to completely eradicate insidious conspiracy theories entirely. Maybe the solution is to encourage the pursuit of self-knowledge, our own biases and desires, before we pursue an understanding of forces beyond our control.

On “Doing Your Own Research”

photograph of army reserve personnel wearing neck gaiter at covid testing site

In early August, American news outlets began to circulate a surprising headline: neck gaiters — a popular form of face covering used by many to help prevent the spread of COVID-19 — could reportedly increase the infection rate. In general, face masks work by catching respiratory droplets that would otherwise contaminate a virus-carrier's immediate environment (in much the same way that traditional manners have long prescribed covering your mouth when you sneeze); however, according to the initial report by CBS News, a new study found that the stretchy fabric typically used to make neck gaiters might actually work like a sieve to turn large droplets into smaller, more transmissible ones. Instead of helping to keep people safe from the coronavirus, gaiters might even "be worse than no mask at all."

The immediate problem with this headline is that it’s not true; but, more generally, the way that this story developed evidences several larger problems for anyone hoping to learn things from the internet.

The neck gaiter story began on August 7th when the journal Science Advances published new research on a measurement test for face mask efficacy. Interested by the widespread use of homemade face-coverings, a team of researchers from Duke University set out to identify an easy, inexpensive method that people could use at home with their cell phones to roughly assess how effective different commonly-available materials might be at blocking respiratory droplets. Importantly, the study was not about the overall efficacy rates of any particular mask, nor was it focused on the length of time that respiratory droplets emitted by mask-wearers stayed in the air (which is why smaller droplets could potentially be more infectious than larger ones); the study was only designed to assess the viability of the cell phone test itself. The observation that the single brand of neck gaiter used in the experiment might be “counterproductive” was an off-hand, untested suggestion in the final paragraph of the study’s “Results” section. Nevertheless, the dramatic-sounding (though misleading) headline exploded across the pages of the internet for weeks; as recently as August 20th, The Today Show was still presenting the untested “result” of the study as if it were a scientific fact.

The ethics of science journalism (and the problems that can arise from sensationalizing and misreporting the results of scientific studies) is a growing concern, but it is particularly salient when the reporting in question pertains to an ongoing global pandemic. While it might be unsurprising that news sites hungry for clicks ran a salacious-though-inaccurate headline, it is far from helpful and, arguably, morally wrong.

Furthermore, the kind of epistemic malpractice entailed by underdeveloped science journalism poses larger concerns for the possibility of credible online investigation more broadly. Although we have surrounded ourselves with technology that allows us to access the internet (and the vast amount of information it contains), it is becoming ever-more difficult to filter out genuinely trustworthy material from the melodramatic noise of websites designed more for attracting attention than disseminating knowledge. As Kenneth Boyd described in an article here last year, the algorithmic underpinnings of internet search engines can lead self-directed researchers into all manner of over-confident mistaken beliefs; this kind of structural issue is only exacerbated when the inputs to those algorithms (the articles and websites themselves) are also problematic.

These sorts of issues cast an important, cautionary light on a growing phenomenon: the credo that one must "Do Your Own Research" in order to be epistemically responsible. Whereas it might initially seem plain that the internet's easily-accessible informational treasure trove would empower autodidacts to always (or usually) draw reasonable conclusions about whatever they set their minds to study, the epistemic murkiness of what can actually be found online suggests that reality is more complicated. It is not at all clear that non-expert researchers who are ignorant of a topic can, on their own, justifiably identify trustworthy information (or information sources) about that topic; but, on the other hand, if a researcher does have enough knowledge to judge a claim's accuracy, then it seems like they don't need to be researching the topic to begin with!

This is a rough approximation of what philosophers sometimes call “Meno’s Paradox” after its presentation in the Platonic dialogue of that name. The Meno discusses how inquiry works and highlights that uninformed inquirers have no clear way to recognize the correct answer to a question without already knowing something about what they are questioning. While Plato goes on to spin this line of thinking into a creative argument for the innateness of all knowledge (and, by extension, the immortality of the soul!), subsequent thinkers have often taken different approaches to argue that a researcher only needs to have partial knowledge either of the claim they are researching or of the source of the claim they are choosing to trust in order to come to justified conclusions.

Unfortunately, "partial knowledge" solutions have problems of their own. On one hand, human susceptibility to a bevy of psychological biases makes a researcher's "partial" understanding of a topic a risky foundation for subsequent knowledge claims; it is exceedingly easy, for example, for the person "doing their own research" to be unwittingly led astray by their unconscious prejudices, preconceptions, or the pressures of their social environment. On the other hand, grounding one's confidence in a testimonial claim on the trustworthiness of the claim's source seems (in most cases) simply to push the justification problem back a step without really solving much: in much the same way that a non-expert cannot make a reasonable judgment about a proposition, that same non-expert also can't, all by themselves, determine who can make such a judgment.

So, what can the epistemically responsible person do online?

First, we must cultivate an attitude of epistemic humility (of the sort summarized by Plato's famous comment "I know that I know nothing") — something which often requires us to admit not only that we don't know things, but that we often can't know things without the help of teachers or other subject matter experts doing the important work of filtering the bad sources of information away from the good ones. All too often, "doing your own research" functionally reduces to a triggering of confirmation bias and lasts only as long as it takes to find a few posts or videos that confirm what a person was already thinking in the first place (regardless of whether those posts or videos are themselves worthy of being believed). If we instead work to remember our own intellectual limitations, both about specific subjects and about the process of inquiry writ large, we can develop a welcoming attitude toward the epistemic assistance offered by others.

Secondly, we must maintain an attitude of suspicion about bold claims to knowledge, especially in an environment like the internet. It is a small step from skepticism about our own capacities for inquiry and understanding to skepticism about those of others, particularly when we have plenty of independent evidence that many of the most accessible or popular voices online are motivated by concerns other than the truth. Virtuous researchers have to focus on identifying and cultivating relationships with knowledgeable guides (who can range from individuals to their writings to the institutions they create) on whom they can rely when it comes time to ask questions.

Together, these two points lead to a third: we must be patient researchers. Developing epistemic virtues like humility and cultivating relationships with experts that can overcome rational skepticism — in short, creating an intellectually vibrant community — takes a considerable amount of effort and time. After a while, we can come to recognize trustworthy informational authorities as “the ones who tend to be right, more often than not” even if we ourselves have little understanding of the technical fields of those experts.

It’s worth noting here, too, that experts can sometimes be wrong and nevertheless still be experts! Even specialists continue to learn and grow in their own understanding of their chosen fields; this sometimes produces confident assertions from experts that later turn out to be wrong. So, for example, when the Surgeon General urged people in February to not wear face masks in public (based on then-current assumptions about the purportedly low risk of asymptomatic patients) it made sense at the time; the fact that those assumptions later proved to be false (at which point the medical community, including the epistemically humble Surgeon General, then recommended widespread face mask usage) is simply a demonstration of the learning/research process at work. On the flip side, choosing to still cite the outdated February recommendation simply because you disagree with face mask mandates in August exemplifies a lack of epistemic virtue.

Put differently, briefly using a search engine to find a simple answer to a complex question is not “doing your own research” because it’s not research. Research is somewhere between an academic technique and a vocational aspiration: it’s a practice that can be done with varying degrees of competence and it takes training to develop the skill to do it well. On this view, an “expert” is simply someone who has become particularly good at this art. Education, then, is not simply a matter of “memorizing facts,” but rather a training regimen in performing the project of inquiry within a field. This is not easy, requires practice, and still often goes badly when done in isolation — which is why academic researchers rely so heavily on their peers to review, critique, and verify their discoveries and ideas before assigning them institutional confidence. Unfortunately, this complicated process is far less sexy (and far slower) than a scandalous-sounding daily headline that oversimplifies data into an attractive turn of phrase.

So, poorly-communicated science journalism not only undermines our epistemic community by directly misinforming readers, but also by perpetuating the fiction that anyone is an epistemic island unto themselves. Good reporting must work to contextualize information within broader conversations (and, of course, get the information right in the first place).

Please don’t misunderstand me: this isn’t meant to be some elitist screed about how “only the learned can truly know stuff, therefore smart people with fancy degrees (or something) are best.” If degrees are useful credentials at all (a debatable topic for a different article!) they are so primarily as proof that a person has put in considerable practice to become a good (and trustworthy) researcher. Nevertheless, the Meno Paradox and the dangers of cognitive biases remain problems for all humans, and we need each other to work together to overcome our epistemic limitations. In short: we would all benefit from a flourishing epistemic community.

And if we have to sacrifice a few splashy headlines to get there, so much the better.

Morality Pills Aren’t Enough

close-up photograph of white, chalky pill on pink background

Here's a problem: despite the coronavirus still being very much a threat, especially in the US, many people refuse to take even the most basic precautions when it comes to preventing the spread of the disease. One of the most controversial is the wearing of masks: while some see wearing a mask as a violation of personal liberties (the liberty to not have to wear a mask, I suppose), others may simply value their own comfort over the well-being of others. Indeed, refusal to wear a mask has been seen by some as a failure of courtesy to others, or a general lack of kindness.

We might look at this situation and make the following evaluation: the problem with people refusing to take precautions to help others during the current pandemic is the result of moral failings. These failings might be the result of a failure to value others in the way that they ought to, perhaps due to a lack of empathy or tendency towards altruism. So perhaps what we need is something that can help these people have better morals. What we need is a morality enhancing pill.

What would such a pill look like? Presumably it would help an individual overcome some relevant kind of moral deficiency, perhaps in the way that some drugs can help individuals cope with certain mental illnesses. The science behind it is merely speculative; what’s more, it’s not clear that it could ever really work in practice. Add concerns about a morality pill’s potentially even worse moral consequences – violations of free will spring to mind, especially if they are administered involuntarily – and it is perhaps easy to see why such a pill currently exists only in the realm of thought experiment.

But let’s put all that aside and say that such a pill was developed. People who were unempathetic take the pill and now show much more empathy; people who failed to value the well-being of others now value it more. Also say that everyone was happy to get on board, so we put at least some of the bigger practical worries aside. Would it solve the problem of people not taking the precautions that they should in helping stop the spread of coronavirus?

I don’t think it would. This is because the problem is not simply a moral problem, but also an epistemic one. In other words: one can have as much empathy as one likes, but if one is forming beliefs on the basis of false or misleading information, then empathy isn’t going to do much good.

Consider someone who refuses to wear a mask, even though doing so has been highly recommended by a relevant agency, or perhaps even mandated. Their failure to comply may not be indicative of a failure of empathy: if the person falsely believes, for example, that masks inhibit one's ability to breathe, then they may be as empathetic as you like and still not change their mind. Indeed, given the belief that masks are harmful, increased levels of empathy may only strengthen one's resolve: if one cares about the well-being of others, and believes that masks can inhibit that well-being, one will perhaps strive even harder to get people to stop wearing them.

Of course, what we want is not that kind of empathy; we want well-informed empathy. This is the kind of empathy that is directed at what the well-being of others really consists in, not just what one perceives it to be. A good morality pill, then, is one that doesn't just supplement one's lack of empathy or altruism or what-have-you, but does so in a way that is directed at what's actually, truly morally good.

Here, though, we see a fundamental flaw with the morality pill project. The initial problem was this: since those who refuse to follow guidelines that can help decrease the spread of the coronavirus also refuse to listen to the evidence provided by scientific experts, we should look to other solutions, ones that don't have to involve trying to change someone's beliefs. The problem with focusing on one's moral character instead, though, is that bettering one's moral character is a project that requires changing one's beliefs as well. The morality pill solution, then, really isn't much of a solution at all.

The morality pill, of course, still exists only in the realm of the hypothetical. Back in the real world we are still faced with the hard problem of trying to get people who ignore evidence and believe false or misleading information to change their minds. Where the morality pill thought experiment fails, I think, is that while it is meant to be a way of getting around this hard problem, it runs right into it, instead.

Clifford and the Coronavirus

photograph of empty ship helm

In 1877, mathematician and philosopher W.K. Clifford published a classic essay entitled "The Ethics of Belief." In it, he asks us to consider a case involving a negligent shipowner:

“A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not overwell built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind, and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such ways he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.”

Clifford then asks: what should we think of the shipowner? The answer, he thinks, is obvious: he is responsible for the death of the passengers. This is because he had all the evidence before him that his ship needed repairs and really wasn’t very safe, and instead of forming his beliefs in accordance with the evidence, he stifled his doubts and believed what he wanted.

As far as philosophical thought experiments go, Clifford’s case is easy to imagine happening in real life. In fact, there have recently been a number of real-life nautical disasters, although instead of ships sinking, they involve coronavirus outbreaks, the most recent being a Norwegian cruise ship that reported a number of coronavirus cases among crew and passengers earlier in August. In response to the incident, the CEO of the company owning the cruise line stated that “We have made mistakes” and that the outbreak was ultimately the product of a failure of several “internal procedures.” Indeed, the cruise line’s website states that they followed all the relevant guidelines from the Norwegian Institute for Public Health, implemented measures to encourage social distancing and good hygiene, and set sail with only 50% capacity. Despite these measures, though, people still got sick. This is not an isolated event: numerous businesses worldwide — that have adhered to government and other reopening guidelines — have seen spikes in cases of coronavirus among staff and customers.

In introducing his case, Clifford argued that what the shipowner did wrong was to form a belief on insufficient evidence. And it is easy enough to agree with Clifford's diagnosis when it comes to such egregious belief-forming behavior as he describes. However, real-life cases are typically more subtle. Cases like the Norwegian cruise ship and other businesses that have experienced problematic reopenings should lead us to question how much evidence is good enough when it comes to making the decision to reopen one's business, and who we should find deserving of blame when things don't work out.

To be fair, there are certainly differences between Clifford’s case and the case of the Norwegian cruise ship: there is no reason to think, for instance, that anyone in charge of the latter actively stifled doubts they knew to be significant. But there are also similarities, in that the evidence that cruise ships are generally not safe places to be right now is abundant and readily available. Even if one adheres to relevant health guidelines, we might wonder whether that is really good enough given what other evidence is available.

We might also wonder who is ultimately to blame. For instance, if guidelines concerning the reopening of businesses that are provided by a relevant health agency turn out to be inadequate, perhaps the blame should fall on those in charge of the guidelines themselves, and not on those who followed them. There have, after all, been a number of countries that have reinstated stricter conditions on the operation of businesses, after initially relaxing them, in response to increases in new infections, Norway recently among them. When cases of coronavirus increase as a result of businesses being allowed to reopen, we might then put the blame on the government as opposed to the business owners themselves.

Clifford also makes an additional, more controversial argument that he illustrates in a second example:

“Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot. When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out. The question of right or wrong has to do with the origin of his belief, not the matter of it; not what it was, but how he got it; not whether it turned out to be true or false, but whether he had a right to believe on such evidence as was before him.”

Using this second case, Clifford argues that whether things turn out okay or not really isn’t important for determining whether someone has done something wrong: even if everyone on the ship made it safely the shipowner would still be guilty, he just got lucky that everyone survived. While we might think that Clifford is being harsh in his judgment, we might also wonder whether other businesses that have re-opened early in the face of some evidence that doing so may still be dangerous should be considered blameworthy, as well, regardless of the consequences.

Moral Panics about “COVID Parties”

photograph of teenagers at crowded concert on the beach

In recent weeks, a new feature has appeared in the discourse focused on the global pandemic and its related quarantine procedures: reportedly, people have been hosting and attending parties designed to spread the coronavirus. From Alabama to Florida to Texas, the details of these so-called “COVID Parties” differ, but one element is common: attendees do not take the threat of the disease seriously. Some gatherings seem to be patterned after “chickenpox parties” intended to encourage herd immunity, others are allegedly motivated by sport or financial gain (one widely-circulated report claimed that a party in Tuscaloosa offered a cash prize to the first guest to contract COVID-19).

However, to date, the evidence for the phenomenon of “COVID parties” is surprisingly scarce: that is to say, it is not clear that any such parties have actually taken place. Consider the story of the “Texas millennial” who supposedly confessed to attending a COVID party shortly before he died in a San Antonio hospital in mid-July.

The hospital's health director admits that she heard of the disclosure secondhand, and journalists have been unable to locate the nurse who purportedly received the confession in the first place. Similarly, most of the claims about college students in Alabama holding contests to intentionally catch COVID-19 are traceable to a single member of the Tuscaloosa City Council commenting on (and seemingly embellishing) a rumor shared by the city's fire chief about sick teenagers leaving their homes: at this point, no hard evidence (such as alleged eyewitnesses or posts on social media) has surfaced of these parties, and the University of Alabama has been unsuccessful at locating any potential attendees. Nevertheless, both of these small-scale stories have been reported by national news outlets.

This suggests that the trending discussions about "COVID parties" evidence what's called a "moral panic" concerned with discouraging lackadaisical responses to the coronavirus. Such panics result when false beliefs about a purported threat to a social group spread throughout that group, thereby leading group members to be increasingly hostile towards anyone or anything they suspect of embodying the rumored threat. Consider the overreaction of the Christian Right to the popularity of Dungeons and Dragons in the 1980s, longstanding urban legends about the risks of poisoned Halloween candy, and the relatively recent "Momo Challenge" where a YouTube video supposedly tried to convince viewers to harm themselves: each of these was rooted in false beliefs about exaggerated threats to children. More serious examples of moral panics can be found in the abuses suffered by patients with HIV/AIDS or members of LGBTQ communities as ignorance and fear-mongering among more dominant social groups fueled (and still fuels) official policies of abandonment and exclusion. Importantly, Oxford Reference points out that "moral panics are generally fueled by media coverage of social issues" — a feature only exacerbated by the contemporary explosion of social media.

Given the many risks posed by the coronavirus, the perceived threat of a COVID party might be understandable: if such a party were to happen, it could significantly contribute to more people contracting the disease — including, crucially, more people than just those who actually chose to attend the party. But there are at least two broad kinds of problems with moral panics, and the rumors of COVID parties seem to exhibit both.

Firstly, moral panics unavoidably pose epistemic problems insofar as they are, by definition, fueled by false beliefs and, often, mishandled evidence (or evidential requirements). For example, to date, the reports of COVID parties have ranged from hearsay to misattribution to, potentially, straightforward fabrication. Consider the tragic case of Carsyn Leigh Davis, an immunocompromised seventeen-year-old in Florida who died after contracting COVID-19 in mid-June. Shortly before she fell ill, Davis attended a party at her church where neither face masks nor social distancing practices were required; initial reports (now corrected) labeled this church gathering a “COVID party,” despite there being no clear evidence that the event was actually intended to spread the coronavirus (the church has explicitly denied these allegations). By jumping to conclusions about the nature of the church party, at least some reporters (and self-styled reporters who share information on social media sites) seem to have fallen prey to the problem of confirmation bias. In a similar way (and for a variety of additional reasons), failures to thoroughly vet second-(or third or fourth)-hand reports of COVID parties have led to what amounts to conspiracy theories being shared openly and uncritically.

Which leads me to the second — and, arguably, more problematic — issue about moral panics: what they tell us about the social groups doing the panicking. Naturally, in order for biases to be confirmable, they must first exist in the minds of biased observers: someone cannot, for example, reflexively equate homosexuals with pedophiles if they do not already falsely believe that those two groups of people are somehow logically associated. Certainly, it is no secret that plenty of skeptics doubt the severity (and even reality) of the COVID-19 pandemic, but the panic about COVID parties suggests more complicated stereotypes are at work.

Consider the commonalities amongst the COVID party reports I've already mentioned: each of them focuses on patients (or imagined patients) who were also members of subcultures frequently treated as fair game for jokes by the culture-at-large: millennials, college students, and religious fundamentalists. Perhaps most importantly, all three stories hailed from the Deep South. For these sorts of stories to seem salient to readers in the first place, those readers must already be at least somewhat suspicious of (or, at the very least, condescending towards) those subgroups, in a manner that is ethically suspect.

Let's imagine that Barry is a person who takes the threat posed by the coronavirus seriously, but has never before heard of a COVID party. When Barry reads allegations that people in Alabama have been intentionally competing to contract COVID-19 and win a pot of money, they will likely be skeptical (because it will strike Barry as a wildly unlikely thing for people to do). If, however, Barry then reads that it was a group of college students throwing this alleged COVID party and thereby takes the premise of the story to be more likely, then we can reasonably infer certain unflattering things that Barry likely believes about college students. Replace "college students" with "millennials," "religious fundamentalists," or "Southerners" and the story reads no differently.

This kind of condescending attitude is akin to what Aristotle decried as an act of “insult” (sometimes also translated as both “insolence” and “hubris”). In his Rhetoric, Aristotle explains that insult consists in shaming someone simply for the mean-spirited pleasure of doing so; as he says, people prone to such acts “think that, in committing them, they are showing their superiority.” Such behavior, Aristotle continues, is often evidence of a dishonorable character and this seems equally true of biased attitudes that do not provoke overt action (beyond tapping the “share” link). Insofar as rumors about COVID parties activate biased presuppositions about various subgroups that Barry assumes to be inferior, we can thereby reasonably suspect that Barry harbors certain immoral prejudices.

So, in addition to their other features, moral panics can function as a barometer for subgroups that prideful members of the dominant social group reflexively patronize. If real evidence of COVID parties becomes available, then critical reactions to such threats would be warranted, but in the absence of such evidence, moral panics are epistemically unjustified. Moreover, insofar as moral panics are motivated by regionalism or other kinds of snobbish stereotypes, they run the risk of actually being prideful moral hazards.

Hydroxychloroquine and the Problem of Expert Disagreement

photograph of Coronavirus Update Briefing with Dr. Fauci at the podium with Trump behind him

On April 5th, after promoting the use of an anti-malarial drug to (possibly) help stem the tide of the coronavirus outbreak, President Trump commented, “What do I know? I’m not a doctor, but I have common sense.” According to Trump, even though we still lack conclusive evidence that hydroxychloroquine is an effective treatment for COVID-19, there is no reason not to try using it: the medication has been prescribed for other reasons for years and some preliminary results suggest it might also help diminish the effects of the novel coronavirus.

In contrast, Dr. Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases and member of the Coronavirus Task Force assembled by the White House to combat the outbreak, has repeatedly cautioned against counting on a treatment regimen that, based on what we know at this point, may not actually work; speaking to Fox and Friends on April 3rd, Fauci warned “We’ve got to be careful that we don’t make that majestic leap to assume that this is a knockout drug. We still need to do the kinds of studies that definitely prove whether any intervention is truly safe and effective.”

What should the average American (who, presumably, knows next to nothing about hydroxychloroquine) make of this disagreement? In most cases, we have reason to believe that the President of the United States – whoever that person happens to be – is in a position to be well-informed and trustworthy. Similarly, we have good reasons to think that doctors who have been appointed to lead federal research institutes (like the NIAID) – not to mention medical doctors in general – are believable experts about medications and prescription practices, as well as other matters of healthcare. How is a non-expert supposed to know who should be believed when purported experts disagree?

This is what philosophers sometimes call the “problem of expert disagreement” – if a layperson needs the insight of an expert to make a reasonable judgment about a claim, but two potential experts disagree, how can the layperson decide which expert to believe? Although the answer here might initially seem trivially easy – the layperson should just listen to whichever expert has more relevant knowledge about the claim – things aren’t so simple: how can the layperson know what counts as “relevant knowledge” if they are, in fact, just a layperson?

So, instead, we might look to the credentials of the two experts to see what sort of education or experience they might be employing when making their recommendations. If we know that one expert graduated from a well-respected university that specializes in the relevant field while the other received a degree from a university that does not train experts in the specific domain, then we have some reason to trust the first over the second. Ultimately, though, this test might not be much better than the first option: it requires the layperson to be able to judge the relative merit of credentialing institutions rather than credentialed individuals and this also seems unrealistic to expect actual laypersons to be capable of doing.

It's worth noting, though, that this is exactly what laypersons think they're doing when they simply assert that someone "went to Cornell" or "is the President" – they're citing some person as an authority in virtue of credentials that they hold, regardless of whether those credentials are actually relevant to the question up for debate. In the worst cases, this isn't just some misleading effect of celebrity; it's the fallacious "argument from authority" (or an argumentum ad verecundiam, if you prefer): this form of bad reasoning occurs whenever an argument rests on someone's authority in an irrelevant area of expertise.

Finally, laypersons might judge between two disagreeing experts by investigating which expert agrees with the standard consensus of other experts in their field. By increasing the sample size of experts beyond just the original two, the layperson can feasibly judge whether or not a particular person is an outlier among their peers. Presumably, a majority of experts will hold the most credibly supported position in the field (indeed, it’s not clear what else would constitute such a position). Of course, there are problems with this method too (experts in a field might agree with each other for all sorts of reasons other than a concern for the truth, for example), but it’s worth noting that this technique can be used even by the most ignorant of laypersons: all we need to know to judge between two experts is which expert’s peer group is bigger.

Typically, the problem of expert disagreement is debated among philosophers interested in social epistemology – the study of how knowledge works in group contexts – but when expert testimony bears on moral matters then ethicists should be concerned with it as well. It’s general epistemological doctrine that thinkers should believe what’s true, but (even if you deny this) it’s straightforwardly (or at least pragmatically) clear that people interested in protecting themselves and their loved ones from a pandemic should listen to the best medical experts available.

All of this is to say that, in the case of hydroxychloroquine and its purported role in fighting COVID-19, Fauci’s expertise (if you’ll forgive me for putting it this way) clearly trumps Trump’s. In the case of the first test, Dr. Fauci’s position as a medical expert gives his opinion immediate priority for medical questions over that of President Trump (whose job often entails seeking the expert advice of specialists like Fauci). For the second test, Dr. Fauci’s educational and professional career are clearly more relevant to medical questions than President Trump’s history of making real estate and television deals – and no amount of “common sense” matters here, either. Finally, although Trump has repeatedly referenced a survey of medical professionals in support of his position, Fauci’s insistence on controlled testing is simply the standard vetting process scientists seek to ensure that new treatment regimens are safe; the group Trump appeals to (based on that survey) numbers around 2300 individuals, whereas Fauci’s is something on the order of “most every medical researcher who has practiced in the last century.” United States presidents command many things, but the scientific method is not one of them.

Which might also be why Trump now appears to be actively censoring Fauci during press briefings, but that’s a topic for a different article.

Life on Mars? Cognitive Biases and the Ethics of Belief

NASA satellite image of Mars surface

In 1877 philosopher and mathematician W.K. Clifford published his now famous essay "The Ethics of Belief," where he argued that it is ethically wrong to believe things without sufficient evidence. The paper is noteworthy for its focus on the ethics involved in epistemic questions. An example of the ethics involved in belief became prominent this week as William Romoser, an entomologist from Ohio, claimed to have found photographic evidence of insect- and reptile-like creatures on the surface of Mars. The response of others was to question whether Romoser had good evidence for his belief. However, the ethics of belief formation is more complicated than Clifford's account might suggest.

Using photographs sent by the NASA Mars rover, Romoser has observed insect- and reptile-like forms on the Martian surface. This has led him to conclude, "There has been and still is life on Mars. There is apparent diversity among the Martian insect-like fauna which display many features similar to Terran insects." Much of this conclusion is based on careful observation of the photographs, which contain images of objects, some of which appear to have a head, a thorax, and legs. Romoser claims that he used several criteria in his study, noting the differences between the object and its surroundings, clarity of form, body symmetry, segmentation of body parts, skeletal remains, and comparison of forms in close proximity to each other.

It is difficult to imagine just how significant the discovery of life on other planets would be to our species. Despite all of this, several scientists have spoken out against Romoser’s findings. NASA denies that the photos constitute evidence of alien life, noting that the majority of the scientific community agree that Mars is not suitable for liquid water or complex life. Following the backlash against Romoser’s findings, the press release from Ohio University has been taken down. This result is hardly surprising; the evidence for Romoser’s claim simply is not definitive and does not fit with the other evidence we have about what the surface of Mars is like.

However, several scientists have offered an explanation for the photos. What Romoser saw can be explained by pareidolia, a tendency to perceive a specific meaningful image in ambiguous visual patterns. These include the tendency of many to see objects in clouds, a man in the moon, and even a face on Mars (as captured by the Viking 1 Orbiter in 1976). Because of this tendency, false positive findings can be more likely. If someone’s brain is trained to observe beetles and their characteristics, it can be the case that they would identify visual blobs as beetles and make the conclusion that there are beetles where there are none.

The fact that we are predisposed to cognitive biases means that it is not simply a matter of having evidence for a belief. Romoser believed he had evidence. But various cognitive biases can lead us to conclude that we have evidence when we don't, or to dismiss evidence when it conflicts with our preferred conclusions. For instance, in her book Social Empiricism, Miriam Solomon discusses several such biases that can affect our decision making. For example, one may be egocentrically biased toward using one's own observations and data over those of others.

One may also be biased towards a conclusion that is similar to a conclusion from another domain. In an example provided by Solomon, Alfred Wegener once postulated that continents move through the ocean like icebergs drift through the water, based on the fact that icebergs and continents are both large solid masses. Perhaps in just the same way, Romoser was able to infer, based on visual similarities between insect legs and a shape in a Martian image, not only that there were insects on Mars, but also that the anatomical parts of these creatures were similar in function to those of creatures found on Earth, despite the vastly different Martian environment.

There are several other forms of such cognitive biases. There is the traditional confirmation bias, where one focuses on evidence that confirms their existing beliefs and ignores evidence that does not. There is the anchoring bias, where one relies too heavily on the first information that they hear. There is also the self-serving bias, where one blames external forces when bad things happen to them, but takes credit when good things happen. All of these biases distort our ability to process information.

Not only can such biases affect whether we pay attention to certain evidence or ignore other evidence, but they can even affect what we take to be evidence. For instance, the self-serving bias may lead one to think that they are responsible for a success when in reality their role was a coincidence. In this case, their actions become evidence for a belief when it would not be taken as evidence otherwise. This complicates the notion that it is unethical to believe something without evidence, because our cognitive biases affect what we count as evidence in the first place.

The ethics of coming to a belief based on evidence can be even more complex. When we deliberate over using information as evidence for something else, or over whether we have enough evidence to warrant a conclusion, we are also susceptible to what psychologist Henry Montgomery calls dominance structuring. This is a tendency to try to create a hierarchy of possible decisions with one dominating the others. This allows us to gain confidence and to become more resolute in our decision making. Through this process we are susceptible to trading off the importance of different pieces of information that we use to help make decisions. This can be done in such a way that, once we have found a promising option, we emphasize its strengths and de-emphasize its weaknesses. If this is done without proper critical examination, we can become more and more confident in a decision without legitimate warrant.

In other words, it is possible that even as we become conscious of our biases, we can still decide to use information in improper ways. It is possible that, even in cases like Romoser's, the decision to settle on a certain conclusion and to publish such findings is the result of such dominance structuring. Sure, we have no good reason to think that the Martian atmosphere could support such life, but those images are so striking; perhaps previous findings were flawed? How can one reject what one sees with one's own eyes? The photographic evidence must take precedence.

Cognitive biases and dominance structuring are not restricted to science. They impact all forms of reasoning and decision making, and so if it is the case that we have an ethical duty to make sure that we have evidence for our beliefs, then we also have an ethical duty to guard against these tendencies. The importance of such ethical duties is only more apparent in the age of fake news and other efforts to deliberately deceive others on massive scales. Perhaps as a public we should more often ask ourselves questions like "Am I morally obliged to have evidence for my beliefs, and have I done enough to check my own biases and ensure that the evidence is good evidence?"

Impeachment Hearings and Changing Your Mind

image of two heads with distinct collections of colored cubes

The news has been dominated recently by the impeachment hearings against Donald Trump, and as has been the case throughout Trump’s presidency, it seems that almost every day there’s a new piece of information that is presented by some outlets as a bombshell revelation, and by others as really no big deal. While the country at this point is mostly split on whether they think that Trump should be impeached, there is still a lot of evidence left to be uncovered in the ongoing hearings. Who knows, then, how Americans will feel once all the evidence has been presented.

Except that we perhaps already have a good idea of how Americans will feel even after all the evidence has been presented, since a recent poll reports that the majority of Americans say that they would not change their minds on their stance towards impeachment, regardless of what new evidence is uncovered. Most Americans, then, seem to be “locked in” to their views.

What should we make of this situation? Are Americans just being stubborn, or irrational? Can they help themselves?

There is one way in which these results are surprising, namely that the survey question asks whether one could imagine any evidence that would change one’s mind. Surely if, say, God came down and decreed that Trump should or should not be impeached then one should be willing to change one’s mind. So when people are considering the kind of evidence that could come out in the hearings, they are perhaps thinking that they will be presented with evidence of a similar kind to what they’ve seen already.

A lack of imagination aside, why would people say that they could not conceive of any evidence that could sway them? One explanation might be found with the way that people tend to interpret evidence presented by those who disagree with them. Let’s say, for example, that I am already very strongly committed to the belief that Trump ought to be impeached. Couldn’t those who are testifying in his defense present some evidence that would convince me otherwise? Perhaps not: if I think that Trump and those who defend him are untrustworthy and unscrupulous then I will interpret whatever they have to say as something that is meant to mislead me. So it really doesn’t matter what kind of evidence comes out, since short of divine intervention all of the evidence that comes out will be such that it supports my belief. And of course my opposition will think in the same way. So no wonder so many of us can’t imagine being swayed.

While this picture is something of an oversimplification, there’s reason to think that people do generally interpret evidence in this way. Writing at Politico, psychologist Peter Coleman describes what he refers to as “selective perception”:

Essentially, the stronger your views are on an issue like Trump’s impeachment, the more likely you are to attend more carefully to information that supports your views and to ignore or disregard information that contradicts them. Consuming more belief-consistent information will, in turn, increase your original support or disapproval for impeachment, which just fortifies your attitudes.

While Coleman recognizes that those who are most steadfast in their views are unlikely to change their minds over the course of the impeachment hearings, there is perhaps still hope for those who are not so locked in. He describes a “threshold effect,” whereby people can change their minds suddenly, sometimes even coming to hold a belief that is equally strong but on the opposite side of an issue, once the amount of evidence they possess passes a certain threshold. What could happen, then, is that over the course of the impeachment proceedings people may continue to hold their views until the accumulated evidence simply becomes too overwhelming, and they suddenly change their minds.

Whether this is something that will happen given the current state of affairs remains to be seen. What is still odd, though, is that while the kinds of psychological effects that Coleman discusses are ones that describe how we form our beliefs, we certainly don’t think that this is how we should form our beliefs. If these are processes that work in the background, ones that we are subject to but don’t have much control over, then it would be understandable and perhaps (in certain circumstances) even forgivable that we should generally be stubborn when it comes to our political beliefs. But the poll is not simply asking what one’s beliefs are, but what one could even conceivably see oneself believing. Even if it is difficult for us to change our minds about issues that we have such strong views about, surely we should at least aspire to be the kind of people who could conceive of being wrong.

One of the questions that many have asked in response to the poll results is whether the hearings will accomplish anything, given that people seem to have made up their minds already. Coleman’s cautious optimism perhaps gives us reason to think that minds could, in fact, be swayed. At the same time it is worth remembering that being open-minded does not mean that you are necessarily wrong, or that you will not be vindicated as having been right all along. At the end of the day, then, it is difficult not to be pessimistic about the possibility of progress in such a highly polarized climate.

Trusting Women and Epistemic Justice

An anonymous woman holding up a sign that says #MeToo

Over the past three months, public figures have been exposed as serial sexual harassers and perpetrators of sexual assault, and in that short period survivors of harassment and assault have raised new awareness of toxic masculinity and its effects.

However, as time goes on, supporters of the movement have voiced a rising concern that something is bound to go awry. There is an undercurrent of worry that an untrustworthy individual will make an errant claim, providing fodder for skeptics and bringing the momentum of the movement to a halt. In response, it may seem that more vetting or investigation of the claims is the way forward. On the other hand, wouldn’t it be unfortunate to erode trust and belief in women’s stories in the hope of preserving the very momentum that exists to make women’s voices heard?

Untangling the “Sincerely Held Moral or Religious Belief”

A photo of Donald Trump and Mike Pence leaving Air Force One

On February 10, 2012, the Obama Administration announced that the preventive care benefits mandated by the Patient Protection and Affordable Care Act (aka “Obamacare”) would be interpreted to include contraception coverage. This decision proved controversial from the very beginning and elicited numerous legal objections. Many religious organizations and religious owners of businesses objected to the narrow scope of religious exemptions originally allowed in the mandate. Notably, the Supreme Court ruled in 2014 to expand the exemptions to the mandate to include closely held for-profit corporations with “sincerely held religious beliefs.” At issue in these legal challenges was whether the contraception mandate substantially burdened the free exercise of religion, as it is protected under the First Amendment to the US Constitution and the Religious Freedom Restoration Act.

Ryke Geerd Hamer and the Dangers of Positive Thinking

Dr. Ryke Geerd Hamer died on July 2. His death was hardly noticed in English-language media. This is not surprising; he was, after all, an obscure figure. But, unfortunately, his legacy lives on, and the harm he has caused far outweighs the media attention he has received (Spanish and German newspapers have devoted more attention to his death).

Easter and the Ethics of the Resurrection

Easter was recently celebrated by Christians all over the world. 2017 was one of those rare years when the Julian and Gregorian calendars coincided in their timing of Easter; as a result, the Eastern and Western Churches celebrated Jesus’ resurrection on the same day.

Yet a recent BBC poll revealed that a quarter of British Christians do not believe in the resurrection. We do not have poll results for the rest of the world, and since the United Kingdom is among the most secularized countries in the world, its results are probably not representative of other countries. Still, the modernizing and secularizing tendency seems to be spreading, and it would be safe to assume that the percentage of people who do not believe in Jesus’ resurrection is on the rise.

The Ethics of an Atheist Pope

With two deeply conservative predecessors (John Paul II and Benedict XVI), Pope Francis has raised a lot of eyebrows over the years. He has not made any significant reform (unlike, say, John XXIII), but his populist style has struck a sympathetic chord with many Catholics. John Paul II was a populist as well, but he was closer to the original version of populism, gathering huge crowds all over the world. Francis, on the other hand, is not as adept at gathering crowds, but he excels at appearing to be in touch with common folks. He has repeatedly washed people’s feet (in remembrance of Jesus’ humility), and he is very warm to journalists and visitors. We may never know whether these gestures are genuine or part of a calculated political image; they are most likely something in between.
