
When Is Someone Responsible for Not Acting?

photograph of empty chair with "Lifeguard On Duty" sign displayed

A recent law in Minnesota legalized edible marijuana for those over the age of 21 with support from both Republicans and Democrats. While this new law is perhaps unsurprising to many locals and lawmakers, it came as a surprise to one Republican who voted for the bill. Minnesota State Senator Jim Abeler claims that he was not aware of what he was voting for. In fact, he called for the legislature to repeal the new law, a call that was shot down as quickly as it was raised.

Abeler’s claim, that he was not aware of what he was voting for, might be met with genuine suspicion. It’s presumably a part of Abeler’s job to know what he is voting for, and such a claim might be disingenuous. But, for the sake of argument, I would like to consider Abeler’s claim to be genuine.

If Abeler was not aware that his vote would support the legalization of edible marijuana, is he still responsible for his vote?

Abeler’s claim effectively amounts to a denial of one if not both of the classical conditions of moral responsibility: (i) awareness and (ii) voluntariness. Classically, individuals are responsible for an action only if both of these conditions are met. Abeler’s claim is a denial of awareness concerning the specific content of the bill. And his claim is a denial of voluntariness with respect to support for the specifics about which he was unaware. Indeed, assuming Abeler’s claim is genuine, how could Abeler willfully vote to legalize marijuana edibles if he wasn’t aware of this aspect of the bill?

This particular question is an application of the following general question:

Is an individual responsible for something the same individual fails to do?

In philosophy and law, this problem is known as the problem of negligent omissions (see here for an overview in law, and here for an excellent piece in philosophy).

Pinning down when a lack of action becomes a failure of action is tricky. For example, we generally think that if a child is drowning and there is an individual nearby who does not act, this individual is culpable in some sense. The individual ought to have acted and did not — there is a negligent omission. However, we do not generally think the individual is culpable if he is not able to save the child. For example, if the individual is a couple of miles away and unaware of the child drowning, he does not seem to be culpable for not acting.

So, under what conditions is an individual responsible for a failure to act or a failure to act knowledgeably? Generally, there are three conditions. An individual is responsible for an omission if the same individual:

1. Is able to act;
2. Is obliged to act;
3. Is aware of the relevant events and obligation.

To see how these conditions are important, consider the following example. Sylvia is a lifeguard at the local beach. Her job is to save people from drowning and to alert people about the weather conditions by placing the correctly colored flags on the beach. For example, if there is a riptide and it is dangerous to swim in the ocean, she is to place a red flag on the beach. As it happens, an individual begins to drown. Sylvia jumps to the rescue and successfully saves the individual.

Notice that, among the individuals on the beach, it is Sylvia’s job to save the person drowning. The other swimmers do not have the training. And, even if they do, they are not obligated in the same way that Sylvia is obligated to save the person drowning.

Imagine now that Sylvia fails to act and the person drowns. Reasonably, she is responsible in some sense. Moreover, imagine the outrage if Sylvia were to turn to a surfer and exclaim, “Why didn’t you save him?” The surfer can legitimately say that he could not have responded because he does not have the proper training. Even if the surfer were to have the training, he would not be obligated to save the individual in terms of his role or job (of course, the surfer still has the more general obligation to help those in need).

The example shows how Sylvia is responsible for her failure to act. She has the ability (the proper training), the obligation in virtue of her role, and the awareness of the event and obligation.

When she fails to save the drowning individual, her omission is negligent.

Now, Abeler’s omission is a bit more subtle. He claims not to have voted knowledgeably. He acted, and yet he failed to act with an awareness and knowledge of the bill for which he voted. To see how Abeler is responsible for this omission, let us revisit the lifeguard. Among Sylvia’s various responsibilities is the task to alert the beachgoers of the swimming conditions. Imagine that she raises a yellow flag to alert the people that there is a medium hazard to swimming. However, Sylvia did not take into consideration that the tide is now becoming high tide. This collection of conditions will cause a riptide — Sylvia should have raised a red flag to alert the beachgoers to not swim. Regardless of whether the beachgoers swim, we have a situation where Sylvia acted and did not act knowledgeably. She is responsible for her failure of knowledge because it is her job to account for the water conditions.

Abeler is likewise obligated to know what he votes for in virtue of his role. So, whether he fails to vote or fails to vote knowledgeably, he is still responsible. His omission is negligent.

All is well at the intuitive level. It seems intuitively correct to ascribe responsibility to Sylvia and Abeler. The conditions are articulated, work together, and have common examples to back them up. But something worrying has happened.

If we can ascribe responsibility for an omission, and more specifically, omission of knowledgeable action, we seem to lose one of the classical conditions of responsibility: awareness.

One way to keep the awareness condition is to maintain that Abeler is generally aware of his responsibilities as a state senator. When Abeler assumed his position, he was aware of his decision and obligated himself to read certain documents. Thus, when he enters his office on voting day and fails to read the bill thoroughly, he has already fulfilled the awareness condition in an important sense. So too with our lifeguard. When Sylvia signs on the dotted line to become a lifeguard, it obligates her to act and be aware in certain ways.

It may also seem that the voluntariness condition is in peril. However, we can offer the same answer that applies to the awareness condition: Abeler voluntarily took on the role of a lawmaker.

There is more to be said about negligent omissions, and there are more ways to pair the classical conditions of responsibility with negligent omissions. What is clear, however, is that Abeler is still responsible for his vote.

The Knowledge Norms of Emotions

simple single-line drawing of person curled up in bed

This post begins with a sad backstory. A little while back my wife and I had a miscarriage of our first child. There was a lot that was terrible in the experience, but in this post I want to address a certain oddity that I noticed about grief.

Due to a range of bureaucratic complications, it took about a week from when we first suspected a miscarriage to when we had final confirmation. During that week, our confidence that we had miscarried grew, but throughout the period it remained a likelihood rather than a certainty.

What surprised me, during that week, was that the uncertainty made it difficult to grieve. Even when I was pretty sure we had lost the child, it felt ‘out of whack’ to grieve the loss, since there was a chance the child was still alive. It was a terrible week, and I was extremely sad, but it felt out of joint to grieve for the child while recognizing the chance that all might be well. There was no obstacle to feeling anxious, there was nothing out of joint about feeling trepidation, but outright grief felt strange. And it continued to feel strange until we received confirmation of the loss.

This, eventually, got me wondering: is grief characterized by a knowledge norm? In philosophy, a knowledge norm is a normative rule which says that knowledge of something is required for an action or mental state to be appropriate. For example, there seems to be a knowledge norm on assertion: you should only tell someone something if you know that thing is true. This explains, for instance, why if I say “it will rain tomorrow” it is appropriate for you to ask “how do you know?” Or why saying “I don’t know” is an appropriate response if someone asks you a question. (For a thorough defense of a knowledge norm of assertion see Timothy Williamson’s “Knowing and Asserting.”)

Many philosophers also argue that there is a knowledge norm of belief: you should only believe X if you know X is true. Thus, Williamson argues in his book Knowledge and its Limits

“Knowledge sets the standard of appropriateness for belief. That does not imply that all cases of knowing are paradigmatic cases of believing, for one might know p while in a sense treating p as if one did not know p—that is, while treating p in ways untypical of those in which subjects treat what they know. Nevertheless, as a crude generalization, the further one is from knowing p, the less appropriate it is to believe p. Knowing is in that sense the best kind of believing. Mere believing is a kind of botched knowing. In short, belief aims at knowledge (not just truth).”

There also seems to be a knowledge norm of certain actions. For instance, it seems like you should only punish someone if you know they are guilty, and only chastise someone if you know they did wrong. Some philosophers have gone even further and suggested that there is a general knowledge norm on all action: you should only treat X as a reason for action if you know X to be true.

My own experience with grief seems to suggest that there might also be a knowledge norm on various emotions; but as far as I know that topic has not yet been seriously investigated by philosophers.

My experience of the miscarriage suggested there was a knowledge norm on grief because the problem was that it felt wrong to grieve our child’s death as long as I recognized that the child might still be alive. This is parallel to how I could not know the child had died as long as I recognized that the child might still be alive. In some sense, what is characteristic of knowledge is the elimination of all relevant alternatives. As long as those relevant alternatives remained, we did not know, nor did it feel quite right to grieve.

Here is another reason for thinking that grief is characterized by a knowledge norm: it is hard to fit probabilities with the emotion of grief. It would be weird to think that as I grow more certain, my grief grows proportionally. I do not grieve a small amount at a 5% chance that my spouse has died, nor would my grief double as my confidence grows to 10%. I grieve less for less bad things, not for lower probabilities of equally bad things. But it would be equally weird to think that there is some probabilistic threshold at which grief suddenly becomes appropriate. It is not as though when I go from 94% confident my child died to 96% confident my child died that suddenly grief goes from inappropriate to appropriate.

But if grief neither scales with probability, nor requires a certain probabilistic threshold, then it seems like grief is responsive to a standard other than probabilistic credence, and the natural alternative is that it is responsive to knowledge.

Other emotions also seem to be knowledge-normed in this way. It is hard to feel grateful merely because you think it is likely that someone brought you a present. Normally gratitude is a response to the knowledge that someone did something for you. Jonathan Adler makes a point along these lines about resentment: “Mild resentment is never resentment caused by what one judges to be a serious offense directed toward oneself tempered by one’s degree of uncertainty in that judgment.”

Now, some other emotions at first blush seem different. I can be worried about something without knowing that thing will occur. Similarly, I can be hopeful of something without knowledge it will occur. Yet, even here, it seems that there might be some knowledge norm at play. For instance, it seems weird to be worried about or hope for something you know is impossible. Thus, it might be that you must know that something is possible before you can worry about it or hope for it.

If this is right, does it suggest a general pattern? I think it does. Emotions have appropriateness conditions. Resentment is an appropriate response to being wronged. Gratitude is an appropriate response to being given a gift. Hope is an appropriate response to the possibility of certain goods, as worry is an appropriate response to the possibility of certain bads. In each of these cases, what is required to rightly feel the emotion is knowledge.

That, then, is why grieving felt strange. I didn’t yet know if my grief was appropriate since I lacked knowledge of the tragedy to which my grief was a response.

What Good Is Ignorance?

photograph of single person with flashlight standing in pitch darkness

Most of us think knowledge good, and ignorance bad. We justify this by pointing to all the practical goods that knowledge affords us: we want the knowledgeable surgeon and legislator, and not the ignorant ones. The consequences of having the latter are potentially dire. And so, from there, many people blithely assume ignorance is bad: if knowing is good, not knowing should be avoided.

What’s striking though is that people’s actions often don’t match their words: they will pay lip service to the value of knowledge, yet choose to remain ignorant despite having relatively easy access to know more or know better. The actions of these folks suggest that there is something they must value about ignorance — or, perhaps, they think gathering knowledge is more trouble than it’s worth. Part of the explanation here is no doubt that people are lazy — they are, to put the point more precisely, cognitive misers. However, we should be suspicious of one-factor explanations of complicated behavior. And knowledge looks like it is subject to the Goldilocks principle: we don’t want too little knowledge, but we don’t want too much knowledge either. Do you really want to know everything there is to know about the house you bought? Of course you don’t. While you want to know, say, whether the roof is in good condition, and the foundation is sound, you don’t care exactly how many specks of dust are in the attic. And just as we can oftentimes overstate the value of knowledge, we can understate the value of ignorance too: it turns out, there are some benefits to knowing less. We should canvass several of them.

First, consider the value of flow states: flow states are states where we have intense focus and concentration on the task at hand in the present moment; the merging of action and awareness, and the loss of self-reflection — what people often describe as ‘being in the zone.’ Flow states allow us to achieve amazing things whether in the corporate boardroom, the courthouse, or the basketball court, and many other tasks in-between. We may wonder how flow states are related to ignorance. Here we must understand what is required to be in a flow state: intensive and focused concentration on what one is doing in the present moment; the loss of awareness that one is engaging in a specific activity, among other things. When we’re in a flow state, while writing, say, we focus to the point of immersion into the writing process, inhibiting knowledge of what we’re doing. We do not focus on the keystrokes necessary to produce the words on the page or think too much about the next sentence to come. Athletes often describe how it feels to be in a flow state in similar terms.

Next, consider the value of privacy, where we value the ignorance of others. We often value privacy — others’ ignorance of our words and actions — in practice, even if we may say things dismissive of privacy. When the issue of state surveillance is broached, some retort that they don’t fear the state knowing their business since they’ve done nothing wrong. The implication here is that only criminals, or folks up to no good, would value their privacy; whereas law-abiding citizens have nothing to fear from the state. Yet their actions belie their words: they password-protect their accounts, use blinds and curtains to prevent snooping into their homes, and so on. They, in other words, intuitively understand that privacy is valuable for leading a normal life having nothing to do with criminality. The fact that they would be reticent to forgo their privacy says volumes about what they really value, despite their expressed convictions to the contrary. We can think about the value of privacy by imagining a society where privacy is absent. As George Orwell masterfully put the point:

“There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live—did live, from habit that became instinct—on the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.”

And finally, sometimes we (rightly) value our ignorance of other people, even those closest to us. Would you really want to know everything about people in your life — every thought, word, and deed? I’m guessing for most folks the answer is no. As the philosopher Daniel Dennett nicely explains:

“Speaking for myself, I am sure that I would go to some lengths to prevent myself from learning all the secrets of those around me—whom they found disgusting, whom they secretly adored, what crimes and follies they had committed, or thought I had committed! Learning all these facts would destroy my composure, cripple my attitude towards those around me.”

We thus have a few examples where ignorance — in different forms — is actually quite valuable, and where we wouldn’t want knowledge. This is some confirmation for the Goldilocks principle applied, not just to knowledge, but to ignorance too (stated in reverse): we don’t want too much ignorance, but we don’t want too little ignorance either.

Expertise and the “Building Distrust” of Public Health Agencies

photograph of Dr. Fauci speaking on panel with American flag in background

If you want to know something about science, and you don’t know much about science, it seems that the best course of action would be to ask the experts. It’s not always obvious who these experts are, but there are often some pretty easy ways to identify them: if they have a lot of experience, are recognized in their field, do things like publish important papers and win grant money, etc., then there’s a good chance they know what they’re talking about. Listening to the experts requires a certain amount of trust on our part: if I’m relying on someone to give me true information then I have to trust that they’re not going to mislead me, or be incompetent, or have ulterior motives. At a time like this it seems that listening to the scientific experts is more important than ever, given that people need to stay informed about the latest developments with the COVID-19 pandemic.

However, there continues to be a significant number of people who appear to be distrustful of the experts, at least when it comes to matters concerning the coronavirus in the US. Recently, Dr. Anthony Fauci stated that he believed that there was a “building distrust” in public health agencies, especially when it comes to said agencies being transparent with developments in fighting the pandemic. While Dr. Fauci did not put forth specific reasons for thinking this, it is certainly not surprising he might feel this way.

That being said, we might ask: if we know that the experts are the best people to turn to when looking for information about scientific and other complex issues, and if it’s well known that Dr. Fauci is an expert, then why is there a growing distrust of him among Americans?

One reason is no doubt political. Indeed, those distrustful of Dr. Fauci have claimed that he is merely “playing politics” when providing information about the coronavirus: some on the political right in the US have expressed skepticism with the severity of the pandemic and the necessity for the use of face masks specifically, and have interpreted the messages from Dr. Fauci as being an attack on their political views, motivated by differing political interests. Of course, this is an extremely unlikely explanation for Dr. Fauci’s recommendations: someone simply disagreeing with you or giving you advice that you don’t like is not a good reason to find them distrustful, especially when they are much more knowledgeable on the subject than you are.

But here we have another dimension to the problem, and something that might contribute to a building distrust: people who disagree with the experts might develop resentment toward said experts because they feel as though their own views are not being taken seriously.

Consider, for instance, a recent essay, “How Expert Worship is Ruining Science,” written by a member of a right-wing think tank. The author, clearly skeptical of the recommendations of Dr. Fauci, laments what he takes to be a dismissing of the views of laypersons. While the article itself is chock-a-block with fallacious reasoning, we can identify a few key points that can help explain why some are distrustful of the scientific experts in the current climate.

First, there is the concern that the line between experts and “non-experts” is not so sharp. With so much information available to anyone with an internet connection, one might think that, given our ability to do research for ourselves, we should not assume we can so easily separate the experts from the laypersons. Not taking the views of the non-expert seriously, then, means that one might miss out on getting at truth from an unlikely source.

Second, recent efforts by social media sites like Twitter and Facebook to prevent the spread of misinformation are being interpreted as acts of censorship. Again, the thought is that if I try to express my views on social media, and my post is flagged as being false or misleading, then I will feel that my views are not being taken seriously. However, the reasoning continues: the nature of scientific inquiry is meant to be that which is open to objection and criticism, and so failing to engage with that criticism, or to even allow it to be expressed, represents bad scientific practice on the part of the experts. As such, we have reason to distrust them.

While this reasoning isn’t particularly good, it might help explain the apparent distrust of experts in the US. Indeed, while it is perhaps correct to say that there is not a very sharp distinction between those who are experts and those who are not, it is nevertheless still important to recognize that if an expert as credentialed and experienced as Dr. Fauci disagrees with you, then it is likely your views need to be more closely examined. The thought that scientific progress is incompatible with some views being fact-checked or prevented from being disseminated on social media is also hyperbolic: progress in any field would slow to a halt if it stopped to consider every possible view, and the fact that one specific set of views is not being considered as much as one wants is not an indication that productive debate is not being conducted by the experts.

At the same time, it is perhaps more understandable why those who are presenting information that is being flagged as false or misleading may feel a growing sense of distrust of experts, especially when views on the relevant issues are divided along the political spectrum. While Dr. Fauci himself has expressed that he takes transparency to be a key component in maintaining the trust of the public, this is perhaps not the full explanation. There may instead be a fundamental tension between trying to best inform the public while simultaneously maintaining their trust, since doing so will inevitably require not taking seriously everyone who disagrees with the experts.

Truth and Contradiction, Knowledge and Belief, and Trump

photograph of Halloween event at White House with Donald and Melania Trump

At a White House press conference in August, the HuffPost’s White House correspondent, S.V. Dáte, was called on by President Donald Trump for a question. This was the first time Trump had called on Dáte, and the question the reporter asked was the one he had (he said later) been saving for a long time. Here is the exchange:

Dáte: “Mr President, after three and a half years, do you regret at all, all the lying you have done to the American People?” Trump: “All the what?” Dáte: “All the lying, all the dishonesties…” Trump: “That who has done?” Dáte: “You have done…”

Trump cuts him off, ignoring the question, and calls on someone else. The press conference continues, as though nothing has happened. Trump’s reaction to being challenged is familiar and formulaic: he responds by ignoring or denouncing those from whom the challenge comes. In a presidency as tempestuous as this one, which inflicts new wounds on American democracy daily and lurches from madness to scandal at breakneck speed, this reporter’s question may have slipped under the radar for many.

But let’s go back there for a moment. Not only was it a fair question, it is a wonder that it is not a question Trump is asked every day. The daily litany of lies uttered by the president is shocking, though people who support Trump seem not to mind the lies, or at least are not persuaded thereby to withdraw their support. This seems extraordinary, but maybe it isn’t. As politics continues to grow more divisive and ideologically driven, versions of events, indeed versions of reality, which serve ideologies are increasingly preferred by those with vested interests over ones supported by facts.

Therefore, the answer to Dáte’s question was already implicit in its having to be asked. Given the sheer volume of lies, and given what we know of Trump’s demeanor, it seems clear that he harbors no such regret. Trump gave his answer in dismissing the question.

So, here we are then. The President of the United States is widely acknowledged to be a frequent and brazen liar. If you want to follow up on the amount, content, or medium (Fox News, Twitter, a rally, etc.) of Trump’s lies, there are the fact checkers. The Washington Post’s database of the president’s false and misleading claims had clocked 20,055 as of July 9. You can search the database of Trump lies by topic and by source. The Post finds that Trump has made an average of 23 false or misleading claims a day over a 14-month period.

Take the president’s appearance last month at an ABC Town Hall with undecided voters. In response to questions about his handling of the pandemic, and regarding the taped, on-the-record interviews with Bob Woodward in which Trump discusses his decision to play down the virus to avoid panic, Trump responds that he had in fact “up-played” the virus. He says this while making no attempt to square the lie with what is already, in fact, on the public record. As with all Trump’s tweets, public speeches, rallies, press conferences etc., Trump tells lies and fact checkers scramble to confront them.

Of course, Trump should be fact-checked. Fact-checking politicians and other public figures for the veracity of their speech is, and will remain, a vital contribution to public and political discourse. However, it is also important to reflect upon the way the ground has shifted under this activity in the era of Trump; the post-truth era.

The activity of fact-checking, of weighing the President’s claims against known or discoverable truth, presupposes an epistemic relation to the world in which truth and fact are arbiters of – or at least in some way related to – what it is reasonable to believe. Truth and untruth (that is, facts and lies) are, in the conventional sense, at odds with one another – they are mutually exclusive. A logical law of non-contradiction broadly governs conventional discourse. Either “p” or “not-p” is the case; it cannot be both. Ordinarily for a lie to be effective it has to obfuscate or replace the truth. If “p” is in fact true, then the assertion of “not-p” would have to displace the belief in “p” for the lie to work.

But in the Trump Era (the post-truth era) this relation is no longer operative. Trump’s lies often don’t even maintain the pretense of competing with truth in the conventional sense – that is, they don’t really attempt to supersede a fact but rather to shift the reality in which that fact operates as such, or in which it has a claim on belief, decision, and action.

When Trump says he “up-played” the virus without addressing his own on-the-record admission that he downplayed it, he is of course contradicting himself, but more than that he is jettisoning the ordinary sense in which fact and falsehood are at odds with each other. This could be described as a kind of epistemic shift, and is related, I think, to any meaning we might make – now and in the future – of the concept of ‘post-truth’, and what that means for our political and social lives. The concept of post-truth appears to signal a shift in what people can, within political and social discourse, understand knowledge to be, and what claims they can understand it to have upon them. The consequences of this we can already see playing out – especially, for instance, in the pandemic situation in the US, together with the volatile election atmosphere.

Having a concept of epistemology is important here – a concept of what it would be to ‘know’ and what it would be to act on the basis of knowledge. Such a concept would have to demarcate an ancient philosophical distinction, between episteme and doxa: the distinction between knowledge and mere opinion or belief.

Post-truth is the ascension of doxa over episteme. In the well-known philosophical analysis of knowledge as justified true belief, for a belief to count as knowledge one must be justified in believing it and it must be true. This definition is rudimentary, and somewhat problematic, but nevertheless useful. In the post-truth era it seems that the conditions of both justification and truth are weakened, if not dispensed with altogether, and so we are left with an epistemology in which belief alone can count as knowledge – which is no epistemology at all.

It is easy to see why this is not only an epistemic problem, but a moral and political one as well. What knowledge we have, and what it is reasonable to believe and act upon, are core foundations of our lives in a society. There is an important relationship between epistemology and an ethical, flourishing, social and political life. Totalitarianism is built on and enabled by lies, and propaganda replaces discourse when all criticism is silenced.

The coronavirus pandemic has been disastrous for the US. A case can easily be made that the pandemic has been able to wreak such devastation because of Trump’s lies – from his decision to downplay the danger and his efforts to sideline and silence experts, to the specific lies and obfuscations he issues via Twitter and at press conferences or Fox News call-ins.

The US has recorded the highest number of infections, and deaths, of anywhere in the world. So, when Trump says “America is doing great,” the question must be: what could this possibly mean? This is no casual lie; nor is it merely the egoistic paroxysm of a president unable to admit error. Repeating at every possible opportunity that ‘America is doing great, the best in the world’ is a form of gaslighting – and as such is calculated to help Trump disempower and dominate America.

This is in itself quite unsettling, but where is it all going?

In another particularly bizarre and sinister example of ‘Trumpspeak’ from a couple of weeks ago, the president mentioned a plane that allegedly flew from an unnamed city to Washington, D.C., loaded with “thugs wearing these dark uniforms, black uniforms, with gear.” In the absence of any ensuing clarity from the president or anyone else about what this might have meant, and in light of Trump’s oft-repeated claims about a ‘radical left’ contingent, ‘antifa,’ ‘radical democrats,’ and so on, it seems to have been an intimation of some threat, direct or indirect, whose symbolism appeared to be drawn from the ‘law and order’ platform of his campaign. Frankly, it’s hard to say.

But vague lies and unverified claims with dark intimations are the stuff of conspiracy. If you line all that up next to the fact that Trump has strongly hinted that if the election does not resolve in his favor he will consider the result illegitimate, then you can see how the lies, the false stories, the obfuscations, and the intimations are the tools Trump is using to try to shift power. He is trying to dislodge power from the elite – which can be read as ‘people who know things.’

One way of characterizing the situation is to say that post-truth is creating an epistemic vacuum in which ideology trumps reality, and it is in this vacuum that Trump will attempt to secure his win.

Take the oft-repeated mail-in ballot lie – that mail-in ballots are subject to widespread electoral fraud. This has been firmly refuted, even by Trump’s own investigation following the 2016 election. Yet it is widely recognized that this lie could foment a sense of resentment among Trump supporters should he not get across the line on November 3. Or it could facilitate his (by now fairly transparent) intention to declare victory on election night should the result be inconclusive as counting proceeds. These are the possible, or even likely, outcomes if Trump is able to create, feed, and capitalize on a situation in which truth and fact have no purchase on, or have no meaningful relationship to, people’s reasons for acting or making choices.

Trump’s lying is both a symptom of, and part of, the disease of his presidency – a pathology which has infected nearly the whole Republican party and which is putting great strain on many of the organs and tissues of American democracy. This really is a time like no other in America’s history, and the stakes are as high as they have ever been.

At this point, the ethical dimensions of the question of why truth is important to a healthy and just society seem to be slipping from view, as America struggles under Trump to keep political discourse on an epistemic foundation broadly governed by principles of veracity. Fact-checking alone cannot win that struggle.

Swamping, Epistemic Trespassing, and Coronavirus

photograph of newspapers folded on top of laptop keyboard

Every day the media are awash with new information about coronavirus. And with good reason: people are worried and want to know the latest developments, what they should do, and how bad things could get. News is coming in not only locally, but globally: my current news feed, for example, has been providing me with information about the coronavirus in the US, Canada, Italy, South Korea, Australia, Spain, among other places. And not just about the spread of the virus itself, but about the consequences thereof, specifically the many events that have been canceled globally, as well as the financial ramifications. It is on everyone’s mind, and everyone is talking about it.

We are, of course, no exception here. But with one story dominating the headlines, there are two phenomena that we should watch out for: the first is swamping, in which other important news stories are, well, swamped by one story occupying everyone’s attention; and the second is epistemic trespassing, in which people who aren’t experts chime in on issues as if they were. Let’s look at these both in turn.

Consider the first problem: news of the coronavirus is so prevalent that it’s easy to lose track of news of anything else going on. For instance, in the US there are reports that a number of senators are currently trying to pass a bill that would limit the ability of websites and users to encrypt their data, and thus could have serious ramifications for data privacy in the US. Reddit users have also compiled a list of news stories that people may have missed because of the deluge of information about coronavirus, some of which might have made the physical or virtual front page if other matters weren’t occupying our attention. Important information can be more easily missed, then, if it is swamped by a singular issue.

This is not to say, of course, that it is a bad thing to get a lot of information about coronavirus. Nor is it to say that it is not an important issue that deserves our attention. We should, however, be vigilant both with regards to other important news stories, as well as the possibility that unscrupulous individuals may be using the pandemic as a distraction.

The second problem is a version of what some philosophers have called “epistemic trespassing”: a phenomenon in which someone weighs in on an issue outside of their area of expertise. For instance, if I, as a trustworthy and trained philosopher, were to write an op-ed about some matter in astrophysics – a topic I know almost nothing about – then I would be epistemically trespassing. It seems, as a general rule, that one ought not to epistemically trespass: you should know something about a subject before commenting on it, and you should not rely on unrelated expertise to be taken seriously. That is not to say, however, that one should avoid learning about and engaging in discussions concerning topics one is interested in: even though one cannot be an expert in everything, one is free to learn about unfamiliar subjects. The problem, then, is not with going where one doesn’t belong, so to speak – we don’t want to say that chemists can’t learn about or comment on art, that physicists can’t learn about or comment on philosophy, or that one cannot be knowledgeable about multiple fields. The problem is presenting oneself as an expert in a domain in which one is not an expert.

You have no doubt come across some forms of epistemic trespassing with respect to coronavirus news in the form of friends and relatives suddenly becoming armchair epidemiologists and weighing in on what people should be doing and how concerned people should be. It is easy to find examples of such instances online, even from otherwise reputable sources.

Consider, for example, a recent article on The Verge. This piece by Tomas Pueyo argues persuasively that the United States is currently seeing exponential growth in the number of people contracting the disease, and that hospitals are likely to be overwhelmed. Pueyo’s background, however, is in growth marketing, not epidemiology. While there may be some similarities between marketing trends and the spread of viruses, there is clearly some amount of epistemic trespassing going on here: it seems it would be better for someone who specializes in epidemiology to comment on the spread of the virus than someone who works in marketing.

We have, then, two potential pitfalls when it comes to staying vigilant about what’s going on in the world at this point in time: a singular focus on coronavirus reporting could swamp other news that could also have important ramifications if missed; and, with so many people weighing in on the pressing issue of the day, we risk running into epistemic trespassers – people who speak with the authority of an expert but really don’t have much of an idea of what they’re talking about. As has been discussed here before, an epidemic can impact not just our physical lives, but our epistemic lives as well.

Owning a Monopoly on Knowledge Production

photograph of Monopoly game board

With Elizabeth Warren’s call to break up companies like Facebook, Google, and Amazon, there has been increasing attention to the role that large corporations play on the internet. The matter of limited competition within different markets has become an important area of focus; however, much of the debate tends to center on the economic and legal factors involved (such as whether there should be greater antitrust enforcement), while the philosophical and moral issues have not received as much attention. If a select few corporations are responsible for the kinds of information we get to see, they are capable of exerting a significant influence on our epistemic standards, practices, and conclusions. This also makes the issue a moral one.

Last year Facebook co-founder Chris Hughes surprised many with his call for Facebook to be broken up. Referencing America’s history of breaking up monopolies such as Standard Oil and AT&T, Hughes charged that Facebook dominates social networking and faces no market-based accountability. Earlier, Elizabeth Warren had also called for large companies such as Facebook, Google, and Amazon to be broken apart, claiming that they have bulldozed competition and are using private information for profit. Much of the focus on the issue has been on the mergers of companies like Facebook and Instagram or Google and Nest. The argument holds that these mergers are anti-competitive and are creating economic problems. According to lawyer and professor Tim Wu, “If you took a hard look at the acquisition of WhatsApp and Instagram, the argument that the effect of those acquisitions have been anticompetitive would be easy to prove for a number of reasons.” For one, he cites the significant effect that such mergers have had on innovation.

Still, others have argued that breaking up such companies would be a bad idea. They note that a concept like social networking is not clearly defined, and thus it is difficult to say that a company like Facebook constitutes a monopoly in its market. Also, unlike Standard Oil, companies like Facebook or Instagram are not essential services for the economy, which undermines potential legal justifications for breaking them up. Most of these corporations also offer their services for free, which means that the typical concerns about monopolies and anticompetitive practices – rising prices and costs of services – do not apply. Those who argue this way tend to suggest that the problem lies with the capitalist system, or with a lack of proper regulation of these industries.

Most of the proponents and opponents focus on the legal and economic factors involved. However, there are epistemic factors at stake as well. Social epistemologists study matters relating to questions like “how do groups come to know things?” or “how can communities of inquirers affect what individuals come to accept as knowledge?” In recent years, philosophers like Kevin Zollman have provided accounts of how individual knowers are affected by communication within their network of fellow knowers. Some of these studies have demonstrated that different communication structures within an epistemic network in terms of the beliefs, evidence, and testimonies that are shared can affect what conclusions an epistemic community will settle on. The way that evidence, beliefs, and testimony of other knowers within the network is shared will affect what other people in the network believe is rational.

Once we factor in the ways that a handful of corporations are able to influence the communication of information in epistemic communities on the internet, a real concern emerges. Google and Facebook are responsible for roughly 70% of referral traffic on the internet. For different categories of articles the number changes. Facebook is responsible for referring 87% of “lifestyle” content. Google is responsible for 84% of referrals of job postings. Facebook and Google together are responsible for 79% of referral traffic regarding the world economy. Internet searching is a common way of getting knowledge and information, and Google controls almost 90% of this field.

What this means is that a few companies are responsible for communicating the incredibly large amounts of information, beliefs, and testimony shared by knowers all over the world. If we think of a global epistemic community, or even smaller sub-communities, learning and eventually knowing things through the referrals of services like Google or Facebook, then a few large corporations are capable of affecting what we are capable of knowing and what we will call knowledge. As Hughes noted in his criticism of Facebook, Mark Zuckerberg can alone decide how to configure Facebook’s algorithms to determine what people see in their News Feed, what messages get delivered, and what constitutes violent and incendiary speech. If a person comes to adopt many or most of their beliefs because of what they are exposed to on Facebook, then Zuckerberg alone can significantly determine what that person can know.

A specific example of this kind of dominance is YouTube. When it comes to the online video-hosting marketplace, YouTube holds a significantly larger share than competitors like Vimeo or Dailymotion. Content creators know this all too well: YouTube’s policies on content and monetization have led many on the platform to lament the lack of competition. YouTube creators are often confused about why certain videos get demonetized, what is and is not acceptable content, and what standards should be followed. In recent weeks, the demonetization of history-focused channels has been particularly notable. For example, a channel devoted to the history of the First World War had over 200 videos demonetized. Many of these channels have had to begin censoring themselves based on what they think is not allowed; history channels have started censoring words that would be totally acceptable on network television.

The problem isn’t merely one of monetization, either. If a video is demonetized, it will no longer be promoted and recommended by YouTube’s algorithm. Thus, if you wish to learn something about history on YouTube, Google is going to play a large role in determining who gets to learn what. This can affect the ways that people evaluate information on these (sometimes controversial) topics, and thus what epistemic communities will call knowledge. Some of these content creators have begun looking for alternatives to YouTube because of these issues; however, it remains to be seen whether those alternatives will offer a real source of competition. In the meantime, much of the information that gets referred to us comes from a select few companies. These voices have significant influence (intentionally or not) over what we as an epistemic community come to know or believe.

This makes the issue of competition an epistemic issue, but it is also inherently a moral one. This is because, as a global society, we are capable of regulating, in one way or another, the ways in which corporations impact our lives. This raises an important moral question: is it morally acceptable for a select few companies to determine what constitutes knowledge? Having information referred to us by corporations provides the opportunity for some to benefit over others, and we as a global society will have to determine whether we are okay with the significant influence they wield.

Knowing What You Don’t Know

A photograph of the word "knowledge" engraved in white sandstone

It’s inevitable that there will be some things that you think you know that you don’t actually know: everyone gets overconfident and makes mistakes sometimes, and every one of us has had to occasionally eat crow. However, a recent study reports that a significant number of people in the United States face this problem of thinking that they know more than they do about a number of key scientific issues. One of these beliefs is not terribly surprising: while the existence of human-made climate change is overwhelmingly supported by scientists, beliefs about climate change diverge from the scientific consensus largely along partisan lines.

Another issue that sees a significant amount of divergence between laypeople and scientists, however, is a belief about the safety of genetically modified foods, or GM foods for short. The study reports that while there is significant scientific consensus that GM foods are “safe to consume” and “have the potential to provide substantial benefits to humankind”, the predominant view amongst the general population in the US is precisely the opposite: while 88% of surveyed scientists said that GM foods were safe, only 37% of laypeople said they thought the same. Participants in the study were asked to rate the strength of their opposition to GM foods, as well as the extent of their concern with such foods. They were then asked to rate how confident they were in their understanding of various issues about GM foods, and were also asked a series of questions testing their general scientific knowledge. The crucial result from the study was that those who expressed the most extreme opposition to GM foods “knew the least” when it came to general scientific knowledge, but thought that “they knew the most.” In other words, extreme opponents of GM foods were seriously bad at knowing what they know and what they didn’t know.

The consequences of having extreme attitudes toward issues that one is also overconfident about can be significant. As the Nature study reports, the benefits of GM foods are potentially substantial, being able to provide “increased nutritional content, higher yield per acre, better shelf life and crop disease resistance.” Other scientists report numerous other benefits, including aiding those in developing countries in the production of food. However, a number of groups, including Greenpeace, have presented various opposing views to the use of GM foods and GMOs (genetically modified organisms) in general, despite the backlash from numerous scientists. While there are certainly many open questions about GM foods and GMOs in general, maintaining one’s beliefs in opposition to the consensus of experts seems like an irresponsible thing to do.

Apart from the potential negative consequences of holding such views, failing to properly take account of evidence seems to point to a more personal flaw in one’s character. Indeed, a number of philosophers have argued that humility, i.e. a proper recognition of one’s own strengths and limitations, is a virtue generally worth pursuing. People who lack intellectual humility – those who are overly boastful, or who refuse to acknowledge their own shortcomings regarding what they do not know – often seem to be suffering from a defect in character.

As the authors of the Nature study identify, a “traditional view in the public understanding of scientific literature is that public attitudes that run counter to scientific consensus reflect a knowledge deficit.” As such, a focus of those working in scientific communication has been on the education of the public. However, the authors also note that such initiatives “have met with limited success,” and their study might suggest why: because those with the most extreme viewpoints also tend to believe that they know much more than they do, they will likely prove unreceptive to attempts at education, since they think they know well enough already. Instead, the authors suggest that a “prerequisite to changing people’s views through education may be getting them to first appreciate gaps in their knowledge.”

It’s not clear, though, what it would take to get someone who greatly overestimates how well they understand something to appreciate the actual gaps in their knowledge. Indeed, it seems that it might be just as difficult to try to tell someone who is overly confident that they are lacking information as it is to try to teach them about something they already take themselves to know. There is also a question of whether such people will trust the experts who are trying to point out those gaps: if I take myself to be extremely knowledgeable about a topic then presumably I will consider myself to possess a degree of expertise, in which case it seems unlikely that I will listen to anyone else who calls themselves an authority.

As The Guardian reports, compounding the problem are two cognitive biases that can stand in the way of those with extreme viewpoints from changing their minds: “active information avoidance,” in which information is rejected because it conflicts with one’s beliefs, and the “backfire effect,” in which being presented with information that conflicts with one’s beliefs actually results in one becoming more confident in one’s beliefs, rather than less. All of these factors together make it very difficult to determine how, exactly, people with extreme viewpoints can be convinced that they should change their beliefs in the face of conflicting evidence.

Perhaps, then, part of the problem with those who take an extreme stance on an issue while greatly overestimating their understanding of it is, again, a problem of character: such individuals might lack a degree of humility, at least when it comes to a specific topic. In addition to attempting to address specific gaps in one’s knowledge, then, we might also look toward having people attend to their own intellectual limitations more generally. We are all, after all, subject to biases, false beliefs, and general limitations in our knowledge and understanding, although it is sometimes easy to lose sight of this fact.

Trusting Women and Epistemic Justice

An anonymous woman holding up a sign that says #MeToo

Over the past three months, public figures have been exposed as serial sexual harassers and perpetrators of sexual assault. Survivors of harassment and assault have raised new awareness of toxic masculinity and its effects in a short period of time.

However, as time goes on, supporters of the movement have been voicing rising concerns that something is bound to go awry. There is an undercurrent of worry that an untrustworthy individual will make an errant claim and thereby provide fodder for skeptics and bring the momentum of the movement to a halt. In response to this, it may seem like more vetting or investigation of the claims is the way forward. On the other hand, wouldn’t it be unfortunate to erode trust and belief in women’s stories in hopes of keeping the very momentum in service of hearing women’s voices?

Continue reading “Trusting Women and Epistemic Justice”

Is There a Problem With Scientific Discoveries Made by Harassers?

A scientist taking notes next to a rack of test tubes.

The question about bias in science is in the news again.

It arose before, in the summer, when the press got hold of an inflammatory internal memo that Google employees had been circulating around their company. The memo’s author, James Damore, now formerly of Google, argued that Google’s proposed solutions to eradicating the gender gap in software engineering are flawed. They’re flawed, Damore thought, because they assume that the preponderance of men in “tech and leadership positions” is a result only of social and institutional biases, and they ignore evidence from evolutionary psychology suggesting that biologically inscribed differences in “personality,” “interests,” and “preferences” explain why women tend not to hold such positions.

Continue reading “Is There a Problem With Scientific Discoveries Made by Harassers?”

The Moral Dimensions of the Research Reproducibility Crisis

A close-up photo of a microscope slide.

The labor of scientists has benefited society tremendously. Advancements in medicine and technology have improved both the length and the quality of human lives. Scientific studies have been and continue to be a crucial part of that process. Science, when done well, is indispensable to a healthy, happy, curious human race. Unfortunately, science isn’t always done well. When done poorly, studies can have disastrous effects. People tend to trust claims made by scientists, and that trust turns out to be unwarranted if something has gone wrong with the research.

Continue reading “The Moral Dimensions of the Research Reproducibility Crisis”

Uninformed Public is Danger to Democracy

The economy continues to struggle, the educational system underperforms and tensions exist at just about every point on the international landscape. And there is a national presidential selection process underway. It seems, in such an environment, that citizens would feel compelled to get themselves fully up to date on news that matters. It also would stand to reason that the nation’s news media would feel an obligation to focus on news of substance.

Instead, too many citizens are woefully uninformed of the day’s significant events. A pandering media, primarily television, is content to post a lowest-common-denominator news agenda, featuring Beyoncé’s “Lemonade” release and extensive tributes to Prince.

Constitutional framer James Madison once famously wrote, “Knowledge will forever govern ignorance. And a people who mean to be their own governors must arm themselves with the power which knowledge gives.” Citizens who are unable or unwilling to arm themselves with civic knowledge diminish the nation’s ability to self-govern.

Technological advances have made it easier than ever for citizens to stay informed. The days of waiting for the evening television news to come on or the newspaper to get tossed on your doorstep are long gone. News is available constantly and from multiple sources.

A growing number of citizens, particularly millennials, now rely on social media for “news.” While that might seem like a convenient and timely way to stay informed, those people aren’t necessarily aware of anything more than what their friends had for lunch. Data from the Pew Research Center indicates that about two-thirds of Twitter and Facebook users say they get news from those social media sites. The two “news” categories of most interest among social media consumers, however, are sports and entertainment updates.

Sadly, only about a third of social media users follow an actual news organization or recognized journalist. Thus, the information these people get is likely to be only what friends have posted. Pew further reports that during this election season, only 18 percent of social media users have posted election information on a site. So, less than a fifth of the social media population is helping to determine the political agenda for the other 80 percent.

The lack of news literacy is consistent with an overall lack of civic literacy in our culture. A Newseum Institute survey last year found that a third of Americans failed to name a single right guaranteed in the First Amendment. Forty-three percent could not name freedom of speech as one of those rights.

A study released earlier this year by the American Council of Trustees and Alumni had more frightening results. In a national survey of college graduates, with a multiple-choice format, just 28 percent of respondents could name James Madison as father of the Constitution. That’s barely better than random chance out of four choices on the survey. Almost half didn’t know the term lengths for U.S. senators and representatives. And almost 10 percent identified Judith Sheindlin (Judge Judy) as being on the Supreme Court.

The blame for an under-informed citizenry can be shared widely. The curriculum creep into trendy subjects has infected too many high schools and colleges, diminishing the study of public affairs, civics, history and news literacy.

The television news industry has softened its news agenda to the point where serious news consumers find little substance. Television’s coverage of this presidential election cycle could prompt even the most determined news hounds to tune out. The Media Research Center tracked how the big three broadcast networks covered the Trump campaign in the early evening newscasts of March. The coverage overwhelmingly focused on protests at Trump campaign events, assault charges against a Trump campaign staffer and Trump’s attacks on Heidi Cruz. Missing from the coverage were Trump’s economic plans, national security vision or anything else with a policy dimension.

When the Constitutional Convention wrapped up in 1787, Benjamin Franklin emerged from the closed-door proceedings and was asked what kind of government had been formed. He replied, “A republic, if you can keep it.” Those citizens who, for whatever reasons, are determined to remain uninformed, make it harder to keep that republic intact. Our nation, suffering now from political confusion and ugly protests, sorely needs a renewed commitment to civic knowledge.