
The Dangerous Allure of Conspiracy Theories

photograph of QAnon sign at rally

Once again, the world is on fire. Every day seems to bring a new catastrophe, another phase of a slowly unfolding apocalypse. We naturally intuit that spontaneous combustion is impossible, so a sinister individual (or a sinister group of individuals) must be responsible for the presence of evil in the world. Some speculate that the most recent bout of wildfires in California was ignited by a giant laser (though no one can agree on who fired it in the first place), while others across the globe set 5G towers ablaze out of fear that this frightening new technology was created by a malevolent organization to hasten the spread of coronavirus. Events as disparate as the recent explosion in Beirut and the rise in income inequality have been subsumed into a vast web of conspiracy and intrigue. Conspiracy theorists see themselves as crusaders against the arsonists at the very pinnacle of society, and are taking to internet forums to demand retribution for perceived wrongs.

The conspiracy theorists’ framework for making sense of the world is a dangerously attractive one. Despite mainstream disdain for nutjobs in tinfoil hats, conspiracy theories (and those who unravel them) have been glamorized in pop culture through films like The Matrix and The Da Vinci Code, both of which involve a single individual unraveling the lies perpetuated by a malevolent but often invisible cadre of villains. Real-life conspiracy theorists also model themselves after the archetypal detective of popular crime fiction. This character possesses authority to sort truth from untruth, often in the face of hostility or danger, and acts as an agent for the common good.

But in many ways, the conspiracy theorist is the inverse of the detective; the latter operates within the legal system, often working directly for the powers-that-be, which requires an implicit trust in authority. They usually hunt down someone who has broken the law, and who is therefore on the fringes of the system. Furthermore, the detective gathers empirical evidence which forms the justification for their pursuit. The conspiracy theorist, on the other hand, is on the outside looking in, and displays a consistent mistrust of both the state and the press as sources of truth. Though conspiracy theorists ostensibly obsess over paper trails and blurry photographs, their evidence (which is almost always misconstrued or fabricated) doesn’t matter nearly as much as the conclusion. As Michael Barkun explains in A Culture of Conspiracy: Apocalyptic Visions in Contemporary America,

the more sweeping a conspiracy theory’s claims, the less relevant evidence becomes …. This paradox occurs because conspiracy theories are at their heart nonfalsifiable. No matter how much evidence their adherents accumulate, belief in a conspiracy theory ultimately becomes a matter of faith rather than proof.

In that sense, most conspiracy theorists are less concerned with uncovering the truth than with confirming what they already believe. This is supported by a 2016 study, which identifies partisanship as a crucial factor in measuring how likely someone is to buy into conspiracy theories. The researchers determined that “political socialization and psychological traits are likely the most important influences” on whether or not someone will find themselves watching documentaries on ancient aliens or writing lengthy Facebook posts about lizard people masquerading as world leaders. For example, “Republicans are the most likely to believe in the media conspiracy followed by Independents and Democrats. This is because Republicans have for decades been told by their elites that the media are biased and potentially corrupt.” The study concludes that people from both ends of the political spectrum can be predisposed to see a conspiracy where there isn’t one, but partisanship is ultimately a more important predictor of whether a person will believe a specific theory than any other factor. In other words, Democrats rarely buy into conspiracy theories about their own party, and vice versa with Republicans. The enemy is never one of us.

It’s no wonder the tinfoil-hat mindset is so addictive. It’s like being in a hall of mirrors, where all you can see is your own flattering image repeated endlessly. Michael J. Wood suggests in another 2016 study that “people who are aware of past malfeasance by powerful actors in society might extrapolate from known abuses of power to more speculative ones,” or that “people with more conspiracist world views might be more likely to seek out information on criminal acts carried out by officials in the past, while those with less conspiracist world views might ignore or reject such information.” It’s a self-fulfilling prophecy, fed by a sense of predetermined mistrust that is only confirmed by every photoshopped UFO. Conspiracy theories can be easily adapted to suit our own personal needs, which further fuels the narcissism. As one recent study on a conspiracy theory involving Bill Gates, coronavirus, and satanic cults points out,

there’s never just one version of a conspiracy theory — and that’s part of their power and reach. Often, there are as many variants on a given conspiracy theory as there are theorists, if not more. Each individual can shape and reshape whatever version of the theory they choose to believe, incorporating some narrative elements and rejecting others.

This mutable quality makes conspiracy theories personal, as easily integrated into our sense of self as any hobby or lifestyle choice. Even worse, the very nature of social media amplifies the potency of conspiracy theories. The study explains that

where conspiracists are the most engaged users on a given niche topic or search term, they both generate content and effectively train recommendation algorithms to recommend the conspiracy theory to other users. This means that, when there’s a rush of interest, as precipitated in this case by the Covid-19 crisis, large numbers of users may be driven towards pre-existing conspiratorial content and narratives.

The more people fear something, the more likely an algorithm will be to offer them palliative conspiracy theories, and the echo chamber grows even more.
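To see how this dynamic works mechanically, consider a minimal sketch of an engagement-ranked recommender, written here in Python. Everything in it (the item names, the counts, the ranking rule) is a toy assumption for illustration, not a description of any real platform’s algorithm; it shows only how a small but intensely engaged group can dominate what gets recommended to newcomers.

    # A toy, hypothetical engagement-ranked recommender (illustrative only).
    from collections import Counter

    engagement = Counter()  # maps each item to its accumulated engagement score

    def record_engagement(item, weight=1):
        # Every interaction raises the item's future visibility.
        engagement[item] += weight

    def recommend(n=3):
        # Naive ranking: the most-engaged-with items get recommended first.
        return [item for item, _ in engagement.most_common(n)]

    # A small but intensely active group engages repeatedly with niche content...
    for _ in range(50):
        record_engagement("conspiratorial thread")

    # ...while a larger casual audience interacts once each with mainstream coverage.
    for _ in range(30):
        record_engagement("mainstream explainer")

    # Newcomers arriving during a rush of interest are steered toward the
    # niche item: intensity of engagement outweighs breadth of audience.
    print(recommend())  # ['conspiratorial thread', 'mainstream explainer']

Real recommender systems are vastly more complex, but the core incentive (reward whatever gets engagement) survives the added complexity.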

Both of the studies previously mentioned suggest that there is a predisposition to believe in conspiracy theories that transcends political alliance, but where does that predisposition come from? It seems most likely that conspiracy beliefs are driven by anxiety, paranoia, feelings of powerlessness, and a desire for authority. A desire for authority is especially evident at gatherings of flat-earthers, a group that consistently mimics the tone and language of academic conferences. Conspiracies rely on what Barkun called “stigmatized knowledge,” or “claims to truth that the claimants regard as verified despite the marginalization of those claims by the institutions that conventionally distinguish between knowledge and error — universities, communities of scientific researchers, and the like.” People feel cut off from the traditional locus of knowledge, so they create their own alternative epistemology, which restores their sense of authority and control.

Conspiracy theories are also rooted in a basic desire for narrative structure. Faced with a bewildering deluge of competing and fragmentary narratives, conspiracy theories cobble together half-truths and outright lies into a story that is more coherent and exciting than reality. The conspiracy theories that attempt to explain coronavirus provide a good example of this process. The first stirrings of the virus came in the winter of 2019; it then accelerated without warning and altered the global landscape seemingly overnight. Our healthcare system and government failed to respond with any measure of success, and hundreds of thousands of Americans died over the span of a few months. The reality of the situation flies in the face of narrative structure — the familiar rhythm of rising action-climax-falling action, the cast of identifiable good guys and bad guys, the ultimate moral victory that redeems needless suffering by giving it purpose. In the absence of narrative structure, theorists suggest that Bill Gates planned the virus decades ago, citing his charity work as an elaborate cover-up for nefarious misdeeds. On this telling, the system isn’t broken, nor was it left unequipped to handle the pandemic by austerity; the catastrophe is instead the work of a single bad actor.

Terrible events are no longer random, but imbued with moral and narrative significance. Michael Barkun argues that this is a comfort, but also a factor that further drives conspiracy theories:

the conspiracy theorist’s view is both frightening and reassuring. It is frightening because it magnifies the power of evil, leading in some cases to an outright dualism in which light and darkness struggle for cosmic supremacy. At the same time, however, it is reassuring, for it promises a world that is meaningful rather than arbitrary. Not only are events nonrandom, but the clear identification of evil gives the conspiracist a definable enemy against which to struggle, endowing life with purpose.

Groups of outsiders (wealthy Jewish people, the “liberal elite,” the immigrant) are Othered within the discourse of theorists, rendered as villains capable of superhuman feats. The QAnon theory in particular feels more like the Marvel cinematic universe than a coherent ideology, with its bloated cast of heroes teaming up for an Avengers-style takedown of the bad guys. Some of our best impulses — our love of storytelling, a desire to see through the lies of the powerful — are twisted and made ugly in the world of online conspiracy forums.

The prominence of conspiracy theories in political discourse must be addressed. Over 70 self-professed Q supporters have run for Congress as Republicans in the past year, and as Kaitlyn Tiffany points out in an article for The Atlantic, the QAnon movement is becoming gradually more mainstream, borrowing aesthetics from the lifestyle movement and makeup tutorials to make itself more palatable. “Its supporters are so enthusiastic, and so active online, that their participation levels resemble stan Twitter more than they do any typical political movement. QAnon has its own merch, its own microcelebrities, and a spirit of digital evangelism that requires constant posting.” Perhaps the most frightening part of this problem is the impossibility of fully addressing it, because conspiracy theorists are notoriously difficult to hold a good-faith dialogue with. Sartre’s description of anti-Semites, written in the 1940s (not coincidentally, the majority of contemporary conspiracy theories are deeply anti-Semitic), is relevant here. He wrote that anti-Semites (and today, conspiracy theorists)

know that their statements are empty and contestable; but it amuses them to make such statements: it is their adversary whose duty it is to choose his words seriously because he believes in words. They have a right to play. They even like to play with speech because by putting forth ridiculous reasons, they discredit the seriousness of their interlocutor; they are enchanted with their unfairness because for them it is not a question of persuading by good arguing but of intimidating or disorienting.

This quote raises the frightening possibility that not all conspiracy theorists truly believe what they say, that their disinterest in evidence is less an intellectual blind spot than a source of amusement. Sartre helps us see why conspiracy theories often operate on a completely different wavelength, one that seems to preclude logic, rationality, and even the good-faith exchange of ideas between equals.

The fragmentation of postmodern culture has created an epistemic conundrum: on what basis do we understand reality? As the operations of governments become increasingly inscrutable to those without education, as the concept of truth itself seems under attack, how do we make sense of the forces that determine the contours of our lives? Furthermore, as Wood points out, mistrust in the government isn’t always baseless, so how do we determine which threats are real and which are imagined?

There aren’t simple answers to these questions. The only thing we can do is address the needs that inspire people to seek out conspiracy theories in the first place. People have always had an impulse to attack their anxieties in the form of a constructed Other, to close themselves off, to distrust difference, to force the world to conform to a single master narrative, so it’s tempting to say that there will probably never be a way to fully eradicate insidious conspiracy theories. Maybe the solution is to encourage the pursuit of self-knowledge, an understanding of our own biases and desires, before we pursue an understanding of forces beyond our control.

Seneca, Stoicism, and Thinking Better about Fear

photograph of feet standing before numerous direction arrows painted on ground

During times of crisis, such as a global pandemic, we have an opportunity to think better about fear. Most folks are living in fear, to various degrees, during this time of uncertainty. And that isn’t fun; fear is often unpleasant ‘from the inside.’ It can rob us of well-being, and of a sense of agency and control. And fear can be irrational: ancient Stoic philosophers argued that it makes sense neither to fear what we cannot control, nor to fear what we can. Of course, too little fear can, in certain situations, be fatal; but we’re often faced with the problem of too much fear. Let’s begin with how fear can inhibit our ability to lead a moral life.

When in the grip of fear, it is all too easy to snap at loved ones, focus on our own problems at the expense of others, and generally be unpleasant, and perhaps worse. We may, in the grip of fear, treat others as obstacles in the pursuit of allaying those fears, and inflict unjustified harm in response. For example, Sam fears the trespasser on his property intends him harm — the man, while innocent, looks menacing; Sam shoots first and asks questions later. Here Sam is treating the trespasser as an impediment to his peace of mind; and in the grip of fear, he does something deeply wrong. We often aren’t at our moral best acting out of fear.

In addition, fear can rob us of our ability to think clearly and rationally. Among other things, fear heightens our selective attention: the ability to focus on a specific thing in our environment to the exclusion of others. And while this may be useful in a dangerous situation, it can also make rational thinking difficult. Conjure up the last time you were afraid; you likely weren’t at your smartest or most rational; I wasn’t. Fear can also be self-defeating: if we want to address the source of our fears, we often need our full cognitive capacities. We thus need to rid ourselves of the feeling of fear to best equip ourselves to address its cause.

And there’s the issue of control: the source of fear — e.g. economic uncertainty — may not be in our control. Many things aren’t in our control. Things that have receded into the past, as well as things that await in our unknown future, lie outside our control. The ancient Stoic philosophers thought it irrational to fear what we cannot control: there is nothing to be gained from fearing what we can do nothing about; we emotionally harm ourselves, but gain nothing. Standing on the beach and trying to will the tide not to come in is behavior we immediately recognize as irrational. Fearing what is beyond our control isn’t that different: we can do nothing about it, so to focus our mental and emotional energies on it would be a waste. As the Stoic philosopher Seneca explains:

“Wild animals run from the dangers they actually see, and once they have escaped them worry no more. We however are tormented alike by what is past and what is to come. A number of our blessings do us harm, for memory brings back the agony of fear while foresight brings it on prematurely. No one confines his unhappiness to the present.”

How should we then think about fear directed toward the future?

“It is likely that some troubles will befall us; but it is not a present fact. How often has the unexpected happened! How often has the expected never come to pass! And even though it is ordained to be, what does it avail to run out to meet your suffering? You will suffer soon enough, when it arrives; so, look forward meanwhile to better things.”

It isn’t rational, the Stoics held, to fear what we cannot control. And we shouldn’t fear what we can control either: if we can control it, we should simply do so. It makes more sense to focus on the source of our fear and address it than to continue to remain afraid. It doesn’t make much sense to be afraid of something we can address. It would make no sense, say, to be afraid of visiting the dentist, as many are, but fail to take active steps to avoid having to visit the dentist (more than necessary) by, say, practicing good oral hygiene.

You may reply here that you don’t have direct control over your feelings; it isn’t as though you can voluntarily expel fears by sheer acts of will. There is an element of truth here: we often don’t have direct control over our emotions like we do, say, a light switch; we often simply find ourselves with certain feelings and emotions. Still, there are things we can do cognitively to indirectly control and combat fear. Here are a couple of tools to help:

Narrowing one’s time horizon: sometimes the best way to address fear, especially fear of things to come, is to shrink our time horizons. Instead of focusing on the year, month, or even the next week, focus instead on the day. Too long? Narrow it further: focus on the next hour, or even the next minute. Life isn’t lived all at once; and often enough, our fears lie in anticipation but are never actually realized. This is why Alcoholics Anonymous wisely suggests those in recovery mentally frame their recovery as ‘one day at a time’: thinking in larger time slices can simply be too overwhelming. We need not be in recovery to benefit from the wisdom of narrowing our time horizon when in the grip of fear.

Gratitude: taking note of what we have to be thankful for — often, if we look hard enough, we can find things for which we should be grateful — can displace fear. We can use the practice of gratitude — like making a gratitude list — to draw our attention to the good things in our lives, to combat the overemphasis on the bad. To put the idea poetically: we can’t long abide both in the shadow of fear and the sunlight of gratitude; the latter has an uncanny way of driving away (or significantly reducing the power of) the former.

What’s the point of thinking better about fear? In short: to live a better life. We aren’t our best selves when we’re afraid. And since bad things may eventually befall us, with little we can do about them, we may as well appreciate the good things in the moment. What’s the point of that? I’ll let Seneca answer:

“There will be many happenings meanwhile which will serve to postpone, or end, or pass on to another person, the trials which are near or even in your very presence. A fire has opened the way to flight. Men have been let down softly by a catastrophe. Sometimes the sword has been checked even at the victim’s throat. Men have survived their own executioners. Even bad fortune is fickle. Perhaps it will come, perhaps not; in the meantime it is not. So, look forward to better things.”

Though fear can be useful — for example, it may help us survive — we can be too fearful to live good, productive lives. We may not always be able to do something about feeling afraid, but we need not let it completely dictate the quality of our lives either.

The Uses and Abuses of Political Hypocrisy

photograph of pinocchio sculpture atop puppet theatre in Kiev

Accusations of hypocrisy are so common in American politics that one usage of the term “politician” is as a near-synonym for “hypocrite” (“what’s your supervisor like?” “he’s a real politician.”) And for the most part, the hypocrisy of politicians is not considered laudable. Immanuel Kant famously condemned what he called the “political moralist” who “fashions his morality to suit his own advantage as a statesman,” paying homage to morality while devising “a hundred excuses and subterfuges to get out of observing them in practice.” Like Kant, most of us seem to prize integrity in our politicians.

Yet throughout history, other philosophers have argued that hypocrisy is a necessary part of politics. Niccolò Machiavelli defended hypocrisy as an indispensable tool in a world in which “a man who wants to make a profession of good in all regards must come to ruin among so many who are not good.” Much more recently, Judith Shklar argued that hypocrisy is nearly inevitable in political systems premised upon competitive elections, since candidates will employ persuasive rhetoric that requires a certain degree of dissimulation. Ruth Grant went further, arguing that without a plausible alternative for achieving comparable goods, hypocrisy in politics may very well be a moral necessity.

Thus, there appears to be disagreement about the inevitability and justifiability of hypocrisy in politics. In this column, I will focus on the kinds of hypocrisy politicians exhibit, and the question of whether and when political hypocrisy is morally preferable to non-hypocrisy.

It is important to note that hypocrisy is not a mere inconsistency between one’s words and deeds. For example, I would condemn many things I did as a teenager, but that does not automatically make me a hypocrite. One hallmark of hypocrisy is that the hypocrite pays homage to morality not out of genuine concern, but for self-serving reasons — to gain some undeserved advantage, to excuse himself, or to hide from blame. It is for this reason that we cannot trust that the hypocrite’s utterances reflect what he truly cares about, rather than what he believes to be advantageous in the moment.

It follows from this that many instances of political behavior that some might be tempted to call “hypocrisy” are, on reflection, not hypocrisy at all. Suppose that a politician publicly supports policy A, but then reads more about the issue and ends up voting for policy B. His former supporters might accuse him of hypocrisy, but this does not seem to be apt: the inconsistency between his public support of A and his eventual vote for B is due to a change of mind about the issue, rather than some self-serving reason.

Nevertheless, there are certainly many instances of true hypocrisy in politics. And while we tend to value integrity in our politicians, it can be plausibly argued that hypocrisy in the service of morally good ends is preferable to integrity in the service of morally bad ends. Politician A, while secretly racist, campaigns for office on an anti-racist platform in order to attract the support of minority constituents. Politician B, who is also racist, believes that one should stick to one’s principles even when doing so might hurt one’s electoral prospects, so he supports a racist platform. I would rather have A as a player in my political system, since A’s hypocrisy will help his minority constituents, unlike B’s integrity.

On the other hand, compare politician A to politician C, who genuinely holds anti-racist views and supports the same anti-racist platform. C seems morally preferable to A, but on what grounds? One answer is that, since C supports the anti-racist platform because it is right to do so, his support is morally creditworthy in a way that A’s is not. This might be Kant’s view, as he famously distinguished between actions in accord with and actions from duty; according to him, only the latter have moral worth. Another answer is that C’s proper anti-racist motives will more reliably lead him to promote anti-racist ends, whereas A would abandon those ends whenever it suits him politically. Some consequentialists might adopt this line. A third answer is that C’s behavior arises from virtuous motives while A’s does not; this is one position that a virtue theorist might take.

The trouble with the Kantian or virtue theoretical positions is that, while they can distinguish between A and C, they give no reason for us to prefer A over B. As we have seen, A’s support of the anti-racist platform is not morally creditworthy. Furthermore, because it involves deception — A knowingly causes others to falsely believe he supports an anti-racist position — its underlying maxim could not be universalized, which is Kant’s test for whether an action is morally permissible. So, on Kant’s view the deception practiced by A may make A worse than B; A’s behavior amounts to a “double iniquity.” If B’s integrity is a virtue, then the virtue theorist may also have to say that B is preferable to A; and if it is not, then at best A and B are morally equal, as in neither case does their behavior flow from virtuous motives.

Now consider politician D, who is secretly anti-racist but panders to his racist constituents by supporting B’s platform. This character seems obviously morally worse than A or C, but what about B? I think a common reaction is that B is less loathsome than D, but it is difficult to explain this judgment in consequentialist terms. After all, D’s anti-racist values will lead him to support a racist platform less reliably than B, just as C’s anti-racist values will lead him to support an anti-racist platform more reliably than A; so consequentialist reasoning seemingly should lead us to prefer D to B.

On the other hand, it may be that D’s stance undermines something that we care about very much in politicians: namely, transparency. Because we can only vote for or otherwise support a politician on the basis of what they say and do, we want their words and deeds to reflect their actual commitments. This is precisely what is not the case when it comes to hypocrites. Of course, we do not value transparency above everything else: as between a hypocrite who insincerely supports the good and an outright villain, we prefer the former. But as between B, an outright villain, and D, a hypocrite who insincerely supports the bad, we seem to prefer the villain’s integrity, because at least we know where we stand with him. Thus, consequentialism can explain our ranking of politicians A, B, C, and D in terms of the interplay between our desire for transparency and other values we want to promote, such as justice.

Perhaps a useful way to think about this disagreement between consequentialists, Kantians, and virtue theorists concerning political hypocrisy is in terms of two political virtues Max Weber famously distinguished: the ethic of conviction and the ethic of responsibility. The ethic of conviction involves a “constancy of [a person’s] inner relation to certain ultimate ‘values,’” or in other words, integrity. The ethic of responsibility, on the other hand, involves a proper understanding of the consequences of one’s actions and a practical ability to promote one’s ends. Those who favor the latter will likely believe that hypocrisy is no great sin (although still problematic, because transparency-negating), so long as it can be directed toward good ends, as in the case of politician A. By contrast, political hypocrisy is likely to deeply offend someone who places more emphasis on the ethic of conviction. Put another way, those who think an excellent politician is one who effectively promotes policy goals will not care about her reasons for pursuing those goals, except insofar as those reasons make her a less reliable advocate of them. On the other hand, those who think an excellent politician is one whose actions reflect her values will care deeply about eradicating hypocrisy from politics.

Yet, a too-fervent attachment to the ethic of conviction can lead to a dangerous kind of anti-hypocrisy hypocrisy. Consider politician E, who sincerely believes, and often publicly declares, that (1) a country should never start a land war in Asia and (2) a politician should always stick to his core principles. When his party starts a land war in Asia and he, without a hint of self-consciousness, votes in favor of a war resolution, he explains that the resolution authorizes a police action, not a war. This kind of self-deceived hypocrisy is particularly dangerous in politics, since it is very difficult for the politician herself to limit or check. It is also frequently a symptom of a kind of self-righteousness that may, in the politician’s own mind, license her to operate outside the normal bounds of morality. Thus, in a quest for purity of intention, politicians can fall prey to a hypocrisy more insidious than that which they seek to avoid.

We arrive, then, at a few conclusions. First, there are occasions where hypocrisy is morally preferable to non-hypocrisy. In particular, there is reason to prefer the hypocrisy of insincere politicians who support morally laudable ends to the integrity of our sincere villains. At the same time, the most morally problematic hypocrites are the anti-hypocrisy hypocrite and the hypocrite who espouses bad values without believing them. These judgments assume, or rather are supported by, a consequentialist style of moral reasoning that places emphasis on the ethics of responsibility over the ethics of conviction. Kantians and others, who elevate the significance of intention and motive in their moral judgments, may come to different conclusions.

Wildfires and Prison Labor: Crisis Continues to Expose Systemic Inequity

photograph of lone firefighter before a wildfire

As around a dozen wildfires continue to grow in California, the smoke has reached Nebraska. The two major wildfires that are occurring in Northern California are the second- and fourth-largest fires in state history. The status of Big Basin Redwoods State Park, California’s oldest, is changing daily. The oldest trees have seen many fires, but the current threat has been particularly devastating. California Governor Newsom has asked for help from as far away as the east coast of Australia in order to gather more firefighters.

Reaching out so broadly reflects how pressing the shortage of firefighters has become. State prison officials shared at a press briefing that due to COVID-19 quarantining and early release measures, California was unable to use its usual contingent of incarcerated firefighters during this year’s wildfire season.

The failure to address land management issues and the increasingly dire effects of climate change have led to the disastrous fire seasons both this year and in the recent past. However, the reliance on dangerous work being done by underpaid and under-protected incarcerated people in order to ensure the safety of others and conserve precious resources is a part of a systemic trend.

The US has more people in prison — in absolute numbers as well as by percentage of the population — than any other country on earth. This statistic alone should give us pause and encourage us to reflect on the purpose of isolating such a large share of our population. But this year, our handling of the pandemic and, now, these wildfires highlight further issues with mass incarceration, beginning with the justification for imprisoning so many members of our population in the first place.

Society could be aiming at a few different goals when it takes people from society and places them in prison for violating the law. One goal might be sanctioning citizens who have “harmed” society based on a somewhat loose notion of “just deserts”: the individual has done wrong, so they deserve punishment, and isolation is seen as the appropriate form of that punishment. Other justifications of incarceration as punishment are based on deterrence: by isolating someone who violates the law, we hope to make it less likely that this person or others — who are aware of the incarcerating policy — will do so again in the future. Incarceration could also be construed as a means to rehabilitate someone who has not performed to the standards that the law suggests society deems necessary. In this case, the isolation is supposedly meant to be a constructive time to become able and willing to conform to societal standards more adequately in the future. Finally, incarceration could be a way of isolating someone from society to prevent further harm. (A version of justice that doesn’t fit this model is “restoration,” which focuses on the effects of violating a statute and allows those harmed by the violation to initiate a process where there is opportunity to share concerns, make amends and future plans, and potentially extend forgiveness, in hopes of healing the part of society that was in fact impacted by the violation.)

Most regard our penal system as serving a mix of these goals. A prison sentence might make sense as a mix of deterrence and rehabilitation in a particular case of sentencing, or in the mind of a particular legislator. When considering the labor that those who are serving sentences in prison perform, however, the justification for their punishment plays a crucial role in determining what conditions are appropriate.

Those in favor of the starkly different working conditions for non-incarcerated employees and prison labor use a variety of explanations. Appeals to the need to maintain facilities and the rationale of providing job training for people who eventually will “reenter society” are the strongest justifications for employing incarcerated people. However, the working conditions and pay structure that exist today in US prisons do not follow from these considerations. From the justifications for incarceration as a form of punishment, it is unclear why the human rights protections that guarantee safe working conditions and fair wages would be forfeited along with the freedom of movement that the punishment itself constitutes. For rehabilitation purposes, working while in prison can aid the transition once released, but differentiated pay scales and lax safety protocols appear punitive and demand human rights attention. For the retributive (“desert”) model, the presumption that someone deserves worse or inadequate working conditions on the basis of being incarcerated would need to be considered along with their original sentencing.

Further, the working conditions and wages that make up the structure of prison labor create a market designed to exploit the incarcerated. People in government-run correctional facilities perform jobs that are necessary for the prison to function, but do so without labor protections or the possibility of unionizing. And they do so for a fraction of what those performing the same tasks outside of prisons would earn. For prison laborers, the guarantee of safe working conditions often simply does not apply.

The disparities in pay and working conditions for incarcerated employees create an exploitative market, where prisons are incentivized to keep costs low and private businesses can reap great benefit. This economic structure does not simply exploit a marginalized incarcerated population but, given the structural racism in the US justice system, further entrenches racial inequalities in a society already saturated by racist institutions.

Further, a variety of incarcerated employees perform jobs outside the prison, producing goods and services that are sold to government agencies and corporations. This work can include answering calls in a phone bank, constructing furniture, warehouse work, farm work, and, in California, front-line firefighting. The fire season and the pandemic have laid bare the living and working conditions in our prisons.

Throughout the pandemic, the cramped living conditions and poor quality of healthcare have put prisons at particular risk of experiencing a COVID-19 outbreak. Calls to attend to this heightened danger have been neglected throughout the national emergency. In California, the nature of the pandemic has led to shutdowns across many of the prisons, making the incarcerated firefighters unable to respond to fires.

What this has meant for this record-breaking year in California is that the state cannot rely on its “primary firefighting ‘hand crews.’” According to the Sacramento Bee, “Inmate crews are among the first on the scene at fires large and small across the state… Identified by their orange fire uniforms, inmates typically do the critically important and dangerous job of using chainsaws and hand tools to cut firelines around properties and neighborhoods during wildfires.”

The owners of the prisons actively market their workers to private businesses, emphasizing the low wages their employers would be able to pay, how many have “Spanish language skills,” and how this labor pool is one of the “best kept secrets.”

In 2018, there was a three-week nationwide strike over the work conditions in prisons. Prisons are paying incarcerated people less today than they were in 2001. Prison jobs are unpaid in Alabama, Arkansas, Florida, Georgia, and Texas, and maximum wages have been lowered in at least as many states. Further, in many states the wages that incarcerated employees earn do not accurately reflect “take-home” pay; prisons deduct fees (like garnished wages) for what prison laborers have cost them during their stay. Because it costs money to incarcerate people, the claim goes, their wages should contribute to the running of the prison. As Vox reports, “Most prisons also deduct a percentage of earnings to help cover a prisoner’s child support payments, alimony, and restitution to victims. But at 40 cents an hour, that seems impractical.”

There is no getting around the fact that hiring incarcerated employees is a cost-saving measure. It’s estimated that the “Conservation Camp Program, which includes the inmate firefighters, saves California taxpayers tens of millions of dollars a year. Hiring firefighters to replace them, especially given the difficult work involved, would challenge a state already strapped for cash.” As the Managing Editor at Prison Legal News told Newsweek, “Prisons cannot operate without prison labor. They would simply be unaffordable.” So, incarceration in its current state requires exploitative and unethical labor practices that we wouldn’t accept outside of prisons, practices inflicted disproportionately through systemically racist institutions.

There is also the option that doesn’t seem to get enough attention: If incarceration costs so much, wouldn’t it be cheaper to have less incarceration?

On “Doing Your Own Research”

photograph of army reserve personnel wearing neck gaiter at covid testing site

In early August, American news outlets began to circulate a surprising headline: neck gaiters — a popular form of face covering used by many to help prevent the spread of COVID-19 — could reportedly increase the infection rate. In general, face masks work by catching respiratory droplets that would otherwise contaminate a virus-carrier’s immediate environment (in much the same way that traditional manners have long prescribed covering your mouth when you sneeze); however, according to the initial report by CBS News, a new study found that the stretchy fabric typically used to make neck gaiters might actually work like a sieve to turn large droplets into smaller, more transmissible ones. Instead of helping to keep people safe from the coronavirus, gaiters might even “be worse than no mask at all.”

The immediate problem with this headline is that it’s not true; but, more generally, the way that this story developed evidences several larger problems for anyone hoping to learn things from the internet.

The neck gaiter story began on August 7th when the journal Science Advances published new research on a measurement test for face mask efficacy. Interested by the widespread use of homemade face-coverings, a team of researchers from Duke University set out to identify an easy, inexpensive method that people could use at home with their cell phones to roughly assess how effective different commonly-available materials might be at blocking respiratory droplets. Importantly, the study was not about the overall efficacy rates of any particular mask, nor was it focused on the length of time that respiratory droplets emitted by mask-wearers stayed in the air (which is why smaller droplets could potentially be more infectious than larger ones); the study was only designed to assess the viability of the cell phone test itself. The observation that the single brand of neck gaiter used in the experiment might be “counterproductive” was an off-hand, untested suggestion in the final paragraph of the study’s “Results” section. Nevertheless, the dramatic-sounding (though misleading) headline exploded across the pages of the internet for weeks; as recently as August 20th, The Today Show was still presenting the untested “result” of the study as if it were a scientific fact.

The ethics of science journalism (and the problems that can arise from sensationalizing and misreporting the results of scientific studies) is a growing concern, but it is particularly salient when the reporting in question pertains to an ongoing global pandemic. While it might be unsurprising that news sites hungry for clicks ran a salacious-though-inaccurate headline, it is far from helpful and, arguably, morally wrong.

Furthermore, the kind of epistemic malpractice entailed by underdeveloped science journalism poses larger concerns for the possibility of credible online investigation more broadly. Although we have surrounded ourselves with technology that allows us to access the internet (and the vast amount of information it contains), it is becoming ever-more difficult to filter out genuinely trustworthy material from the melodramatic noise of websites designed more for attracting attention than disseminating knowledge. As Kenneth Boyd described in an article here last year, the algorithmic underpinnings of internet search engines can lead self-directed researchers into all manner of over-confident mistaken beliefs; this kind of structural issue is only exacerbated when the inputs to those algorithms (the articles and websites themselves) are also problematic.

These sorts of issues cast an important, cautionary light on a growing phenomenon: the credo that one must “Do Your Own Research” in order to be epistemically responsible. Whereas it might initially seem plain that the internet’s easily-accessible informational treasure trove would empower auto-didacts to always (or usually) draw reasonable conclusions about whatever they set their minds to study, the epistemic murkiness of what can actually be found online suggests that reality is more complicated. It is not at all clear that non-expert researchers who are ignorant of a topic can, on their own, justifiably identify trustworthy information (or information sources) about that topic; but, on the other hand, if a researcher does have enough knowledge to judge a claim’s accuracy, then it seems like they don’t need to be researching the topic to begin with!

This is a rough approximation of what philosophers sometimes call “Meno’s Paradox” after its presentation in the Platonic dialogue of that name. The Meno discusses how inquiry works and highlights that uninformed inquirers have no clear way to recognize the correct answer to a question without already knowing something about what they are questioning. While Plato goes on to spin this line of thinking into a creative argument for the innateness of all knowledge (and, by extension, the immortality of the soul!), subsequent thinkers have often taken different approaches to argue that a researcher only needs to have partial knowledge either of the claim they are researching or of the source of the claim they are choosing to trust in order to come to justified conclusions.

Unfortunately, “partial knowledge” solutions have problems of their own. On one hand, human susceptibility to a bevy of psychological biases makes a researcher’s “partial” understanding of a topic a risky foundation for subsequent knowledge claims; it is exceedingly easy, for example, for the person “doing their own research” to be unwittingly led astray by their unconscious prejudices, preconceptions, or the pressures of their social environment. On the other hand, grounding one’s confidence in a testimonial claim on the trustworthiness of the claim’s source seems to (in most cases) simply push the justification problem back a step without really solving much: in much the same way that a non-expert cannot make a reasonable judgment about a proposition, that same non-expert also can’t, all by themselves, determine who can make such a judgment.

So, what can the epistemically responsible person do online?

First, we must cultivate an attitude of epistemic humility (of the sort summarized by Plato’s infamous comment “I know that I know nothing”) — something which often requires us to admit not only that we don’t know things, but that we often can’t know things without the help of teachers or other subject matter experts doing the important work of filtering the bad sources of information away from the good ones. All too often, “doing your own research” functionally reduces to a triggering of the confirmation bias and lasts only as long as it takes to find a few posts or videos that satisfy what a person was already thinking in the first place (regardless of whether those posts/videos are themselves worthy of being believed). If we instead work to remember our own intellectual limitations, both about specific subjects and the process of inquiry writ large, we can develop a welcoming attitude to the epistemic assistance offered by others.

Secondly, we must maintain an attitude of suspicion about bold claims to knowledge, especially in an environment like the internet. It is a small step from skepticism about our own capacities for inquiry and understanding to skepticism about those of others, particularly when we have plenty of independent evidence that many of the most accessible or popular voices online are motivated by concerns other than the truth. Virtuous researchers have to focus on identifying and cultivating relationships with knowledgeable guides (who can range from individuals to their writings to the institutions they create) on whom they can rely when it comes time to ask questions.

Together, these two points lead to a third: we must be patient researchers. Developing epistemic virtues like humility and cultivating relationships with experts that can overcome rational skepticism — in short, creating an intellectually vibrant community — takes a considerable amount of effort and time. After a while, we can come to recognize trustworthy informational authorities as “the ones who tend to be right, more often than not” even if we ourselves have little understanding of the technical fields of those experts.

It’s worth noting here, too, that experts can sometimes be wrong and nevertheless still be experts! Even specialists continue to learn and grow in their own understanding of their chosen fields; this sometimes produces confident assertions from experts that later turn out to be wrong. So, for example, when the Surgeon General urged people in February to not wear face masks in public (based on then-current assumptions about the purportedly low risk of asymptomatic patients) it made sense at the time; the fact that those assumptions later proved to be false (at which point the medical community, including the epistemically humble Surgeon General, then recommended widespread face mask usage) is simply a demonstration of the learning/research process at work. On the flip side, choosing to still cite the outdated February recommendation simply because you disagree with face mask mandates in August exemplifies a lack of epistemic virtue.

Put differently, briefly using a search engine to find a simple answer to a complex question is not “doing your own research” because it’s not research. Research is somewhere between an academic technique and a vocational aspiration: it’s a practice that can be done with varying degrees of competence and it takes training to develop the skill to do it well. On this view, an “expert” is simply someone who has become particularly good at this art. Education, then, is not simply a matter of “memorizing facts,” but rather a training regimen in performing the project of inquiry within a field. This is not easy, requires practice, and still often goes badly when done in isolation — which is why academic researchers rely so heavily on their peers to review, critique, and verify their discoveries and ideas before assigning them institutional confidence. Unfortunately, this complicated process is far less sexy (and far slower) than a scandalous-sounding daily headline that oversimplifies data into an attractive turn of phrase.

So, poorly-communicated science journalism not only undermines our epistemic community by directly misinforming readers, but also by perpetuating the fiction that anyone is an epistemic island unto themselves. Good reporting must work to contextualize information within broader conversations (and, of course, get the information right in the first place).

Please don’t misunderstand me: this isn’t meant to be some elitist screed about how “only the learned can truly know stuff, therefore smart people with fancy degrees (or something) are best.” If degrees are useful credentials at all (a debatable topic for a different article!) they are so primarily as proof that a person has put in considerable practice to become a good (and trustworthy) researcher. Nevertheless, the Meno Paradox and the dangers of cognitive biases remain problems for all humans, and we need each other to work together to overcome our epistemic limitations. In short: we would all benefit from a flourishing epistemic community.

And if we have to sacrifice a few splashy headlines to get there, so much the better.

Essential Work, Education, and Human Values

photograph of school children with face masks having hands disinfected by teacher

On August 21st, the White House released guidance that designated teachers as “essential workers.” One of the things that this means is that teachers can return to work even if they know they’ve been exposed to the virus, provided that they remain asymptomatic. This is not the first time that the Trump administration has declared certain workers or, more accurately, certain work to be essential. Early in the pandemic, as the country experienced a decline in the availability of meat, President Trump issued an executive order proclaiming that slaughterhouses were essential businesses. The result was that they did not have to comply with quarantine ordinances and could, and were expected to, remain open. Employees then had to choose between risking their health and losing their jobs. Ultimately, slaughterhouses became flash points for massive coronavirus outbreaks across the country.

As we think about the kinds of services that should be available during the pandemic, it will be useful to ask ourselves, what does it mean to say that work is essential? What does it mean to say that certain kinds of workers are essential? Are these two different ways of asking the same question or are they properly understood as distinct?

It might be helpful to walk the question back a bit. What is work? Is work, by definition, effort put forward by a person? Does it make sense to say that machines engage in work? If I rely on my calculator to do basic arithmetic because I’m unwilling to exert the effort, am I speaking loosely when I say that my calculator has “done all the work”? It matters because we want to know whether our concept of essential work is inseparable from our concept of essential workers.

One way of thinking about work is as the fulfillment of a set of tasks. If this is the case, then human workers are not, strictly speaking, necessary for work to get done; some of it can be done by machines. During a pandemic, human work comes with risk. If the completion of some tasks is essential under these conditions, we need to think about whether those tasks can be done in other ways to reduce the risk. Of course, the downside of this is that once an institution has found other ways of getting things done, there is no longer any need for human employees in those domains on the other side of the pandemic.

Another way of understanding the concept of work is that work requires intentionality and a sense of purpose. In this way, a computer does not do work when it executes code, and a plant does not do work when it participates in photosynthesis. On this understanding of the concept of work, only persons can engage in it. One virtue of understanding work in this way is that it provides some insight into the indignity of losing one’s job. A person’s work is a creative act that makes the world different from the way it was before. Every person does work, and the work that each individual does is an important part of who that person is. If this way of understanding work is correct, then work has a strong moral component and when we craft policy related to it, we are obligated to keep that in mind.

It’s also important to think about what we mean when we say that certain kinds of work are essential. The most straightforward interpretation is to say that essential work is work that we can’t live without. If this is the case, most forms of labor won’t count as essential. Neither schools nor meat are essential in this sense — we can live without both meat and education.

When people say that certain work is essential, they tend to mean something else. For some political figures, “essential” might mean “necessary for my success in the upcoming election.” Those without political aspirations often mean something different too, something like “necessary for maintaining critical human values.” Some work is important because it does something more than keep us alive; it provides the conditions under which our lives feel to us as if they are valuable and worth living.

Currently, many people are arguing for the position that society simply cannot function without opening schools. Even a brief glance at history demonstrates that this is empirically false. The system of education that we have now is comparatively young, as are our attitudes regarding the conditions under which education is appropriate. For example, for much of human history, education was viewed as inappropriate for girls and women. In the 1600s, Anna Maria van Schurman, a famous child prodigy, was allowed to attend school at the University of Utrecht only on the condition that she do so behind a barrier — not to protect her from COVID-19 infested droplets, but to keep her very presence from distracting the male students. At various points in history, education was viewed as inappropriate for members of the wealthiest families — after all, as they saw it, learning to do things is for people who actually need to work. There were also segments of the population that for reasons of race or status were not allowed access to education. All of this is just to say that for most of recorded history, it hasn’t been the case that the entire population of children has been in school for seven hours a day. Our current system of K-12 education didn’t exist until the 1930s, and even then there were barriers to full participation.

That said, the fact that such a large number of children in our country have access to education certainly constitutes significant progress. Education isn’t essential in the first sense that we explored, but it is essential in the second. It is critical for the realization of important values. It contributes to human flourishing and to a sense of meaning in life. It leads to innovation and growth. It contributes to the development of art and culture. It develops well-informed citizens who are in a better position to participate in democratic institutions, providing us with the best hope of solving pressing world problems. We won’t die if we press pause for an extended period of time on formal education, but we might suffer.

Education is the kind of essential work for which essential workers are required. It is work that goes beyond simply checking off boxes on a list of tasks. It involves a strong knowledge base, but also important skills such as the ability to connect with students and to understand and react appropriately when learning isn’t occurring. These jobs can’t be done well when those doing them either aren’t safe or don’t feel safe. The primary responsibilities of these essential workers can be satisfied across a variety of presentation formats, including online formats.

In our current economy, childcare is also essential work, and there are unique skills and abilities that make for a successful childcare provider. These workers are not responsible for promoting the same societal values as educators. Instead, the focus of this work is to see to it that, for the duration of care, children are physically and psychologically safe.

If we insist that teachers are essential workers, we should avoid ambiguity. We should insist on a coherent answer to the question: essential for what? If the answer is education, then teachers, as essential workers, can do their essential work in ways that keep them safe. If we are also thinking of them as caregivers, we should be straightforward about that point. The only fair thing to do, once that is out in the open, is to start paying them for doing more than one job.

Morality Pills Aren’t Enough

close-up photograph of white, chalky pill on pink background

Here's a problem: despite the coronavirus still being very much a problem, especially in the US, many people refuse to take even the most basic precautions when it comes to preventing the spread of the disease. One of the most controversial is the wearing of masks: while some see a mask requirement as a violation of personal liberties (the liberty to not have to wear a mask, I suppose), others may simply value their own comfort over the well-being of others. Indeed, refusal to wear a mask has been seen by some as a failure of courtesy to others, or a general lack of kindness.

We might look at this situation and make the following evaluation: people refusing to take precautions to help others during the current pandemic is the result of moral failings. These failings might stem from a failure to value others in the way that they ought to, perhaps due to a lack of empathy or of any tendency toward altruism. So perhaps what we need is something that can help these people have better morals. What we need is a morality-enhancing pill.

What would such a pill look like? Presumably it would help an individual overcome some relevant kind of moral deficiency, perhaps in the way that some drugs can help individuals cope with certain mental illnesses. The science behind it is merely speculative; what’s more, it’s not clear that it could ever really work in practice. Add concerns about a morality pill’s potentially even worse moral consequences – violations of free will spring to mind, especially if they are administered involuntarily – and it is perhaps easy to see why such a pill currently exists only in the realm of thought experiment.

But let's put all that aside and say that such a pill was developed. People who were unempathetic take the pill and now show much more empathy; people who failed to value the well-being of others now value it more. Say, too, that everyone was happy to get on board, so the bigger practical worries don't arise. Would it solve the problem of people not taking the precautions that they should to help stop the spread of coronavirus?

I don’t think it would. This is because the problem is not simply a moral problem, but also an epistemic one. In other words: one can have as much empathy as one likes, but if one is forming beliefs on the basis of false or misleading information, then empathy isn’t going to do much good.

Consider someone who refuses to wear a mask, even though a relevant agency has strongly recommended, or perhaps even mandated, that they do so. Their failure to comply may not be indicative of a failure of empathy: if the person falsely believes, for example, that masks inhibit one's ability to breathe, then they may be as empathetic as you like and still not change their mind. Indeed, given the belief that masks are harmful, increased levels of empathy may only strengthen their resolve: since they care about the well-being of others, and believe that masks inhibit that well-being, they will perhaps strive even harder to get people to stop wearing them.

Of course, what we want is not that kind of empathy; we want well-informed empathy. This is the kind of empathy that is directed at what the well-being of others really consists in, not just what one perceives it to be. A good morality pill, then, is one that doesn't just supplement one's lack of empathy or altruism or what-have-you, but does so in a way that directs it at what's actually, truly morally good.

Here, though, we see a fundamental flaw with the morality pill project. The initial motivation was that since those who refuse to follow guidelines that can help decrease the spread of the coronavirus won't listen to the evidence provided by scientific experts, we should look to other solutions, ones that don't involve trying to change someone's beliefs. The problem is that bettering one's moral character is a project that requires changing one's beliefs as well. The morality pill solution, then, really isn't much of a solution at all.

The morality pill, of course, still exists only in the realm of the hypothetical. Back in the real world we are still faced with the hard problem of trying to get people who ignore evidence and believe false or misleading information to change their minds. Where the morality pill thought experiment fails, I think, is that while it is meant to be a way of getting around this hard problem, it runs right into it, instead.

Who Should Get the Vaccine First?

photograph of doctor holding syringe and medicine for vaccination

As at least one COVID-19 vaccine is scheduled to enter clinical trials in the United States in September, and Russia announced that it will be putting its own vaccine into production immediately, it seems like an auspicious moment to reflect on some ethical issues surrounding the new vaccines. Now, if we could produce and administer hundreds of millions of doses of vaccine instantaneously, there would presumably be no ethical question about how it ought to be distributed. The problem arises because it will take a while to ramp up production and to set up the capacity to administer it, so the vaccine will remain a relatively scarce resource for some time. Thus, I believe that there is a genuine ethical question here: namely, which moral principles ought to govern who gets the vaccine when there is not enough to go around and the capacity to administer it remains inchoate? In this column, I will weigh the pros and cons of a few principles that might be used.

One fairly straightforward principle is that everyone is equally deserving of treatment: everyone’s life matters equally, regardless of their race, gender, or socioeconomic status. The most straightforward way of fulfilling the principle is to choose vaccine recipients at random, or by lot. The trouble with this method is that, although it arguably best adheres to the principle of equality, it also fails to maximize the good. We know that not everyone is equally vulnerable to the virus; choosing vaccine recipients by lot would mean that many vulnerable people would die needlessly at the back of the line.

One way of defining "the good" in medical contexts is in terms of quality-adjusted life years, or "QALYs." One QALY equates to one year lived in perfect health; health states are weighted on a scale from 0 (death) to 1 (full health), so a year lived at half health counts as half a QALY. If our aim in distributing the vaccine is to maximize QALYs, then we would prioritize recipients for whom a vaccine would make the greatest difference in terms of QALYs. Since the vaccine would make the greatest difference to members of vulnerable groups, we would tend to put these groups at the front of the line. We could also combine the principle of maximizing QALYs with the equality principle by shifting all members of vulnerable groups to the front of the line while selecting individual members of each group by lot.
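To make the combined rule concrete, here is a minimal sketch of such a tiered lottery in Python. The names and QALY figures are invented for illustration, and the grouping into tiers is just one assumption among many that a real allocation scheme would need to defend:

```python
import random

# Hypothetical candidates: (name, expected QALY gain from vaccination)
candidates = [
    ("A", 8.0), ("B", 8.0), ("C", 2.5),
    ("D", 2.5), ("E", 0.5), ("F", 0.5),
]
doses = 3  # scarce supply: not everyone can be vaccinated now

# Group candidates into tiers by expected QALY gain.
tiers = {}
for name, gain in candidates:
    tiers.setdefault(gain, []).append(name)

# Serve higher-gain tiers first (maximizing QALYs); within a tier,
# draw recipients by lot (the equality principle).
allocation = []
for gain in sorted(tiers, reverse=True):
    group = tiers[gain][:]
    random.shuffle(group)
    while group and len(allocation) < doses:
        allocation.append(group.pop())

print(allocation)  # e.g. ['A', 'B', 'C'] or ['A', 'B', 'D']: the third dose is decided by lot
```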

While the principle of maximizing QALYs would in this way help the most vulnerable, it might be open to the objection that it neglects those who perform particularly important social functions. These perhaps include government officials and workers in essential industries who cannot shelter in place. One justification for prioritizing these individuals would be that since they contribute more to the functioning of society, they are entitled to a greater level of protection from threats to their productivity, even if giving them the vaccine first would fail to maximize QALYs. Another idea is that prioritizing such individuals maximizes overall well-being, rather than QALYs: more people benefit if society functions well than if members of vulnerable groups live longer. In a sense, then, we can view the dispute between the principle of maximizing QALYs and the principle of rewarding social productivity as a dispute between two ways of defining “the good.”

Finally, we might consider using the vaccine to reward those who have made significant contributions to social welfare in their lives, both on the grounds of intrinsic desert and to provide incentives for individuals to make similar contributions in the future. For example, we might decide that, between two individuals A and B for whom the vaccine would make an equal difference in terms of QALYs, if A is a war veteran, retired firefighter, teacher, and so on, then A ought to receive the vaccine first. One troubling feature of using this criterion is that owing to past discriminatory policies, this principle might heavily favor men over women. On the other hand, men may already be favored over women by the principle of maximizing QALYs, since they appear to be more vulnerable to COVID-19.

One last suggestion is simply to let the market decide who will get the vaccine. But it's hard to see how that idea is compatible with any of the normative principles discussed in this column. This method of distribution will not maximize QALYs or reward those who make or have made significant contributions to social welfare, and it seems at odds with the notion that all lives matter equally — in effect, it expresses the idea that the lives of the wealthy matter more.

Here is my proposal, for what it's worth. If the disease were deadlier and there were no effective basic protections against transmission, then we would have to worry much more about the ability of government and essential industries to function without the vaccine. Luckily, COVID-19 does not pose such a threat. This means that operationalizing the principle of maximizing QALYs would probably also maximize overall social well-being, despite prioritizing vulnerable groups over essential workers and non-vulnerable groups. As I suggested above, we ought to select individual members of groups by lot, so as to affirm their basic equality. And in cases where the vaccine would make a roughly equal difference in terms of QALYs, we ought to favor the would-be recipient who has made a significant contribution to social welfare in their life.

Life-Life Tradeoffs in the Midst of a Pandemic

photograph of patients' feet standing in line waiting to get tested for COVID

Deciding who gets to live and who gets to die is an emotionally strenuous task, especially for those who are responsible for saving lives. Doctors in pandemic-stricken countries have been making decisions of great ethical significance, faced with the scarcity of ventilators, protective equipment, space in intensive medical care, and medical personnel. Ethical guidelines have been issued, in most of the suffering countries, to facilitate decision-making and the provision of effective treatment, with the most prominent principle being “to increase overall benefits” and “maximize life expectancy.” But are these guidelines as uncontroversial as they initially appear to be?

You walk by a pond and you see a child drowning. You can easily save the child without incurring significant moral sacrifices. Are you obligated to save the child at no great cost to yourself? Utilitarians argue that we would be blameworthy if we failed to prevent suffering at no great cost to ourselves. Now suppose that you decide to act upon the utilitarian moral premise and rescue the child. As you prepare to undertake this life-rescuing task, you notice two more drowning children on the other side of the pond. You can save them both – still at no cost to yourself – but you cannot save all three. What is the right thing to do? Two lives count more than one, thus you ought to save the maximum number of people possible. It seems evident that doctors who are faced with similar decisions ought to maximize the number of lives to be saved. What could be wrong with such an ethical prescription?

Does the ‘lonely’ child have reasonable grounds to complain? The answer is yes. If the child happened to be on the other side of the pond, she would have a considerably greater chance of survival. And if, as a matter of unfortunate coincidence, the universe conspired to place two additional children in need of rescue near her, she would have an even greater chance of survival – given that three lives count more than two. But that seems entirely unfair. Whether one has a right to be rescued should not be determined by morally arbitrary factors such as one's location and the number of victims in one's physical proximity. Rather, one deserves to be rescued simply on the grounds of being a person with inherent moral status. Things beyond your control, for which you are not responsible, should not affect the status of your moral entitlements. As a result, every child in the pond should have an equal chance of rescue. If we cannot save all of them, we should flip a coin to decide which one(s) can be affordably saved. By the same logic, if doctors owe their patients equal respect and consideration, they should assign each one of them, regardless of morally arbitrary factors (such as age, gender, race, or social status), an equal chance to receive sufficient medical care.

What about life expectancy? A doctor is faced with a choice between prolonging one patient's life by 20 years and prolonging another patient's life by 2 months. For many, maximizing life expectancy seems to be the primary moral factor to take into account. But what if there is a conflict between maximizing lives and maximizing life? Suppose that we can either save one patient with a life expectancy of 20 years or save 20 patients with a life expectancy of 3 months each. Maximizing life expectancy entails saving the former, since 20 years of life count more than the 5 years the group would gain in total (20 patients times 3 months), while maximizing lives entails saving the latter. It could be argued that the role of medicine is not merely to prolong life but to enhance its quality; this would explain why we may be inclined to save the person with the longest life expectancy. A life span of 3 months is not an adequate amount of time to make plans and engage in valuable projects, and it is also accompanied by a constant fear of death. Does that entail that we should maximize the quality of life as well? Faced with a choice between providing a ventilator to a patient who is expected to recover and lead a healthy and fulfilling life and providing it to a patient who has an intellectual disability, what should the doctor do? If the role of medicine is merely to maximize life quality, the doctor ought to give the ventilator to the first patient. However, as US disability groups have argued, such a decision would constitute a "deadly form of discrimination," given that it deprives the disabled of their right to equal respect and consideration.

All in all, reigning over life and death is not as enviable as we might have thought.

How Many Children Must We Save?

photograph of boys filling water jugs from a ditch in Kenya

The economic slowdown from the coronavirus pandemic is likely to reverse a global trend of poverty reduction. This crisis should renew interest in our moral obligations to the poor. And there is no better place to begin thinking about those obligations than the work of Peter Singer. He argues we are morally required to give a lot more expendable income to the poor:

“On your way to work, you pass a small pond. … [You] are surprised to see a child splashing about in the pond […] it is a very young child, just a toddler, who is flailing about, unable to stay upright or walk out of the pond. […] The child is unable to keep his head above the water for more than a few seconds at a time. If you don’t wade in and pull him out, he seems likely to drown. Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago, and get your suit wet and muddy.”

Clearly, we should save the child. And we could save many people from sickness and death due to lack of food, medicine, and shelter by donating much more of our expendable income to the poor: you could live a happy life, visit Starbucks less often, and donate the money instead. Singer argues that not donating is morally no different from letting the child drown. Much of our spending on goodies doesn't contribute to our well-being: we would likely be just as happy going to fewer movies and buying fewer fancy coffees (perhaps none at all). The goods and services we buy with our expendable income don't compare morally to the lives we could save.

Up to this point, Singer makes a good case. But it doesn’t end there: Singer argues that we are morally required to give substantially more than we (likely) do:

“Suppose you have just sent $200 to an agency that can, for that amount, save the life of a child in a developing country who would otherwise have died. […] But don’t celebrate your good deed by opening a bottle of champagne, or even going to a movie. […] You must keep cutting back on unnecessary spending, and donating what you save, until you have reduced yourself to the point where if you give any more, you will be sacrificing something nearly as important as a child’s life.”

Here Singer is stressing the extent of our moral obligations to the poor: when we decide to go to a movie, or buy a fancy coffee, we could have instead donated that money to save the life of a poor person dying from lack of food, medicine, and shelter. When comparing something trivial, like a caramel macchiato, to a life we could save, we should part with the money. But this line of thinking may lead to overly demanding moral requirements.

We should take a step back to think about moral overdemandingness. Moral requirements can be hard — admitting we lied to a friend may be hard, but morally required — but they can’t be too demanding. Suppose Nathan has a few beers during Monday night football. He does nothing obviously wrong. Any moral theory that says otherwise is too demanding; we should be leery of any moral theory or view that demands too much. Unfortunately, it looks like Singer’s argument may do that. We can explore this with a sorites paradox.

We should first introduce sorites paradoxes, which, like most philosophical ideas, sound more complicated than they are. Consider a heap of sand. Taking one grain of sand won't destroy the heap, and that's true of every individual grain. But if removing a single grain never destroys a heap, then we could remove grains one at a time, over and over, and still have a heap — even though we know that removing enough grains, one at a time, will eventually destroy it. We can formulate this paradox as follows:

(1) A pile of one trillion grains of sand is a heap.

(2) A single grain of sand isn’t a heap.

(3) Taking one single grain of sand won’t create/destroy a heap.

This is a paradox: a set of statements each of which seems right, but which cannot all be true together. If we took a single grain of sand from a heap over and over, then according to (3) we would never destroy the heap. But we intuitively know that isn't right: if we kept removing grains until a single grain remained, it wouldn't be a heap.
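The inductive structure can be made mechanical. Here is a toy sketch in Python (the starting number is scaled down from premise (1)'s trillion so the loop actually finishes):

```python
grains = 1_000_000  # a stand-in for premise (1)'s trillion-grain heap
is_heap = True      # premise (1): we start with a heap

# Premise (3), applied over and over: removing one grain never
# destroys a heap, so nothing ever sets is_heap to False.
while grains > 1:
    grains -= 1

# We end with a single grain, which premise (2) says is not a heap,
# yet no individual step ever flipped is_heap.
print(grains, is_heap)  # 1 True
```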

We can frame Singer’s argument as a sorites paradox:

(4) Saving an innocent person, with a modest donation, isn’t morally too demanding.

(5) There are millions of people we could save with a modest donation.

(6) A moral requirement to save everyone we can with a modest donation is too demanding.

Consider a defense of (4): a cup of coffee or a new pair of shoes doesn’t morally compare to the life of an innocent person; if we could save them, by not buying goodies, and instead donating the money, then we’re morally required to do that. This isn’t morally too demanding: it is as reasonable as saving a child drowning in a shallow pond. However, there are millions of poor folks who need saving, and could be saved by a few modest donations. And individually, these acts of sacrifice wouldn’t rise to the level of overdemandingness; in each case, we could argue the life of a child is morally more important than watching a football game buzzed.

However, if we apply this line of argument over and over, there will eventually come a point where we won't be able to watch a football game with a few beers, because doing so would be wrong: we could work overtime instead and donate that money to charity. This isn't to say Singer thinks we should never rest and recover, or earn money to pay our bills. We can still do those things, but only if they have comparable moral worth to the life we might otherwise save. And that looks like it demands too much of us; if a moral claim is overly demanding, we should be suspicious of it. This overdemandingness calls attention to an implicit assumption: that moral reasons to act trump other kinds of reasons — like, say, the value of enjoying a football game with a few beers. And while moral reasons should be weighty in our rational deliberation, it isn't obvious they override other kinds of reasons such that those reasons don't count.

How many poor folks are we morally required to save before it becomes too demanding? Most of us could, and likely should, do more to help the poor than we do, up to the point where it’s too demanding. But where exactly that point is located remains fuzzy.

What Is Voting?

close up photograph of male hand putting vote into a ballot box

On August 8th, the National Speech and Debate Association released the new high school Lincoln-Douglas debate topic for the months of September and October. The new topic is:

“Resolved: In a democracy, voting ought to be compulsory.”

One thing I’ve noticed is that whether one thinks voting should be compulsory often depends on what one understands voting to be.

To illustrate, consider the ‘expressive’ view of voting, according to which the reason people vote is not to change policy but to express their preferences (like cheering on one's sports team). Expressive voting is normally presented as a descriptive theory of voting; it explains why people vote despite Downs' paradox — given that there is no real chance your vote sways an election, doing almost anything else makes more sense.
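Downs' point is usually put in expected-value terms: voting pays, instrumentally, only if p·B exceeds C, where p is the chance your ballot is decisive, B the value to you of your side winning, and C the cost of voting. This formalization is standard in the literature rather than anything quoted above, and the numbers below are made up for illustration:

```python
p = 1e-7    # assumed chance of casting the decisive vote in a large election
B = 10_000  # assumed dollar value to you of your preferred side winning
C = 5       # assumed cost of voting: time, travel, hassle

expected_value = p * B - C
print(round(expected_value, 3))  # -4.999: instrumentally, voting doesn't pay
```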

But suppose we accepted expressive voting as a prescriptive theory: suppose we thought the point of having elections is to give people the chance to express themselves. In that case you would probably reject compulsory voting.

Much of the expressive value of a vote comes from the fact that people choose to speak. As Ben Saunders puts it, "if we grant that there is expressive value in [people] voting . . . it is presumably dependent upon their proper motivations and lost if they vote for the wrong reasons." If we thought the reason for having the vote is to allow people to express themselves, that would inform our voting laws. It would speak against compulsory voting, and might even speak in favor of other voting reforms.

Expressive voting is a particularly easy-to-follow example, but it’s not a plausible candidate for why we hold elections in the first place. So why should we use majoritarian systems to select policies and leaders?

No doubt there are lots of answers that can be given, but here I want to distinguish two common explanations. First, you might think that voting is a fundamentally competitive activity by which we fairly resolve conflicted interests. Second, you might think that voting is a way by which we incorporate citizens into the legislative process.

On the first view, democracy is like a market: it allows us to make decisions otherwise too complex for any one person to make. An example may help. Suppose a group of friends and I are deciding where we want to go to dinner (indulging the fantastic daydream in which you can not only be with friends but also go out to eat again). Some want Mexican food and some want Chinese, but we would all rather go to either place together than split into two groups. Given that there is a conflict between our preferences, we need some procedure to resolve it, and one plausible candidate is that we should go where the majority of people want to go. After all, by going where the majority wants, we will treat one another fairly, because we will weigh each person's preference equally. (Of course, we might choose not to go where the majority wants every time; perhaps we go where the majority wants a majority of the time and where the minority wants a minority of the time, and that too might be fair.) Thus, we decide to vote. By voting, we determine where to get dinner.

But how should I cast my vote? Should I vote for the place I personally prefer, or for the place I think the majority wants to go?

Given my commitment to fairness, I really do think we should go wherever the majority wants. Thus, you might think, I should vote for where I think the majority prefers to go. Except, of course, that ruins the election. Suppose seven out of twelve people want Mexican, but the people who want Chinese have been more vocal, and so most of us think the majority prefers Chinese. If we all vote our personal preference, we will reach the answer we all want; if we all vote for what we think the right answer is, we will end up making the wrong choice.
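To see the aggregation point concretely, here is a minimal sketch in Python of the hypothetical dinner vote (the numbers mirror the example above, and the shared misperception is simply stipulated):

```python
# 7 of 12 friends privately prefer Mexican, but the vocal minority has
# convinced everyone that the majority wants Chinese.
true_preferences = ["Mexican"] * 7 + ["Chinese"] * 5
perceived_majority = "Chinese"

def tally(votes):
    # Return the option with the most votes.
    return max(set(votes), key=votes.count)

# Everyone votes their own preference: the vote aggregates dispersed
# information and recovers the real majority choice.
print(tally(true_preferences))           # Mexican

# Everyone votes for what they think the majority wants: the election
# merely echoes the shared misperception.
print(tally([perceived_majority] * 12))  # Chinese
```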

It is not selfish to vote for the restaurant you would personally prefer to eat at. Why not? Because you are not actually saying that is where we should go. In participating in the vote you are saying we should go where the majority picks; in voting you are simply contributing your little bit of information to the collective knowledge pool. Even though my actual deep preference is to go where the majority would prefer, I should not try to vote for where the majority prefers, because the whole point is to use the vote to reach that decision (saving us from needing to figure it out ourselves).

This is of course a common view of the role of voting in a democracy. Voting is a way to synthesize preferences across large numbers of people.

Just as free markets allow us to reach efficient arrangements that no individual person could reason their way to, so you might think that well-designed electoral systems create a disaggregated decision procedure in which each person's pursuit of private interests secures the public good more effectively than any alternative.

(My favorite vision of this view of democracy is articulated in Chapter 2 of Posner and Weyl’s absolutely fabulous book Radical Markets.)

In contrast, there is a second view according to which democracy does not integrate our private preferences into some efficient response to the public good. Rather, democracy itself provides an opportunity for everyone to partially legislate. By voting we act as a citizen, we enter into the general will, and in the process we come to share in the nation’s self-determination and sovereignty.

Viewed this way, voting is actually somewhat like serving on a jury. As District Judge Young argued in 1989, the jury plays a central role in our system of justice because it ties the deliverances of judges to the judicial standards of the citizenry. The jury acts as a representative of the population, and thus embodies the democratic idea that justice should ultimately be placed in the hands of the governed.

Like jury duty, we might think that in voting we really are, in a small way, acting as legislators. We are not registering our preference and then allowing the collective structure to issue its judgment; rather, we are each making our own best judgment and deferring to the general consensus when others disagree.

While talking with debaters and reading the academic literature on compulsory voting, I eventually realized that people's background assumptions about what voting is influenced their thoughts on whether it should be compulsory. If I choose not to register my vote for where to go to dinner, I am thereby strengthening the vote of everyone else; I'm making their preference carry a little more weight. In contrast, if I regard voting as playing my legislative role as a citizen, then in declining to vote I'm actually hoisting a greater responsibility onto others. I'm failing to provide my own counsel to temper theirs, and so increasing the deliberative burden on them to get the answer right. What you understand voting to be can change, in fairly profound ways, whether you're inclined toward compulsory voting (for more arguments on the subject see the definitive introduction, Brennan and Hill's Compulsory Voting: For and Against).

Yet, despite these assumptions being operative, very few people noticed the underlying disagreement about what a vote is. I myself had firm beliefs on lots of questions about voting, but have only now realized I don't have a very clear sense of what I understand democratic voting to be.

So how should I understand the vote? I am unsure. If we just cared about producing the greatest social good, I expect something like Posner and Weyl's quadratic voting system really would be best — it would use market principles and the wisdom of the crowd to disaggregate decision-making, allowing the system as a whole to consider more information than individual voters can consider themselves. The election is thus far more than the sum of its parts.
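For readers unfamiliar with the proposal, quadratic voting's core rule is that casting v votes on an issue costs v² credits, so expressing intensity of preference gets progressively more expensive. The budget and numbers below are illustrative assumptions; see Posner and Weyl for the mechanism in full:

```python
def cost(votes: int) -> int:
    # Quadratic voting: casting n votes costs n squared credits.
    return votes ** 2

budget = 100  # assumed credit budget per voter

# With 100 credits you can pour everything into one issue you care
# deeply about, or spread influence across several issues.
print(cost(10))           # 100: ten votes on a single issue
print(cost(6) + cost(8))  # 100: six votes on one issue, eight on another
```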

Does this mean I should vote in my self-interest rather than the national interest (just as I should vote for where I personally want to get dinner)? Probably not. Perhaps we would make better decisions if everyone voted that way. But most people don't vote that way. Instead they vote for the candidate or policy they think is best for the nation as a whole. People both self-report voting in what they regard as the nation's interest, and voting patterns suggest people don't just vote in their self-interest (Brennan and Hill 39-40 provide an overview of the literature). Given that that is how others vote, it would seem unfair to vote in your own self-interest (even if we could design an electoral system in which voting personal preference is neither unfair nor selfish).

And indeed, on the whole, perhaps a system where we all vote for what we think is in the national interest is better. While we are probably better at figuring out what is in our own self-interest (and so better served by external procedures that synthesize those judgments), perhaps the real value of democracy lies not in making the best decision but in allowing each citizen to share in the sovereign act of legislation. Perhaps it is better that the ruled are also, in some sense, the rulers, rather than outsourcing sovereignty to the opaque judgments of a market system.

Voluntourism and the Problem with Good Intentions

photograph of African children posing with white volunteers

In the past few months alone, the global tourism industry has lost a staggering 320 billion dollars, making it just one of many industries to suffer from the pandemic. Most nations are no longer accepting the few American tourists still interested in international travel, rendering American passports “useless” in the words of one critic. Global tourism will have to evolve in the coming years to address health concerns and growing economic disparity, which is why many within the industry see this state of uncertainty as the perfect moment for reform. In particular, some are questioning the future of one of the most contentious sectors of modern global tourism, the “voluntourism” industry.

Voluntourism, a word usually used with derision, combines “volunteer” and “tourism” to describe privileged travelers who visit the so-called Third World and derive personal fulfillment from short-term volunteer work. College students taking a gap year, missionaries, and well-to-do middle-aged couples travel across the globe to build houses, schools, and orphanages for impoverished natives, either for the sake of pleasure or to pad out their CV. A 2008 study estimated that over 1.6 million people incorporate volunteer work into their vacations every year. The popularity of this practice explains why it’s so financially lucrative for religious organizations and charities; the same study found that the voluntourism industry generates nearly 2 billion dollars annually, making up a sizable chunk of overall global tourism.

But as many have pointed out, voluntourism often creates more problems than it solves. In an article for The Guardian, Tina Rosenberg explains how many nations have continued to rely on orphanages (despite their proven inefficiency when compared to foster care systems) simply because there is money to be made off of well-intentioned tourists who wish to volunteer in them. Furthermore, the majority of voluntourists are completely unqualified to perform construction work or care for orphaned children, which ends up creating more unpaid work for locals. Rosenberg also explains how local economies suffer from this practice:

 “Many organisations offer volunteers the chance to dig wells, build schools and do other construction projects in poor villages. It’s easy to understand why it’s done this way: if a charity hired locals for its unskilled work, it would be spending money. If it uses volunteers who pay to be there, it’s raising money. But the last thing a Guatemalan highland village needs is imported unskilled labour. People are desperate for jobs. Public works serve the community better and last longer when locals do them. Besides, long-term change happens when people can solve their own problems, rather than having things done for them.”

Overall, it’s often more expensive to fly out Western tourists and provide them with an “authentic” and emotionally charged experience than it would be to pay local laborers.

And yet, the emotional needs of tourists tend to come first. As Rafia Zakaria points out, "deliberate embrace of poverty and its discomforts signals superiority of character" for well-off voluntourists, who fondly look back on the sweltering heat and squalid living conditions they endured for the sake of helping others. This embrace of discomfort partly stems from white guilt, though some have pointed out that wealthy non-Western countries also participate in voluntourism. It has been labeled a new iteration of colonialism, perhaps with good reason. Colonial subjects have historically been positioned as an abject Other in need of Western paternalism, a dynamic that is reproduced in the modern voluntourism industry. As Cori Jakubiak points out in The Romance of Crossing Borders, voluntourism is essentially an attempt at buying emotional intimacy, which often overshadows the structural inequality that makes such intimacy possible. While voluntourism ostensibly taps into our most charitable impulses, in many ways it can be viewed as a moral deflection. Zakaria notes that

“Typically other people’s problems seem simpler, uncomplicated and easier to solve than those of one’s own society. In this context, the decontextualized hunger and homelessness in Haiti, Cambodia or Vietnam is an easy moral choice. Unlike the problems of other societies, the failing inner city schools in Chicago or the haplessness of those living on the fringes in Detroit is connected to larger political narratives. In simple terms, the lack of knowledge of other cultures makes them easier to help.”

At the same time, it seems wrong to completely reject qualified and genuinely well-intentioned travelers who wish to alleviate human suffering. Good intentions may not redeem the harm caused by the industry, but they also shouldn't be dismissed as mere vestiges of colonialism. If properly educated about structural inequality, many voluntourists could actually help the communities they visit instead of perpetuating pre-existing problems. Furthermore, one could argue that most forms of volunteer work, whether domestic or abroad, contain some of the worst aspects of voluntourism. Wealthy Americans volunteer to work with the poor and needy at home, and however good their intentions, they are perfectly capable of reproducing structures of power and privilege within those interactions.

Some see COVID as an opportunity to reform the voluntourism industry and weed out the useless or corrupt organizations, as a recent report for the World Economic Forum proposes. The most important thing moving forward is that we re-assess the needs of disenfranchised communities and adjust the practices of NGOs accordingly. There is a difference between building a school house and reforming educational policies, a fact which all charitable tourists should keep in mind before going abroad.

Should College Football Be Canceled?

photograph of footbal next to the 50 yard line

On August 11, the Big Ten conference announced it would be postponing its fall sports season to Spring 2021. This decision shocked many, as the Big Ten was the first Division I college football conference to call off its fall season. After the announcement, Vice President Mike Pence took to Twitter to voice his disapproval and to declare that "America needs college football," and President Donald Trump simply tweeted, "Play College Football!" Trump and Pence weren't the only politicians to express this belief, though they are certainly the highest-ranking members of government to take a moral position in favor of continuing the college sports season amidst the pandemic.

Questions surrounding America’s 2020 college football season make up a few of the many ethical dilemmas surrounding higher education during the pandemic. Canceling this season means further economic loss and the potential suppression of a labor movement, while playing ball could have dire consequences for the safety of players and associated colleges. Navigating this dilemma requires asking several questions about both the economic importance and cultural significance of college football.

Do schools have an ethical duty to cancel their football season? What values do athletic programs hold in college education? And what is at stake for the players, the schools, and Americans at large?

Is a sports season, in and of itself, dangerous to attempt during the pandemic? The official CDC guidance on playing sports advises that participants should wear masks, keep a 6-foot distance, and bring their own equipment. It also ranks sports activities from low to high risk, with the lowest risk being skill-building drills at home and the highest being competition against those from different areas. While the CDC does not necessarily advocate against the continuation of athletic programs during the pandemic, can the same be said for other medical professionals? After VP Mike Pence's tweet, several prominent health professionals "clapped back" on Twitter, pushing back against the supposed need for football and suggesting that continuing fall sports should be among the lowest priorities during the pandemic. Some medical professionals have even ranked football as one of the most dangerous sports for COVID-19 transmission.

On the other side of the ledger, cancelling football season has serious economic consequences for colleges. It is estimated that at least $4 billion is at stake if college football is cancelled. While losing one year's worth of sports revenue might not seem like a big deal, many colleges rely on athletic revenue to cover the costs of student scholarships and coaching contracts. In fact, a 2018 NCAA study found that, overall, Division I athletic programs were operating at a deficit, and their revenues were helping them scrape by. Without this season's revenues, thousands of professors and staff members could face job loss, as colleges would lack the money to cover their athletic investments. Small businesses that see large profits from the influx of fans during football season face a huge decrease in revenue. Even sports bars and restaurants, which draw in customers by airing current games, face significant economic losses.

Additionally, college sports serve as a primary form of entertainment for millions of people. In 2019 alone, over 47 million spectators attended college football games, and an average of over 7 million people watched games on TV. College football clearly holds large cultural value in American society. At a time that is already financially, emotionally, and mentally troubling, losing one's hobby, or one's ties to a community of like-minded people, might worsen the growing mental health crisis spurred by the pandemic.

The question of whether or not the college football season should continue is further complicated by the existing ethical debates within the sport itself. NCAA football teams have a wide-ranging history of corruption, from academic violations to embezzlement schemes. Even more disturbing are the several sexual abuse scandals that have rocked major college football teams in recent years, involving both athletes and athletic officials. The clear racial divide between the makeup of players and that of athletic officials has stirred debates about the haunting similarities between college football and slavery.

Over the past decade, there has been a growing movement in favor of instituting labor rights for college athletes. Several lawsuits against the NCAA, primarily on behalf of football players, have argued that the widespread lack of compensation violates labor laws. Movements to unionize college football have become even more common during COVID, with some arguing that recent league debates about canceling the football season are more about controlling players' ability to organize than about players' health and safety. In an op-ed in The Guardian, Johanna Mellis, Derek Silva, and Nathan Kalman-Lamb argue that the decision to cancel the college football season was motivated by fear of the growing movement demanding widespread reform in the NCAA. They assert that if colleges really cared about the health and safety of their players, they would not have "compelled thousands of players back on to campus for workouts over the spring and summer, exposing them to the threat of Covid-19." The argument is especially strong when one considers that a growing movement of athletes, using the hashtags #WeAreUnited and #WeWantToPlay, has been threatening to refuse to play without the ability to unionize.

Despite the potentially ulterior motives for cancelling the college football season, it still might be the most ethical decision. Nearly a dozen college football players have already suffered life-threatening conditions as a result of the spread of COVID. The continuation of a fall sports season will endanger athletes, athletic officials, spectators, and non-athlete students alike. Even if in-person spectators are prohibited, the continuation of fall sports requires cross-state team competition, which the CDC ranks as the highest-risk sports activity. Several outbreaks have already occurred during fall training at colleges across the nation. Outbreaks on teams have the potential to harm not only athletes, but also the wider student bodies of the universities they attend.

While two Division I conferences have canceled their seasons, others appear unwavering in their desire to play football. Fortunately, the NCAA has developed a set of regulations aimed at protecting players from retaliation if they choose not to play. With human lives, the economic survival of colleges, and a labor organizing movement at stake, America's 2020 college football season is set to be the most ethically confounding in history.

Cottagecore and the Ethics of Retreat

photograph of a Texas homestead at dusk surrounded by wildflowers

Over the past few months, many Americans have turned to bread-making to stave off the boredom and helplessness of quarantine. The craze for bread-making eventually reached a point where flour mills were unable to keep up with the sharp increase in demand for their product, leaving grocery store shelves barren. But bread-making is just one strand of a much more intricate lifestyle movement which began taking shape about a year before coronavirus.

“Cottagecore,” as the trend is called, is partly an aesthetic and partly a set of ideals for “the good life.” The word “cottage” references country living as embodied by the self-sufficient and aesthetically pleasing cottage, and the slang suffix “core” indicates a genre or category. Much in the same way that people are dedicated to certain genres of music, online communities have sprung up around certain aesthetics; one popular example is the “academia” aesthetic, which celebrates elements from 19th-century fashion and architecture that have become associated with higher education (think old leather books, marble busts, and tweed blazers).

Cottagecore also has an online community, and a glance through the cottagecore tag on Tumblr or Pinterest will give you a good idea of its aesthetic interests: frolicking goats, woven baskets bursting with ripe tomatoes, rough-hewn wooden furniture, and of course, loaves of bread. Everything is bathed in a golden haze of sunlight, or at least in a sepia-toned filter. But cottagecore also encompasses fashion, art, and even entertainment. For example, a budding genre of video games has both capitalized on and helped further cement the aesthetic and philosophy of cottagecore. In smash-hits like Stardew Valley, Harvest Moon, and to a lesser extent, the Animal Crossing series, the player leaves behind a hollow life in the city to start over in a close-knit rural community. Gameplay elements include conversing with and sometimes even romancing the locals, operating a self-sufficient farm (often through a variety of cottage-industries, such as farming, fishing, baking, and raising animals), and basking in the pristine beauty of nature. There isn’t a way to win or lose these games, or a villain the player must defeat in order to advance the plot. The closest thing to an antagonist in Stardew Valley is a Walmart-esque corporation that the player is gently encouraged to drive out of business in favor of a mom-and-pop general store.

As the Stardew Valley example shows, cottagecore is at least partly rooted in anti-capitalist sentiment. It presents an escape from the drudgery of industrial urban life, a sort of reverse rural-flight of the imagination. As Shania O’Brien optimistically puts it,

“This budding aesthetic movement paints the picture of an idyllic landscape and prioritises the simple pleasures in one’s life. Cottagecore turns its nose up at sixteen-hour workdays, at the fast-paced anxieties of late-stage capitalism, at toxic masculinity. It rejects the connections we make under these systems, labelling them inauthentic facsimiles of genuine relationships.”

At first glance, it’s difficult to discern what might be morally objectionable about a pretty moodboard on Pinterest, or what is morally objectionable about the philosophy behind that moodboard. However, indigenous people have pointed out that cottagecore is overwhelmingly white, and often unknowingly perpetuates settler colonialism. Specifically, critics argue that the aesthetic idealizes the American homestead as a beacon of self-sufficiency rather than the legacy of brutal Westward expansion. It’s worth interrogating why so many are tempted to romanticize rural life, and whether or not retreating from the problems of capitalism is worthwhile or desirable.

While cottagecore is certainly a trend of the moment, idealizing the countryside has been a common practice throughout human history. In response to the industrial revolution, many 19th century artists, like Jean-François Millet, began taking peasant life seriously as an artistic subject. The genre of “rural naturalism,” which depicted an ideal version of farmers and laborers going about their daily lives, reflected the artists’ sense of alienation from nature as well as a growing urban market for sentimental depictions of peasant life. The late 19th-century Arts and Crafts movement, which was spearheaded by socialist artists, also celebrated individual craftsmanship over the products churned out by urban factories.

Rural idealism in art is perhaps best embodied through Thomas Cole’s five-painting series titled The Course of Empire, completed in 1836. The first painting depicts an untouched and frightening wilderness, which is transformed by human cultivation into a tranquil rural paradise in the second painting. This second painting is clearly meant to represent the ideal state of mankind, which is a sharp contrast to the next two paintings in the series. These depict a decadent and immoral urban environment, as well as its apocalyptic destruction. The final painting shows the ruins of the now-unpopulated city, suggesting that desolation is inevitable when humanity leaves rural Arcadia behind. Although Cole’s series depicts a pseudo-Roman city, it was created explicitly to critique 19th-century American capitalism, and clearly reflects growing fears about urbanization and the decadence of empire.

America has always had a unique and explicitly political version of rural idealism. Thomas Jefferson, a foundational proponent of agrarianism, wrote in 1785 that “Cultivators of the earth are the most valuable citizens. They are the most vigorous, the most independent, the most virtuous, and they are tied to their country & wedded to its liberty and interests by the most lasting bands.” This statement was echoed by prominent American naturalist A.J. Downing, who succinctly explained in A Treatise on the Theory and Practice of Landscape Gardening in 1841 that “There is a moral influence in a country home.” Living in the country simply made you a better person, because you stood at a remove from the vulgar commercialism and social mixing of the city.

As Jefferson saw it, rural life isn’t isolated from the nation, but integral to it. Country, after all, can mean either a tract of rural land or a state, and thinkers like Jefferson sought to collapse the distinction between the two. In that sense, one could argue that modern rural escapism isn’t so much about retreating from reality as it is about taking part more fully in political society. However, this example also reveals a paradox of self-sufficient rural living. Jefferson’s farmer is both self-sufficient and tied down by civic responsibilities, both in society and outside of it. Also, of course, Jefferson has conveniently ignored the existence of the slaves who did the actual planting and tilling on his plantations, and the indigenous populations who lived there before him. Much like European artists of the 19th century, Jefferson is clearly promoting a certain kind of rural living as ideal with a self-serving political agenda. For artists and politicians alike, rural people were considered the “true” citizens of the state, the stable and honest antithesis to the cultural confusion of modernity.

20th-century escapees to the countryside certainly saw their rural retreat as political. The 1960s saw droves of educated middle-class city-dwellers retreating to rural communes with hopes of creating socialist utopias, as Jenny Odell touches on in her book How to Do Nothing. However, Odell found that the communes of the 1960s “exemplify the problems with imagined escape from the media and effects of capitalist society, including the role of privilege.” She explains that these communes, though ostensibly committed to egalitarianism and the rejection of privilege, often replicated the very structures they sought to escape. Women ended up doing all the dishes and menial housework, and because communes were primarily white, very few people of color were able to take part in the utopian project. Once again, it becomes apparent how difficult it is to truly escape society by returning to “the simpler things.” Furthermore, only those with privilege are able to enact their fantasies of escape. One famous historical example is Marie Antoinette’s Hameau de la Reine, a highly aestheticized mock peasant village built on the grounds of Versailles where the queen (who could be considered an early proponent of cottagecore) would pretend to milk cows and grow cabbages. Evidently, rural escapism almost always involves some element of privilege. Ironically enough, shots of the queen running through meadows and playing shepherdess in her hamlet, as depicted in Sofia Coppola’s 2006 film Marie Antoinette, are very popular in the cottagecore community.

The cottagecore movement borrows a vague moral sensibility from 19th-century agrarianism and marries it with socialist idealism, embodied especially by the communes of the 1960s. Much like those communes, cottagecore isn’t especially interested in those who already live in rural communities, unless they’re romanceable options in a video game. The “noble peasant,” an invention that was problematic to begin with, has disappeared from this configuration, because cottagecore is more about the desires of citydwellers than the needs of the communities they yearn to join. Cottagecore is political in that it is a response to alienation, an attempt at mapping out possibilities for a life without capitalism. But it still has a ways to go before it can truly live up to O’Brien’s description.

It’s unlikely that the damage of colonialism will be significantly worsened by Pinterest mood boards and jars of sourdough starter. There is also no evidence that young people are actually acting out their fantasies of rural retreat, so cottagecore is clearly more of a sensibility than an actual spur to change. As Odell argues, the impulse to mentally retreat from capitalism is both admirable and deserving of skepticism. As she says, “Some hybrid reaction is needed. We have to be able to do both: to contemplate and participate, to leave and always come back, where we are needed . . . To stand apart is to take the view of the outsider without leaving, always orientated toward what it is you would have left.” Much in the same vein, O’Brien suggests that “You can aesthetically participate in cottagecore, but more importantly, you can also incorporate its sentiment into your praxis by engaging in mutual aid, in environmental politics, in feminist activism. It is pointless to dream about wildflowers and serenity when you are doing nothing to bring that world closer.” Escapist fantasies can be an impetus to change, so long as they allow us to find new and more authentic forms of connection with others.

The “Wall of Moms” and Manipulating Implicit Bias

photograph of "Wall of Moms" protesting in Portland

Since Officer Chauvin murdered George Floyd, cities across the US and the world have protested the ongoing murder of Black men and women, in public and without consequence, by the police, and even by neighbors. Protesters have been met with further violence and escalation by responding police officers, followed by national reserve units, and most recently the deployment of unmarked federal agents to multiple cities.

In the media, the characterization of these protests has been shifting since their onset. Reports of rioting, property damage, and looting contrasted with messages stressing the significance of the human lives taken by white supremacist violence and the damage done to the Black community over time. While some news stories highlighted the rowdiness of protests after dark (itself a response to police driving vehicles into crowds, tear-gassing groups, and shooting rubber bullets), others focused on the peaceful gatherings with speeches, songs, and non-escalating marches.

On social media, advice regarding how to stay safe in the midst of these large gatherings during COVID and in the face of military escalation proliferated. From wearing masks, to how to contact a lawyer, to what to do if teargassed, messages about how to stand up for Black Lives Matter were readily available. A common thread among these topics of advice was what to do if you are white and out supporting BLM.

The advice for white protesters frequently included the importance of reminding oneself that the protests center on experiences that are endemic not to the white population but to the non-white. This means that while numbers speak to support and are important, it is in a supportive rather than a directive role that white protesters may be strongest. Further, as members of the protest who are less susceptible to violence and physical threat, white individuals can help others who are more at risk. Videos began to show white protesters putting their bodies between Black speakers, demonstrators, or groups of protesters and police officers in riot gear.

The white ally had a clear space in the media: protector of protesters.

Late on the night of Tuesday, July 21st, a group of white women joined arms and formed a wall between police in riot gear and protesters in Portland. Calling themselves a “Wall of Moms,” they shouted at the most recent show of militarized force by the police, using phrases such as, “You wouldn’t shoot your mother!” They were teargassed, and absolutely shocked at such treatment by “their” police.

The white individuals placing themselves between militarized police and Black protesters, including the Wall of Moms, are using the biases of the police to lessen the likelihood that violence will break out, counting on the police’s disinclination to harm white bodies compared to Black bodies. The effectiveness of this strategy relies on the notion that the police behave differently when faced with white members of society than with non-white members, and this has been shown over and over again, both in protests and in the data on police brutality.

When faced with armed and yelling white people outside state capitols fighting public health policies, police are quite capable of de-escalation. However, people marching, unarmed, to protest and bring awareness to racial injustices so pernicious that they have led to systemic murder prompt escalation severe enough to draw extreme concern from the UN Human Rights Council. In fact, in many cases BLM protesters had to de-escalate the police, rather than the other way around.

The Wall of Moms plays on a variety of police and societal biases, which they explicitly invoked in their explanations of their strategies. As with all cases of the sort of white support mentioned above, using the privilege of one’s skin to attempt to change the police’s behavior means manipulating the perceived biases of the police. The Wall of Moms evokes race, class, and gender in order to be effective.

The middle-class white women who conform to the role of “mother” are attempting to draw a contrast between themselves and the protesters behind them. Many used rhetoric involving “protecting the children,” labeling the protesters in Portland as youths in need of mothering. Further, the call to “bring out the moms” itself reflects racial bias, including a de-feminization of Black women and an erasure of the Black individuals who inhabit the same roles as white people. It neglects the fact that many Black mothers have played active roles in past BLM movements and have been a part of the 2020 protests from the beginning.

The Wall of Moms went a step further than the other white bodies placed between the protesters and the violent police. They created their own message and, in the end, their own 501(c)(3). They co-opted the idea that moms were a new and necessary part of the protests. They reinforced gender norms and the role of “Mother.” A “Wall of Dads” joined them, armed with leaf blowers. Invoking white middle-class “normalcy” to play on the police’s preconceptions about whom not to harm went beyond working with the underlying biases that complicate de-escalation; it underscored the roles of race and gender as real divides in our society.

In the case of the Wall of Moms, the privileges that put them in a position to potentially de-escalate the police’s racist violence also manifest in privileged media coverage. The way the Wall of Moms embraces the traditional picture of what it means to be a “normal” woman in our society plays on the gendered biases involved in hierarchies of privilege, and this is part of what allows them to take over the narrative of the protests in the media so easily. White women occupy roles seen as calling out for protection, and yet here they were harmed. This narrative takes over the story and eclipses the 125 cases of police violence against protesters before the Wall of Moms ever appeared on the scene.

The next day and into the week, media coverage of their courage, and outrage at their treatment, took over. In a piece by The Washington Post, the courage of the Wall of Moms was lauded in heroic terms:

“In front of the federal courthouse, federal agents in tactical gear used batons to push back the moms in bike helmets. Dozens were tear-gassed. Some were hit with less-lethal bullets fired into the crowd.

Still, they stayed.”

CNN reported the reaction of one participant, in disbelief at receiving the same treatment that protesters had reported for weeks, this time framed in a decidedly positive light:

“The Feds came out of the building, they walked slowly, assembled themselves and started shooting [teargas] I couldn’t believe it was happening. Traumatic doesn’t even begin to describe it… Getting shot and gassed and vomiting all over myself and not being able to see, something clicked in my brain and I was like how could we collectively as mothers let our kids do this? I got home and showered and I told my husband we were going out the next night.”

A “Today” article reporting on events opened with, “The group, which includes hundreds of mothers, has said the protests are peaceful, but the police have been violent.” Such reports highlight the testimony of a group of white women after weeks of similar reporting by Black protesters that had not been compelling enough to quell dismissal or criticism of the protests.

The move from supportive role to main-story is not a novel one for white allies, especially for white women.

If we understand these behaviors in terms of implicit biases, they are relatively difficult to fit into our theoretical frameworks of moral evaluation. The biases include:

  • The police’s racist biases,
  • the “white ally” or savior’s explicit manipulation of the racist biases,
  • the Wall of Moms bringing in the implicit biases of motherhood and traditional gender roles that intersect with the racist stereotypes that don’t fit these roles, and
  • the media/audience biases that allow the story to become one not of the strategic manipulation of biases but of the reification of the roles the Wall of Moms invokes.

These implicit biases pose problems for assigning moral responsibility. When individuals endorse their attitudes and the behaviors that result from them, it is easier to hold them fully accountable for both.

In the case where I think “Rich people are smart” and agree with the view that our society is structured as a meritocracy, it may be a simple matter to hold me responsible for the behavior that results from this perspective. The associations in our society that cast the behaviors of wealthier people in a more positive light may very well be influencing my belief, but my explicit endorsement plays a role in how we assess my behavior. If, for instance, I negatively judge and avoid individuals who have features I associate with less affluent groups, the fact that I stand behind a belief system that informs this behavior suggests that I am knowingly complicit. The harm I may cause to individuals is attributable to me and my beliefs, and therefore morally evaluating my behavior is relatively straightforward.

In contrast, say that while I have internalized the notion that we live in a meritocracy, and therefore rich people have in some way deserved their place in our economic and social system, I don’t actively or consciously endorse the idea that they are, in fact, smarter than those in other economic strata. These notions may come out in my behaviors – judging and avoiding personality characteristics or features associated with the less affluent, voting for policies that punish the poor or support the rich, etc. In this case, I may cause harm, but due to beliefs and attitudes that I do not explicitly endorse. They are attributable to me in a less clear or direct way: they are part of my motivational set, but wouldn’t show up in my explicit deliberation, narrative, or defenses for my behavior. This makes the behavior (and harm) resulting from the implicit biases more difficult to evaluate, and more difficult to alter in the long run.

In ethics, harm-based views have an easier time dealing with the distinction between implicit and explicit attitudes, because the important part of our behavior is the result: if you cause harm, that is the focus, and what we should hold you accountable for. Views that focus on intent, or the quality of the will behind the actions, have a more difficult time distinguishing what moral evaluation we should assign behaviors that result from implicit attitudes.

In the case of the police’s racist biases, this leads to the systemic murder and brutalization of non-white, and especially Black, members of our communities. It leads to dramatic differences in responses to groups of people protesting, and to a culture of terror inflicted on non-white spaces.

For the “white ally,” these biases can be manipulated to produce positive results: avoiding harm and supporting movements by making space for messages and impact to move forward without giving the police’s biases free rein. However, the performance can also erode these effects and do harm by perpetuating the “white savior” narrative.

The Wall of Moms echoes this duality. While they might play a supportive role – making space for safe and impactful movements – they might also reinforce the stereotypes and biases they are attempting to play on.

The media, unfortunately, reflects and amplifies the societal biases and stereotypes that make it less likely for white people to be subjected to violence. Protests for racial justice are more likely to be met with suspicion and violence than protests in support of white interests. The media picks up on the interests of its average viewer: as the Wall of Moms members put it, “normal” in age, class, skin tone, and gender.

A harm-based view can account for both the drawbacks and the advantages of the behaviors of white participants in the BLM protests. It can recognize why these behaviors are the topic of so many discussions and why they come up in such problematic ways. It can direct us in how, and on what, to refocus.

When the interaction of so many implicit biases is necessary to make sense of these tactics, evaluating the behavior morally at the individual level strains our models of moral evaluation. The individuals and groups involved in these behaviors would likely deny or fail to endorse the underlying attitudes and bases for their behaviors. The police would deny that their behaviors are rooted in valuing white lives over non-white lives, and the Wall of Moms would likely deny their reification of the interaction between race and gender roles, and fail to acknowledge their role in taking over the message with their privilege.

In important ways, the biases of both the police and the white allies reflect the biases of societal privilege back to each other and to society at large. The behavior of the Wall of Moms and the other white actors discussed here wouldn’t make sense as tactics without the racism inherent in our society, whether implicitly or explicitly present in police officers or in the systems of policing put in place by our communities. This makes bias, and the systems of privilege that cultivate it, the responsibility of the community, and especially of those with the privilege, to dismantle.

Clifford and the Coronavirus

photograph of empty ship helm

In 1877, mathematician and philosopher W.K. Clifford published a classic essay entitled “The Ethics of Belief.” In it, he asks us to consider a case involving a negligent shipowner:

“A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not overwell built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind, and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such ways he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.”

Clifford then asks: what should we think of the shipowner? The answer, he thinks, is obvious: he is responsible for the death of the passengers. This is because he had all the evidence before him that his ship needed repairs and really wasn’t very safe, and instead of forming his beliefs in accordance with the evidence, he stifled his doubts and believed what he wanted.

As far as philosophical thought experiments go, Clifford’s case is easy to imagine happening in real life. In fact, there have recently been a number of real-life nautical disasters, although instead of ships sinking, they involve coronavirus outbreaks, the most recent aboard a Norwegian cruise ship that reported a number of coronavirus cases among crew and passengers earlier in August. In response to the incident, the CEO of the company that owns the cruise line stated that “We have made mistakes” and that the outbreak was ultimately the product of a failure of several “internal procedures.” Indeed, the cruise line’s website states that it followed all the relevant guidelines from the Norwegian Institute of Public Health, implemented measures to encourage social distancing and good hygiene, and set sail at only 50% capacity. Despite these measures, though, people still got sick. This is not an isolated event: numerous businesses worldwide that have adhered to government and other reopening guidelines have seen spikes in cases of coronavirus among staff and customers.

In introducing his case, Clifford argued that what the shipowner did wrong was to form a belief on insufficient evidence. And it is easy enough to agree with Clifford’s diagnosis when it comes to belief-forming behavior as egregious as what he describes. Real-life cases, however, are typically more subtle. Cases like the Norwegian cruise ship and other businesses whose reopenings have gone badly should lead us to question how much evidence is good enough when it comes to making the decision to reopen one’s business, and whom we should find deserving of blame when things don’t work out.

To be fair, there are certainly differences between Clifford’s case and the case of the Norwegian cruise ship: there is no reason to think, for instance, that anyone in charge of the latter actively stifled doubts they knew to be significant. But there are also similarities, in that the evidence that cruise ships are generally not safe places to be right now is abundant and readily available. Even if one adheres to relevant health guidelines, we might wonder whether that is really good enough given what other evidence is available.

We might also wonder who is ultimately to blame. For instance, if the guidelines concerning the reopening of businesses provided by the relevant health agency turn out to be inadequate, perhaps the blame should fall on those in charge of the guidelines, and not on those who followed them. There have, after all, been a number of countries, Norway recently among them, that have reinstated stricter conditions on the operation of businesses in response to increases in new infections after initially relaxing them. When cases of coronavirus increase as a result of businesses being allowed to reopen, we might then put the blame on the government rather than on the business owners themselves.

Clifford also makes an additional, more controversial argument that he illustrates in a second example:

“Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot. When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out. The question of right or wrong has to do with the origin of his belief, not the matter of it; not what it was, but how he got it; not whether it turned out to be true or false, but whether he had a right to believe on such evidence as was before him.”

Using this second case, Clifford argues that whether things turn out okay really isn’t important for determining whether someone has done something wrong: even if everyone on the ship made it safely, the shipowner would still be guilty; he just got lucky that everyone survived. While we might think that Clifford is being harsh in his judgment, we might also wonder whether other businesses that have reopened early in the face of some evidence that doing so may still be dangerous should be considered blameworthy as well, regardless of the consequences.

The Melodrama of the United States Postal Service

photograph of rusty mail boxes in rural New Mexico

“Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds,” reads the motto of the United States Postal Service (USPS). But what neither acts of god nor nature can impede, politics can grind to a halt. The USPS has become another among many points of contention between Democratic politicians and the administration of President Donald Trump. But for an institution that predates the United States itself, political struggle is old hat.

The current issue facing the USPS is alleged by the Trump administration to be simply financial: it’s a poorly run business hemorrhaging money, according to new Postmaster General Louis DeJoy. Appointed in June 2020, DeJoy previously worked in the private sector as a management consultant for supply chain logistics. His appointment was criticized by Democratic lawmakers as blatantly partisan, as DeJoy is a significant political contributor to the Republican Party and has never previously worked in the USPS. (New Breed Logistics, of which DeJoy was the CEO at the time of its sale to XPO Logistics, did work extensively with the USPS during DeJoy’s tenure.) Moreover, the restructuring measures DeJoy has executed are being decried as a deliberate effort to suppress voting by sabotaging the viability of mail-in ballots ahead of the November 2020 presidential election. Limits on overtime pay, and a policy of holding mail that cannot be delivered within scheduled working hours until the next day, have led to massive delays to mail service in Philadelphia. Some residents have gone up to three weeks without receiving scheduled deliveries.

The USPS’s financial troubles are not a fiction, and they have been exacerbated by the COVID-19 pandemic. However, the postal woes are due largely to external factors rather than bad management. In fact, bipartisan legislation burdened the USPS with tremendous financial obligations. Section 803 of the Postal Accountability and Enhancement Act (PAEA) of 2006 requires of the USPS something required of no other institution in the US: it must fully fund the projected future healthcare expenses of retired postal workers each fiscal year. Even this extraordinary requirement wouldn’t have led to such deep financial problems but for the Great Recession of 2007 and the COVID-19 pandemic, both of which depressed the already shrinking volume of letters and first-class mail from which the USPS used to derive much of its revenue.

But the existence of financial problems, whatever their provenance, is a red herring according to Philip Rubio, historian and author of Undelivered: From the Great Postal Strike of 1970 to the Manufactured Crisis of the U.S. Postal Service. The USPS remains tremendously popular and effective. It delivers many times more items than FedEx and UPS combined, and delivers to many more places than any other service in the US. Rather than thinking of the USPS as a business, Rubio urges that it should be thought of as a public service. It is not meant to be profitable, competitive, or even self-sufficient, but to provide a necessary service and to serve as a tool of federal power. The monopoly the USPS has over the delivery of letters was given to it by Congress between 1845 and 1850, when it effectively legislated competing courier services out of existence. Without legislative intervention, the USPS would likely have been displaced long ago. But with the support of Congress, the postal service has largely grown and thrived.

This is the crux of the postal melodrama. It is another theater of the conflict between the self-styled champions of free markets and their sworn enemy: government services. Billionaire conservative political donor Charles Koch has spent years fomenting political action against the USPS. The politically libertarian Cato Institute routinely publishes paeans to postal privatization, calling for the invisible hand to unmake what the US Constitution wrought. As always, the argument is that stifling competition is bad: bad for the consumers who might see better service and lower prices; bad for entrepreneurs who might make their fortunes but for legislative meddling; and bad for the companies that “benefit” from regulation, because they stagnate. Proponents of privatization point to several European nations that have (partly) privatized their post, such as the Netherlands, Germany, and England. Germany’s Deutsche Post DHL Group has become a diversified, global company that continues to run a profit, even during the COVID-19 pandemic. Privatization has allowed cost-cutting steps for the postal service in the Netherlands, which stopped Monday delivery due to an overall decline in postal volume, though such changes still have to be approved by the Dutch parliament. Why must such changes be ratified by the legislature if the post is private? Because these companies are still mandated by law to provide universal letter courier services.

Stressing the universal service mandate is the counterargument made by opponents of privatization when they claim the USPS is a public service rather than a business. Without mail delivery, many people wouldn’t get their paychecks, medicine, tax forms, family newsletters, and so on. While a private company could provide these services to those willing to pay fair market rates, the question is whether the market should be allowed to dictate the price and availability of such services. Others argue that replacing the trusty old post office with a soulless corporate delivery and logistics firm would eliminate the community-binding role that local post offices play in small and rural communities.

But even necessary services, like utilities, are often provided by private companies, both in the United States and abroad. Private companies that provide gas, electricity, and telecommunications services are subject to governmental oversight but still run in order to turn a profit for their owners or shareholders. Both private and public models can work, so the issue is ultimately one of principle. The question to ask is: which services do we as a society want to allow to be governed by a principle of profit, and are there any services that it is immoral, or just unwise, to allow to be so governed? Libertarian-minded people will argue, as they are wont to do, that any restriction on people entering freely into contracts with each other is morally and politically destructive. Others will counter that this is a nice, abstract fantasy that doesn’t capture the real relations of political and economic power among individuals enmeshed in historic systems of oppression.

Even before the COVID-19 pandemic and the political tug-of-war in the US over mail-in voting, the USPS was a political target for free-market advocates. The timing of the restructuring of the USPS and the subsequent delivery slowdowns strikes many seeking President Trump’s ouster as suspicious: it coincides with his unfounded, but frequently stated, concerns about voter fraud and his suggestion that the November 2020 election be delayed. We should all watch intently the continuing saga of the USPS.

To Wear a Mask or Not During the COVID-19 Pandemic

photograph of groups of people walking on busy street wearing protective masks

The COVID-19 pandemic is a worldwide phenomenon that has disrupted people’s lives and the economy. As of this writing, the United States leads the world in confirmed COVID cases, has the largest number of confirmed deaths, and ranks eighth in deaths per capita due to the virus. A number of factors might explain why the numbers are so high: the United States’ failed leadership in tackling the virus back in December and January, the government’s response to the crisis once the virus spread throughout the country, states’ opening up too early, and too quickly, in May and June, and people’s unwillingness to take the pandemic seriously by not social distancing or wearing face masks. Let us focus on the last point. Why the unseriousness? As soon as the pandemic hit, conspiracy theories regarding the virus spread like — well, like the virus itself. Some are so fully convinced of a conspiracy theory that their beliefs may be incorrigible. Others seem only to doubt mask-wearing as a solution.

Part of the unwillingness to wear face masks is due to the CDC and WHO having changed their positions on masks as a preventative measure. At the beginning, the U.S. Surgeon General claimed that masks were ineffective, but now both the CDC and the WHO recommend wearing them.

Why this reversal? We are facing a novel virus. Science, as an institution, works through confirming and disconfirming hypotheses. Scientists find evidence for a claim, and it supports their hypothesis. As time goes on, they gather new evidence that disconfirms the original hypothesis. As still more information comes in, they may find they were too quick to disconfirm it. Because this virus is so new, scientists are working with limited knowledge. There will inevitably be back-and-forth shifts on what works and what doesn’t; scientists must adapt to new information. Citizens, however, may interpret this as grounds for skepticism about wearing masks, since the CDC and WHO seemingly cannot make up their minds. And so people may think: “perhaps wearing masks does prevent the spread of the virus; perhaps it doesn’t. If we don’t know, then let’s just live our lives as we did.” Indeed, roughly 14% of Americans say they never wear masks. But what if there were a practical argument that might encourage such skeptics to wear a mask, one that didn’t directly rely on evidence that masks prevent spreading the virus? What if, despite the skepticism, wearing masks could still be shown to be in one’s best interest? Here, I think Pascal’s wager can be helpful.

To refamiliarize ourselves: Pascal’s wager comes from Blaise Pascal, a 17th-century French mathematician and philosopher, who argued that it is best to believe in God even without direct evidence that God exists. To put it succinctly, either God exists or He doesn’t. How shall we decide? Well, we either believe God exists or we believe He doesn’t. So then, there are four possibilities:

                     God exists                God does not exist
Belief in God        (1) +∞ (infinite gain)    (2) − (finite loss)
Disbelief in God     (4) −∞ (infinite loss)    (3) + (finite gain)

For (1), God exists and we believe God exists. Here we gain the most, since we gain an infinitely happy life: if we win, we win everything. For (2), we’ve only lost a little, since we simply believed and were wrong about the truth of the matter; in fact, the loss is so minimal compared to the infinite that we lose nothing. For (3), we have gained a little: we have the truth, but no infinite happiness, and compared to the infinite we’ve won nothing. And finally, for (4), we have lost everything, since we don’t believe in God and face an eternity of divine punishment. By looking at the odds, we should bet on God existing, because doing so means you win everything and lose nothing. If God exists and you don’t believe, you lose everything and win nothing. If God doesn’t exist, the gain or loss is insignificant compared to the infinite. So by these odds, believing in God is your best bet: it is your chance of winning, and not believing is your chance of losing.
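The dominance reasoning here can be made explicit in expected-value terms. As a sketch (the symbols p and c are mine, not Pascal’s): let p be any nonzero credence that God exists, and let c stand for the finite stakes. Then

\[
EV(\text{believe}) = p \cdot \infty + (1 - p)(-c) = \infty,
\qquad
EV(\text{disbelieve}) = p \cdot (-\infty) + (1 - p)\,c = -\infty.
\]

So long as p > 0, belief wins in expectation no matter how small p is, which is why Pascal took direct evidence for God’s existence to be beside the point.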

There have been criticisms and responses to Pascal’s wager, but I still find this wager useful as an analogy when applied to mask-wearing. Consider:

                                Masks prevent spreading the virus          Masks don’t prevent spreading the virus
Belief in masks preventing      (1) Big gain: people’s lives are saved     (2) Finite loss: we wasted some time
the spread of the virus         and we can flatten the curve easily.       wearing a piece of cloth over our faces
                                                                           for a few months.
Disbelief in masks preventing   (4) Big loss: we continually spread the    (3) Finite gain: we got the truth of
the spread of the virus         virus, hospitals are overloaded with       the matter.
                                COVID cases, and more people die.

For (1), we have a major gain. If wearing masks prevents the spread of the virus and we do wear masks, then we help flatten the curve, fewer people contract the virus, and we help prevent harms and deaths due to COVID-19. (One model predicts that wearing masks can save up to 33,000 American lives.) This is the best outcome. Suppose (2). If masks do nothing, or only minimally prevent the spread of the virus, yet we continue to wear them, we have wasted very little: wearing a covering over our faces is simply an inconvenience. Studies show that we don’t lose oxygen by wearing a face mask. And leading experts are hopeful that we may get a vaccine sometime next year, with promising results from clinical trials. So wearing masks, a small inconvenience in our lives, is not a major loss. After all, we can still function in our lives with face masks. People who wear masks as part of their profession (e.g., doctors, miners, firefighters, military) still carry out their duties; indeed, their masks help them fulfill those duties. The inconvenience is a minor loss compared to saving lives and preventing the spread of the virus, as stated in (1).

Suppose (3). If (3) is the case, then we’ve avoided the inconvenience, but this advantage is nothing compared to the cost that (4) represents. While we don’t have to wear a mask, celebrating being rid of an inconvenience pales in comparison to losing lives unnecessarily and unknowingly spreading the virus. Compared to what we stand to lose in (4), in (3) we’ve won little.

Suppose (4). If we adopt (4) as our strategy, we’ve doomed ourselves: we make others sicker, we continually spread the virus, and hospitals have to turn away sick people, which leads to more preventable deaths. We lose many lives and cause the sickness to spread exponentially, all because we didn’t wear a mask.

Note that we haven’t proved that masks work scientifically (although I highly suspect that they do). Rather, we’re doing a rational cost-benefit analysis to determine what the best strategy is. Wearing masks would be in our best interest. If we’re wrong, then it’s a minor inconvenience. But if we’re right, then we’ve prevented contributing to the spread of the COVID-19 virus which has wreaked havoc on many lives all over the globe. Surely, it’s better to bet on wearing masks than not to.
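The same analysis can be run as a toy calculation. Here is a minimal sketch in Python; all payoff numbers are illustrative assumptions of mine, not figures from any study, and only their relative magnitudes (big gain or loss versus minor inconvenience) do any work:

```python
# A minimal sketch of the mask wager as an expected-value comparison.
# All payoff numbers are illustrative assumptions; only their relative
# magnitudes (big gain/loss vs. minor inconvenience) matter.

payoffs = {
    ("wear", "masks_work"): 1000,        # (1) big gain: lives saved, curve flattened
    ("wear", "masks_dont_work"): -1,     # (2) finite loss: minor inconvenience
    ("dont_wear", "masks_work"): -1000,  # (4) big loss: avoidable spread and deaths
    ("dont_wear", "masks_dont_work"): 1, # (3) finite gain: inconvenience avoided
}

def expected_value(action, p_masks_work):
    """Expected payoff of an action, given a credence that masks work."""
    return (p_masks_work * payoffs[(action, "masks_work")]
            + (1 - p_masks_work) * payoffs[(action, "masks_dont_work")])

# Even a strong skeptic (10% credence that masks work) comes out ahead
# by wearing one under these assumptions:
for p in (0.1, 0.5, 0.9):
    print(f"p={p}: wear={expected_value('wear', p):+.1f}, "
          f"skip={expected_value('dont_wear', p):+.1f}")
```

Under these made-up numbers, wearing a mask has the higher expected value for any credence above roughly 0.1%. The exact threshold depends entirely on the assumed payoffs, which is precisely the point of framing the choice as a bet rather than as a dispute about the evidence.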

Hypocrisy and the Fall of Falwell

close-up photograph of Jerry Falwell Jr. at speaking engagement

It has not been a good week for Jerry Falwell Jr. It began when the prominent evangelical posted a bizarre photograph on Instagram of himself with his pants unzipped and his arm around a bare-midriffed woman who is not his wife. Falwell tried to do damage control with a radio spot that, owing to his possibly substance-induced incoherence, dug him deeper into the hole. Days later, the board of trustees of Liberty University, an evangelical college in Lynchburg, Virginia, announced that Falwell will be taking an indefinite leave of absence from his role as president and chancellor. While Falwell has been accused of arguably much more serious misconduct, the final straw appears to have been this display of flagrant hypocrisy; Liberty Law School’s honor code includes prohibitions on “display of objects or pictures” that are “sexual in nature,” “sexually oriented joking,” “the encouragement or advocacy of any form of sexual behavior” that would undermine the University’s “Christian identity,” and the possession of alcohol (in the picture, Falwell holds a glass of what he calls “black water”).

Falwell’s is not the first case of hypocrisy by a high-profile religious leader. Yet the ethical argument against hypocrisy is far from clear. What is it about hypocrisy that makes it morally objectionable?

In order to answer this question, we must first say what hypocrisy is. Ask most people, and they will tell you that hypocrisy is not practicing what you preach. But consider this: in the process of becoming mature adults, we often do things that we later condemn, or condemn things we later do. On some occasions, this can amount to hypocrisy — particularly if we try to hide the fact that we previously engaged in the behavior of which we currently disapprove. Yet it does not have to be hypocritical to acknowledge that we have undergone moral improvement, and as a consequence currently disapprove of what we did in the past. So, not practicing what you preach is not enough to make someone a hypocrite. I believe that what’s required, beyond the inconsistency between our words and deeds, is that this inconsistency involves representing oneself as better than one is by the lights of some community’s moral standards.

That hypocrisy is not mere inconsistency in itself suggests that the ethical complaint against hypocrisy cannot simply be that it involves inconsistency. After all, there is an inconsistency over time between the actions and words of a reformed racist, but such inconsistency is to be welcomed.

One suggestion is that hypocrisy is a form of dishonesty. Hypocrites pretend to be better than they are, thus deceiving others about their moral commitments and concerns. Upon reflection, however, this can only be a small part of the story. There is a certain type of hypocrite — we might call her a cynical hypocrite — who consciously pretends to be morally better than she is in order to obtain some extrinsic benefits, such as social status. This kind of hypocrisy does involve dishonesty. Yet many hypocrites — indeed, those who on some views most clearly deserve the label — are perfectly sincere in their belief in their own goodness, as well as in their condemnation of others for norm violations. It might be suggested that the problem with these hypocrites is that they are self-deceived, but even if this is true, self-deception does not usually invite the sort of moral opprobrium to which hypocrites are regularly subjected.

Another suggestion is that, because hypocrites are primarily concerned with representing themselves as morally better than they are, their words are unlikely to represent (a) their actual values or (b) the “correct” assessment of the moral facts. Insincere hypocrites are motivated to hide their true commitments behind the appearance of goodness, while sincere hypocrites are likely to make whatever moral judgments will represent themselves in the best light. In either case, their testimony about (a) and (b) is suspect. The suggestion, then, is that hypocrisy is a kind of untrustworthiness. While I think this diagnosis gets at something important about hypocrites, it does not explain our moral objection to them. After all, there are plenty of people whose testimony we cannot trust, but whom we do not loathe. Think, for example, of an extremely naïve person whose moral judgments are clouded by a misplaced faith in human goodness. Such a person is not trustworthy, in the sense that it would be foolish for us to rely on their testimony when deciding the morally right course of action. We might even criticize such a person for being naïve. But we would not have the strong negative response to this person that we regularly do to hypocrites.

The last and, I think, best suggestion is that hypocrites are free riders, enjoying the advantages of undeserved moral approval while secretly collecting the dividends of vice. On this view, what makes hypocrisy objectionable is that it tends to cause hypocrites to appear better than they really are, whether they are sincere or insincere. So long as their hypocrisy remains unmasked, others will reward this apparent goodness even as the hypocrite continues to reap the benefits of acting contrary to moral standards. This account seems able to explain why we hate hypocrites so much: generally, we tend to hate people who obtain advantages they don’t deserve, as well as those who fail to make their contribution to goods we all enjoy — in this case, morality itself. It offends our deeply ingrained, and possibly innate, sense of fairness.

To return to Falwell and others like him, we can now see one important reason why even other evangelicals might have a strong negative response to his behavior. Leaders of all kinds, but particularly leaders of religious communities, often owe their status in part to a belief that they exemplify certain moral virtues. When such leaders are unmasked as hypocrites, this reveals that their leadership role, with all the perks that come with it, is undeserved. And this strikes us as deeply unfair; after all, there are plenty of other people who are earnestly striving to live according to often strict standards, yet who receive less praise and other benefits for doing so.

Thomas Hobbes called hypocrisy a “double iniquity,” suggesting that it was actually worse than outright villainy. On the fairness account, this makes sense: the hypocrite not only violates moral norms, but commits the further wrong of free riding on others’ compliance with moral norms in order to reap the undeserved benefits of appearing good and doing evil. In short, there are grounds for thinking that being a hypocrite with respect to some standard is worse than simply flouting the standard. This still doesn’t mean that hypocrites are always worse than simple wrong-doers — not all standards are equally important — but it does mean that hypocrites with respect to some rule, like Falwell, are liable to more loathing than someone who simply breaks that rule, like Falstaff.

Reflections of a Teacher during the COVID-19 Pandemic

photograph of bright empty classroom

If each month of our collective coronavirus experience were given a theme, the appropriate theme for August might be education, and all of the benefits and challenges that come along with trying to facilitate learning in both children and adults during the pandemic. We all take on many roles, and if you’re like me, you’ve found that certain roles have been amplified and underscored; they’ve become not just descriptive but definitional. In pandemic conditions, one or two roles stand out as necessary rather than contingent features of our personal identities. In my own case, my role as teacher and mentor has taken on existential significance; the way that I perceive and respond to this event can only be properly understood through that lens.

These days I frequently daydream about what I will tell my grandchildren about what life was like during the pandemic: the things that were frustrating, the things that scared me, the things that stood out as beautiful, and the things that genuinely surprised me. Perhaps by then I will better understand the way that people behave when they are frightened, and it will no longer strike me as startling that people refused to wear masks or that, when teachers expressed concerns about returning to packed schools with poor ventilation systems, they were called cowards and accused of wanting to get paid for doing less work.

There is some comfort in knowing that the future will be fairly similar to the past. This can be true even when the past is unpleasant and ugly. At least when the future is like the past, we know how to plan; we know who we are. We aren’t scared that we’ll come unmoored and that we’ll drift into some hazy, undefined abyss in which our lives cease to be meaningful by our present standards. I think most of us have these fears right now. On a good day, I believe that we can look forward to a future in which everything is mostly the same as it used to be except that we’ll know how to bake sourdough bread and we’ll hold more meetings on Zoom. On a bad day, I’m concerned that the institutions that I cherish the most will be so degraded or will go through such significant changes that the world we’ll have on the other side of all of this won’t be one that I’ll recognize or one in which I desire to participate.

When I tell my grandchildren about what it was like to live during this pandemic, I wonder which parts of my story will surprise them. I’m sure it won’t surprise them that technology played a massive role in the delivery of education; by then that practice is sure to have become commonplace. From my very core, I hope it doesn’t surprise them that education was once delivered from one person to another. I hope that teaching remains an act of care and of intimacy between groups of people.

We know at this point that turning to remote learning poses accessibility problems for many students. Plenty of households both nationally and worldwide don’t have access to the internet. In response, some locations both inside and outside of the United States have adopted creative strategies. In countries like Morocco, Mexico, Mongolia, China, and India, education is being broadcast on television to students, often in places where they can gather together with a minimal level of interaction with adults who may be more vulnerable to the disease.

Universities have been innovative as well. Late last semester, at least some of the faculty members on my campus were given a choice regarding which delivery method to use in the fall; the decision, on this specific occasion, was about how best to offer a course during a pandemic. We could choose between asynchronous and synchronous options. Asynchronous courses are traditional online classes: teachers provide students with material that they can engage with at any time of the day or night that is convenient for them. In addition to the health and public safety advantages, one of the most significant virtues of this kind of approach is flexibility. Students and teachers are home with family members, sometimes with dependent children who are also being educated at home. Students may get sick, and asynchronous options make it easier for them to avoid falling behind if they do.

Synchronous courses are taught live, during the scheduled class time, via Zoom or another comparable platform. These courses are convenient for students; they don’t have to commute or even get out of bed if they don’t want to. One of the main disadvantages is the potential for decreased participation: students can turn off their cameras and listen to the class while multitasking, leaving the professor with the impossible task of engaging a sea of black screens.

Universities also offer blended courses, in which students have real face-to-face interactions with their professors in a socially distanced way. Half of the class comes to school in person on one day while the others attend via Zoom; during the next class period the roles flip, and the other group of students receives in-person education. The main advantage is that students get to work with their professors and with each other in a familiar way. The main disadvantage is that the public health threat increases when people get together, even when universities try their best. It’s also not clear that this approach has many real advantages over simply conducting the course on Zoom, since students can’t approach the teachers or each other in this format either.

In the fall semester, I am teaching at least one class using every one of the delivery methods that my university offers. I’ve attended many trainings this summer and in some ways I’m excited by the new possibilities. I now have technological tools that I didn’t have last semester — tools that might really speak to a generation of students who were raised with technology as extensions of their minds and bodies.

I’m also afraid of an educational unmooring. I’m worried that education will become disconnected from critical values. One way of thinking about education is utilitarian; education is useful because people can do good things with it. When people are educated, they may become better citizens, be more productive, and find themselves in the position to do more good in the world. If the end goal of education is purely utilitarian, then one way of getting knowledge into people’s heads is as good as any other. If broadcasting educational television proves effective in teaching students, our brave new world might cast a flat screen in the role of Susie’s fourth grade teacher.

I’m reminded of the predicament of Mildred Montag, the wife of the protagonist in Fahrenheit 451. In a society without books, Mildred is entertained by her parlor walls, which are giant television screens. She comes to think of the characters that grace those walls as family. This is made easier by the fact that the Montags paid for the attachment that allows the characters on the screen to refer to Mildred by name when they speak. Mildred’s walls allow her to achieve the utilitarian objectives of entertainment and socialization without any need to actually engage in real, meaningful human interaction.

Knowledge is valuable, but the passing of knowledge from one knower to the next may also be valuable. Imagine a world in which all facts are stored digitally in computers. In some sense, this world has achieved a state of omniscience: all the facts are “known.” But imagine that very few of those facts are known in the minds of persons. Would this be a desirable world to live in? Do we value the brute acquisition of knowledge, or do we value knowledge that we can discuss and apply together as a social enterprise?

The textbook for my first philosophy course as a freshman was titled The Great Conversation. It’s the only textbook title from my college days that I can remember. Good teachers, like Socrates, engage their students in conversation to help them to recognize the gaps in their knowledge. There is a reason that Plato’s work, written in dialogue form, is so effective and meaningful so many years after it was written.

Crucially, I’m concerned that legislators, administrators, and entrepreneurs will look at the educational innovations we’ve achieved as ways to further commodify education. For better or for worse, we’ve experimented with replacing caregivers, priests, and sex workers with technological counterparts. These are all roles for which one would think that human interaction is critical. The post-coronavirus educational system may be à la carte: select the educational product that is most convenient for you, even if that format involves no real mentoring. If we do this, we will have abandoned the interactions that are so important to meaningful learning experiences.

I hope I don’t have to explain to my grandchildren why that happened.

Does Character Matter?

photograph of empty oval office

One infamous feature of the Trump era is the shocking decline in the proportion of Republican voters who say that the president’s moral character matters to them. According to a recent Gallup poll, during the Clinton administration 86 percent of Republicans thought it was very important for “a president to provide moral leadership for the country.” In 2018, that number was down to 63 percent. The almost inescapable conclusion is that Republicans have simply dropped the requirement of good character — or perhaps made a special exception — in light of President Trump’s obvious moral turpitude.

However, in a certain way the shift is understandable. Although we may think that good moral character is desirable in our elected officials, it is less clear why this should be so. After all, it seems plausible that we ought to support politicians who will be most successful at their jobs, and that the success of an elected official consists solely in successful governance. But moral character is, at best, a weak indicator of a person’s capacity to govern. For example, Robert Caro’s monumental biography of President Lyndon Johnson conclusively demonstrates that he was a real piece of work, but he was also a fabulously effective politician. On the other hand, it is doubtful whether Mother Teresa could have become, like Johnson, a “master of the Senate,” despite — or perhaps because of — her saintly disposition. Thus, if we think that capacity to govern is the sole criterion of success for a politician, then it seems that moral character does not matter a great deal. Much more relevant is a would-be leader’s record of managing and utilizing unwieldy bureaucracies.

On the other hand, most people seem to have a strong intuition that it would be impermissible to allow a murderer or rapist to hold office, no matter how effective they are at governing. So, we are confronted with two contradictory intuitions: that we ought to support politicians solely based on their capacity to govern, and that we ought not support certain morally egregious politicians regardless of their capacity to govern. Something has got to give.

One might question the claim that moral character is a weak indicator of a person’s capacity to govern. An ancient strand of political thought stretching back to Plato and Aristotle has it that virtue is a necessary attribute of a successful leader, since effective statecraft requires practical wisdom, and practical wisdom is the crown of the practical virtues and cannot exist without them. Anecdotally, the evidence is at best unclear. After all, President Johnson will perhaps be forever known for his disastrous decision to escalate the war in Vietnam, a decision that may have been due, at least in part, to certain character flaws. Likewise, President Trump’s cruelty and stupidity seem to be reflected in his many cruel and stupid policies. At the same time, there are surely instances of morally exemplary characters who perform poorly in political office. Thus, a more systematic study than is possible here would be required to make this objection stick.

Another place some have pushed back on the argument is the implicit claim that successful governance has nothing to do with having a morally good character. What if exercising virtue is part of governing? If to govern is, at least in part, to provide moral leadership, then an elected official’s acts of humility, kindness, justice, and prudence are also acts of governing. If this is the case, then when, for example, a president consoles victims of a natural disaster or school shooting, makes a wise decision during a foreign policy crisis, or celebrates the civic contributions of particular citizens, these are all at least arguably instances of governing, and yet also (at their best) authentic demonstrations of virtue.

Another weak point of the argument against moral character is the claim that we ought, without qualification, to support politicians who will be most successful at their jobs. Of course, it is important that politicians be successful, since governing is a kind of job that one can do well or badly. But a political office is also a position that comes with a tremendous number of perks; it is not just a reward, but it certainly is one. Because of this, some have argued that we ought to assess a politician not only with respect to how successful she is in policy terms, but also in terms of whether she deserves to hold political office, with all of its advantages. It is this idea that, I believe, best explains why we feel that we ought not support a murderer or rapist for office, no matter how good they are at governing. At minimum, we think that there is a moral threshold below which a politician is disqualified from the advantages of office. Where exactly that threshold lies is a matter of debate, as is whether a politician can re-qualify herself by properly atoning for her moral failures.

In short, we should reject the argument that character does not matter for three reasons. First, it is not at all clear that character is only a weak indicator of the ability to govern. Second, the exercise of virtue is itself part of effective governance. Finally, because political office is accompanied by various perquisites, some decrepit characters may not merit it. With a firmer grip on why character matters, it may hopefully be easier for people to avoid inconsistently applying the character standard to their assessments of politicians.

Principles, Pragmatics, and Pragmatic Principles

close-up photograph of horse with blinders

In this post, I want to talk about a certain sort of neglected moral hypocrisy that I have noticed is common in my own moral thinking and that, I expect, is common in most of yours. To illustrate this hypocrisy, I want to look carefully at the hypocritical application of democratic principles, and conclude by discussing President Trump’s recent tweet about delaying the election.

First, however, I want to introduce a distinction between two types of hypocrisy: overt and subtle. Overt hypocrisy occurs when you, in full awareness of the double standard, differentially apply a principle to relevantly similar cases. It is easy to find examples. One is Mitch McConnell’s claim that he would confirm a new Supreme Court Justice right before an election, after blocking the confirmation of President Obama’s nominee Merrick Garland because of how close the nation was to a presidential election. It is clear that Senator McConnell knows he is applying the democratic principle inconsistently; he just does not think politics is about principles. He thinks it is about promoting his political agenda.

Subtle hypocrisy, in contrast, occurs when you inconsistently apply your principles but you do not realize you are applying them inconsistently. Names aside, a lot of subtle hypocrisy, while it is hard to recognize in the moment, is pretty clear upon reflection. We tend to only notice our principles are at play in some contexts and not others. We are more likely to notice curtailments of free speech when it happens to those who say similar things to ourselves. We are much more likely to notice when we are harmed by inequitable treatment than when we are benefited by it.

We are especially likely to apply our principles hypocritically when we consider the purported reasons given for various policies. If the Supreme Court issues a decision I agree with, chances are good that I won’t actually check the majority’s reasoning to see whether it is sound; I’m content with the win and trust the court’s decision. In contrast, if the Court issues a decision I find dubious, I often do look up the reasoning and, if I find it inadequate, will openly criticize the decision.

Why is this sort of hypocrisy so common? Because violations of our principles don’t always jump out at us. Often you won’t notice that a principle is at stake unless you carefully deliberate about the question. Yet we don’t preemptively deliberate about every action in light of every principle we hold. Something needs to prompt us to begin morally reflecting on an action, and, according to a great deal of psychological research, it is our biases and unreflective intuitions that prompt our reasoning (see Part I of Jonathan Haidt’s The Righteous Mind). Because we are more likely to look for ethical problems in the behavior of our political enemies, we are much more likely to notice when actions we instinctively oppose violate our principles, and unlikely to notice the same when considering actions we instinctively support.

I can, of course, provide an example of personal hypocrisy in my application of democratic principles against disenfranchisement. When conservative policymakers started trying to pass voter ID laws, I was suspicious; I did my research, and I condemned these laws as democratically discriminatory. In contrast, when more liberal states gestured at moving toward mail-only voting to deal with COVID, I simply assumed it was fine. I never did any research, and it was only by luck that a podcast informed me that mail-only voting can differentially disenfranchise both liberal voting blocs, like Black Americans, and conservative voting blocs, like older rural voters. Thus, but for luck, and given my own political proclivities, my commitment to democratic principles would have been applied hypocritically, condemning only policies floated by conservative lawmakers.

This subtle hypocrisy is extraordinarily troubling because, while we can recognize it once it is pointed out, it is incredibly difficult to notice in the moment. This is one of the reasons it is important to hear from ideologically diverse perspectives, and to engage in regular and brutal self-criticism.

But while subtle hypocrisy is difficult to see, I think there is another sort of hypocrisy that is even more difficult to notice. To see it, it will be useful to take a brief digression and try to figure out what exactly is undemocratic about President Trump’s proposal to delay the election. I, like many of you, find it outrageous that President Donald Trump would even suggest delaying the election due to the COVID crisis. Partly this is because I believe President Trump is acting in bad faith, tweeting not because he wants to delay the election but because he wants to preemptively delegitimize it, or perhaps because he wants to distract the media from otherwise damning stories about COVID-19 and the economy.

But a larger part of me thinks it would be outrageous even if President Trump were acting in good faith, and that is because delaying an election is in tension with core democratic principles. Now, you might think delaying the election is undemocratic because regular elections are the means by which a government is held democratically accountable to its citizens (this is the sort of argument I hear most people making). Thus, if the current government is empowered to delay an election, it might, at least for a time, escape democratic accountability. But this is not a real worry in the U.S. context. Even were the U.S. Congress to delay the election, it would not change when President Trump is removed from office. His term ends January 20th whether or not a new president has been elected. If no one has been elected, then either the Speaker of the House or the President pro tempore of the Senate takes over (and I am eagerly awaiting whatever new TV show decides to run with that counterfactual in the spring).

But there is a different principled democratic concern at stake. Suppose a political party, while in control of Congress, delays an election whenever the polls look particularly unpromising. This would be troublingly undemocratic: while Congress would still have to hold the election at some point before January 3rd, it could wait until the moment the party currently in power seems to have the largest comparative advantage. Just as gerrymandering is undemocratic because it allows those currently in power to use their political power to secure an advantage in upcoming elections, so too is this strategy of delaying elections for partisan reasons.

But what if Congress really were acting in good faith? Would that mean it could be democratic to delay the election? Perhaps. If you are confident you are acting on entirely non-partisan reasons, then delaying in such contexts is just as likely to harm your electoral chances as to help them. And indeed, I can imagine disasters serious enough to justify delaying an election.

However, I think that in general there are pragmatic reasons to stick to democratic principles even when we are acting on entirely non-partisan reasons. First, it can be difficult to verify that reasons are entirely non-partisan. It can be hard to know the intentions of senators, and sometimes it can even be hard to know our own.

Second, and I think more profoundly, there is a concern that we will notice non-partisan reasons inequitably. Take the Brexit referendum. When I first saw some of the chaos that followed the Brexit vote, I began to seriously consider whether the UK should just hold a second referendum. After all, I thought, and still think, there were clear principled democratic issues with the election (for example, there seemed to be a systematic spread of misinformation).

The problem, of course, is that had the Brexit vote gone the other way, I almost certainly would never have looked into the election, and so would never have noticed anything democratically troubling about the result. My partisan thoughts about Brexit influenced which non-partisan reasons for redoing the election I ended up noticing. To call for redoing an election is surely at least as undemocratic as calling for delaying one (indeed, I expect it is quite a bit more undemocratic, since it actually gives one side two chances at winning), and yet I almost instantly condemned the call to delay an election, while it took me ages to see the democratic issues with redoing the Brexit vote.

Here, it is not that I was hypocritically applying a democratic principle. Rather, I was missing a democratic principle I should have already had, given my tendency toward hypocrisy. Because partisan preferences influence which non-partisan reasons I notice, I should have adopted a pragmatic principle against calling for new elections following results with which I disagree. Not because re-running elections is itself undemocratic (just as delaying an election might not itself be undemocratic), but because, as a human reasoner, I cannot always trust even my own non-partisan reasoning, and so should sometimes blinker it with pragmatic principles.