
Immoral Emotions, Intentionality, and Insurrection

photograph of Capitol mob being tear gassed outside

Psychologists believe that emotions — those physical reactions and expressive behaviors that accompany feelings like fear, disgust, and joy — are, in and of themselves, neither moral nor immoral, neither ethical nor unethical. Rather, we assess the behaviors motivated by, or following from, emotions as being either healthy or unhealthy for the individual. Emotions in this sense serve as coping mechanisms (or ego defenses), and the behaviors they prompt are identified as positive or negative; when those behaviors are deemed negative, they result in unhealthy outcomes for the person.

One major research question that psychologists face when studying emotional development asks which comes first: thinking or feeling? Does our physiological activity precede conscious awareness, or is it the other way around? Current research tends to support the idea that cognition precedes emotion. The Schachter-Singer two-factor theory of emotion states that an emotion, say anger, is recognized as anger only after we cognitively interpret it within the immediate environment and then cognitively label it as anger. (I might label the emotion as anger, rage, or annoyance, depending on the circumstances of the immediate environment, since all three of these emotions are related. Rage, for example, is an intensification of anger (the basic emotion), while annoyance is anger to a much lesser degree.) While anger might lead one to act immorally, the emotion itself is not considered good or bad.

But this view that cognition precedes emotion might seem to put pressure on the idea that we should regard emotions as neither moral nor immoral. For example, philosopher Martha Nussbaum believes that if emotions do have a cognitive component, then they must be taken into account when evaluating the ethical judgments (intentions) that precede behaviors. Jonathan Haidt goes even further by labeling emotions as either moral or immoral depending on how prosocial the resulting behaviors are, and on whether the emotion was elicited by concern for others or strictly out of self-interest. If emotions can be labeled in this way, then the recent events at the Capitol can be interpreted in that light.

On January 6, a mob stormed the U.S. Capitol building, ransacking lawmakers’ offices and hunting for specific government officials, seeking to harm them and to damage the building itself. They were seeking to stop the official count of the Electoral College vote that would certify the election of a new POTUS, even if it meant that the Vice President and the Speaker of the House had to be executed. It’s easy to point to this aggressive behavior as the result of political polarization, the in-group vs. out-group phenomenon, and the effects of social media on collectives. Each of these explanations refers to group behavior, to collections of people who must be brought to justice. But what role did individuals play in fomenting such behavior?

Often, when individuals moralize and then find others of kindred attitudes, a moral convergence is formed. Furthermore, it is known that when the kindling of moralization and moral convergence is present, aggressive behaviors often follow, but there must be a spark to ignite the kindling. It is important to note, however, that the opposite occurs just as often with non-violent protest groups: the kindling is present, but there is no violent behavior by the group or by any individual within it, because the igniting spark is absent. What makes up this so-called spark? Perhaps the answer can be found by a closer inspection of immoral emotions.

Prior to the attack on the Capitol, the mob met at the Ellipse in front of the White House where the group heard emotionally charged speeches from POTUS and his attorney Rudy Giuliani for over an hour. The speeches conveyed to the group a message that the election had been rigged and stolen from their candidate, and by extension, from them. An emotion of contempt for those responsible for this supposed theft could quite reasonably have been cognitively identified by the persons making up the mob. The speech-makers used terms like “cheaters” and “liars” to generate just such an emotional response.

Anger is elicited when one sees that something is standing in the way of completing desired goals. If the anger is based in self-interest, then the pro-sociality of the action tendency is low, and the emotion, by definition, is immoral. The speeches were angry ones in the sense that they conveyed the idea that the perceived common goal of re-electing the sitting president was being thwarted by cheating and lying enemies of democracy. The mob was in an environment where it was easy for individual members to experience anger and contempt as the speeches progressed. In addition, they were under the impression, according to the speech given by the sitting president, that the theft was being carried out just up the street. Anger plus anticipation most often results in aggressive behavior. The kindling was laid, and the spark that lit it came in the form of these emotion-laden speeches filled with words indicative of anger, fear, and contempt. Giuliani’s cry for “trial by combat,” coupled with the president’s suggestion that after the count had been interrupted they would be “the happiest people,” and that what was required was a bit of courage “because you’ll never take back our country with weakness. You have to show strength, and you have to be strong,” could very well have lit the already-present kindling. If the group saw this as a moral issue (“save your country!”), a matter of right, and an issue worth fighting for, then the mob was primed to commit these violent acts. As Milgram and others showed us long ago, humans are not above inflicting harm on others as long as an authority figure encourages them to do so.

But do the emotions experienced by the Capitol mob need to be labeled as immoral in order to explain their egregious behavior? Do we need to follow Haidt and Nussbaum in condemning the emotion and not just the resulting act? Emotions serve as coping strategies or ego defense mechanisms that motivate behavioral responses. The coping strategies used to deal with the mob’s conflicting emotions, and the ego-defensive behaviors it exhibited, can be explained more parsimoniously by the cognitive theory of emotion: there is an emotion present, anger (but what to do about it?); there is a behavior, attack (but whom?); the function of the attack is destruction; and the ego defense is displacement (attacking a target weaker than the perceived perpetrator), in this case a few unarmed lawmakers. Emotions were no doubt manipulated and contributed to the mayhem, but they aren’t the primary suspect.

Under Discussion: The Moral Necessity of International Agreements

photograph of national flags from all over the world flying

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Combating Climate Change.

On his first day in office, newly elected President Joe Biden signed an executive order officially rejoining the United States to the 2015 Paris Agreement. President Obama initially joined the treaty toward the end of his second term. However, one of Donald Trump’s first acts as president was to withdraw the U.S.’s pledge, a process that took over three years and only technically went into effect just before he lost the 2020 election.

The Paris Agreement is by no means the first international environmental treaty. Many prominent international environmental treaties followed the 1972 Stockholm Declaration, tackling everything from acid rain to whaling. One of the most famous international environmental efforts was the 1987 Montreal Protocol, in which countries pledged to drastically decrease their CFC consumption in order to preserve the ozone layer. While the contexts differ, the essential function of the Montreal Protocol and the Paris Agreement is the same: sideline national interests in order to address a pressing global environmental problem. In fact, the issues are so similar that the two agreements have been compared.

There are many moral considerations when assessing whether international agreements are the most efficient and fair method for addressing environmental problems. Below are some of them.

Are international agreements which impose differing standards across nations fair and equitable?

Then-President Trump cited many reasons for pulling out of the Paris Agreement, but chief among them was the assertion that the agreement was unfair to the United States. Trump was technically correct that mitigation expectations differ across participating nations. For example, under the Paris Agreement, Europe and the United States are responsible for cutting a larger share of their emissions than higher-emission countries such as China. However, Trump’s criticism fails to recognize two major considerations which make this arrangement more equitable.

Climate change is an environmental problem which has its origins in over a century of industrial pollution. Though China may currently be emitting more greenhouse gases than the United States, the majority of the existing greenhouse gases in the atmosphere were emitted by the United States and European countries. For this reason, the United States and Europe might be fairly expected to reduce their emissions by more because they technically share a larger portion of the responsibility for the current crisis.

Additionally, imposing larger restrictions on Europe and the U.S. fairly acknowledges the economic privileges which countries in the West and Global North hold. Historically, international environmental agreements have acknowledged the tension between the history of colonialism, economic development, and environmental protection. The modern recognition of this tension is due in large part to a 1967 declaration to the United Nations by the Group of 77 (G77), a coalition of countries in the Global South, which demanded that the United Nations recognize the positionality of their environmental issues compared to those of powerful, former-colonizer, industrialized countries. The G77 were largely successful in pushing for economic considerations to be included in international environmental agreements.

Though Trump’s criticisms of the Paris Agreement may be unfounded, there are those who criticize the content of the agreement for not going far enough – either in terms of equity or in addressing climate change. Environmental activists have criticized the Paris Agreement as not aggressive enough. Some might also point out that “developed countries” are still not obliged to carry their historical and population-weighted burden in the Paris Climate Agreement. Outside of these valid content-driven criticisms, is there something more to critique about the Paris Agreement from a procedural perspective?

Do international agreements present an irresolvable conflict between national and international interests?

Many prominent Republicans have painted the Paris Agreement as a pledge to put the well-being of the citizens of foreign nations before those within the United States. Senator Ted Cruz tweeted, “By rejoining the Paris Agreement, President Biden indicates he’s more interested in the views of the citizens of Paris than in the jobs of the citizens of Pittsburgh.” Ignoring the questionable analogy drawn by that statement, is Cruz correct that this international climate agreement unethically sacrifices the interests of the United States’ citizens?

While there might be other types of environmental damage which provide a more unbalanced benefit/detriment scheme in terms of aggressors to victims, a fundamental aspect of climate change is that it will affect climate across the globe. Though some geographical areas will experience more intense changes in climate than others, the United States stands to suffer greatly from climate change. Climate projections for the next 50 years predict that the United States will have to change the way people farm in the Midwest, the way people use water in the West, and where people live relative to the coasts. These changes, and more, will likely usher in a social and economic crisis without mitigation of greenhouse emissions and adaptation to the changing climate. Ted Cruz’s assertion that joining the Paris Agreement forsakes national interests in the name of internationalism is evidently untrue. The United States stands to gain a great deal from promoting a cooperative effort in which all nations pledge to reduce their carbon emissions.

Does the nature of climate change necessitate international agreements to actualize solutions?

Setting aside the half-century’s worth of international cooperation on environmental issues, can one still make the case for the importance of an international agreement to address climate change specifically? The function of international agreements is not only to declare and acknowledge, as a world, that certain issues are worthy of our effort and attention, but also to create incentives to actively and cooperatively address major environmental catastrophes. Technically, all nations within the Paris Agreement could perform any of the actions within their pledge without joining the agreement itself. So why go to all the trouble to structure, debate, and sign the treaty? International agreements address both the moral and practical considerations raised by climate change and other international environmental catastrophes. Practically, cooperation is a more effective method for combating problems for which there is no clear and direct cause and effect, a conundrum common in collective moral harms. To collectively combat climate change, countries must share resources, technology, and scientific data. Without an organized structure in which to participate, climate change would likely be impossible to address efficiently. International agreements also play an important role because climate change requires moral obligations grounded in cooperation in order to tackle the issue effectively and fairly. Without international agreements, countries which contribute the most to climate change could simply choose to do nothing – a track the United States appeared to be on during the Trump presidency. The stark geographic, economic, and racial injustice which climate change threatens to unleash morally demands a widespread cooperative response.

Do nations have an individual moral obligation to prevent harm to other nations?

Putting aside practical and justice-based concerns, is there a moral obligation on an individual basis for countries to limit their contributions to climate change? The principle of “do no harm” is widely recognized in international environmental law. This principle is so fundamental to international environmental cooperation that it appeared in the first international environmental agreement, as Principle 21 of the 1972 Stockholm Declaration. Principle 21 strikes a balance between national interest and moral imperative and has since been referenced by modern international environmental treaties. Aside from the consistent international recognition of this moral principle, it is also quite intuitive.

It is clear at this point that the emission of greenhouse gases causes harm in the form of climate change – both to human beings and to the environment. Based on this consideration alone, there is arguably a moral imperative as a nation to do everything within our power to limit our contribution to climate change. Joining the Paris Climate Agreement is an important step in this process, as it holds the United States accountable within the context of our collective obligation to prevent climate change.

Under Discussion: Economic Concerns for a Green Future

photograph of power plant smoke blotting out sun

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Combating Climate Change.

Since taking his oath of office on January 20, 2021, President Biden has quickly taken steps toward fulfilling his promise to make combating climate change a key policy priority for his administration. This agenda marks a dramatic change from the actions of the Trump administration, which systematically rolled back over one hundred environmental protections and regulations. One of the first steps President Biden took was to begin the process of rejoining the Paris Climate Agreement, an international commitment to roll back carbon emissions. President Trump began the process of withdrawing the United States from the agreement in 2017. The central climate goals of the Biden administration are to decarbonize the U.S. power sector by the year 2035 and to make the U.S. a 100% clean energy economy with zero net emissions by the year 2050. In the short term, he is pausing new drilling on public lands. President Biden intends for the United States to be a global climate leader during his administration, using climate demands as leverage in deliberations with foreign powers to encourage other countries to also put climate first.

The responses to President Biden’s climate agenda are not all worth considering. Anthropogenic climate change deniers continue to exist and probably always will. Some deniers are more inclined to believe climate conspiracies than they are to trust the consensus view among experts in climate science. Some people, politicians in particular, continue to deny that anthropogenic climate change is happening because they receive donations from the fossil fuel industry or because they know that their voting constituency values fossil fuels over climate. These segments of society can be loud, but the arguments that they are offering aren’t compelling.

Dissenting voices that pose more of a challenge come from those who are afraid of losing their jobs or worry that the economy will become weak if we abandon fossil fuels. Energy is a significant part of our economy, and the fossil fuel industry is the biggest part of that sector, comprising roughly 63%. There is no doubt that pursuing a green energy future will be a substantial change that will displace many workers in the U.S. and abroad. Those who think that these economic considerations should outweigh other consequences seem to be operating according to a principle that says something like: “If a policy leads to loss of employment in a particular field on a large scale, that policy should be rejected.” Do we have good reason to believe that such a principle is true? Several arguments speak against it.

First, if the concern is that the economy will collapse under the pressure of abandoning the fossil fuel industry or that large segments of the population will be permanently out of work, we can look back to other major shifts in our economic system which demonstrate that this is not so. For instance, before the emergence of the modern fossil fuel industry, we used products extracted from the carcasses of whales. Whale oil provided flammable material for lanterns and candles. It was used to make soap, margarine, and to grease mechanical equipment. Before the discovery of plastics, we used baleen (essentially whale bones) to construct the ribbing of corsets and to make children’s toys. We used the bodies of whales to make and do so many things that for some time, whaling was the fifth largest segment of the economy. When we shifted from whale products to fossil fuels and plastics, some jobs disappeared but other jobs were created.

Despite the usefulness of whale products, there were plenty of good reasons to put an end to the whaling industry. Not least among these reasons is that the practice drove whale populations to the brink of extinction. Countless sentient beings were killed and those who were not were frequently seriously wounded during attempts on their lives. The whaling industry was also very dangerous for the humans who participated in it. Often, entire vessels would sink. On other occasions, whalers would be seriously hurt or even killed in battles with whales fighting for their lives. The work involved for the people who actually put themselves in harm’s way was tremendously exploitative; it was not the typical sailor who would get rich from the endeavor. Instead, it was the captain of the ship or the financier.

Despite all of this death, destruction, and exploitation, the whaling industry persisted for centuries. Arguments against it were not taken seriously. How would society function without whaling? What would people who earned their livelihoods from whaling do if the industry suddenly came to an end?

Though some whaling still occurs, the presence of market alternatives brought an end to the whaling industry as a pervasive practice. In the mid-1800s, we started extracting oil from reservoirs in the ground. In the early 1900s, we developed plastics. In the end, moral arguments didn’t kill the whaling industry; market alternatives did. Those who did the perilous work of killing whales found employment in different sectors.

The threat posed by anthropogenic climate change is many orders of magnitude greater than the threat posed by whaling. It isn’t just human lives or the lives of whales that are at risk; climate change presents risks for all life on earth, for ourselves, our children, and our grandchildren. Those who contribute least to the problem will be hit hardest. We can hope that these moral arguments won’t be similarly ignored.

Happily, market alternatives to fossil fuels have existed for quite some time, but the United States has been reluctant to pursue them aggressively. If the concern is loss of jobs, the green energy sector has the potential to replace those that are lost. One of President Biden’s goals for his first term is to make changes that will result in 10 million clean energy jobs that pay high wages and offer benefits and worker protections.

What’s more, we don’t apply the principle, “If a policy leads to loss of employment in a particular field on a large scale, that policy should be rejected” to all possible jobs, only those that preserve our existing systems of power. When a Wal-Mart moves in across the street and puts a mom-and-pop shop out of business, politicians rarely raise concerns about the jobs lost. In those cases, “that’s just the way the market works.” In the case of fossil fuels, the concern doesn’t really seem to be about loss of jobs, it seems to be fear that the people who currently have power will lose it. People with money and power rarely want to give up the source of those things, regardless of what might be at stake.

President Biden’s climate goals are ambitious and it’s far from certain that we can achieve them, especially given the fact that many of these proposals will require collaboration between political parties. That seems close to impossible to achieve in this political climate. It is unfortunate that there is such political gridlock on this issue. If there weren’t fortunes to be defended, one would think that everyone could come together on this. A green energy future would be indisputably better for the lives and health of everyone and for the natural beauty of this planet.

Under Discussion: The Marginalization of the Future

photograph of human shadow stretching out over dry lakebed

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Combating Climate Change.

Predictive models projecting the course of global temperature rise and general climate change have been largely accurate. As the anticipated effects have become clearly manifest in the weather, governments, businesses, and individuals have begun to consider the grim future that awaits. And yet across the world, especially in the United States, many people continue to deny that human action is responsible for climate change. Or, even where people acknowledge the reality of climate change, they decline to take action. Frequently this inaction stems from a conflict between the scope of the needed action and a belief in individualist and free-market ethics.

Proponents of free-market views on economics and ethics argue that what is most efficient or most ethical, respectively, is to allow individuals to negotiate one-on-one exchanges in accordance with their preferences. This is the rationale behind at-will and right-to-work employment laws and the repeal of the individual mandate of the Affordable Care Act, among other things. Anathema to a free market is centrally-coordinated action from strong governments or monopolistic corporations. This is where the reluctance of even those who recognize the looming danger of climate change comes in. They either deny that massive and centrally-coordinated action is necessary, or they hold that such action, even if in some sense pressing, is not politically or ethically acceptable.

Why not? What could be unacceptable about massive and centrally-coordinated action? The idea is that such action necessarily tramples on individual preferences. If most individuals want to act on climate change, then they will make deals in the market to effect that change, and top-down institutional action will be superfluous and risk creating a tyranny that outlasts the current emergency.

What can easily evade our attention here is what does not get mentioned: nothing is said about the people and creatures that will inherit the world as shaped by our choices. People who do not yet exist do not have preferences, and so the free market has no direct mechanism to factor in their interests. This difficulty is highlighted by a constellation of issues known as the non-identity problem, the future individual paradox, or intergenerational justice. (Note: intergenerational justice also covers the rights and interests of past and deceased persons.)

The marginalization of future persons within a free-market decision-making structure is a deep-seated, structural problem. A free-market exchange assumes that interested individuals are directly interacting to advocate for their preferences or interacting through an agent who will do so. And future persons are not the only entities marginalized in this way: any lifeforms that cannot secure meaningful advocacy for themselves are effectively marginalized. The forms of racism, misogyny, and other invidious bigotry with which we are all too familiar also operated (partially) through this mechanism. Whereas future persons do not exist to advocate for themselves, oppressed groups have been — and are — deliberately prevented from such advocacy. Like future persons, non-human animals and the inanimate environment are, by the nature of their existence, incapable of advocating for themselves.

But don’t people with the ability to advocate for marginalized entities do so? Can’t that solve the problem? In short, no. In the case of currently existing human beings, there has proven to be no substitute for self-advocacy or advocacy through others who share a meaningfully similar perspective. Hence the importance of historic firsts in political representation, like Kamala Harris, Raphael Warnock, Deb Haaland, Ilhan Omar, Sarah McBride, Rashida Tlaib, and Jon Ossoff. However, there is no way to extend the power of political participation to animals, the environment, or future persons.

While there is rhetoric to the effect that we must consider how our actions will affect the world inherited by those that come after us, its reach is often limited and the motivations behind it sometimes suspect. Deficit hawks in U.S. politics wring their hands and rend their garments about the debt we are foisting on our children and grandchildren as a way to avoid spending money on current problems that aren’t in line with their preferences. Many young people are concerned for the world that they will have to live in imminently and seethe at the injustice of having to clean up the mess made by their predecessors. This latter concern is not illegitimate — it simply isn’t the same as concern for people who do not yet exist.

Under Discussion: Undermining a Democratic Response

photograph of protestors with "People over Pipelines" sign

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Combating Climate Change.

On the first day of his administration, President Joe Biden issued an executive order cancelling the permit for the Keystone XL Pipeline. Premier Jason Kenney of Alberta, the province where the oil sands are located, expressed his disappointment, stating that this is “not how you treat a friend and ally,” and indeed Canada’s Premiers apparently “want to go to war” over it. This kind of political posturing reminds one of recent events. The riot at the U.S. Capitol followed weeks of politicians lying to voters about the election, and as a result people who were frustrated by a reality that did not match what they were being told lashed out violently. If there is one lesson that any democracy should be able to learn from such an episode, it is that misleading the public has consequences: it undermines the capacity of voters to evaluate their options, and thus it undermines democracy. Does this lesson have relevance when it comes to climate change, something which has the capacity to cause massive economic and social instability?

Was Keystone ever viable? Are Canadian politicians simply spreading false hope that there is a significant economic future for the pipeline? If so, are they on any better moral ground than Republicans who lied about the election? First, it is important to recognize that the province of Alberta is heavily dependent on natural resource development, particularly the extraction of bitumen from the oil sands. In recent years, this sector has been badly hit by faltering oil prices and troubles getting pipelines built, and more recently COVID-19 has caused prices to drop, resulting in even lower oil revenues. The Alberta economy, slowly recovering from a recession, has now been hit even harder. Unemployment is up, and investment is down. As a result, the Alberta government is now running a record-high deficit.

In an effort to push back against these forces over the past few years, the Alberta and Canadian governments have been significant supporters of the Keystone XL pipeline. Despite the troubled history of the project, the Alberta government became an investor in order to move construction along, at a cost of 1.5 billion dollars with an additional 6 billion in loan guarantees. It was expected that the pipeline could create tens of thousands of jobs. Even so, the project has been troubled since its inception. It has been met with opposition from indigenous people for cultural, treaty, and health reasons, and it has been widely protested because of concerns about the climate. While some concerns involve air pollution and the potential for an oil spill, the potential for increased carbon emissions has been especially problematic politically. Extracting the crude bitumen of the Alberta oil sands involves 17% higher carbon emissions than conventional oil. As a result, then-President Obama was heavily lobbied to deny a permit for construction, ultimately doing so because the project was seen as undercutting the credibility of the United States in climate change negotiations. Eventually, President Trump did issue the permit for construction, before it was withdrawn by President Biden.

The question is whether Keystone XL was ever a very realistic option for the Alberta government to cling to. Was the writing on the wall? As Canadian journalist Aaron Wherry notes, “The project’s fate seemed sealed years ago, but it haunts us still.” After all, there were years of court challenges and revisions to the design; permits were granted and taken away. The public soured on the project as well. In just four years, public support in the United States for the project fell from 65% approval to 48%. A 2017 poll of Canadians found that only 48% supported the project, even though 77% of Albertans did. Investors were shy about putting money into the project, and thus the Alberta government is now on the hook for billions of dollars. And, with the public and politicians increasingly showing a willingness to act on climate change, the project’s future was always in question.

Despite this, the Alberta government continued, and continues, to give the public the unrealistic impression that something can be done to change this. Alberta Premier Jason Kenney, for example, was said to be counting on union support in the United States for the project, despite “not understanding American politics well enough to know that that particular ship has sailed; it was as realistic as the company’s Onion-esque last minute pledge to power the operation of the pipeline with renewable energy.” And as Warren Mabee of Queen’s University notes, “While the reaction from Alberta implies Biden’s move came as a shock, the truth is that cancelling Keystone XL was a key part of Biden’s election platform”; he has suggested that Canadian politicians should get a reality check when it comes to the oil sector.

So it is worth asking whether there are similarities between what Republicans told their constituents following the election and what Canadian politicians are telling theirs regarding Keystone XL. In both cases you have frustrated citizens, many vulnerable to unemployment and a lack of prospects. In the United States, Republican politicians granted credibility to the claim that there were significant election irregularities despite almost no evidence, and were complicit in unrealistic, long-shot attempts to overturn the election in order to satisfy what their voters wanted to hear. In Alberta, politicians continue to grant credibility to the viability of pipeline projects which promise to restore good times to the province, despite evidence that the project was environmentally risky and looked increasingly doomed. And even now, Premier Kenney calls for trade sanctions considered “unrealistic and unproductive in the extreme” in order to appeal to a base of supporters.

In the case of the United States, the effect of this willingness to entertain lies about the election was the storming of the Capitol and the undermining of democracy. While the Canadian Parliament may be safe for now, the Alberta government has made use of inflammatory language and promises which may also undermine democracy. For example, the governing party of Alberta claimed that their predecessors “surrendered to Obama’s veto of Keystone XL” and ran on a promise threatening to hold a referendum over constitutional changes if they could not get a pipeline built. In other words, politicians trying to appeal to their base optimistically attached their hopes to a pipeline that investors had soured on, and invested billions of public money into it despite facing increasing political opposition at home and in the United States. As a result, the people of Alberta will likely be angrier at the Canadian federal government and the rest of Canada. Politicians could not be honest with their voters, and as a result social and democratic cohesion may suffer. Is there a moral difference between the two cases?

It is important to note that this is only a case study to demonstrate a larger moral concern. We have seen in the last year that citizens will accept complete falsehoods if it fits with what they want. Despite over 2 million people dying of COVID-19 in real time over the past year, many still believe that the virus is not real or is no worse than the flu. So, looking forward, what will happen when the effects of climate change become even more prominent? If Florida begins to sink due to rising sea levels, will that be branded as just a fluke or a bad summer? If actual economic and climate problems are facing society, it will be the convenient mistruths that will be exploited to undermine the ability of citizens to make decisions that are in their best interests.

On the Rationality of the Capitol Rioters

photograph of rioters in front of Capitol

In the wake of the Capitol insurrection, there was no shortage of commentary concerning the moral and intellectual failings of the rioters. However, one not infrequent theme of this commentary was that, for all their errors, there was something about their behavior that made a certain sort of sense. After all, if one believed that one’s presidential candidate had actually won the election by a landslide, and that this victory was being subverted by shadowy forces that included the Hugo Chávez family, then storming the Capitol could seem like a reasonable response.

Although the word “rationality” was not always used in this commentary, I think this is what these pundits have in mind: that the Capitol rioters were in some sense rational in acting as they did, given their beliefs. They probably didn’t know it, but in making this claim they echoed the view about rationality endorsed by the renowned moral philosopher Derek Parfit. In his magnum opus, On What Matters, Parfit argues that our desires and acts are rational when they causally depend in the right way on beliefs whose truth would give us sufficient reasons to have these desires, or to act in these ways. As applied to the case of the Capitol insurrection, Parfit’s view would seemingly endorse the rioters’ acts as rational, since the content of their beliefs about the election would, if true, give them sufficient reasons to riot. The key point is that on Parfit’s view, it does not matter whether the beliefs upon which the rioters’ actions were based are themselves true, but just that they rationally supported those actions.

By contrast, David Hume famously wrote that the truth of one’s beliefs does make a difference to the rationality of one’s actions and desires. “It is only in two senses,” he wrote, “that any [desire] can be called unreasonable.” One of those senses is when the desire is “founded on the supposition of the existence of objects, which really do not exist.” In other words, desires based on false beliefs are irrational. Yet Hume appears to be mistaken here. One’s desire to run away can be rational even if based on the false belief that there is a rattlesnake about to strike inches from one’s feet, particularly if one’s belief is rational.

But what about the view that our desires and acts are rational just in case they causally depend in the right way on rational beliefs, whether true or not? If we accept this view, then the Capitol rioters’ actions and desires turn out to be irrational, since they are based on beliefs that are arguably irrational. Parfit resists this view using the example of a smoker who has a strange combination of attitudes: on the one hand, the rational belief that smoking will destroy his health, and on the other hand, and because of this belief, the desire to smoke. According to the view we are now considering, the smoker’s desire would be rational, since it depends on a rational belief. That seems false.

Another view about rationality that might support the Capitol rioters’ actions is the view, familiar from social science disciplines like economics, that the rational action is the one whose subjective expected utility — reflecting the utility of the possible outcomes, and the agent’s beliefs about the probability of those outcomes — is the highest. This view of rationality more or less abandons the idea of rationally assessing our non-instrumental desires, and simply evaluates actions in terms of how well they fulfill those desires. So, on this view, we might say that the rioters’ actions were rational because they maximally fulfilled their desires.
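For concreteness, the maximizing view can be stated with the standard decision-theoretic formula below; the notation (acts a, outcomes o, subjective probabilities P, utilities U) is the generic textbook presentation, not anything drawn specifically from the rioters’ case:

```latex
\mathrm{SEU}(a) \;=\; \sum_{o} P(o \mid a)\, U(o),
\qquad
a^{*} \;=\; \arg\max_{a} \ \mathrm{SEU}(a)
```

On this picture, an act counts as rational just in case it maximizes this sum, whatever the moral character of the desires that fix U.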

The Parfitian and maximizing views of rationality share a feature that the philosopher Warren Quinn famously highlighted in his article, “Rationality and the Human Good”: according to both views, rationality is at least sometimes indifferent as to the shamelessness, or moral turpitude, of a person’s ends. For example, Parfit’s view implies that someone who believes that the Jews are sub-human and, because of this belief, desires to exploit them in ways that would be immoral if the Jews were full-fledged members of the human race, is practically rational. Similarly, the maximizing view implies that someone who wants to exploit the Jews in such ways is practically rational if they take efficient means to that end. However, Quinn argues, this conception of practical rationality is in tension with the ancient idea that practical rationality is the highest virtue of humans as practical agents. How could practical rationality be morally defective, indifferent to the manifestly forceful practical demands of morality, and yet be the most “authoritative practical excellence”?

If rationality is integrally connected to morality in the way Quinn suggests, then it becomes harder to see how we could say that the Capitol rioters’ actions and desires were rational or in accordance with reason. Even if their beliefs, if true, would have justified their desires and acts, and even if their acts maximize the fulfillment of their desires, the fact is that their beliefs were false, and their actions and desires shameless. And if Quinn is right, that fact should make us reluctant to credit their actions and desires with the label “rational.” For Quinn, you can’t be rational and immoral at the same time. For Parfit or the maximizer, you can.

Thus, it turns out that much of significance hangs on whether we think what the rioters did was in accordance with reason. If we say that it was, either because we adopt Parfit’s conception of rationality or the maximizing conception, then we commit ourselves to the occasional indifference of rationality to moral considerations. If, instead, we adopt Quinn’s view, then we must reject that indifference.

The Ethics of Presidential Polling

computer image of various bar graphs

Pretend you are driving your car on an interstate highway. Now envision yourself getting the car up to a speed of 85 mph, closing your eyes (actually close your eyes now), and keeping them closed for a whole minute. You will want to open your eyes after a few seconds because of the fear of not knowing what lies ahead. Is there a curve? A semi-trailer in front of you? Is the lane ahead blocked for construction? This thought experiment illustrates, in part, that a major ingredient of fear is uncertainty.

Humans want to know the future, even if that future will happen within the next few minutes.

Evolution has provided us with the need to predict so that we have a better chance of survival. The more we know, the better we can anticipate, and thus survive. The same is true when we want to know who is ahead in political polls prior to an election. Fears about the economy, possible international conflict, a rise in taxes, and the continuation of Social Security and health care all play a role in deciding whom we will support on election day. There may be a fear that the party that best represents your values will not be elected to lead the nation, and this serves as a motivation to vote your ticket.

The 2020 presidential election is over, but the dust has only just settled. State and national political polling for the presidential election began in earnest as soon as the Democratic nominee was apparent, and continued right up to the night before election day. At the same time many voters were asking the question, “Can we trust the polls this time?” Undoubtedly many were recalling the debacle of the 2016 election polling that predicted a relatively easy win for Hillary Clinton, but when they awoke the next morning, Donald Trump had been elected president. The Pew Research Center suggested that the question should rather be, “Which poll should we trust?” However, I suggest that another question should be considered as well: “Is it ethical to have public presidential election polling at all?”

Many ethical questions arise when we consider this type of public polling: (1) Does the polling sample reflect an accurate picture of the electorate? If it does, the results are more trustworthy; if it doesn’t, then the polling is skewed and the results unreliable. (2) If the polling is skewed, how will voter behavior be affected? Could citizens be casting their votes based on false information? Research tells us that the more confident voters are that their candidate will win, the less likely they are to vote. How does this affect one’s autonomous choices? (3) If the polling predictions are not accurate, what are the psychological effects on the electorate and candidates that may lead to negative outcomes when the election is over? The benefits and costs of having polling information available to the electorate must be considered in such high-stakes activities as voting for the president of the United States. It’s imperative that any published polling be scientifically based.

The gold standard of scientific research sampling has long been random sampling, in which everyone has an equal chance of being chosen to participate. When all have an equal probability of being chosen, the sample will generally reflect the population from which it was drawn. The problem then becomes one of reaching enough people to improve the odds that the sample is representative of the population. Technological advances have allowed us to do this, but in so doing they have changed the face of public polling in ways that may compromise its outcomes.

Today, the internet serves as the tool of choice for pollsters. However, not everyone has internet access, and therefore some selection bias will exist in the sample when this type of polling is used. Furthermore, the average American has 1.75 email accounts, which increases the possibility that the same person will receive more than one request for information, shrinking the effective sample and increasing the chance of mismeasurement. It is understood that there is always sampling error, and pollsters do attempt to make it as small as possible, keeping it at a manageable 2.5-3.5%. This figure is called the margin of error (MOE). For example, if a poll has an MOE of 3%, and candidate A is at 48% in the poll while candidate B sits at 46%, candidate A does not have a two-point lead as some media personnel report; it’s a dead heat statistically. But if the MOE is not pointed out at the time of reporting, the electorate receives false information. It’s even more important to look at this effect in swing states than in the national picture, since the Electoral College, not the popular vote, actually determines who the next president will be.
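As a rough illustration of that reasoning, here is a minimal sketch in Python of why a lead smaller than the margin of error is best read as a statistical tie. The numbers are the hypothetical ones from the example above, not real polling data, and the overlap test is a simplification of how pollsters actually assess significance:

```python
# Minimal sketch of the dead-heat reasoning above; the shares and MOE are the
# hypothetical figures from the example, not real polling data.

def is_statistical_tie(share_a: float, share_b: float, moe: float) -> bool:
    """Treat the race as a tie when the two candidates' MOE intervals overlap."""
    # Each estimate carries its own +/- moe, so the intervals overlap
    # whenever the gap between the candidates is at most 2 * moe.
    return abs(share_a - share_b) <= 2 * moe

# Candidate A at 48%, candidate B at 46%, MOE of 3 points:
print(is_statistical_tie(48.0, 46.0, 3.0))  # True: the "two-point lead" is not meaningful
```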

To alleviate the selection bias problem nationally and in swing states, many polling companies are using opt-in, or non-probability, panels from which to collect data. This method uses ads to attract people to participate in the polling. Since these ads may only appeal to, or even be seen by, certain demographics, the probability of establishing a representative sample diminishes. Some people who answer these ads join solely because of the modest monetary reward for participation. One way pollsters try to address these drawbacks is to establish panels of up to 4,000 people chosen according to stratification criteria, so that the sample looks as much like the population as possible given the constraints mentioned here. For example, if the population is 76% Caucasian and 14% African-American, approximately 76% of the non-probability panel would be Caucasian and approximately 14% would be African-American. A minimal sketch of how such quota targets might be computed appears below.
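This sketch uses only the hypothetical population shares from the example; the “Other” category is an added assumption so that the shares sum to one:

```python
# Minimal sketch of quota targets for a stratified opt-in panel.
# The shares below are the hypothetical figures from the example; the "Other"
# category is an added assumption so that the shares sum to 1.0.

population_shares = {"Caucasian": 0.76, "African-American": 0.14, "Other": 0.10}
panel_size = 4000

quotas = {group: round(share * panel_size) for group, share in population_shares.items()}
print(quotas)  # {'Caucasian': 3040, 'African-American': 560, 'Other': 400}
```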

Similar stratification methods would also be used for other demographics; however, how many demographics are accounted for when these decisions are made? The Gallup and New York Times/Siena College polls account for 8 and 10 demographics (weights) respectively, while the Pew Research Center accounts for 12. Statistically, the more weights that are applied, the more accurately the sample represents the population from which it was drawn. Panels formed in this way stand for a certain amount of time and are repeatedly polled during the political campaign.

Apart from these worries, there are other potential obstacles to consider. Take positive herding, for example — a term used by social psychologists for the phenomenon in which positive ratings of an idea or person generate more positive ratings. So, if candidate A continues to amass perceived higher polling numbers, the chance that those polled later will align themselves with candidate A increases. Voters are even more likely to exhibit this behavior if they have not made a prior commitment to any response. This affects the independent voter more than other voters, yet that is precisely the voter both parties typically must win in order to win an election. And it is compounded by the fact that repeated public polling during a presidential campaign may amplify this social phenomenon and skew the polling results in ways that could sway unwary independent voters deciding for whom to vote.

Are the respondents on these panels answering the same questions as respondents on other panels? Questions posed to participants by different researchers are not standardized; that is, not every participant in every poll is answering the same questions. When the data are presented publicly and polling data are compared, we may be comparing apples with oranges. If candidate A polls at 48% in poll A and 43% in poll B, we must consider on what issue candidate A is being polled. If candidate A is being polled on likeability in both polls, it must be asked: likeability based on what, candidate A’s stand on the economy or on immigration, or …? The information becomes amorphous.

How can potential voters know what the data are measuring without understanding the make-up of the sample, the questions asked, and the polling method used to obtain the data? Getting the complete picture is an onerous task even for a statistician, and it certainly cannot be expected of most of the population. We rely on the media, and on the polling companies themselves, to provide that information. While polling companies do publish this information on their websites, most voters do not have the time (or the inclination) to peruse the data, even when they know that such websites exist. And even if they make that effort, would most be able to understand the information shared?

The 2020 election has given us a real look at some psychological effects of polling on the American population. Former Vice President Biden was reportedly set for an 8-point margin of victory (nationally) according to the New York Times/Siena College final poll the night before the election. As late as 2 a.m. on November 4th, Biden held a slight lead nationally, but it was a dead heat in the swing states, and those states were leaning toward Trump at the time. In the final analysis, Biden won those states after the mail-in ballots were counted. National polls did not publish survey results for mail-in voters versus election-day voters, yet the different modes of voting predicted the outcome. Psychologically, Trump supporters were primed to believe that ballots were “found and dumped” the day following the election.

It took 11 days for the results to be “finalized,” while the nation was in turmoil. Trump’s campaign used this time to start questioning the results, and his supporters believed that the election had been stolen; after all, Biden won by only 3.4 points nationally once mail-in votes were tallied, which only added fuel to the fire. Recounts were called for in swing states and counties on the basis of this information, since the margin of victory ranged from less than one percentage point to just over one, but the results remained essentially unchanged after the counts were completed.

So what does all this say about the ethics of public polling? As can be seen, the numbers that get reported are based on a number of assumptions, and any model is only as good as the assumptions on which it is based. Are the candidate’s policies (the economy, immigration, etc.) measured the same way in each poll? Was the positive herding phenomenon a factor in responses? Were media personnel diligent in pointing out MOEs as they reported polling results? In all these cases, one’s voting autonomy can be affected because the data’s veracity is in question.

But the problem isn’t necessarily with our ability to read the data as much as with our choice to circulate that polling data in the first place. Uncertainty produces fear in humans; we often alleviate this fear through prediction, and polls provide that predictive factor. That information, however, provides only a perception of the reality.

Should News Sites Have Paywalls?

photograph of partial newspaper headlines arranged in a stack


If you’ve read any online article produced by a reputable newspaper in the last ten years, you’ve inevitably bumped into a paywall. Even if you’ve managed to slip through the cracks, you’ve seen a glaring yellow box in the corner, reminding you that this is your last free article for the month. Maybe this gets you thinking about the ethics of pay-to-read journalism, so you seek out articles like Alex Pareene’s piece for The New Republic, only to find that an article about the dangers of paywalls is hidden behind yet another paywall.

If you do manage to read Pareene’s piece, you’ll find that he makes some good points about what he calls “the media wars,” the uphill battle between costly but fact-based journalism (like The New York Times, which erected its paywall back in 2011) and the endless stream of accessible, but factually untrue, stories churned out by the conservative media machine.

How has reputable journalism become so unprofitable? First off, big tech companies like Google and Facebook receive the majority of ad revenue from online content, as Alexis C. Madrigal explains. Local newspapers get lost in the bottomless sea of content, and are ultimately unable to compete. As a 2020 report from the University of North Carolina’s Hussman School of Journalism and Media showed, small news sources are disappearing at an alarming rate, creating “news deserts” in online spaces. Conservative propaganda machines, backed by a seemingly endless supply of money, swiftly filled that void, resulting in an increasingly homogeneous and right-leaning landscape of digital journalism.

As Pareene points out, putting up a paywall is “the only model that seems to work, in this environment, for funding particular kinds of journalism and commentary.” But if you do this, sites like Stormfront “will set up shop outside the walls, to entertain everyone unwilling to pay the toll.” Furthermore, “subscription models by definition self-select for an audience seeking high-quality news and exclude people who would still benefit from high-quality news but can’t or don’t want to pay for it.” In other words, paywalls only perpetuate the divide between fact-based journalism and free propaganda.

But at the same time, paywalls are necessary for papers that value honest reporting. Solid journalism requires training, time, and money, and those who dedicate their life to the pursuit of the truth must be compensated for their labor. Free content is so easy to produce because it doesn’t require much time or effort to disseminate a lie.

It’s a problem without an easy fix. We might just encourage everyone to buy a newspaper subscription, but as the post-pandemic economy worsens, that solution appears less and less viable. A 2019 report released by Reuters Institute for the Study of Journalism found that a measly sixteen percent of people in the United States (the majority of whom tended to be wealthy and well-educated to begin with) pay for their news online. When only the well-off can afford quality journalism, fake news inevitably flourishes.

As Pareene says, this situation is not just a failure on the part of media outlets, but “a democratic problem, in need of a democratic solution.” This sentiment is echoed by Victor Pickard, who argues in his 2019 book Democracy without Journalism? that “Without a viable news media system, democracy is reduced to an unattainable ideal.” As the coronavirus pandemic continues to alter the fabric of everyday life, and conspiracy theories play an increasingly important role in national politics, reliable journalism is more important than ever, and new models for generating profit will have to emerge if anything is to change.

Moral Authority in America

photograph of President Trump leaving podium at border wall event

Leaving office on January 20, a disgraced Donald Trump, enraged over the failure of his attempts to overturn the election result, chastised by his latest impeachment for incitement of insurrection, sulking at being denied a farewell military parade, will be able to gloat about one thing – Joe Biden’s inauguration crowd will be smaller than his.

Trump’s presidency began in January 2017 with the petulant and much-repeated lie that his was the biggest inaugural crowd ever, despite the evidence of photographs showing the size of the crowd attending Barack Obama’s inauguration clearly refuting the claim. This gave rise to Kellyanne Conway’s absurd remark that there are ‘alternative facts’, a phrase which encapsulates the Trump presidency.

This ridiculous lie, and many others like it that issued from the president and his administration over the last four years, seems petty and, compared to other false claims, laughable.

Things have taken a much darker turn since the November election, with Trump’s campaign to convince his supporters that the election was rigged culminating in the horrific events of January 6, when what should have been a routine process of certifying the electoral college vote turned, at Trump’s urging, into a violent and deadly assault on Congress by an angry mob of his supporters.

Following this failed insurrection, the FBI has continued to arrest (suspected) participants and the president has faced swift rebuke, with the House impeaching him. Disturbing reports have continued to surface about possible collusion from inside Congress, questions have been raised about the lack of preparedness of security forces, and the disparity between the anaemic response on Capitol Hill the day of the riot and the heavy-handed response to BLM protests earlier in the year has been widely noted. With security services concerned about possible sympathizers within the US armed forces and the Pentagon attempting to vet all armed personnel ahead of Biden’s Inauguration, America looks like a different place.

In the hours leading up to the inauguration of Joe Biden as America’s 46th President, the world watches anxiously, shocked by footage of Washington DC, that beacon of democracy, where streets are lined with soldiers in fatigues and government buildings are fenced off, heavily guarded by military vehicles.

This moment, in which America and the world hold their breath, is the culmination and intersection of many factors – Trump’s election fraud lies, his persistent years-long stoking and appropriation of people’s grievances, and the permissive normalization of white supremacy which has characterized his presidency, together with the inexplicable presence in the US of citizen militias legally armed to the teeth.

This period of American political and social history will no doubt keep analysts, historians, and pundits of all kinds busy for a long time.

Something we have heard a lot over the past weeks, from US lawmakers, political observers, and members of the public, is that these events have somehow changed America. Whether it is being called an insurrection, a domestic terror attack, a riot, or the storming of the Capitol, one thing is clear – something has happened to America that has deeply and indelibly affected the country’s claim to being a beacon of democracy. As we count the cost of these last four years (and especially the last two weeks of the Trump presidency), America’s moral authority has to be reassessed.

To talk about America’s moral authority as a free, liberal democracy, jingoistically, without acknowledging factors that complicate that claim – such as the deep vein of racism which runs through American history to the present as the legacy of slavery, and America’s interference in other countries’ political processes through its involvement in coups d’état in Latin America during the Cold War era – would be naïve.

But eschewing the simplistic patriotism which leads to sloganizing of America as ‘the greatest country on Earth’ – a cliché that has long irked many non-Americans – still leaves room for America to be justifiably proud of the central role held by liberal democratic values like freedom, equality, civil rights, justice, and the rule of law.

As the era of the Trump presidency (if not of Trumpism) closes, those values have taken a hit. Whether the wounds are fatal is yet to be seen, and depends on what happens next.

However, as the Trump presidency has marched and stumbled inexorably towards the events of January 6, some of the country’s moral authority has been lost.

Moral authority is a difficult, somewhat fuzzy concept. It is not the authority of power, but of example. A person, institution, idea, or indeed a society possesses moral authority when it has over time exemplified some important moral stance. Moral authority exemplifies ‘the good’ not in the shallows of moralism but in the deeper waters of virtue.

Donald Trump has never had any personal moral authority. He has power, and authoritative sway in the form of might, but he does not possess the kind of authority that comes in principle and by example. He has in fact always mistaken power for authority. Among the many instances that demonstrate this confusion is the tone of his attempt to persuade Georgia’s secretary of state to change the election results in early January. Trump has used his power to demand loyalty at all costs, and the costs have been high.

As he has tried more and more to wield his power with sound and fury, real authority has become more and more remote from him.

Trump has of course not single-handedly caused the current crisis in American social and political life that has seen white supremacist extremism move from the fringes into the mainstream, but he has ruthlessly used the resentments boiling away in American life to his own ends – to gain power and feed his insatiable ego. As we try to unpack this whole mess, the question of America’s moral authority will have to be wrested back from Trump’s – and we have yet to see what is left.

In her book Too Much and Never Enough, Trump’s niece Mary Trump writes:

“The fact is, Donald’s pathologies are so complex and his behaviours so often inexplicable that coming up with an accurate and comprehensive diagnosis would require a full battery of psychological and neuropsychological tests that he’ll never sit for.”

Diagnosing Trump is one thing; diagnosing the state of American democracy is another. I believe American democracy is resilient, and that it will win out against the dark forces not just at its door but well and truly inside the gates – but only if America is prepared to learn the lessons here.

Moral authority will not be preserved fully intact after these events, which may not yet be over; but neither will it be lost if we keep hold of the idea that authority is not about being faultless, and it is not about power, or strength in the form of power. Moral authority comes from the way a person, an institution, a country copes with its challenges, and how it responds to its own failings. For such authority to return, power and moralism will have to step back.

The Social Justice of Copyrights and “Public Domain Day”

photograph of Duke Ellington record

In addition to starting a new calendar year, January 1st marks “Public Domain Day” when copyright restrictions expire for a new batch of artworks, thereby allowing new audiences to view them more easily and new artists to adapt them without needing special permission from the copyright holder. This year, the United States saw certain works from Buster Keaton, Gertrude ‘Ma’ Rainey, Duke Ellington, Virginia Woolf, Agatha Christie, and more enter the public domain, including the classic jazz song “Sweet Georgia Brown” and F. Scott Fitzgerald’s famous book The Great Gatsby.

On the one hand, it might seem like increasing accessibility to cultural artifacts is simply obviously good; given how many high school English classrooms rely on battered copies of Fitzgerald’s story, for example, we can see immediate benefits (both aesthetic and practical) to making it easier and cheaper to purchase new books. But, taken to its logical conclusion, this kind of argument seems to suggest that it might always be necessary for artworks and artifacts to be so accessible. If Gatsby really is so valuable, and if it is so embedded within American culture that it is often called “the great American novel,” then why should Americans have had to pay to read it in the first place? Put differently: why is The Great Gatsby only just now entering the public domain?

In brief, the concept of a copyright offers two related basic protections:

  1. It ensures that artists are compensated for the work that they perform, in a way that
  2. Ensures that society will continually benefit from the work of new artists (who, following from (1), will feel free to pursue their art).

This is why, for example, the Constitution specifically grants Congress the power to “promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” Basically, in theory, copyrights work to level the social playing field a bit so that artists can (at least potentially) enjoy sufficient financial security to be able to practice their art. In effect, this makes copyrights a matter of social justice, since the people who benefit from these protections the most are precisely those from less-affluent or otherwise disadvantaged backgrounds. Although F. Scott Fitzgerald was not exactly socially disadvantaged, the person aiming to write the next great American novel could easily be discouraged from doing so without the hope of protected financial recompense for their labor offered by the copyright system. That is to say: aspiring writers might instead devote their energy to non-artistic ends if their Gatsby were simply to enter the public domain immediately without helping the writer to, say, buy groceries.

To illustrate, imagine two people who both have an interest in and a talent for music: Thomas is born to a wealthy family in Hollywood, while Susan grows up in a lower-middle-class family in the Ozarks. Even if copyrights didn’t exist, Thomas would still have the luxury to pursue his art to his heart’s content: his family’s wealth offers him a level of comfort that shields him from the risk of “wasting time” on a hobby with no guarantee of compensation. The same cannot be said so easily of Susan: while she might still have plenty of personal reasons for playing music on her own, if the realities of her social position, say, require her to work a full-time job in order to provide for basic necessities, then she would be taking on considerable risk if she instead chose to devote her time to her art without any real guarantee that her music could offer her a profitable career. In principle, copyright laws offer Susan the promise of some financial protection such that if her art ends up becoming profitable, then she will be able to uniquely enjoy the monetary fruits of her labor without other artists being allowed to copy her work (at least for a time); it’s true that Thomas gets this benefit too, but notice that it doesn’t really affect him — he already had the financial protection to do as he liked with his art in the first place.

So, philosophically speaking, copyrights serve as a mechanism to help underwrite the kind of equality that John Rawls talks about with his first principle of justice: in explaining his view of a free and fair, egalitarian society in A Theory of Justice, Rawls argues that “each person is to have an equal right to the most extensive total system of equal basic liberties compatible with a similar system of liberty for all.” Insofar as copyrights can serve to more fairly distribute opportunities to develop artistic skill and create artworks, they might be thought of as components of a just society. Without protections like this in place, it would become, in principle, all but impossible for anyone not born into privilege to pursue a career in the arts.

It’s worth noting that this is also why artists cannot copyright “generic concepts” or natural elements of normal life: a copyright is only valid for unique artistic creations. In mid-2020, the estate of Sir Arthur Conan Doyle sued Netflix over the depiction of Sherlock Holmes in its film Enola Holmes; while many of Doyle’s stories involving the character of Holmes have entered the public domain, they all tend to present Holmes as a generally cold and unemotional person. Because it is Doyle’s later stories (that are still under copyright) that see Holmes display more warmth and kindness, the caring demeanor the detective shows his younger sister in the Netflix film provoked the copyright-holder to sue. However, the generally-ridiculed lawsuit was settled out of court in December, presumably because “warmth and kindness” are hardly unique artistic creations.

But this also evidences the problem with the other side of copyright laws: artworks are importantly different from commodities or other products for sale. Fitzgerald and Doyle weren’t just “doing their jobs,” for example, when they wrote The Great Gatsby and the Sherlock Holmes stories: they were contributing to the cultural fabric of our society, to the artworks that we collectively use to texture our shared life with common points of understanding and reference. It might be argued that, just as “warmth and kindness” are ubiquitous to the point of being un-copyrightable, the cultural familiarity of a character like “Sherlock Holmes” is (or is becoming) similarly un-copyrightable.

Such is the argument for “Public Domain Day.” Only the most radical defenders of the public domain would argue that copyrights are, in principle, problematic: indeed, artists both need and deserve to be secure to create their art (consider also: how else might audiences expect to come by new art to appreciate?). However, over time, the sedimentation of individual artifacts into the cultural consciousness makes a unique property claim on them less clearly valid — particularly after the original artist’s death. Though details differ by country, it is common now for copyrights to extend (in general) for either fifty or seventy years after the death of the artist, allowing both the original creator and their dependents to uniquely benefit from the artwork for a limited amount of time before legal ownership of the artifact is distributed collectively.

Rawls also carves out a space for thinking about copyrights in this way with his Difference Principle, which allows some individuals to benefit more than others if that inequality also serves to benefit the least advantaged in society: presumably, promoting the further and continued creation of new artworks (as copyrights are designed to do) is just such a public benefit. But once the general welfare is no longer upheld by the existence of a copyright, it would be just for the copyright to dissolve — as indeed we see demonstrated and celebrated each year on Public Domain Day.

(A crucial note: you may have noticed my repeated hedging in previous paragraphs as I have defended copyright law “in principle” or “philosophically.” This is because the actual practice of copyright law in the United States is fraught with problematic and unfair issues that Rawlsian principles of justice would struggle to support. Indeed, the extension of copyright terms seen in the last few decades, the corporate interests apparently motivating such legislation, and other threats to a shrinking public domain (as well as unique questions posed by new forms of art and media) are all issues that deserve both philosophical and legislative attention in a way that is far more complicated than the simple picture I’ve sketched in this short article!)

Still, copyrights play an important part for anyone looking to protect the financial interests they have bound up in their art; for the rest of us, Public Domain Day grants us the green light to continue bearing back into the past to bring it forward into today.

Hilaria Baldwin and Fake Identities

photograph of Alec and Hilaria Baldwin at event

For years, tabloids, newspapers, and even apparently her husband, reported that Hilaria Baldwin was from Spain. The perception of Baldwin’s Spanish identity shattered in late December, when an anonymous Twitter user outed her as a grifter. A frenzy to dig up facts about Baldwin’s past ensued, and she opted for a New York Times interview to clear the air. Baldwin claimed that she never intended to mislead the press or the public about her nationality. However, her occasional accent, previous uncorrected biographies, and statements made on a recent podcast have given many the impression that she did indeed desire to be perceived as an immigrant from Spain.

Was Baldwin’s implication that she was Hispanic comparable to trans-racial scandals? Could her impersonation be considered cultural appropriation?

Baldwin’s impersonation of a native Spaniard has been criticized as unethical due to the underlying implication that she is a Hispanic immigrant. Onlookers have treated Baldwin’s Spanish self-identification as comparable to the behavior of those who self-identify as a different race. Is it fair, for example, to compare Hilaria Baldwin to Rachel Dolezal? While not the first instance, the exposure of the ex-NAACP chapter president Rachel Dolezal brought trans-racial topics into the modern consciousness. Some have compared Baldwin’s trans-national identity to Dolezal’s trans-racial identity. To others, Baldwin’s Spanish identity may have played on ethnicity and language but should not be seen as comparable to trans-racial scandals. (Baldwin has clarified that she is white, and many native Spaniards are also white.)

If Baldwin is not claiming to be a different race, why do so many people find her self-proclaimed Spanish identity and allegedly fake accent racially dishonest and unethical? Baldwin’s accented English and Spanish self-identification have effectively mimicked a Hispanic identity, which, according to the U.S. Census Bureau, denotes “a person of Cuban, Mexican, Puerto Rican, South or Central American, or other Spanish culture or origin regardless of race.” Discriminatory attitudes toward Hispanic people have permeated U.S. culture for hundreds of years. Much of this oppression was directed in the form of racism toward indigenous people and Latin Americans who spoke Spanish. Hispanophobia, or the “fear, distrust of, aversion to, hatred of, or discrimination against the Spanish language, Hispanic people, and/or Hispanic culture” is well-documented in the U.S. Though today 65% of Hispanic people in America are white, the notion that Hispanic denotes race is still common. Experts, such as Dr. Jhonni Carr, have contended that Hispanophobia is less about language, and more about “the association of language with race, with socioeconomic status, and a lot of times with cultural values.” One modern example of associating Hispanic identity with race occurred in February 2020, when the Academy of Motion Picture Arts and Sciences mistakenly labeled Spaniard Antonio Banderas as a person of color. It should also be noted that many self-identifying Hispanic people consider their Hispanic background as part of their racial background.

The association of the Hispanic identity with race is commonly pointed to as the reason that Baldwin’s false identity is unethical. The woman who first called out Baldwin for faking her Spanish identity did so through an anonymous Twitter account, @leniebriscoe. “Briscoe” said in a recent interview with The Daily Mail that Baldwin’s impersonation was “offensive and wrong.” Baldwin claimed that she was once stereotyped as a nanny to her children after speaking Spanish in a public park. Briscoe argued that this was offensive as that phenomenon “is something that happens to moms of color who actually have an accent.” In essence, many of Baldwin’s critics have a problem with her appropriation of the struggles which come from a Hispanic identity. Baldwin has shown that she can switch between accents depending on her mood. Baldwin’s ability to “opt out” of a perceived Hispanic accent is an indicator of cultural appropriation since she is able to walk away from a cultural identity when it no longer suits her.

So how should we assess the morality of Baldwin’s Spanish cultural impersonation? It might be good to start by examining her potential profits, as well as the signs of cultural appropriation or dishonesty.

Did Baldwin use her false identity as a Spaniard to profit? Revisiting the comparison to Rachel Dolezal, Baldwin’s gains are hard to pin down. While Dolezal clearly used her false racial identity to pursue social and career opportunities within the NAACP, it is unclear what Baldwin has specifically sought or gained from her cultural identity. As an influencer, Baldwin relied upon her identity to build a community of followers which she could then monetize through advertising. Her self-identification with the Spanish language and culture could have contributed to the followers, popularity, and wealth she has gained, but it is hard to decipher exactly how much her Spanish identity financially and socially enriched her. Actions which are more clearly immoral might include those in which she took opportunities only afforded to her due to her perceived status as a Hispanic immigrant, such as her feature on ¡Hola! Magazine, among others.

If we assume that Baldwin did not gain anything from pretending to be a native Spaniard, was her decision to adopt a different cultural identity inherently wrong? Adopting a different cultural identity for personal gain might be considered wrong as both a form of cultural appropriation and an inherently dishonest act. By taking on the accent, language, and culture of native Spaniards, Baldwin arguably committed cultural appropriation. Regardless of her intent, Baldwin might still have had a negative impact on native Spaniards or other Hispanic people by claiming the culture and ethnicity as her own. One Twitter user pointed out that Baldwin’s use of a fake accent is particularly egregious due to the fact that many Hispanic people are denied opportunities because of their accents, and studies support the contention that accent perception can have a significant influence on social and socioeconomic opportunities.

Even if Baldwin’s adoption of the Spanish identity is not cultural appropriation, it might still be considered dishonest, depending on how one defines cultural identity. If Baldwin intended for others to believe she was born in Spain and immigrated to the U.S., she was clearly acting dishonestly. However, if she simply intended to imply that a large part of her cultural background shares a loose association with Spain, her dishonesty becomes less clear. Defining one’s cultural identity is a deeply personal matter, and Baldwin has claimed to have grown up in both American and Spanish cultures. Though some have implied that Baldwin’s only ties to Spanish culture are from vacationing with her family, her father has made clear that Spanish culture has influenced his identity for the better part of 30 years. Growing up with a parent deeply engrossed in the Spanish language and culture likely had an impact on her identity. Additionally, her parents have lived in Spain for nearly 10 years, casting more doubt onto the assertion that Baldwin’s ties to Spanish culture are clearly dishonest. Ultimately, the case against Baldwin on the grounds of cultural dishonesty alone is difficult to argue. Detractors who criticize Baldwin’s actions face difficulty in morally distinguishing her actions from those of immigrants and expatriates, especially considering her various geographical ties to the region.

Unlike others caught and “canceled” for faking their identities, Baldwin has refused to admit that she is not culturally Spanish, or that she has done anything wrong. After her New York Times interview, Baldwin uploaded a video on Instagram where she earnestly stated, “I’m proud that I speak two languages, and I’m proud that I have two cultures… I’m proud that my family is that way. And I don’t really think that that’s a negative thing.” Despite weeks of media attention, Baldwin clearly does not see, or chooses not to see, why so many see her Spanish impersonation as potentially wrong. It is likely she will exhibit a similar lack of understanding if anyone ever decides to challenge her yoga business.

Insurrection at the Capitol: Socratic Lessons on Rhetoric and Truth

photograph of Capitol building looking up from below

In his 1877 essay The Ethics of Belief, philosopher W.K. Clifford told the story of a religiously divided community. Some members of the dominant religious group formed vicious beliefs about their rivals and started to spread those beliefs far and wide. The rumor was that the rival religious group stole children away from their parents in the dead of night for the purposes of indoctrinating them to accept all sorts of problematic religious doctrines. These rumors worked the local community into a fervor. The livelihoods and professional reputations of members of the rival group were irreparably harmed as a result of the accusations. When a committee was formed to look into the allegations, it became clear that not only were the accusations false, but the evidence that they were untrue would have been quite easy to come by had those spreading the rumors bothered to look. The consequences for the agitating group were harsh. They were viewed by their society “not only as persons whose judgment was to be distrusted, but also as no longer to be counted honourable men.” For Clifford, the explanation for why these men were rightly viewed as dishonorable did not have to do with what their belief was, but how they had obtained it. He points out that, “[t]heir sincere convictions, instead of being honestly earned by patient inquiring, were stolen by listening to the voice of prejudice and passion.”

The January 6th attack on the U.S. Capitol Building was motivated, at least in part, by a wide range of false beliefs. Some participants were believers in the QAnon conspiracy theory, which maintains that the Democratic party, led by Joe Biden, is a shell for a massive ring of pedophiles and Satanists who consume the flesh of babies. Many of these people believe that the attack on the Capitol was a precursor to “The Storm” — a day of reckoning on which all of Trump’s political foes will be executed and Trump, sent by God to perform this task, will follow through on his promise to “Make America Great Again” by ridding the world of liberals. A conspiracy-based belief that all rioters seemed to share in common was that the presidential election was massively fraudulent, that Democrats rigged the election in favor of Biden, and that the election had been “stolen” from the rightful winner, Donald Trump. They believed and continue to believe this despite the fact that the election has been adjudicated in the courts over 60 times, and no judge concluded that there was any evidence of voter fraud whatsoever. The basis of this commonly held belief is a series of lies Trump and his acolytes have been telling the public since November, when the results of the election became clear.

On one level, the events of January 6th are attributable to a lack of epistemic virtue on the part of the participants. The insurrection featured confirmation bias on center stage. There is no credible evidence for any of the claims that this group of people believe. Nevertheless, they are inclined to believe the things that they believe because these conspiracy theories are consistent with the beliefs and values that they had before any of this happened. When we play Monday morning quarterback (if, indeed, there ever is a Monday morning), we might conclude that the only productive path forward is to educate a citizenry that has higher epistemic standards; that is, we should do what we can to produce a citizenry that, collectively, has a more finely tuned nonsense-detector and is capable of distinguishing good evidence from bad. We should cultivate communities that have high levels of technological literacy, in which people know that the fact that an idea pops up on a YouTube video or a Twitter feed doesn’t make it true.

That said, placing the blame for false beliefs too firmly on the shoulders of those who hold them may be misguided. Such an approach assumes doxastic voluntarism — the idea that we have control over what it is that we believe. If a person, even the smartest person, is living in an epistemic environment in which they are perpetually exposed to brainwashing and propaganda, it might actually be pretty surprising if they didn’t come to believe what they are being actively coerced into believing.

This is not a new problem — in fact, it’s as old as philosophy itself. In many of Plato’s dialogues, Socrates — Plato’s teacher and the main character in his work — is quite critical of those who teach, study, and practice rhetoric. It was a common practice at that time for fathers to send their sons to study rhetoric with a Sophist, a person who was skilled in the ability to “make the weaker argument the stronger.” Students who undertook this course of study learned the art of persuasion. Having these skills makes a person more likely to get what they want in business, in the courts, and in social life. Strong rhetorical skills reliably lead to power.

It may appear as if, when Athenian fathers sent their children to study rhetoric, they were sending them to learn to construct strong arguments. This was not the case. Arguments raised by rhetoricians need not be strong in the logical sense — they need not have premises that support conclusions — they need only to be persuasive. As the Sophist Gorgias puts it in Plato’s dialogue of the same name, “For the rhetorician can speak against all men and upon any subject. In short, he can persuade the multitude better than any other man of anything which he pleases.” A strong rhetorician, faced with an audience already primed to believe conspiracy theories and propaganda, can manipulate those inclinations with great flourish and toward great danger.

So, on another level, perhaps we should place the blame for the insurrection firmly at the feet of the politicians who knowingly used the rhetoric of conspiracy theories to gain power and popularity with their vulnerable constituents. These politicians knew they were playing with fire. Terrorist attacks perpetrated by right-wing extremists like Timothy McVeigh are part of our country’s collective consciousness. Yet they poked the bear anyway, over and over, benefiting from doing so in the form of both money and power. These politicians fuel the fire of ignorance about more topics than voter fraud or Satanic pedophile rings; they also use rhetoric to manipulate people on topics like anthropogenic climate change and the seriousness of COVID-19. As Socrates says, “The rhetorician need not know the truth about things, he has only to discover some way of persuading the ignorant that he has more knowledge than those who know.” It may do no harm and may actually do some real good to cultivate a citizenry that has strong critical thinking skills, but we’ll never fix the problem until we get rid of politicians who use rhetorical tools to manipulate. We have to start holding them accountable.

Near the end of the Gorgias, Socrates debates with Callicles, who argues that a good life is a life in which a person pursues their own pleasure, holding nothing back. In a Nietzschean fashion, he argues that restrictions on power are just social conventions used by the weak masses to keep the strong in check. He insists that the strong should rightly rule over the weak. Using rhetoric to manipulate others is just one way of pursuing pleasure through the use of one’s strengths. The strong should not be prevented from pursuing their best life.

Socrates has a different view of what constitutes the good life. If a person goes searching for this kind of life, they should search after truth and justice. They shouldn’t study manipulation; they should study philosophy. Our goal should never be to make the weaker argument the stronger; we should commit to seeking out the stronger argument to begin with.

If history is any indication, this suggestion is nothing but doe-eyed optimism. Callicles would call it childish. He thought that studying philosophy was noble in youth, but that adult human beings should be more realistic about human nature. As a practical matter, perhaps he was right — after all, the Athenians grew tired of Socrates’ influence on the youth of Athens and sentenced him to die by drinking hemlock. As a matter of principle, though, Socrates is the martyr for the life lived in pursuit of truth and justice, and we should all strive to live that way ourselves and to do what we can to hold our politicians to the same standard. After all, there was a reason that politicians in Athens were afraid of Socrates.

Time to Let Up or Double Down?

photograph of woman with face mask sitting in large, empty street dining area

Rollout of COVID-19 vaccines represents a significant step in combating the pandemic, one that will likely alter people’s behavior in response to this global health crisis in significant fashion. With a vaccine on the horizon, risk assessment can change in two very different ways:

On the one hand, it can alter the perceived risk associated with individual behaviors. For instance, with a risky behavior, the prospect of safety can reduce the perception of the associated risk. Here we could think of jumping out of an airplane, which seems less risky because there is a parachute. With a vaccine in circulation, taking one’s chances with exposure can seem a more reasonable thing to do. Vaccination will (hopefully) mean fewer people contracting the virus, lowering the overall impact on society. Risk then gets assessed in short-term frames: if the risks of exposure must be endured for only 4 more months rather than 12, one could think that they might as well lighten restrictions.

On the other hand, the prospect of a vaccine can alter the way we assess risk in a long-term context. When fighting a disease with a radical course of treatment, having an indeterminate time frame versus a given length of time to “push through” makes a great deal of difference. When the end point is unclear, it makes sense to consider harsh conditions unrealistic or unreasonable. In less dire cases, say a highly demanding and stressful stretch at work, the expected length of time makes a significant difference in deliberation. Altering the long-term structure of your life around such demands can seem less than feasible, and compromises in meeting those demands can make a great deal of sense. It can make less sense, on the other hand, if the heightened demands last only for a short period of time and come with an important payoff.

With a vaccine in sight, much rests on how the adjustments to daily life given the risk of exposure are reassessed. One reason many give for not complying with state restrictions is that the virus is just something we “have to learn to live with,” or that it is a new way of life. Treating the vaccine as a parachute, as a dialing down of the harm associated with individual actions that put others at risk of contracting the virus, increases danger until the vaccine can come into effect. Letting up on the adjustments to behavior continues to do all the harms that have been associated with the spread of the virus: the deaths, the long-term effects of contracting the virus, the impact on our healthcare system, the systemic impact on the most marginalized populations, the destruction of our economy due to essential workers becoming ill, etc. These effects will not stop simply because of the prospect of a vaccine. The goals remain the same as they have been since February.

With the prospect of improving the fight against the pandemic, the reasonable choice could actually be to double down, because we lose one reason for resisting the restrictions. The counterargument that long-term restrictions will harm the economy, undermine the value of daily life, and so on has been weakened considerably, as we are now facing a short-term sacrifice for a long-term reward. But until inoculation reaches critical mass, we can’t point to our parachute to justify a refusal to exert effort in pursuit of our shared end goal.

How Should One Call It like It Is?

photograph of threatening protestor group with gas masks

This week, in response to the Capitol attack, many have urged that we “call the event what it is.” Given the events which took place in Washington this week, perhaps the most prominent moral question facing everyone is how one should describe something. This was the case even before the 6th, when the details of Trump’s call to the Georgia Secretary of State became public and it became known that he wanted to “find” votes. Is that an attempt to intimidate a public official into overturning an election, or is it merely the innocent effort of a person to rectify a perceived slight? Following the 6th, this type of question gained new importance. How should we describe such an event? Was it an attempted coup? Was it a protest? An insurrection? Domestic terrorism? How do we describe the day? Did the president hold a rally with heated rhetoric that got a crowd out of control, or did he unleash a mob on Congress with the intention of preventing them from following the Constitution? Answering a question like “how should we describe such events?” reveals just how complicated a moral problem language can be.

In his account of inquiry, philosopher John Dewey argued that the nature of any judgment is to be able to link some selected content (a subject) to some selected quality of characterization (a predicate). His central point is that determining how to characterize the predicate and how to characterize the subject is the work of inquiry; neither is simply given to us in advance and our own inquiries require us to appraise how we should characterize a subject and predicate in relationship to each other. Moral inquiry is no different, and thus whether we characterize the people who invaded the Capitol as protestors or insurrectionists depends on what appraisals we make about what information is relevant in the course of moral inquiry. Of course, one of the means that society has at its disposal to do this work is the legal system.

The question about what legally took place is complicated. For example, does the storming of the Capitol constitute domestic terrorism? Despite some, including President-elect Biden, calling the act sedition, in reality many of those who participated may only be legally guilty of trespassing (though there may be a stronger case against some particular individuals who may be charged with seditious conspiracy and assaulting police). Even for the president, and the many in Congress who spread lies about the election and stoked the crowd before the riot, it isn’t abundantly clear they can be held legally responsible. Legally speaking, were the president and his supporters in Congress only practicing their First Amendment right to free speech, or were they participating in an attempted coup? Again, legally it is complicated, with many precedents setting a high bar to prove such charges in court.

But a legal determination is only one way of evaluating the situation. For example, in addressing whether the attack constitutes domestic terrorism, a recent Vox article points out, “It’s useful to think about terrorism as three different things: a tactic, a legal term, and a political label.” In each case the application of the term requires paying attention to different contexts and points of interest. Morally speaking, we will each have to determine how we believe the events of this week should be characterized. But, as a moral declaration how do we make such determinations? Outside of mere political rhetoric, when does it become appropriate to label someone a “fascist”? At what point does a protest become a “coup attempt”? Should we call the people who stormed the Capitol “terrorists,” “insurrectionists,” “protestors,” or as others have called them, “patriots”? Were Trump and his supporters merely expressing grievances over an election that many of them genuinely believe was fraudulent?

One way of trying to come to a justified determination is to compare the situation to similar examples from the past. Case-based reasoning, or casuistry, may be helpful in such situations because it allows us to compare this case to other cases to discover commonality. But what cases should one choose to compare it with? For example, is what happened on the 6th similar to Napoleon storming the French legislature? Napoleon arranged a special session and used bribery, propaganda, and intimidation to get the legislature to put him in charge and then cleared them out by force when they refused to step aside. Or is this case more similar to the crisis in Bolivia? International scholars have been divided over whether that was a coup or a popular uprising following assertions of a rigged election.

Unfortunately, such reasoning is problematic because it all depends on which elements we choose to emphasize and which similarities and differences we think most relevant. Do we focus on the fact that many of these people were armed? Do we focus on the political rhetoric compared to other coups? Does it matter whether the crowd had a coherent plan? It’s worth pointing out that Republican supporters and Trump supporters won’t necessarily make the same connections. 68% of Republicans do not believe the storming of the Capitol was a threat to democracy. 45% of Republicans even approve of the storming of the Capitol. As YouGov points out, “the partisan difference in support could be down to differing perceptions of the nature of the protests.” Thus, comparing this case to others is problematic because cases like this do not come with a label, making it easy to draw comparisons that are politically motivated and logically circular rather than morally justified. As G.E. Moore noted, “casuistry is the goal of ethical investigation. It cannot be safely attempted at the beginning of our studies, but only at the end.”

What alternative is there to comparing cases? One could assert a principle stating necessary and sufficient conditions. For example, if X acts in a way that encourages or causes the government to be unable to fulfill its functions, X is engaging in a coup. The problem with such principles, just like casuistry, is the temptation to engage in circular reasoning. One must describe the situation in just such a way for the principle to apply. Perhaps the answer is not to focus on what happened, but on the threat that still may exist, and to take an inductive risk strategy. Even if the benefit of historical hindsight may one day lead us to say otherwise, we may be justified in asserting that the attack was an attempted coup because of the extremely high risks of getting it wrong. This requires us to be forward-looking to future dangers rather than focusing on past cases.

In other words, given the possible grave threat it may be morally justified to potentially overreact to a possibly false belief in order to prevent something bad from happening. By the same token, a Trump supporter who believes that the election was rigged (but is ultimately committed to democracy despite their mistaken beliefs) would be in a worse position for underreacting to an attempted coup if they are wrong about the election and about Trump’s intentions. Such judgments require a careful appraisal of available evidence compared to future possible risks of action or inaction. However, given that the population overall does not see this situation in the same light, the need for having clear reasons, standards, and justifications which can be understood and appreciated by all sides becomes all the more important.

Trump and the Dangers of Social Media

photograph of President Trump's twitter bio displayed on tablet

In the era of Trump, social media has been both the medium through which political opinions are disseminated and a subject of political controversy itself. Every new incendiary tweet feeds into another circular discussion about the role sites like Twitter and Facebook should have in political discourse, and the recent attack on the U.S. Capitol by right-wing terrorists is no different. In what NPR described as “the most sweeping punishment any major social media company has ever taken against Trump,” Twitter has banned the president from using their platform. Not long before Twitter’s announcement, Facebook banned him as well, and now Parler, the conservative alternative to Twitter, has been removed from the app store by Apple.

While these companies are certainly justified in their desire to prevent further violence, is this all too little, too late? Much in the same way that members of the current administration have come under fire for resigning with only two weeks left in office, and not earlier, it seems that social media sites could have acted sooner to squash disinformation and radical coordination, potentially averting acts of domestic terror like this one.

At the same time, there isn’t a simple way to cleanse social media sites of white supremacist violence; white supremacy is insidious and often very difficult to detect through an algorithm. This places social media sites in an unwinnable situation: if you allow QAnon conspiracy theories to flourish unchecked, then you end up with a wide base of xenophobic militants with a deep hatred for the left. But if you force conspiracy theorists off your site, they either migrate to new, more accommodating platforms (like Parler), or resort to an ever-evolving lexicon of dog-whistles that are much harder to keep track of.

Furthermore, banning Trump supporters from social media sites only feeds into their imagined oppression; what they view as “censorship” (broad social condemnation for racist or simply untrue opinions) only serves as proof that their First Amendment rights are being trampled upon. This view, of course, ignores the fact that the First Amendment is something the government upholds, not private companies, which Trump-appointee Justice Kavanaugh affirmed in the Supreme Court in 2019. But much in the same way that the Confederacy’s romantic appeal relies on its defeat, right-wing pundits who are banned from tweeting might become martyrs for their base, adding more fuel to the fire of their cause. As David Graham points out, that process has already begun; insurrectionists are claiming the status of victims, and even Republican politicians who condemn the violence in one moment tacitly validate the rage of conspiracy theorists in another.

The ethical dilemma faced by social media sites at this watershed moment encompasses more than just politics. It also encompasses the idea of truth itself. As Andrew Marantz explained in The New Yorker,

“For more than five years now, a complacent chorus of politicians and talking heads has advised us to ignore Trump’s tweets. They were just words, after all. Twitter is not real life. Sticks and stones may break our bones, but Trump’s lies and insults and white-supremacist propaganda and snarling provocations would never hurt us.” But, Marantz goes on, “The words of a President matter. Trump’s tweets have always been consequential, just as all of our online excrescences are consequential—not because they are always noble or wise or true but for the opposite reason. What we say, online and offline, affects what we believe and what we do—in other words, who we are.”

We have to rise above our irony and detachment, and understand as a nation that language is not divorced from reality. Conspiracy theories, which depend in large part on language games and fantasy, must be addressed to prevent further violence, and only an openness to truth can help us move beyond them.

Accountability, Negligence, and Bad Faith

photograph looking up at US Capitol Building

The wheels of justice are turning. As I write this, there are a number of movements afoot — from D.C. police continuing to arrest agitators and insurrectionists on possible sedition charges to Representative Ilhan Omar drawing up articles of impeachment — designed to separate the guilty from the guiltier and assign blame in appropriate proportions. And there is a great deal of blame to go around. Starting with the president, whose inciting words just blocks away steered a mob to breach the Capitol intending to effect its political will, these are culpable parties. But we might consider others. Those members of Congress, like Senators Josh Hawley and Ted Cruz, willing to lend the considerable credibility of their office to unsupported (debunked and repeatedly dismissed) accusations of a stolen election, surely share some portion of the blame. To hold these parties to account, Representative Cori Bush is introducing legislation to investigate and potentially remove those members of Congress responsible for “inciting this domestic terror attack.” In the meantime, the calls for Senators Cruz and Hawley to resign are only growing louder.

But what are these lawmakers really guilty of? On what grounds could these public, elected officials possibly be threatened with removal from office? To hear them tell it, they were merely responding to the concerns of their constituents who remain convinced that the election was stolen, robbing them of their God-given right to be self-governing. They are then not enemies of democracy, but its last true defenders.

Never mind that people’s belief in election malfeasance is not evidence of election malfeasance (especially when that belief is the product of misinformation disseminated by the very same “defenders”); this explanation fails to appreciate the design of representative democracy. Ours is not a direct democracy; citizens are not called upon to deliver their own preferences on each individual question of policy. Instead, we elect public servants that might better represent our collective interests than any one individual might herself. The hope is that this one representative might be better positioned than the average citizen to engage in the business of governing. Rather than pursuing any and all of their constituents’ interests come what may, these lawmakers are tasked with balancing these competing interests against fealty to the republic, the Constitution, and the rule of law. In the end, these officials are people who can, and should, know better. As Senator Mitt Romney argued Wednesday, “The best way we can show respect for the voters who are upset is by telling them the truth”: that there is no evidence that the results of the presidential election are in error, and that Joe Biden won the election. “That is the burden, and the duty, of leadership.”

Perhaps, then, these legislators were merely negligent, inadequately discharging their duties of office and ultimately unable to anticipate the outcome of things beyond their control. (Who could have predicted that paying lip service to various conspiracy theories would be enough to give them the weight of reality?) And so when words finally became deeds, the violence displayed at the Capitol was enough to make several Congressmembers reconsider their position. It was fine to continue to throw sand in the gears as a political statement, but now, faced with such obvious and violent consequences (as well as the attendant political blowback), even Senator Lindsey Graham was willing to say “enough is enough.”

But negligence is a slippery thing to pin down; it rests on a contradiction: that one can simultaneously be instrumental yet removed, responsible but unaware. Many might agree that these lawmakers’ actions betray a failure to exercise due care. These senators and representatives underestimated risk, ignored unwanted or unintended consequences, and failed to appreciate the cultural, societal, and political moment. But establishing that these members of Congress acted negligently would require demonstrating that any other reasonable person placed in their shoes would have recognized the possible danger. (And “reasonableness” has proven notoriously difficult to define.)

For these reasons, demonstrating negligence would seem a tall order, but this charge also doesn’t quite fit the deed. The true criticism of these lawmakers’ actions has to do with intention, not merely the consequence. Many of these public officials not only failed to take due care in discharging their duties of office and serving the public’s interests, but were also acting in bad faith when doing so. Theirs was not merely a dereliction of duty, but a failure borne of dishonest dealings and duplicitous intent. The move to object to the Electoral College certification, for example, was never intended to succeed. Even Senate Majority Leader Mitch McConnell was willing to condemn the cowardice and self-serving aggrandizement involved in making a “harmless protest gesture while relying on others to do the right thing.” Similarly, the vote led Senator Mitt Romney to question whether these politicians might “weigh [their] own political fortunes more heavily than [they] weigh the strength of our republic, the strength of our democracy, and the cause of freedom.”

In the end, the use made of folks’ willingness to believe — to believe in a deep-state plot and broad-daylight power grab — all for private political gain, pushes us past a simple charge of negligence. The game these politicians were playing undermines any claim to be caught unawares. The fault lies with choice, not ignorance. A calculated gamble was made — to try to gain political points, retain voter support, and fill the re-election coffers by continuing to cast doubt on the election results and build on some constituents’ wildest hopes. The problem isn’t merely with the outcome, it’s with the willingness to trust that private gain outweighs public cost. But as Senator Romney asks, “What’s the weight of personal acclaim compared to the weight of conscience?”

As it stands, there are far too many guilty parties, and not enough blame to go around.

The Capitol Coup and the Rhetoric of Essentialist Exceptionalism

photograph of a burning tire with the feet of a crowd of protestors in the background

On January 6, 2021, a mob of Trump supporters stormed the U.S. Capitol, disrupting Congress’s certification of President-elect Joe Biden’s electoral college win for a few hours. Law enforcement deployed tear gas in the Capitol Rotunda, and at least four people died; one woman was shot and killed. It was a deeply depressing spectacle that underscored two facts: that millions of Americans live in an alternative reality in which President Trump, the nemesis of shadowy, rootless “globalists” and other vaguely Semitic “swamp-dwellers,” won a second term in a landslide; and that Trump himself, pathologically fixated on his electoral loss, will gladly incite violence against his own government in order to cling to power.

Even as it was happening, media commentators registered their bewilderment that something like this was happening here, and not some other place — Iraq, maybe, or perhaps (as CNN’s Jake Tapper imagined) Bogotá. The by now well-worn cliché that it was something that might happen in a “banana republic” was trotted out. Echoing these sentiments, in his remarks on that day, President-elect Biden said that “the scenes of chaos at the Capitol do not reflect the true America.”

There is, I think, a deep connection between the commentators’ surprise and Biden’s rhetoric. Many people in this country seem to subscribe to a metaphysics of America, or of American political culture, that is essentialist in that it says that there is something that the culture essentially or truly is — that there are qualities which define America and without which America as we know it would not exist. Usually, the outlines of this conception of America’s essence are drawn by exclusion: by saying what America is not. Thus, Biden tells us that the “true” America is not whatever-it-is that the Capitol insurrection represents — probably that it is not violent or lawless. Other invocations of America’s essence have claimed that America is essentially liberal or conservative, or essentially tolerant. In general, we can say that American essentialism defines what America is in terms of what the one doing the defining thinks it ought to be. Frequently combined with this claim about America’s essence is the idea that this essence is exceptional; that America has a unique essence that distinguishes it from other countries. Thus, those who hold to American essentialism often define America not only by what it is not, but they suggest that what it is not is what other countries are. 

Put these two beliefs together — that America has an essence, and that this essence is unique — and you can readily explain why it should seem shocking or unbelievable that something like the Capitol coup occurred. If America is essentially not what, say, Iraq is — violent, lawless, prone to coup attempts — then what happened at the Capitol is almost unthinkable.

But American essentialist exceptionalism is doubly untrue. First, even if America’s political culture had an essence, it would be implausible to claim that this essence is peaceful or law-abiding. Since its founding, America has been the site of extreme political violence. Periods of relative peace have, if anything, been the exception, not the rule. Second, it is simply implausible to think that political cultures have essences. What makes this particular political culture American is simply that it is composed of the political beliefs and practices of citizens of the United States, a particular political entity. Those beliefs and practices can change (and have changed) dramatically over time and yet remain American.

Defenders of the rhetoric of essentialist exceptionalism might call on Plato or Government-House utilitarians for support, arguing that even if untrue it is a “noble lie” that helps bind the political community together. On this view, saying that America is essentially good motivates its citizens to love it, thus making it more likely that they will help preserve it across time.

However, we must balance this benefit against the costs, which in my view are considerable. First, the exceptionalist aspect of American essentialist exceptionalism encourages Americans to view the political cultures and systems of other countries with unthinking disdain. That disdain was on full display in commentators’ casual invocation of Iraq, Ukraine, and other countries as examples of places where a Capitol coup would somehow be more appropriate. In fact, Americans likely have much to learn from the struggles of other democracies.

Second, the essentialist aspect of American essentialist exceptionalism may encourage complacency about America’s prospects: if America is essentially democratic, non-violent, tolerant, law-abiding, and so on, then the acts of individual political actors seem to matter less in the scheme of things — it just can’t happen here. Put another way: if in some sense we already are what we ought to be, then what’s the point in struggling to achieve our ideals? It is perhaps just this sort of complacency that was at play in the acts of the Republican congressmen and -women who chose to contest Biden’s electoral win, or the failure of the Capitol police to anticipate the possibility that Trump supporters might assault the building. Now the costs of that complacency are available for all to witness.

Third, the idea that there is a true America can easily be hijacked to serve nefarious political ends. Instead of arguing that American political culture is essentially tolerant, liberal, and democratic, some on the far right believe that it is essentially white, Christian, and patriarchal. Thus, the belief in American essentialism can motivate the exclusion of many members of actual American society as fundamentally “alien” to the culture.

The best course, then, is to jettison both our essentialism and our exceptionalism. There simply is no “true” America, and there are no qualities, good or bad, which define our political culture for all time. There are only the beliefs and practices of Americans in their roles as citizens, jurors, office-holders, and the like; and whether these beliefs and practices are, on the whole, good or bad depends upon the choices of each and all of us.

On Some Philosophical Roots of Pixar’s “Soul”

image of "Soul" logo

[SPOILER WARNING: This article discusses several plot details of Disney and Pixar’s new movie “Soul.”]
On December 25th, the 23rd feature film from Pixar Animation Studios was released on the Disney+ streaming platform to great popular acclaim; after nearly a week, “Soul” has steadily retained a 90% score at Rotten Tomatoes with over 2600 audience reviews. Although it has garnered some criticism over at least one of its casting choices, the film’s presentation of a man struggling to come to terms with his life choices (while simultaneously trying to convince a skeptical spirit of life’s value) has resonated with viewers. And, as is often the case with Pixar products, there is plenty of philosophical material to unpack.
Beginning with the death of long-aspiring jazz musician Joe Gardner, much of “Soul” portrays a metaphysical universe that, while cartoonish, might look familiar to anyone who has taken a class on ancient Greek philosophy. According to Plato, something like a spiritual world (the world of the Forms) is more fundamental to reality than the familiar physical world and all human souls exist there before they enter human bodies; Joe Gardner’s discovery of the Great Before, where nascent souls are formed prior to being born on Earth, functions in a similar kind of way to Plato’s sense of a “pre-existence” to life on Earth. However, Plato’s Forms have little to do with a soul “finding their spark” to get their pass to Earth; the character of 22 would need a mentor, in Plato’s perspective, after birth (to be able to remember their innate knowledge of reality, as described in the Meno dialogue), not before it (as in the movie — although Plato does include something similar to “Soul”’s instructors in the Myth of Er at the end of the Republic). “Soul” never explains what happens when a person’s spirit enters the Great Beyond (but its depiction is ominously reminiscent of a bug-zapping lamp), so it’s hard to compare its sense of the afterlife to anything, but at least some Christian traditions (most notably, those stemming from the third century theologian Origen and the 19th century revolutionary Joseph Smith) whole-heartedly embrace a literal sort of pre-existence for human souls.
This sort of dualistic framework (that sees a human being as the composite of two substances: a physical body and nonphysical soul) would go on to powerfully influence Western philosophers and theologians alike; indeed, many contemporary beliefs about human nature bear some form of the ancient Greek stamp (consider, for example, just how many popular stories hinge on some kind of philosophical dualism). “Soul” not only mines this Platonic concept for its setting but for its plot as well when Joe’s spirit accidentally falls into a cat (while 22 temporarily takes over Joe’s body). This kind of event is roughly dependent on what is sometimes called a “simple” view of personal identity (as expressed by, for example, Descartes) whereby what makes a person themselves is simply a matter of their soul (their body is, in a sense, “extra” or “unnecessary” for such calculations).
Many reviews of “Soul”, however, focus less on its metaphysical framework and more on its existentialist message. Granted, existentialist themes — especially those focusing on individuals discovering personal meaning for their lives and “finding their place in the world” — are tropes long trod by Pixar since it released “Toy Story” in 1995 and appear also in films like “A Bug’s Life,” “Toy Story 2,” “The Incredibles,” “Cars,” “Ratatouille,” and “Toy Story 4” (that last one even helped Dictionary.com select “existential” as its 2019 Word of the Year). In a similar way, other releases (like “Finding Nemo,” “Up,” “Coco,” “Toy Story 3,” “Inside Out,” and “Onward”) grapple with the meaning of life specifically within the context of grief, loss, and death. In this way, “Soul” is but the latest in a long line of entertaining animated depictions of philosophical reflections on what it means to be human.
What makes “Soul” unique, however, is that, rather than focusing on what makes individuals special, the film highlights what we all have in common. The climax of the movie comes when Joe Gardner, after accidentally helping 22 find their spark that will allow them to go to Earth, learns that such sparks are not measures or definitions of a soul’s purpose or calling — they are simply an indication that a soul is “ready to live.” Throughout the film, Joe had been operating on the assumption that his spark was “music” because hearing and playing jazz filled him with such passion for life that he felt satisfied and happy in a way far beyond any other experience. Early on, Joe tries, with little luck, to help 22 discover their own passion; it is only after 22 gets an accidental taste of life in Joe’s body that they are truly ready to live — even though 22 never discovers specifically what their “calling” in life might be.
This kind of thinking smells less like Plato than it does his student Aristotle. While Aristotle has a rather different view of the soul than his predecessor (for example: Aristotle denies that a “soul” can sensibly be separated from a “body” like Platonic dualists might allow), Aristotle nevertheless recognizes that something like a soul is a crucial part of our makeup. To Aristotle, your soul is what explains how your body moves and changes, but it isn’t something substantively distinct from it; for example, he draws an analogy to a bronze statue of Hermes: just like how you could not remove the “shape of Hermes” from the bronze without destroying the statue, you could not remove the soul from a body without destroying a person (for more, see his explanation of “hylomorphism”). So, if the soul is something like a power that directs a body to perform different actions, the big question is “what actions should a soul direct a body to perform?” Crucially, Aristotle thinks that the answer to this question is the same for all humans, simply in virtue of being human: we all have the same ergon (“function” or “task”), so what’s “good” for all humans is the same: in the Nicomachean Ethics, Aristotle says that this amounts to “activity of soul in accordance with virtue.”
So, unlike what he originally assumed, what was ultimately “good” for Joe Gardner was not simply a matter of “playing jazz” — it was a matter of living life in the right way. True happiness (what Aristotle calls “eudaimonia”) is not simply a matter of performing a single task well, but of living all of life, holistically, in a manner that fits with how human lives are meant to be lived. Similarly, whatever sort of passions 22 might discover during their life on Earth, what’s “good” for 22 will also amount to living life in the right way (maple seeds, lollipops, and all). The reason why Jerry (the interdimensional being in charge of the Great Before) explains to Joe that a spark is not a life’s “purpose” is because life itself is the purpose of all souls — empowering beings to live their lives is why souls exist, at least according to Aristotle.
In the scene that sets up the climax of the film, Dorothea Williams tells Joe a story about a dissatisfied fish looking for the ocean, not realizing that he was swimming in it all along; in different ways, both Plato and Aristotle offer their own commentaries on how we can forget (or fail to notice) the sorts of things that give our lives real meaning. Sometimes, it’s nice to have movies like “Soul” to help us remember.

Movies, Beliefs, and Dangerous Messages

photograph of a crowd marching the streets dressed as witches and wearing grotesque masks

In spite of being a welcome part of our lives, movies are not always immune from criticism. One of the most recent examples has been the movie adaptation of Roald Dahl’s book “The Witches,” starring Anne Hathaway. Hathaway, who plays the leader of the witches, is depicted as having three fingers on each hand, which disability advocates have criticized as sending a dangerous message. The point, many have argued, is that by portraying someone with limb differences as scary and cruel – as Hathaway’s character is – the movie associates limb differences with negative character traits and depicts people with limb differences as persons to be feared.

Following the backlash, Hathaway has apologized on Instagram, writing:

“Let me begin by saying I do my best to be sensitive to the feelings and experiences of others not out of some scrambling PC fear, but because not hurting others seems like a basic level of decency we should all be striving for. As someone who really believes in inclusivity and really, really detests cruelty, I owe you all an apology for the pain caused. I am sorry. I did not connect limb difference with the GHW [Grand High Witch] when the look of the character was brought to me; if I had, I assure you this never would have happened.”

As Cara Buckley writes in The New York Times, examples of disfigured people being portrayed as evil abound. From the Joker in “The Dark Knight” to the “Phantom of the Opera,” cases where disabilities are associated with scary features are far from isolated instances. As much as these concerns regarding “The Witches” might be justified (as I believe they are), critics seem to already anticipate a criticism of their criticism: that the backlash against the movie is exaggerated, and sparked by, to use Hathaway’s words, a “scrambling PC fear.” As Ashley Eakin, who has Ollier disease and Maffucci syndrome, remarks in Buckley’s article, “[o]bviously we don’t want a culture where everyone’s outraged about everything.”

So, is the backlash against the movie exaggerated? I want to suggest that it is not: the association between disability and evil portrayed by movies is a real issue, one that connects with recent philosophical discussions. The argument that this association is dangerous runs as follows: by portraying people with disabilities as ugly or scary, movies may lead viewers to internalize that association and transfer it onto the real world, thus negatively (and unjustly) impacting the way they see people with visible differences. As quoted in Buckley’s piece, Penny Loker, a visible difference advocate, argues that one of the problematic aspects of “The Witches” is that it is a family movie, and this might make the association between limb differences and evil even more pernicious because “kids absorb what they learn, be it through stories we tell or what they learn from their parents.”

Loker’s line of reasoning touches upon an issue that has recently been examined in philosophy and psychology, particularly with respect to how individuals differentiate facts from fiction. Researcher Deena Skolnick Weisberg, for example, who studies imaginative cognition, argues that although children are competent in distinguishing imagination from reality, they do not always do so when it comes to consuming fiction. Quoting a study from Morison and Gardner (1978), Weisberg suggests that “[e]ven though children tend not to confuse real and fictional entities when asked directly, they do not necessarily see these as natural categories into which to sort entities.” This is made even more acute in the presence of negative emotions. Weisberg says that “[c]hildren are more likely to mis-categorize pretend or fictional entities that have a strong emotional valence, particularly a negative emotional valence.” Weisberg’s remarks (and this is my own conjecture) seem to be relevant to the depiction of Hathaway as a witch to be afraid of. One may worry that children who are scared of Hathaway’s character might have more difficulty separating fiction from reality, thus making Loker’s concern even more pressing.

If what has been said so far applies to children, what about their parents? Intuitively, one would think that when adults know that what they are watching is fictional, the worry about associating limb differences with evil has no application to reality, precisely because adults would categorize what they see in the movie as purely fictional.

Yet things are not so simple. As philosopher Neil Levy argues, adults are not always good at categorizing mental representations (such as beliefs, desires, or imaginings). Levy’s argument focuses on fake news and suggests that consuming news that we know to be fake does not insulate us from dangerous consequences. That is, under certain circumstances we can “acquire” information, as well as beliefs, even when we know that the “source” is “fictional.” Although the main context of Levy’s argument is fake news, I think its conceptual import can teach us something in the context of movies as well. If it is true that even adults have a hard time categorizing mental representations when they know they are fake, then this could potentially impact the way adults, similarly to children, absorb what they see when watching films as well as how they employ it in real life.

What should we make of this? I think one important lesson to draw from this reflection is that once the movie industry recognizes the considerable impact its films can have on the way both children and adults internalize what they see, it has an obligation to consider the consequences that portraying certain connections can have. Given how viewers absorb what they see, regardless of their age, the movie industry should strive to be more alert and spot problematic associations like this one.

Why I Am Not like You: The Ethics of Exceptions

photograph of long line of people queuing to enter store

Consider two different arguments: first, that it was okay for me to travel in early December; second, that I should be given early access to a COVID vaccine.

My Travel: I understand that traveling was irresponsible in general, and that it was important that people not do so. Had COVID been happening in any other year, I would not have traveled at all during the holiday season. But since it was this year, I had good reasons to carve out an exception for myself. First, it was really important for my girlfriend to meet my parents in person before we could get engaged; most people did not have such major life plans put on hold by the inability to travel over the holidays. Second, my grandfather is not doing well, and so the consequences of delaying a visit could not be known. Third, this was the first time in six years my parents were back in the States for Christmas. Fourth, my girlfriend and I could take steps to minimize the risk: we drove instead of flying, we could travel in between the Thanksgiving and Christmas rushes, we both got tested before the trip, and I was able to aggressively quarantine the week before traveling.

My Vaccine: While I should not get the vaccine before the elderly, I should get it before it is open to the general public. First, I am teaching an in-person class in the spring, and doing so, at least in part, because the state government of Florida is pushing to increase the percentage of college classes taught in person. I offered to teach in person to help out, but it seems like the least the state government could do, after I agreed to be around (I expect) irresponsible undergraduates, is help make sure I have access to a vaccine. Second, I have been extremely aggressive in my social distancing. This means I should get the vaccine early since a) I have already taken on more inconvenience than most to help protect the public good and b) I’m more responsible than most, so I’ll be a larger drain on the economy if I remain unvaccinated. Third, I’m hoping to get married fairly soon, and that is an important life event that should qualify me for some priority.

— — —

I think the first argument is pretty good and the second one pretty bad. I really should not get priority vaccine access, but I think it was OK for me to travel in early December. But what I want to discuss in this post are some of the challenges in identifying when you should be an exception to a general rule.

Each argument tries to make out that I am, in some sense, special. And if you are going to exempt yourself from a rule you think others should generally follow, then you need to provide a compelling explanation for what makes your case unique. This follows from a deep moral principle about the moral equality of persons (one of the principles Immanuel Kant was getting at in his first formulation of the categorical imperative).

Suppose I don’t want to wait in line at the coffee shop. Can I jump the line? No. If ‘not wanting to wait’ was an adequate reason for anyone to cut in line, then everyone would cut in line (since basically no one wants to wait). But if everyone cut in line, then there would no longer be any line at all. My impatient cutting in line relies on the patient waiting of everyone else.  But here we bring in our deep moral principle: I am not special, which is to say that if I should get to do something, other people should as well. So if ‘not wanting to wait’ is a good reason for me, it must be a good reason for everyone. Since we have already seen it cannot be a good reason for everyone, we can conclude it is not a good reason for me.

So, if I want to cut in line, then I had better have a special reason to do it — a reason that will not apply to everyone else as well. Suppose I arrive at the hospital with a child suffering anaphylactic shock. I see there is a long line of people waiting to get their severed thumbs reattached (I’ll leave it to you, the reader, to explain the sudden epidemic of thumb severings).

Here it is permissible for me to cut in front of people waiting to get their thumbs reattached. It is permissible because my reason for cutting will not generalize. If we changed the case so the line was all other parents with children suffering anaphylaxis, then it would not be permissible to cut (since we would otherwise return to our original problem).

Okay, so to carve out an exception there must be something unique about me. Well, there are things that are fairly unique about me; does that mean I should get to jump the vaccine line? Well, no. It was not just that anaphylaxis was different from a severed thumb; it also needed to be more important. A broken leg, just because it is a different injury, would not make it okay to cut in line.

And here we come to a problem. While there are some things unique to me that suggest I should take precedence, basically everyone has some reason why they should be an exception. Sure, I’m hoping to get married, but others, who are about to have their first child, will need to spend some time in a hospital and could really use the in-person support of grandparents. Sure, I’m teaching in person, but others are taking more than one class in person. Syndrome was right: if everyone is special, no one is — at least in the sense that if everyone can identify reasons why they should be able to skip to the front of the line, then no one gets to skip.

And indeed, even if I decided I really was more special than others, it is still probably a bad idea to let me jump in line. That is because we, as a general rule, do not want society making thousands of fine-grained decisions comparing every possible special exception. It opens up far too many possibilities for bias and corruption, and besides that, it becomes democratically problematic because it is impossible to adequately articulate the thousands of priority decisions to the citizenry.

Alright, so I should not get to cut the vaccine line.

But what about my choice to visit my parents in early December? I think most people should stay home, but I also really thought I had a better reason to travel than others. Is that enough to justify my exception?

Not quite; there are two complications I need to consider.

First, I need to factor in my biases. Lots of biases may play a role, but let’s just look at an availability bias. I know the details of my life quite well; I do not know the details of yours. Thus even if my case looks more exceptional to me, that might not be because it is, but just because my own specialness is easier to see.

Second, even if I factor in all those biases and still think I’m exceptional, there is a problem with taking that as sufficient to make an exception. That is because I’m not only making a first-order decision, I’m also making a second-order decision. I’m not only deciding that my case is exceptional, I’m also regarding myself as a competent judge to decide on my own exception. This creates a problem because I expect most people are biased, and so if most people decide for themselves whether they should be an exception, far too many will make the wrong choice.

One way to see this problem is to note that others will disagree with me about what is an important reason for an exception. Let’s explain this with an analogy. Something like 90% of teachers believe they are above average. Now, this might be because teachers are biased (I expect that is likely), but there is another explanation. Perhaps Anne and Barnie are above-average lecturers and Chloe and Darius are above-average mentors. Anne and Barnie think lecturing is the most important part of teaching (which is why they spent time getting good at lecturing), and Chloe and Darius think mentoring is the most important part of being a good teacher (which is why they invest so much in mentoring students). Here, even if each of them accurately judges how good they are at various teaching techniques, we will still get everyone thinking they are an above-average teacher.

Similarly, if everyone decides for themselves whether they should be an exception, we could well end up with many people thinking they are one of only a few who deserve an exception. This is not because they are wrong about any of the details, but simply because different people have different priorities. So even if 100% of people think only the 5% of people with the most pressing reasons to travel should travel, you could still easily get 30% or 40% of people honestly deciding they fall within that 5%.

Of course, I think my priorities are right. I think I am better at thinking these things through than the average person. But is that enough to let me treat myself as an exception? Probably not, since I also think that others think their priorities are right, and I expect that others think that they are better than average at thinking these issues through. So the question I am forced to ask is not just whether I am better at making decisions, but whether anyone who thinks they are better at making decisions should be allowed to decide for themselves. If my answer to that latter question is no, then it might still be wrong to carve out the exception.

So was I wrong to travel in early December? It is hard to say. On the one hand, I really do think I had a good reason to do so. But on the other hand, I do not think most people should get to carve out their own exceptions just because they think the exception is warranted (of course, maybe it is not actually hard to say but I just do not want to admit I made the wrong choice).

The Bigger Problem with “COVID Conga Lines”

photograph of full subway car with half of the passengers unmasked

On December 9th, days before New York would again order the re-closing of bars and restaurants in an attempt to stem the resurgence of COVID-19 cases seemingly spread by holiday travelers, dozens of members of New York’s Whitestone Republican Club gathered together for a holiday party at a restaurant in Queens; weeks later, multiple attendees have tested positive for the coronavirus and at least one partygoer has been hospitalized. Although restaurants were allowed to open at 25% capacity on the day of the party, restaurant visitors were also required to wear face masks while not eating; videos of the event — including one showing a prominent local candidate for city council happily leading a conga line — revealed that the majority of people in attendance neglected many of the public health guidelines designed to mitigate the spread of COVID-19.

In response to media coverage of its party, the Club released a statement that read, in part, “We abided by all precautions. But we are not the mask police, nor are we the social distancing police. Adults have the absolute right to make their own decisions, and clearly many chose to interact like normal humans and not paranoid zombies in hazmat suits. This is for some reason controversial to the people who believe it’s their job to tell us all what to do.”

Evoking something like “liberty” to defend the flouting of public health regulations is, at this point, a common refrain in conversations criticizing official responses to COVID-19. According to such a perspective, the coronavirus pandemic is viewed more as a private threat to individual freedoms than a public threat to health and well-being. For various reasons (ranging from basic calculations about personal risk to outright denials of the reality of the virus as a whole), the possibility that someone could unintentionally spread the coronavirus to strangers while unmasked in public is ranked as less significant than the possibility that someone could have their personal liberties inhibited by inconvenient regulations. As some anti-mask protestors (including Representative-elect Marjorie Taylor Greene from Georgia’s fourteenth congressional district) have said: “My body, my choice,” co-opting the long-standing pro-abortion slogan to refer instead to their asserted right to keep their faces uncovered in public, without qualification.

Critics of this perspective often call it “reckless” and chastise so-called “anti-maskers” for being cavalier with their neighbors’ health; in at least one case, people have even been arrested and charged with reckless endangerment for knowingly exposing passengers on a plane to COVID-19. Against this, folks might respond by downplaying the overall effect of coronavirus morbidity: as one skeptic explained in August, “I hear all the time, people are like, ‘I’d rather be safe than sorry, I don’t want to be a grandma killer.’ I’m sorry to sound so harsh — I’m laughing because grandmas and grandpas die all the time. It’s sad. But here’s the thing: It’s about blind obedience and compliance.”

At present, the United States has registered more than 20 million cases of COVID-19 and over 340,000 patients have died from the illness; while these numbers are staggering to many, others might do some simple math to conclude that over 19 million people have recovered (or might still recover) from the disease. Those who view a mortality rate of “only” 1.5% as far too low to warrant extensive governmental regulation of daily life might weigh the guarantee of government control against the risk of contracting a disease and measure the former as more personally threatening than the latter. (It is worth reiterating at this point that COVID-19 patients are five times more likely to die than are flu patients — the law of large numbers is particularly unhelpful when trying to think about pandemic statistics.) Even if someone knows that they might unintentionally spread the coronavirus while shopping, boarding a plane, or partying during the holidays, they might also think it’s unlikely that their accidental victim will ultimately suffer more than they might personally suffer from an uncomfortable mask.

To be clear, the risks of contracting COVID-19 are indeed serious and evidence already suggests that even cases with only mild initial symptoms might nevertheless produce drastic long-lasting effects to a patient’s pulmonary, cardiovascular, immune, nervous, or reproductive systems. But let’s imagine for a moment that none of that is true: what if the perspective described above was completely and unequivocally correct and the Whitestone Republican Club’s recommendation to “Make your own calculated decisions, don’t give in to fear or blindly obey the media and politicians, and respect the decisions of others” was really as simple and insulated as they purport it to be?

There would still be a significant problem.

In general, we take for granted that the strangers we meet when we step out of our front door are not threats to our personal well-being. Some philosophers have explained this kind of expectation as being rooted in a kind of “social contract” or agreement to behave in certain ways relative to others such that we are afforded certain protections. On such views, individuals might be thought of as having certain duties to protect the well-being of their fellow citizens in certain ways, even if those duties are personally inconvenient, because those citizens benefit in turn from the protection of others (shirking public health regulations might then be seen, on this view, as a kind of free rider problem).

However, this doesn’t clearly explain the sort of moralizing condemnation directed towards anti-maskers; why, for example, might someone in a city far from Queens care about the choices made at the Whitestone Republican Club’s holiday party? Certainly, it might seem odd for someone in, say, central Texas to expect someone else in southeast New York to uphold a kind of give-and-take contractarian social contract!

But, more than just assuming that strangers are not threats, we often suppose that our civic neighbors are, in some sense, our partners who work in tandem with us to accomplish mutually beneficial goals. Here an insight from John Dewey is helpful: in his 1927 book The Public and Its Problems, Dewey points out that even before we talk about the organization and regulation of states or governments, we first must identify a group of people with shared interests — what Dewey calls a “public.” After considering how any private human action can have both direct and indirect consequences, Dewey explains that “The public consists of all those who are affected by the indirect consequences of transactions to such an extent that it is deemed necessary to have those consequences systematically cared for.” On this definition, many different kinds of “publics” (what others might call “communities” or “social groups”) abound, even if they lack clearly defined behavioral expectations for their members. To be a member of a public in this sense is simply to be affected by the other members of a group that you happen to be in (whether or not you consciously agreed to be a part of that group). As Dewey explains later, “The planets in a constellation would form a community if they were aware of the connection of the activities of each with those of the others and could use this knowledge to direct behavior.”

This might be why negligence of public health regulations in New York bothers people even if they are far away: that negligence is evidence that partygoers either are not “aware of the connection of the activities of each with those of the others” or are not “us[ing] this knowledge to direct behavior.” (Given the prevalence of information about COVID-19, the latter certainly seems most likely.) That is to say, people who don’t attend to the indirect consequences of their actions are, in effect, not creating the collective “public” that we take for granted as “Americans” (even apart from any questions of governmental or legal regulations).

So, even if no one physically dies (or even gets sick) from the actions of someone ignoring public health regulations, that disregard nevertheless damages the social fabric on which we depend for a sense of cultural cohesion that stretches from New York to Texas and beyond. (When such negligence is intentional, the social fabric is rent all the more deeply and extensively.) Americans often wax eloquent about unifying ideals like “E Pluribus Unum” that project an air of national solidarity, despite our interstate diversity: one of the many victims of the COVID-19 pandemic might end up being the believability of such a sentiment.