
The Meaning of Monarchy

photograph of Queen Victoria statue at Kensington Palace

A prominent figure for nearly a century, Queen Elizabeth II leaves a tremendous void behind with her death. Many are deeply affected by this loss. For some, however, the death of the Queen also breaks a spell. As perhaps the most famous representation of monarchy, her image seems to have put some questions about the nature of monarchy on hold for some time. Almost immediately after her death, however, questions about the future of monarchy – generally, or in the U.K. specifically – began to swirl. While some have started to map out the line of succession, others voiced criticism of their nations’ historical (colonial) ties to the monarchy. Now, there is a strong call to take a moral stance regarding monarchy in general. Questions concerning the role of monarchy, coupled with the financial, diplomatic, and moral burdens of the royal families, are coming to the fore.

The problem with taking a moral stance, however, is that the monarchy today is quite different from the monarchy in the past: Not many “Royalists” in the traditional sense remain standing in the Western world. These domesticated and tranquilized monarchies hold almost no political power; they are merely symbolic.

But what’s not clear is what exactly these “Symbolic Monarchies” symbolize, and whether one is morally obligated to support or oppose what they represent. Do they pose some threat of oppression like in the past or are they now somehow “redeemed”?

From my regional point of view, the idea of monarchy is still a very dangerous one. The Middle East, in general, is quite accustomed to a very strong central figure, and any developing democracy or civil society is always under threat from an autocratic one-man regime. As the West of “The East” or the East of “the West,” Turkiye is an interesting boundary case where the idea of monarchy is both very weak and yet still somehow scary at the same time. From time to time, the possibility of a symbolic Ottoman Emperor is jokingly suggested. Most react radically to even the mere mention, whereas some think a powerless monarchy has some kind of emotional and historical nostalgic value – mainly in a cultural, diplomatic, and touristic sense. One year after the establishment of the Turkish Republic, the Osmanoglu Family was sent into exile in 1924, since it was thought they would pose a threat to the newly founded republic. In the first half of the ’70s, this exile was lifted for all members, as any dream of resurrecting the Ottoman Empire seemed unrealistic. (I believe it is still unrealistic.)

However, some events involving members of the Osmanoglu Family are worrying. Some, claiming to be the rightful inheritors of the Ottoman Sultan Abdulhamid II, demand lands on the grounds that these were the personal property of the Sultan. Meanwhile, Abdulhamid Kayıhan Osmanoglu, claiming to be a royal family member, has entered politics with the “New Welfare Party” – a revamp of a radically conservative party – and usually goes around in Sehzade/Prince clothes. Additionally, the 21st century has brought a “Neo-Ottoman” political and cultural wave in Turkiye. From the very beginning, some have pointed out a relation between the governing “Justice and Development Party” (AKP) and this “Ottomania” or “Neo-Ottomanism.”

In one sense, no one – including AKP – appears to be seriously considering abolishing democracy and bringing the monarchy back. However, in another sense, interest in monarchy appears to be very much revived.

This situation is enough to make any citizen of a country with a similar history uneasy.

From a North American or European point of view, these events may not seem relevant, since the threat – especially in the Middle East – is not about Symbolic Monarchy but the possibility of reinstating Traditional Monarchy. The belief that a Symbolic Monarchy is safe, harmless, or powerless is accepted largely unquestioningly in the West. Its assumed lack of political power is so overemphasized that its message – what a monarchy represents – is generally overlooked.

Some treat Symbolic Monarchy the way they treat fictional entities like Santa Claus. On this view, Symbolic Monarchy, for all its cultural importance, is not really a “monarchy” so much as a glamorous imitation for show. These declawed figureheads are like Santa Clauses giving out gifts in malls and ringing bells on street corners. Perhaps the Queen was not a “Queen” after all. Maybe she represented the fantasy of an ideal benevolent ruler that we know doesn’t exist. It was simply an unforgettable role played by the actor Elizabeth Alexandra Mary Windsor, whose fans now feel they’ve lost their heroine.

But for those more familiar with the monarchies of old, this preoccupation with pageantry is naïve.

Even if it is merely a role to be played, we must still ask: What does this role represent? Even though we may appreciate the actors who play them, the public performance of roles like “Queen,” “Tsar,” and “Sultan” evokes real historical associations.

This is where these roles get their power, and contrary to the nostalgic reception, some of these associations are negative. Reminiscing often comes with memories of colonization, abuse, and torture. Many people are reminded of how they or their ancestors were oppressed. The very existence of Symbolic Monarchy and its global glorification seems capable of hurting many or being used as a tool to numb people to historical harms. Even in its lightest form – where we assume the characters are benevolent and we acknowledge and condemn past transgressions – monarchy represents inequality, as Nicholas Kreuder has recently argued; it contradicts the natural or essential equality of all human beings.

Is this necessarily true? I’m sympathetic to Benjamin Rossi’s critique that suggests that as long as people can in some way voluntarily embrace it, monarchy can be morally legitimate. But, from another point of view, it’s difficult to judge whether the adoption is voluntary or not.

In historically oppressed cultures, it is common to observe the adoption of the oppressors’ values, language, and religion, which were forced on them in the past. For such people, an idea like “Queen” can be very damaging and, paradoxically, soothing at the same time.

Ultimately, Symbolic Monarchy is thought to exercise some influence on the public but little political influence: monarchs seem to play only a supporting role in major events, with no decision-making power. But what is “real” power these days? Members of a royal family have many powerful symbols at their disposal that wield great influence, including casting a shadow over politics. Though they are generally not allowed to endorse an ideology, party, or politician, the line between supporting a moral cause and supporting a policy is incredibly blurry. As “influencers,” their discretion in addressing particular issues rather than others, their preferences in charity and patronage, and the moral positions they adopt in royal dramas are not easily separable from political issues. While royal families’ political influence is difficult to quantify, there is no doubt – from their fashion sense to their diplomatic missions – that they possess “power.”

Today, monarchy is under increasing scrutiny following the Queen’s death. Some admire these royal families. Some remember the yoke of oppression. Some fear past monsters may rise again. What power the idea has left – be it Traditional Monarchy or Symbolic – remains an open question.

Monarchy and Moral Equality

photograph of Queen's Guard in formation at Buckingham Palace

In a recent column, Nicholas Kreuder argues that the very idea of monarchy is incompatible with the moral equality of persons. His argument is straightforward. He claims that to be compatible with moral equality, a hierarchy of esteem must meet two conditions. First, the person esteemed must be estimable — in other words, esteem for her must be earned, or at least deserved. Second, deferential conduct toward the esteemed person must not be coerced or otherwise involuntary. But the deference demanded by a monarch is neither warranted, nor voluntarily given: monarchs are esteemed only for their royal pedigree, and their subjects are expected to show esteem even though they are not, at least in the typical case, subjects by choice. Therefore, the hierarchy of esteem between monarch and subject is fundamentally incompatible with moral equality.

This argument is compelling, and as a confirmed republican, I confess bewilderment at the practice of paying a woman to live in fabulous wealth for a century so that she can christen the nation’s boats.

Nevertheless, for the sake of argument, I would like to critically examine Kreuder’s premises to see whether they really establish his sweeping conclusion.

The first question to consider is a simple one: what is a monarch? The argument against monarchy from moral equality appears to assume that monarchies are by definition hereditary, and that they are never elective. In fact, elective or non-hereditary monarchies are not unusual in human history. In Ancient Greece, the kings of Macedon and Epirus were elected by the army. Alexander Hamilton argued for an elective monarchy in a speech before the Constitutional Convention of 1787; he thought the American monarch should have life tenure and extensive powers.

In truth, authoritative sources seem confused about just what a monarchy is. For instance, the Encyclopedia Britannica defines “monarchy” as “a political system based upon the undivided sovereignty or rule of a single person.” Yet the accompanying article acknowledges that in constitutional monarchies, the monarch has “transfer[red] [her] authority to various societal groups . . . political authority is exercised by elected politicians.” That does not sound like undivided sovereignty to me.

My conclusion is that “monarch” is a label promiscuously affixed to wildly different kinds of regimes, leaving the concept of a monarch without much determinate content. A monarchy can be limited or absolute, elective or hereditary.

It’s difficult to argue that a concept with little determinate content is incompatible with moral equality. However, if we limit the argument to hereditary monarchies, then the argument against monarchy from moral equality appears to get back on track. If the monarch is not elected, then the deference she demands is not voluntary. And if her claim to esteem is inherited, then it is certainly not deserved.

Yet even when restricted to hereditary monarchies, the argument does not seem entirely plausible. The problem is that in some cases, the hierarchy of esteem between a particular hereditary monarch and her subjects seems voluntary. Consider the United Kingdom’s hereditary but constitutional monarchy. The citizens of that country appear to have widely divergent views about both their monarchs and their monarchy. Some people detest the newly-crowned King Charles III, yet have no qualms with the monarchical institution. Some liked Queen Elizabeth on a personal level but are staunch republicans. Moreover, Britons do not keep their opinions on this score a secret, and they are not generally thrown in jail for publicly criticizing the monarch in the harshest terms. (Although in saying this, reports of anti-royal protestors being arrested on bogus charges of breaching the peace give me pause.)

No one is forced to sing “God Save the Queen” who doesn’t wish to do so. In short, in the U.K., deference to the monarch may be encouraged, but it is certainly not required. A Briton can thrive in her society without ever showing the slightest deference to her monarch.

With respect to the hierarchy of esteem between the U.K.’s monarch and her subjects — as opposed to her constitutional functions or public prominence — the situation seems somewhat akin to the relationship between Catholic priests and the rest of society in the United States. Even non-Catholics regularly refer to priests as “father,” a gesture of deference that is less required than customary. (That said, I refrain from this practice if at all possible; it mildly affronts my democratic temperament. This did not go over particularly well at Notre Dame.)

It might also be doubted that no hereditary monarch deserves esteem. Many Britons seem to think that with her stoicism and quiet dignity, Queen Elizabeth provided stability over the course of a turbulent twentieth century. I take no stance on that proposition, but it certainly seems conceivable that a monarch could come to earn esteem through her exemplary conduct, either before she ascends to the throne or while she serves as monarch.

Thus, the extent to which monarchy cuts against moral equality really depends on the conditions of the society in which a particular monarchical institution exists.

The concept of a hereditary monarchy might seem incompatible with moral equality at a very high level of abstraction, but some of its instantiations may be perfectly consonant with it.

This does, however, lead me to a more philosophical point. In the argument against monarchy from moral equality, the test for whether a hierarchy of esteem is morally legitimate requires it to meet both the conditions of deservedness and voluntariness. But it appears that voluntariness alone is sufficient to make such a hierarchy compatible with moral equality. If I routinely genuflect before my girlfriend because I believe her gorgeous auburn hair possesses mystical powers, that does not seem particularly demeaning to my dignity so long as my delusional belief cannot be said to undermine the voluntariness of my deferential act — even though the deference is wholly undeserved. Likewise, so long as a Briton is not forced to pay obeisance to King Charles III, her acts of deference seem to be compatible with her dignity even if the king doesn’t deserve them.

Indeed, voluntariness seems not only sufficient to legitimize a hierarchy of esteem, but also necessary. Martin Luther King, Jr. and Malcolm X are both, in my view, figures richly deserving of esteem. Yet if I were forced to regularly kiss their feet, that hierarchy of esteem would be an insult to my dignity as a moral equal.

Philosophers love abstractions, and I am no exception. Sometimes, however, what appears to be a strong argument at a high level of abstraction loses some of its luster once the messy reality of human existence is brought into view. Such is the case, I think, with the claim that monarchy is per se incompatible with human equality.

Should Monarchies Be Abolished?: The Argument from Equality

photograph of British monarchy crown

On September 8th, 2022, Queen Elizabeth II of the United Kingdom died at the age of 96. She held the crown for 70 years, making her the longest-reigning monarch in the history of Britain. Her son, now King Charles III, will likely be crowned in mid-2023.

The death of the British monarch has drawn a number of reactions. Most public officials and organizations have expressed respect for the former monarch and sympathy towards her family. However, others have offered criticism of both the Queen and the monarchy itself. Multiple people have been arrested in the U.K. for anti-royal protests. Negative sentiment has been particularly strong in nations that were previously British colonies – many have taken to social media to critique the Crown’s role in colonialism: the Economic Freedom Fighters, a minority party in South Africa’s parliament, released a statement saying they will “not mourn the death of Elizabeth,” and Irish soccer fans chanted that “Lizzy’s in a box.” Professor Maya Jasanoff bridged the two positions, writing that, while Queen Elizabeth II was committed to her duties and ought to be mourned as a person, she “helped obscure a bloody history of decolonization whose proportions and legacies have yet to be adequately acknowledged.”

My goal in this article is to reflect on monarchies, and their role in contemporary societies. I will not focus on any specific monarch. So, my claims here will be compatible with “good” and “bad” monarchs. Further, I will not consider any particular nation’s monarchy. Rather, I want to focus on the idea of monarchy. Thus, my analysis does not rely on historical events. I argue that monarchies, even in concept, are incompatible with the moral tenets of democratic societies and ought to be abolished as a result.

Democratic societies accept as fundamentally true that all people are moral equals. It is this equality that grounds the right to equal participation in government.

Equal relations stand in contrast to hierarchical relationships. Hierarchies occur when one individual is considered “above” some other(s) in at least one respect. In Private Government, Elizabeth Anderson distinguishes between multiple varieties of hierarchy. Particularly relevant here are hierarchies of esteem. A hierarchy of esteem occurs when some individuals are required to show deference to (an) other(s). This deference may take various forms, such as referring to others through titles or engaging in gestures like bowing or prostration that show inferiority.

Generally, hierarchies of esteem are not automatically impermissible. One might opt into some. For instance, you might have to call your boss “Mrs. Last-Name,” athletes may have to use the title “coach” rather than a first name, etc. Yet, provided that one freely enters into these relationships, such hierarchies need not be troubling. Further, hierarchies of esteem may be part of some relationships that one does not voluntarily enter but are nonetheless morally justifiable – children, generally, are required to show some level of deference to their parents (provided that the parents are caring, have their child’s best interests in mind, etc.), for instance.

The problem with the monarchy is not that it establishes a hierarchy of esteem, but rather that it establishes a mandatory, unearned hierarchy between otherwise equal citizens.

To live in a country with a monarch is to have an individual person and family deemed your social superiors, a group to whom you are expected to show deference despite your moral equality. This is not a relationship you choose but, rather, one that is thrust upon you. Further, the deference we are said to owe to monarchs, and their higher status, are not earned. Rather, these are things they are claimed to deserve simply by virtue of who their parents are, who in turn owe their elevated status to their lineage. Finally, beyond merely commanding deference, monarchs are born into a life of luxury; they live in castles, they travel the world meeting foreign dignitaries, and their deaths may grind a country to a halt as part of a period of mourning.

So, in sum, monarchies undermine the moral foundation of our democracies. We value democratic regimes (in part) because they recognize our equal moral standing. By picking some out and labeling them the superiors in a hierarchy of deference due to nothing but their ancestry, monarchies are incompatible with the idea that all people are equal.

However, there are some obvious ways one might try to respond. One could object on economic grounds. There is room to argue that monarchies produce economic benefits: royals may serve as a tourist attraction or, if internationally popular, might raise the profile and favorability of the nation, thus increasing the desirability of its products and culture. So perhaps monarchies are justified because they are on the whole beneficial.

The problem with this argument is that it compares the incommensurable. It responds to a moral concern by pointing out economic benefits.

My claim is not that monarchy is bad in every respect. Indeed, we can take it for granted that having a monarchy produces economic benefits. However, my claim is that it undermines the moral justification of democracy.

Without a larger argument, it does not follow that economic benefits are sufficient to outweigh moral concerns. This would be like arguing that we should legalize vote-selling due to its economic benefits – it seems to miss the moral reason why we structure public institutions the ways that we do.

Another objection may be grounded in culture. Perhaps monarchies are woven into the cultural fabric of the societies in which they exist; they are part of proud traditions that extend back hundreds or even thousands of years. To abolish a monarchy would be to erase part of a people’s culture.

While it’s true that monarchies are long traditions in many nations, this argument only gets one so far. A practice being part of a people’s culture does not make it immune to critique. Had the Roman practice of gladiatorial combat to the death for the sake of entertainment survived to this day, we would (hopefully) think it ought to be eliminated, despite thousands of years of cultural history.

When a practice violates our society’s foundational moral principles, it ought to be abolished no matter how attached to it we have become.

Finally, one might argue that abolition is unnecessary. Compared to their status throughout history, monarchies have fallen out of favor in the 20th and 21st centuries. Of the nations with monarchies, few have a monarch who wields anything but symbolic power (although some exceptions are notable). This argument relies on a distinction between what we might call monarchs-as-sovereigns and monarchs-as-figureheads. Monarchs-as-sovereigns violate the fundamental tenets of democracy by denying citizens the right to participate in government, while monarchs-as-figureheads, wielding only symbolic power, do not – or so the argument goes.

The issue with this argument is that it underappreciates the full extent of what democracy demands. It does get things right by recognizing that the commitment to democracy arises from the belief that people deserve a say in a government that rules over them. However, it is not just that all citizens deserve some say, but rather that all citizens deserve an equal say. One person, one vote.

Part of the justification for democracy is that individuals ought to be able to shape their lives, and thus deserve a say in the institutions that affect us all.

Although individuals may vary in their knowledge or other capabilities, to give some a greater say in our decision-making is to give them disproportionate power to shape the lives of others. No one individual should automatically be someone to whom we all must defer. We might collectively agree to, say, regard someone as an expert in a particular matter relevant to the public good and thus defer to her. However, this only occurs after we collectively agree to it in a process where we all have an equal say, either by voting directly for her or voting for the person who appoints her. Unless we have parity of power in this process, we diminish the ability of some to shape their own lives.

On these grounds, perhaps a monarchy could be justified if the citizens of a nation voted the monarch into power. This would simply be another means of collective deferment. But since electorates are constantly changing, there would need to be regular votes on this to ensure the voters still want to defer to this monarch. Yet current monarchies, by elevating the monarch (and family) above others while leaving this outside the realm of collective decision-making, violate the moral justification of democracy – some are made superior by default in the hierarchy of esteem. The establishment of democracy and abolition of all monarchy are proverbial branches that stem from the same tree. Our recognition of human equality should lead us to reject monarchy in even innocuous, purely symbolic forms.

Leaders Behaving Badly: Executive Overreach and Dangers to Democracy

photograph of Donald Trump and Scott Morrison at White House press conference

In the same week that Donald Trump was being pilloried for taking classified documents from the White House, Australia was facing its own crisis of executive overreach. Reports surfaced that our former Prime Minister, Scott Morrison, had ignored the unwritten rules of Australian democracy and given himself responsibility for a variety of government portfolios, extending his power way beyond his remit. This extraordinary concentration of power in the hands of one man represented a significant threat to our venerable system of government. It also raises an interesting question about the nature of democracy: what is the best way to ensure that the voices of the population are represented in the halls of power?

What’s so great about democracy?

There are a couple of normative benefits to democracies over alternative forms of government. One is that executive power is limited, saving us from the sort of governmental overreach which characterizes totalitarian regimes. As political philosopher George Kateb wrote, “in contrast to dictatorship, oligarchy, actual monarchy or chieftainship, or other forms [of government], representative democracy signifies a radical chastening of political authority.” Both presidential and parliamentary democratic systems achieve this chastening by dividing powers between branches of government and providing checks and balances on executive authority. (That said, American presidents tend to have far more individual power than Australian prime ministers – despite the separation of powers in the U.S., executive orders are incredibly common).

For this chastening to be successful, however, strong constitutional or legal protections must be in place to ensure that power doesn’t become overly concentrated.

As we’ll return to in a moment, Australia’s reliance on unwritten laws, precedent, and tradition means that we are at risk of unscrupulous actors accumulating excessive power and wielding unfettered political authority.

Another positive of representative democracy is right there in the name – it is representative. Parliament, or congress, is made up of people from across the nation, and is supposed to represent the interests of those people, allowing them a say in, and control over, the laws and institutions that determine their lives. Australian philosopher Elaine Thompson equated representation with fairness: democratic systems are representative only insofar as “the parliament is accepted [by the people] as representing the people who elected it.”

The Australian parliamentary system

Before diving into issues of representation, it’s worth giving some background on Australian governance. There are quite a few differences between the Australian and American political systems, but the major one is that, in Australia, we don’t directly elect our leader. Both Australians and Americans vote for local representatives and for senators to represent their states.

But whereas every American has the opportunity to vote for their president (ignoring the vagaries of the electoral college), Australia’s prime minister is chosen by the aforementioned local representatives.

Currently, the Labor party holds a majority in the House of Representatives and has elected one of its own, Anthony Albanese, to the Office of Prime Minister. But if one party doesn’t hold a majority in its own right, parties must work together to form governing coalitions. Once a prime minister is elected, they select a ministry of members of parliament who are given responsibility for different portfolios – things like health, education, trade, foreign affairs, and so on. Each minister is then supposed to wield authority over their area, meaning they make the big decisions on policy matters and (occasionally) take responsibility when things go wrong.

So, the Australian flavor of representative democracy is quite different to the American one. But if representation is the goal, what offers better representation – parliamentary or presidential systems?

President or Parliament?

On the one hand, American presidents are directly elected by the whole nation, which might make them more representative than Australian prime ministers. Presidential candidates can’t afford to only appeal to small minorities or particular geographical areas: they have to garner support across the country. Theoretically, at least, this should temper their wilder inclinations as they attempt to cast as broad a net as possible (although empirical evidence might suggest otherwise). On the other hand, it might be unreasonable to think that anybody could truly reflect the diversity of a huge country like the U.S.

Unlike presidential candidates, local representatives can (and perhaps should) pander only to their narrow constituencies. This means they can take up local matters or focus on representing minority groups, although that narrow focus can mean they are less representative of the nation as a whole.

In Australia’s system, the issue of representative leadership is somewhat offset by the existence of parliament: although any one member might not be particularly representative of the entire nation, the parliament as a whole – all 151 members of the house, plus the senate – ought to offer a decent reflection of the nation. And because decision-making isn’t centralized in the prime minister, it’s not such a huge issue that they are only elected to parliament by a small proportion of the population. By spreading decision-making responsibility across members of parliament, representing different people from different places, we avoid the need to have any single, broadly representative, head of state or government.[i] Lately, however, this hasn’t been happening.

The secret ministries

Last week, news surfaced that during the pandemic (now former) Prime Minister Scott Morrison secretly swore himself in to five different ministries: Home Affairs; Finance; Health; Industry, Science, Energy and Resources; and Treasury. So rather than having responsibility for policy decisions spread across members of parliament, we had an unprecedented concentration of power in Australia – something closer to the American presidential system than the system we are used to.

What’s worse, we didn’t get any of the benefits of the presidential system.

Instead of having a president elected by the entire country and entrusted with heading government, we had a prime minister with a huge amount of centralized power elected by a small group of people from south-east Sydney – an area richer, whiter, and more religious than Australia as a whole.

Essentially, we had the worst of both systems: an unrepresentative leader with too much individual power. Thompson’s fairness was nowhere to be seen, and the chastening of power that Kateb wrote about had been eroded from within.

Despite public outrage and condemnation of Morrison’s actions (including from those in his own party), they were perfectly legal – even if they “fundamentally undermined” the practice of responsible government. Luckily, Morrison did little with his extreme power, other than cancel a permit for a gas project off the coast of Sydney. Next time, however, we might not be so fortunate. What the Morrison saga shows us is that regardless of whether we live in a presidential or parliamentary system, we can’t rely on convention, tradition, and unwritten rules. Strong laws limiting individual power are essential to the creation of democracies which truly represent the will of the people.


[i] For an excellent overview of the strengths and weaknesses of parliamentary and presidential systems, check out political scientist Steffen Ganghof’s recent book.

Corporate Activism and Non-Ideal Democracy

photograph of Disney and Mickey with castle in the background

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: “Woke Capitalism.”

In March, Florida Governor Ron DeSantis signed the Parental Rights in Education Act (PREA). The “Don’t Say Gay” law restricts classroom instruction about sexual orientation or gender identity and empowers parents to sue school districts over teachings they don’t like.

Many are critical of the PREA, including, controversially, the Walt Disney Company. On the day it was signed, Disney released a statement saying that the PREA “should never have been signed into law” and declared that its “goal as a company is for this law to be repealed” or “struck down.” DeSantis and the state legislature retaliated by canceling some important privileges afforded to Disney. DeSantis described this as a wake-up call, declaring that Disney needs “to get back to the mission” and “back on track.”

The quarrel between DeSantis and Disney is representative of a broader ongoing controversy about the proper role of corporations in politics and public discourse.

Recent developments have propelled this issue into the spotlight. In 2010, the Supreme Court ruled that the First Amendment prohibits the government from restricting corporations from independently advocating for or against political candidates, opening the door to unlimited corporate spending. Moreover, corporations have recently become increasingly active in signaling support for progressive social causes, a trend which has been described as “woke capitalism.”

There are many reasons to be critical of corporate involvement in politics and public discourse. In most cases it’s probably motivated mainly by a cynical desire to curry favor with lawmakers, distract from corporate exploitation, or otherwise advance profits; corporate activism can exacerbate cultural divides and grievances; it’s unclear whether corporations have a moral right to free speech. And, most importantly, a democracy should be governed by the people, not by businesses or the economic elite.

Let’s suppose (as seems plausible) that there are many good objections against corporate activism and that in a well-functioning liberal democracy, corporations have no place in politics or public discourse. It does not follow that corporations should not participate in politics or public discourse in our society. The significance of this supposition for the Disney-PREA case (and the general controversy) depends largely on whether we live in a just and well-functioning liberal democracy. I’d like to suggest that we don’t.

If we live in a society that is only partially democratic and only partially liberal, a society that is characterized by serious systemic injustices, then perhaps we should welcome the efforts of the powerful, including corporations, when they act to redress injustices.

Perhaps corporate activism is less than ideal but nevertheless all-things-considered justified in our non-ideal situation.

To explore this line of thinking, I need to paint an ugly picture. We are told in school that the United States is a beacon of freedom and hope for the world. We are told that the U.S. is a liberal democracy, a state committed to protecting the basic freedom and equality of all its citizens, governed by the will of the people.

There are good reasons for thinking this is a convenient bit of propaganda that is only partially true.

We can look outwards first. The U.S. is an empire of sorts. Old-style empires exerted power over territories by conquering and directly ruling them. Contemporary empires like the U.S. exert imperial power less directly. The U.S. furthers its international interests through soft power and diplomacy, as when it leverages its considerable power in the UN to influence foreign governments. It also wields unprecedented hard power. For example, the U.S. has about 800 foreign military bases in 80 countries. It uses its economic and military might to overthrow foreign governments, influence foreign political and revolutionary movements, and generally meddle in the affairs of other countries.

Although those who have a grip on the levers of U.S. imperial power are ostensibly accountable to voters, we voters have virtually no de facto control over U.S. foreign affairs.

Consider the presidency. The president has a lot of say over how U.S. military power is deployed in the world. But voters only have two real options in presidential elections. And despite the standard rhetoric to the contrary, presidents from both parties tend to wield military power in more or less continuous ways. The War in Afghanistan is a representative example. This war was started by a Republican (Bush) and expanded by a Democrat (Obama). A Republican (Trump) initiated withdrawal from the region, which was completed by a Democrat (Biden).

Things look about the same looking inwards. The Declaration of Independence states that governments derive their just powers from the consent of the governed. Yet our laws routinely fail to conform to the will of the people. For example, U.S. federal laws currently fail to reflect that a majority of voters support changing the electoral college (55%), protecting access to abortion (61%), greater action on climate change (65%), decriminalizing marijuana (68%), health insurance public options (68%), universal background checks on gun purchasers (84%), and price limits on lifesaving drugs (89%). Many entrenched factors contribute to this, from the fact that some voters have far more power than others, to the influence of industries and economic elites (especially super-rich private donors) on public policy, the disproportionate wealth of lawmakers, the various demagogues clogging public discourse with inane conspiracy theories, and so on. The undemocratic elements in our society are coupled with illiberal systemic injustices like extreme economic inequality and laws that protect freedoms selectively. For example, in 2021, the top 1% of households held 32.3% of all household wealth, while the bottom 50% held only 2.6%. And at the time of this writing, federal law does not protect LGBTQ people from discrimination in employment and housing (although 70% of people support such protections).

The picture that is emerging is one of an empire that, despite having democratic and liberal elements, is largely run by elites and routinely fails to protect the basic freedom and equality of its citizens.

Suppose this picture is roughly accurate. Also suppose for the sake of argument that the PREA is seriously unjust. Since it is seriously unjust, we citizens should work to see it repealed. But we do not have as much power to affect legislation as we are encouraged to believe. Wealthy corporations have power, however, and we can solicit assistance from them. Now if the U.S. had legitimate democratic institutions, then corporate meddling in democratic processes would threaten the legitimacy of those institutions. But by supposition that legitimacy is already seriously compromised by entrenched factors. So, arguably we should solicit and welcome assistance from powerful entities like Disney insofar as this increases the likelihood that the PREA will be repealed and the expected side effects are acceptable. And arguably this is compatible with maintaining that corporate activism is ultimately a bad thing.

Here’s an imperfect but suggestive analogy. Imagine we live under a dictatorship. Many people are oppressed by harmful laws. But the dictator’s counselor is sympathetic to the oppressed. It seems to me that we could, without logical inconsistency or hypocrisy, both beseech the counselor to convince the dictator to change the harmful laws and also maintain that neither the dictator nor his counselor should have any power over us.

This suggests that corporate activism can be justified in our non-ideal situation, but only to the extent that it is efficaciously directed at making our society more just.

This marks a difference between corporations and citizens. Citizens have an autonomy-based moral right to participate in collective governance and public discourse, which entitles them to sincerely advocate for positions that are in fact unjust. Corporations have no such right. Their entitlement to advocacy is derived exclusively from the special power they have to improve our society.

It’s sensible to reject this argument if you are less pessimistic than I am about the state of our union. But I don’t think the argument should be rejected because of cynicism about corporate motivations. True, corporations are out to make a profit. Mickey is a rapacious mouse. Nevertheless, from time to time the motive of profit partially aligns with the cause of justice. We should do what we can to remind corporations of this.

Left unaddressed is the difficult practical problem of how we can effectively make use of corporate activism while also advocating for a society that is truly governed by the people, not corporations or elites. I don’t know how this problem can be solved. But I am hopeful that it can be.

Are Politicians Obligated to Debate?

photo of empty debate stage

In the leadup to the provincial election in Ontario, many members of Ontario’s Progressive Conservative party have been avoiding the debates taking place in their respective ridings. In fact, 22 out of 34 Conservatives have recently failed to show up to debates in which members of their rival parties were participating, a number that greatly exceeds the absences from all other parties combined. When asked to comment on the situation, a campaign official speaking on behalf of the Conservatives stated that the party’s mandate was to have each candidate “carefully assess the value” of participating in a debate in order to “limit the risk” of doing so. He also stated that debates are of “low value” and a candidate’s time can be better used in other ways.

Debates ahead of elections are common in democracies around the world. So, too, are instances of politicians avoiding them. For example, in the run-up to the recent presidential election in the Philippines, candidate Ferdinand “Bongbong” Marcos Jr. participated in only one out of four scheduled debates; when asked to explain his absence, he cited the desire to keep his campaign “positive” (although many of his critics speculated that his failure to attend the debates was motivated by a desire to avoid discussing his family’s history). The strategy seems to have paid off, as he is presumed to have won the election.

Some who disapprove of Conservative Party candidates skipping debates in Canada have called the move “anti-democratic”; in the Philippines, Marcos’ opponent Leni Robredo said that participating in debates is something that candidates “owe…to the people and to our country.”

Is this right? Do politicians have any specific obligation to participate in debates? And if so, what kind of obligation?

There is one sense in which political candidates like those mentioned above are not obligated to participate in debates, given that not participating does not preclude one from running. We might think that there is a different kind of obligation involved, though, one associated with “playing fair” or maybe “being a good sport”; such norms, however, have rarely carried much weight in the world of politics. Of course, one risks losing face in front of one’s constituents by failing to appear for debates, but if a politician can make up that loss in other campaign activities, or if one’s target constituency doesn’t really care about the outcomes of political debates anyway, then it might be more prudent to skip debates altogether, especially given the risk of hurting one’s campaign by getting caught off-guard by a question or saying something dumb.

So we might think that politicians who refuse to attend debates are not violating any explicit electoral rules, or being imprudent, but are instead lazy or cowardly (or both). But this is perhaps a far cry from the accusations above of being “anti-democratic.”

Indeed, there does seem to be something more egregious about avoiding political debates, namely that doing so undercuts informed citizenship, something that is a necessary condition for a well-functioning democracy.

To defend this kind of argument we need to consider what we mean by “informed” and “well-functioning.” But in general, the claim is this: if those in positions of political power are meant to be reflective of, and act in service to, the will of the people, broadly construed, then those people need to be informed about what candidates’ positions are on important issues.

That’s glossing over a lot of nuances, of course. And it’s not as if every voter needs to be extremely knowledgeable about all the details of every candidate’s respective platform, or stance on every policy issue, in order to be well-informed. Regardless, the loose argument is that better-informed voters will tend to make better voting choices, and the responsibility to inform citizens lies not just with said citizens, but with the politicians, as well. Political debates are, arguably, a significant source of information about candidates. Failing to participate in such debates thus prevents voters from getting important information they need to be well-informed. We can then see why one might think that avoiding political debates is anti-democratic, as doing so is antithetical to the democratic process one is participating in.

One might think, though, that there are surely other ways in which one can become well-informed about the candidates in an election – one could, say, look up relevant information online.

Doesn’t such readily available information make political debates more or less obsolete, at least in terms of their ability to inform the public?

No, for a few reasons. First, reading statements online does not give one the same kind of information that might come up at a debate, as there are no opportunities for rebuttals or follow-up questions. Second, one does not get to compare candidates in the same way when simply reading information online. Finally, people are not great at actively seeking out information about candidates from parties they do not already endorse, and it seems less likely that one would change one’s mind when doing self-directed research than in response to a debate.

Here is the kind of being-well-informed that seems especially crucial for a well-functioning democracy: not just knowledge about what one’s favorite candidate is all about, filtered through one’s preferred news outlet or website, but information about how different candidates compare, as well as information about other choices one may not have considered. More than just custom or nuisance, debates serve an important function of helping to inform the voting public, and failing to engage in them violates obligations central to democracy.

Is It Time to Nationalize YouTube and Facebook?

image collage of social media signs and symbols

Social media presents several moral challenges to contemporary society, on issues ranging from privacy to the manipulation of public opinion via adaptive recommendation algorithms. One major ethical concern with social media is its addictive tendencies. For example, Frances Haugen, the whistleblower from Facebook, has warned about the addictive possibilities of the metaverse. Social media companies design their products to be addictive because their business model is based on an attention economy. Governments have struggled with how to respond to the dangers social media creates, weighing measures such as independent oversight bodies and new privacy regulations to limit the power of social media companies. But does the solution to this problem require changing the business model?

Social media companies like Facebook, Twitter, YouTube, and Instagram profit from an attention economy. This means that the primary product of social media companies is the attention of the people using their service, which these companies can leverage to make money from advertisers. As Vikram Bhargava and Manuel Velasquez explain, because advertisers represent the real customers, corporations are free to be more indifferent to their users’ interests. What many of us fail to realize is that,

“built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions…the companies do not care whether it is better or worse for the user because the user does not matter; the user’s interests do not figure into the social media company’s decision making.”

As a result of this business model, social media is often designed with persuasive technology mechanisms. Intermittent variable rewards, nudging, and the erosion of natural stopping cues help create a kind of slot-machine effect, and the use of adaptive algorithms that take in user data in order to customize the user experience only reinforces this. Accordingly, many experts have increasingly recognized social media addiction as a problem. A 2011 survey found that 59% of respondents felt they were addicted to social media. As Bhargava and Velasquez report, social media addiction mirrors many of the behaviors associated with substance addiction, and neuroimaging studies show that the same areas of the brain are active as in substance addiction. It is also well documented that the introduction of social media was followed by a marked increase in teenage suicide, potentially as a consequence of this addiction.

But is there a way to mitigate the harmful effects of social media addiction? Bhargava and Velasquez suggest that measures like addiction warnings, or prompts that make platforms easier to quit, can be important steps. Many have argued that breaking up social media companies like Facebook is necessary as they function like monopolies. However, it is worth considering that breaking up such businesses to increase competition in a field centered around the same business model may not help. If anything, greater competition in the marketplace may only yield new and “innovative” ways to keep people hooked. If the root of the problem is the business model, perhaps it is the business model which should be changed.

For example, since in an attention economy business model users of social media are not the customers, one way to make social media companies less incentivized to addict their users is to make them customers. Should social media companies using adaptive algorithms be forced to switch to a subscription-based business model? If customers paid for Facebook directly, Facebook would still have an incentive to provide a good experience for users (now being their customers), but it would have less incentive to focus its efforts on monopolizing users’ attention. Bhargava and Velasquez, for example, note that on a subscription streaming platform like Netflix, it is immaterial to the company how much users watch; “making a platform addictive is not an essential feature of the subscription-based service business model.”

But there are problems with this approach as well. As I have described previously, social media companies like Meta and Google have significant abilities to control knowledge production and knowledge communication. Even with a subscription model, the ability of social media companies to manipulate public opinion would still be present. Nor would it necessarily solve problems relating to echo-chambers and filter bubbles. It may also mean that the poorest members of society would be unable to afford social media, essentially excluding entire socioeconomic groups from the platform. Is there another way to change the business model and avoid these problems?

In the early 20th century, during the rise of a new mass media and its advertising, many believed that this new technology would be a threat to democracy. The solution was public broadcasting such as PBS, the BBC, and the CBC. Should a 21st-century solution to the problem of social media be similar? Should there be a national YouTube or a national Facebook? Certainly, such platforms wouldn’t need to be based on an attention economy; they would not be designed to capture as much of their users’ attention as possible. Instead, they could be made freely available for all citizens to use and contribute to, without a subscription.

Such a platform would not only give the public greater control over how its algorithms operate, but also greater control over privacy settings. The platform could also be designed to strengthen democracy. Instead of having a corporation like Google determine the results of your video or news search, for instance, the public itself would now have a greater say about what news and information is most relevant. It could also bolster democracy by ensuring that recommendation algorithms do not create echo-chambers; users could be exposed to a diversity of posts or videos that don’t necessarily reflect their own political views.

Of course, such a proposal carries problems as well. The cost might be significant; however, a service that replicates the positive social benefits without the “innovative” and expensive process of creating addictive algorithms may partially offset that. Also, depending on the nation, such a service could be subject to abuse. Just as there is a difference between public broadcasting and state-run media (where the government has editorial control), the service would lose its purpose if all content on the platform were controlled directly by the government. Something more independent would be required.

However, another significant minefield for such a project would be agreeing on community standards for content. Obviously, the point would be not to allow the platform to become a breeding ground for misinformation, and so clear standards would be necessary. However, in the greater democratic interest of breaking free from our echo-chambers, the public would also have to agree to and accept that others may post, and they may see, content they consider offensive. We need to be exposed to views we don’t like. In a post-pandemic world, this is a larger public conversation that needs to happen, regardless of how we choose to regulate social media.

The Democratic Limits of Public Trust in Science

photograph of Freedom Convoy trucks

It isn’t every day that Canada makes international headlines for civil unrest and disruptive protests. But the protests which began last month in Ottawa by the “Freedom Convoy” have inspired similar protests around the world and led to the Canadian government declaring a national emergency and seeking special powers to handle the crisis. But what exactly is the crisis that the nation faces? Is it a far-right, conspiratorial, anti-vaccination movement threatening to overthrow the government? Or is it the government’s infringement on rights in the name of “trusting the experts”?

It is easy to take the view that the protests are wrong. First, we must acknowledge that the position the truckers are taking in protesting the mandate is fairly silly. For starters, even if they were successful at getting the Canadian federal government to change its position, the United States also requires that truckers be vaccinated to cross the border, so the point is moot. I also won’t defend the tactics used in the protests, including the noise, blocking bridges, etc. However, several people in Canada have pinned part of the blame for the protests on the government, and Justin Trudeau in particular, for politicizing the issue of vaccines and creating a divisive political atmosphere.

First, it is worth noting that Canada has relied more on restrictive lockdown measures as of late compared to other countries, and much of this is driven by the need to keep hospitals from being overrun. However, this is owing to long-term systemic fragility in the healthcare sector, particularly a lack of ICU beds, prompting many – including one of Trudeau’s own MPs – to call for reform to healthcare funding to expand capacity instead of relying so much on lockdown measures. One would think that this would be a topic of national conversation with the public wondering why the government hasn’t done anything about this situation since the beginning of the pandemic. But instead, the Trudeau government has only chosen to focus on a policy of increasing vaccination rates, claiming that they are following “the best science” and “the best public health advice.”

Is there, however, a possibility that the government is hoping that, with enough people vaccinated and enough lockdown measures, it can avoid having the healthcare system collapse, wait for the pandemic to blow over, and escape without having to address such long-term problems? Maybe, maybe not. But it certainly casts any advice offered or decisions made by the government in a very different light. Indeed, one of the problems with expert advice (as I’ve previously discussed here, here, and here) is that it is subject to inductive risk concerns, and so the use of expert advice must be democratically informed.

For example, if we look at a model used by Canada’s federal government, one will note how often its projections are based on different assumptions about what could happen. The model itself may be driven by a number of unstated assumptions which may or may not be reasonable. It is up to politicians to weigh the risks of getting it wrong, and not simply treat experts as if they are infallible. This is important because the value judgments inherent in risk assessment – about the reasonableness of our assumptions as well as the consequences of getting it wrong and potentially overrunning the healthcare system – are what ultimately will determine what restriction measures the government will enact. But this requires democratic debate and discussion. This is where failure of democratic leadership breeds long-term mistrust in expert advice.

It is reasonable to ask questions about what clear metrics a government might use before ending a lockdown, or to ask if there is strong evidence for the effectiveness of a vaccine mandate. But for the public, not all of whom enjoy the benefit of an education in science, it is not so clear what is and is not a reasonable question. The natural place for such a discussion would be the elected Parliament where representatives might press the government for answers. Unfortunately, defense of the protest in any form in Parliament is vilified, with the opposition being told they stand with “people who wave swastikas.” Prime Minister Trudeau has denounced the entire group as a “small fringe minority,” “Nazis,” with “unacceptable views.” However, some MPs have voiced concern about the tone and rhetoric involved in lumping everyone who has a doubt about the mandate or vaccine together.

This divisive attitude has been called out by one of Trudeau’s own MPs who said that people who question existing policies should not be demonized by their Prime Minister, noting “It’s becoming harder and harder to know when public health stops and where politics begins,” adding, “It’s time to stop dividing Canadians and pitting one part of the population against another.” He also called on the Federal government to establish clear and measurable targets.

Unfortunately, if you ask the federal government a direct question like “Is there a federal plan being discussed to ease out mandates?” you will be told that:

there have been moments throughout the pandemic where we have eased restrictions and those decisions have always been made guided by the best available advice that we’re getting from public health experts. And of course, going forward we will continue to listen to the advice that we get from our public health officials.

This is not democratic accountability (and it is not scientific accountability either). “We’re following the science” or “We’re following the experts” is not good enough. Anyone who actually understands the science will know that this is more a slogan than a meaningful claim.

There is also a bit of history at play. In 1970, Trudeau’s father Pierre invoked the War Measures Act during a crisis that resulted in the kidnapping and murder of a cabinet minister. It also resulted in the roundup and arrest of hundreds of people without warrant or charge. This week the Prime Minister has invoked the successor to that legislation for the first time in Canadian history because…trucks. The police were having trouble moving the trucks because they couldn’t get tow trucks to help clear blocked border crossings. Now, while we can grant that the convoy has been a nuisance and has illegally blocked bridges, we’ve also seen the convoy comply with court-ordered injunctions on honking and the convoy organizers oppose violence, with no major acts of violence taking place. While there was a rather odd proposal that the convoys could form a “coalition” with the parliamentary opposition to form a new government, I suspect that this owes more to a failure to understand how Canada’s system of government works than to a serious attempt to, as some Canadian politicians would claim, “overthrow the government.”

The point is that this is an issue that has started with a government not being transparent and accountable, abusing the democratic process in the name of science, and taking advantage of the situation to demonize and delegitimize the opposition. It is in the face of this, and in the face of uncertainty about the intentions of the convoy, and after weeks of not acting sooner to ameliorate the situation, that the government claims that a situation has arisen that, according to the Emergencies Act, is a “threat to the security of Canada…that is so serious as to be a national emergency.” Not only is there room for serious doubt as to whether the convoy situation has reached such a level, but this is taking place during a context of high tension where the government and the media have demonstrated a willingness to overgeneralize and demonize a minority by lobbing as many poisoning the well fallacies as possible and misrepresenting the nature of science. The fact that in this political moment the government seeks greater power is a recipe for abuse of power.

In a democracy, where not everyone enjoys the chance to understand what a model is, how models are made, or how reliable (and unreliable) they can be, citizens have a right to know more about how their government is making use of expert advice in limiting individual freedom. The politicization of the issue using the rhetoric of “following the science,” as well as the government’s slow response and opaque reasoning, have only served to make it more difficult for the public to understand the nature of the problem we face. Our public discourse has been stunted by transforming our policy conversations into a narrow debate about vaccination and the risk posed by the “alt right.” But there is a much bigger, much more real problem here: the call to “trust the experts” can serve just as easily as a rallying cry for rationality as a political tool for demonizing entire groups of people to justify taking away their rights.

‘Don’t Look Up’: Willful Ignorance of a Democracy in Crisis

image of meteor headed toward city skyline

“Don’t Look Up spends over two hours making the same mistake. In its efforts to champion its cause, the film only alienates those who most need to be moved by its message.”

Holly Thomas, CNN

“it’s hard to escape the feeling of the film jabbing its pointer finger into your eye, yelling, Why aren’t you paying attention! … The thing is, if you’re watching Don’t Look Up, you probably are paying attention, not just to the news about the climate and the pandemic but to a half-dozen other things that feel like reasonable causes for panic. … So when the credits rolled — after an ending that was, admittedly, quite moving — I just sat there thinking, Who, exactly, is this for?”

Alissa Wilkinson, Vox

“[The film]’s worst parts are when it stops to show people on their phones. They tweet inanity, they participate in dumb viral challenges, they tune into propaganda and formulate conspiracy theory. At no point does Don’t Look Up’s script demonstrate an interest in why these people do these things, or what causes these online phenomena. Despite this being a central aspect of his story, McKay doesn’t seem to think it worthy of consideration. There’s a word for that: contempt.”

Joshua Rivera, Polygon

And so on, and so on. Critics of Adam McKay’s climate change satire all point to the same basic defect: “Don’t Look Up” is nothing more than an inside joke; it isn’t growing the congregation, it’s merely preaching to the choir. Worse, the movie flaunts its moral superiority over the deplorables and unwashed masses instead of shaking hands, kissing babies, and doing all the other politicking necessary for changing hearts and minds. When given the opportunity to speak to its audience, it speaks down to them. In the end, this collection of Hollywood holier-than-thou A-listers sneers at their audience and is left performing only for themselves.

But what if the critics have it all wrong? What if the movie’s makers have no intention of wrestling the various political obstacles to democratic consensus? Indeed, they seem to have absolutely zero interest in playing the political game at all. Critics of “Don’t Look Up” see only a failed attempt at coalition-building, but what if the film’s doing precisely what it set out to do – showing us that there are some existential threats so great that they transcend democratic politics?

“Don’t Look Up” takes a hard look at the prospects of meaningful collective action (from COVID to the climate and beyond) when democratic institutions have been so thoroughly corrupted by elite capture. (Spoiler: They’re grim.) Gone is any illusion that the government fears its people. In this not-so-unfamiliar political reality, to echo Joseph Schumpeter, democracy has become nothing more than an empty institutional arrangement whereby elites acquire the power to decide by way of a hollow competition for the people’s vote. This political landscape cannot support anything as grand as Rousseau’s general will – a collection of citizens’ beliefs, convictions, and commitments all articulating a shared vision of the common good. Instead, political will is manufactured and disseminated from the top down, rather than being organically generated from the ground up.

The pressing question “Don’t Look Up” poses (but does not address) is what to do when democracy becomes part of the problem. If our democratic processes can’t be fixed, can they at least be laid aside? With consequences as grave as these, surely truth shouldn’t be left to a vote. When it comes to the fate of the planet, surely we shouldn’t be content to go on making sausage.

Misgivings about democracy are hardly new. Plato advised lying to the rabble so as to ensure they fall in line. Mill proposed assigning more weight to certain people’s votes. And Rousseau concluded that democracy was only rightly suited for a society composed entirely of gods.

Like these critical voices, Carl Schmitt challenged our blind faith in democratic processes. He remained adamant that the indecisiveness that plagued republics would be their downfall. Schmitt insisted on the fundamental necessity of a sovereign to address emergency situations (like, say, the inevitable impact of a planet-killing comet). There has to be someone, Schmitt claimed, capable of suspending everyday political norms in order to normalize a state of exception – to declare martial law, mobilize the state’s resources, and organize the public. Democracies that failed to grasp this basic truth would not last. The inability to move beyond unceasing deliberation, infinite bureaucratic red tape, and unending political gridlock, Schmitt was convinced, would spell their doom. In the end, all governments must sometimes rely on dictatorial rule, just as in ancient Rome, where time-limited powers were extended to an absolute authority tasked with saving the republic from an immediate existential threat.

This is the savior that never appears. The tragedy of the movie is that our protagonists know the truth, but cannot share it. There remain no suitable democratic channels to deliver their apocalyptic message and spur political action. They must sit with their despair, alone. Like Dewey, Kate Dibiasky and Dr. Mindy come to recognize that while today we possess means of communication like never before – the internet, the iPhone, Twitter, The Daily Rip – (so far) these forces have only further fractured the public rather than being harnessed to bring it together.

By the end, when the credits roll, the film leaves us in an uncomfortable place. In documenting the hopelessness of our heroes’ plight, is “Don’t Look Up” merely highlighting the various ways our democracy needs to be repaired? Or is it making the case that the rot runs so deep, democratic norms must be abandoned?

Whatever the answer, it’s a mistake to think “Don’t Look Up” fails to take the problem of political consensus seriously. It simply treats such division as immovable – as inescapable as the comet. The question is: what then?

Intervention and Self-Determination in Haiti

photograph of Hispaniola Island on topographic globe

Haiti is in crisis, though that fact is not new. Its president, Jovenel Moïse, has been assassinated, probably by foreign actors, after refusing to leave office following the end of his term — a term that began with a contested election. This isn’t the first time that’s happened either. Haiti has been beset by conflict nearly since its founding, with almost all the brief periods of “peace” accompanied by ruthless, authoritarian control either by native dictators or foreign powers. To use terminology from MLK Jr., there has never been the positive peace of liberation in Haiti, though for brief periods there has been the negative peace of iron-fisted oppression.

No one yet knows who assassinated Moïse or why they did it. However, there are some clues. The assassination was “well-orchestrated” with numerous vehicles full of upwards of 20 people storming the president’s home early in the morning while most of his guards were noticeably absent. And, Moïse had many enemies: he was unpopular, many powerful business-controlling families opposed him, and the leader of G9, the most prominent confederation of gangs, expressed opposition to his reign.

As a result of the assassination, the country has fallen into a chaos of leadership. At least three people have been claiming legitimate authority over the Haitian government: Claude Joseph, the acting Prime Minister who was fired by Moïse just a week before his death; Ariel Henry, the man Moïse appointed to replace Joseph; and Joseph Lambert, the President of the Haitian Senate, whom the Senate voted to succeed Moïse. Meanwhile, the legislature is mostly empty, since the terms of all the representatives in Haiti’s lower house and two-thirds of those in the upper house expired last year and elections to replace them were not held. Because of this situation, Moïse was ruling by decree and advocating a constitutional referendum to increase the power of the presidency. Thus, when he was killed, there was no obvious authority to replace him. (Claude Joseph has agreed to hand power over to Ariel Henry, but, as NPR reports, “some lawmakers … said the agreement lacks legal legitimacy.”)

Without clearly legitimate leadership, several ongoing crises in Haiti are likely to worsen: the spread of COVID-19 variants, the dysfunctional economy, and the growing power of violent gangs. The situation is unconscionable. Surely, Haiti is in need of aid and would benefit from the help of its rich, powerful neighbor, the United States, right? Unfortunately, it’s not that straightforward.

There are two main camps on this issue. Some people, mostly liberal American commentators, are pro-intervention for basically the reason expressed above: the situation is dire, requires an immediate fix, and the people of Haiti cannot do it alone. Others, including socialists, anti-imperialists, and activists in Haiti, oppose intervention, citing the long history of foreign intervention in Haiti that has only made things worse, furthering the interests of everybody except the Haitian people.

Before we turn to assessing the merits of these two positions, it’s important to appreciate some of the context since, without a sense of the history here, we seem doomed to repeat it. We’ll look at how Haiti has actually been governed, the history of intervention in Haiti by foreign powers, and why there is disagreement about who should lead the government.

Putting the Problem in Context

History of Foreign Influence in Haiti

In practice, Haiti has rarely lived up to the ideal of a constitutional republic. The Spanish and then the French colonized the island from 1492 to 1804, when Haitians declared independence. For most of its history thereafter, Haiti has been led by a local dictator (such as François Duvalier), a military junta, or a foreign occupying military (most often the United States).

The U.S. occupied Haiti from 1915 to 1934 and again from 1994 to 1995, and participated in the 2004 coup d’état against Haiti’s first truly democratically elected president, Jean-Bertrand Aristide. In the first occupation, the United States military compelled Haiti to rewrite its constitution to allow foreign ownership of Haitian land. They killed fifteen thousand rebelling Haitians. And they introduced Jim Crow laws, reintroducing racism to the island after its founders had declared that all its citizens would be considered Black. They did all this to reinforce American business interests on the island and to strengthen the United States’ imperialist position in the region.

The UN then occupied Haiti from 2004 to 2017, ostensibly to keep the peace. They brought cholera, killing thousands. And there were credible reports of UN soldiers frequently sexually assaulting the Haitians they were stationed there to protect.

And, already, it has come out that some of the Colombian mercenaries involved in the assassination were U.S.-trained, if not actually U.S.-led. The U.S. trained these Colombian mercenaries to fight against drug cartels in Central and South America, just one more example of U.S. foreign intervention with unforeseen consequences.

Given all this foreign influence, and the changes those influences have wrought on the Haitian Constitution, the constitution in Haiti is not treated with the same reverence as the United States Constitution is in the U.S. Nonetheless, for those who don’t claim to rule by sheer force (unlike the numerous gangs who do, and who, in practice, control large parts of the capital city of Port-au-Prince), the constitution is the sole source of authority.

Origin of the Leadership Controversy

The current constitution, as amended in 2012, says that the prime minister assumes the role of the president should the sitting president die. Thus Claude Joseph and Ariel Henry, both of whom claim the prime ministership, claim the power to serve as acting president. But, as the Haitian Times reports, “the constitution also says that if there is a vacancy ‘from the fourth year of the presidential mandate,’ the National Assembly will meet to elect a provisional president.”

Unfortunately, the National Assembly has been almost entirely empty since last year when the terms of two-thirds of the Senators expired along with the terms of all the House Deputies. Thus, the remaining 10 Senators, who are the only elected representatives in office, claim the authority to elect the provisional president. Of those ten, eight agreed on Joseph Lambert, the President of the Senate, who is the third to claim the power of the presidency. With the president assassinated, the Chief Justice of the Supreme Court recently having passed away from COVID-19, and the legislature virtually empty, all three branches of government lack straightforwardly legitimate leadership.

Ethical Perspectives on Intervention

Pro-Intervention

So, what should be done about this mess, if anything? The pro-intervention camp varies in their prescriptions, but we can identify two main suggestions that repeatedly crop up: first, they support a U.S.-led investigation into the assassination. As Ryan Berg of CNN states, “The international community. . . should push for an investigation. . . lest [the perpetrators] benefit from the impunity that is all too common in Haiti.” If unelected interests can simply kill politicians they don’t like, the government isn’t much of a government at all.

Second, they recommend the U.S. or UN organize an immediate election to refill the legislature and office of the president. In Haiti, the government is responsible for running elections. But, as we’ve seen, there isn’t much of a government left. Thus, as the editorial board of The Washington Post argues,

“The hard truth, at this point, is that organizing them and ensuring security through a campaign and polling, with no one in charge, may be all but impossible.”

Anti-Intervention

The anti-interventionists staunchly disagree. Kim Ives, an investigative journalist at Haïti Liberté, explained in an interview with Jacobin that the assassination was likely a response to socialist Jimmy Cherizier. Cherizier brought together nine of the largest gangs in Port-au-Prince into a single organization called G9 and advocated against foreign ownership of Haitian businesses. He made a statement on social media, saying, “It is your money which is in banks, stores, supermarkets and dealerships, so go and get what is rightfully yours.” Ives supports the “G9 movement,” as he calls it, and so opposes intervention that would serve to crack down on the “crime” he sees as revolutionary. As he says, the interests of Haiti’s rich are “practically concomitant with US business interests,” and so U.S. intervention would “set the stage for the repression, for the destruction of the G9 movement.”

But you need not be a revolutionary socialist to oppose intervention in Haiti. A great many Haitians oppose U.S. intervention. They tend to give two reasons: first, foreign intervention has frequently hurt Haiti, intentionally or unintentionally, far more than it has helped; and second, as André Michel, a human rights lawyer and opposition leader, demands, “The solution to the crisis must be Haitian.” Racism and classism have led outside nations to think Haitians cannot solve their own problems. But those outside efforts have always failed. As Professor Mamyrah Douge-Prosper urges, “Rather than speaking authoritatively while standing atop long-standing racist tropes, it is important more than ever to be humble, ask questions, and focus on the deeper context.”

In short, it is Haitians who know best how to fix Haiti. Its problems are largely a result of colonialism and imperialism from foreigners. France forced Haiti into debt to preserve its independence. The UN brought cholera and sexual violence. Foreign aid money has destroyed the local economy. Foreign entanglement has always been the problem, not the solution.

Resolving the Disagreement

What are we to make of this disagreement between those in favor of and those against foreign intervention? One solution is to appeal to democracy and simply do what the Haitian people want. The people of Haiti may have a right to self-determination that we must respect. The value of respecting national sovereignty as a rule might be more important than the benefits accrued from a particular successful violation of that sovereignty. Now, Claude Joseph has requested U.S. or UN military intervention. But, as we’ve seen, Haiti’s government is currently far from representing its people.

A strictly consequentialist view would be hard-pressed to justify intervention given the damage past interventions have done. But, perhaps we nonetheless have a duty to do something. Intuitively, it seems hard to say we can just do nothing. Just because the interventions of the past have failed does not mean that this one must too. Surely it’s possible that we might learn from our mistakes. And so, perhaps a limited intervention made with good intentions and careful consideration of past errors could do good.

If you’re a socialist, you might be inclined to oppose intervention in the hope that the G9 movement prompts a real revolution. But, if you agree that the past predicts the future when it comes to the inefficacy of foreign intervention, you must also consider how past socialist revolutions have resulted in dictatorships just as bad if not worse than the governments they were intended to replace. This can be seen least controversially in North Korea, Cambodia, and the Soviet Union.

Conclusion

Regardless of which way you swing on the issue, there are several uncontroversial conclusions we can draw about the situation in Haiti:

First, there is no simple solution. U.S. intervention will not immediately make things all better, nor will simply hoping that Haitians solve their crises on their own without addressing the systemic issues that have led to the present situation. There is a host of interested players, from wealthy business families to the country’s many political parties to socialist gang confederations. Additionally, there are many axes of conflict relevant to this situation: bourgeois vs. proletariat, mulattos vs. Blacks, and colonizers vs. colonized, among others.

Anti-interventionists suggest we respect the autonomy of Haitians by respecting their preferences. But, given all these divisions, there’s no real majority preference to be respected. Respecting any preference would be taking a side. And, more than that, say the pro-interventionists, why do their preferences matter if intervention would make them all better off? It’s a valid concern, but it is also the argument that has been given over and over again to justify foreign intervention, to ill effect.

Thus, second, we must act in the context of history. Any intervention that is carried out must be done extremely cautiously in light of all the harm past interventions have done. For Haitians to succeed in resolving their problems, they must be treated as capable of resolving their own problems. An intervention that is not Haitian-led will reinforce the belief of many Haitians that they are not the ultimate agents of their own affairs. So long as that belief persists, Haiti will not retain any positive changes that are made.

Finally, as we began with, the status quo in Haiti is unacceptable. Something must be done. The situation in Haiti is the complex result of the involvement of numerous nations. These nations have a duty, if not to intervene, then at least to ensure that the sort of harms they caused (and continue to cause) Haiti do not follow it into the future. For example, the French might owe Haiti repayment of the enormous debt they unfairly levied on their former colony. Likewise, the United States might be obligated to end the American property holdings in Haiti that were only possible because of the revisions the United States forced upon the Haitian constitution. And finally, colonizing nations more broadly might have an obligation to invest more in the nations they colonized, to make up for the damage colonization has wrought on the Global South. Haiti’s crisis is just one more example of how the consequences of colonialism and imperialism can filter down across the centuries.

“Stand Back and Stand By”: The Demands of Loyal Opposition

photograph of miniature US flag with blurred background

An incendiary essay is currently making the rounds. Glenn Ellmers’s “‘Conservatism’ is no Longer Enough” is a call to arms: “The United States has become two nations occupying the same country.” The essay details a kind of foreign occupation:

“most people living in the United States today—certainly more than half—are not Americans in any meaningful sense of the term. […] They do not believe in, live by, or even like the principles, traditions, and ideals that until recently defined America as a nation and as a people. It is not obvious what we should call these citizen-aliens, these non-American Americans; but they are something else.”

Given this dire situation where there is “almost nothing left to conserve,” “counter-revolution” represents “the only road forward.” Those brave enough to grasp this grave truth also possess the clarity of vision to see that “America, as an identity or political movement, might need to carry on without the United States.” For if true patriots fail to find the courage to mobilize and take action, “the victory of progressive tyranny will be assured. See you in the gulag.”

While it may seem irresponsible to grant such obvious propaganda further attention, this piece of writing is worthy of consideration for two reasons. First, it bears the seal of a prominent conservative think tank. Published by The American Mind, an outlet with direct ties to the Claremont Institute (from which Ellmers graduated and where he serves as a fellow), the essay is endorsed by a body with not insignificant conservative cachet. The Institute’s various fellows and graduates, for instance, have ties to major universities. It would be a mistake to see this as obscure preaching to a small flock; the narrative communicated by the piece is emblematic. This isn’t everyday internet debris; this is an intellectualized version of the hard right’s position, serving as mission statement for the Claremont Institute for the Study of Statesmanship and Political Philosophy, whose name Ellmers invokes.

Second, the essay has important implications for the various efforts to overturn the results of the presidential election, the January 6th Capitol riot, as well as voting legislation in Georgia (and elsewhere) attempting to restrict the franchise to “real” Americans. Ellmers’s essay offers a compelling framework by which to understand the motives of those behind these events. Like Michael Anton’s “The Flight 93 Election” (another Claremont fellow whose piece was published by the same body), Ellmers’s essay paints the current political moment as a desperate choice: fight or face extinction, rush the cockpit or die.

Ellmers’s essay has received attention in no small part due to its eerie similarity to Weimar-era German political writings. Echoing the kind of language used by Carl Schmitt – the constitutional scholar and jurist who embraced National Socialism while penning substantial critiques of liberalism – the essay emphasizes the need to declare a state of emergency and purge those who have infiltrated the state and subverted American politics, all in an act of restoration and purification. “What is needed, of course,” Ellmers claims, “is a statesman who understands both the disease afflicting the nation, and the revolutionary medicine required for the cure” — a pronouncement strikingly similar to Schmitt’s account of the sovereign’s role: to normalize the situation by embracing the responsibility to deliver the miracle of the decision – that is, the extra-legal authority to say whether everyday legal norms should apply.

Likewise, the essay seconds Schmitt’s conviction that the basis of politics rests on distinguishing friends from foes and treating them as such. For any state to continue to be, it must be willing and able to forcibly expel those who might undermine its fundamental homogeneity in order to save itself from corruption from within. Again, following Schmitt, the essay issues a dire warning on the supposed political virtue of tolerance and questions our blind faith in democracy’s ability to assimilate conflicting and antagonistic viewpoints and house them under the same roof.

Lost in all the fascist rhetoric is an important philosophical problem. The challenge is familiar to students of political obligation: how can citizens feel any tie to the law when it isn’t their team who’s making the rules? It is what David Estlund has called the “puzzle of the minority democrat”: how can those in the minority consider themselves self-governing if they are subject to laws they have not explicitly endorsed?

This is no small thing; resolving this tension is the key to the bloodless transition of power. Ensuring citizens can adequately identify with the law and see themselves sufficiently reflected in their government is a necessary component of the exercise of legitimate political authority. We need a compelling answer for how citizens might still see themselves as having had a hand in authoring these constraints even when their private preferences have failed to win the day. Why should those in the minority sacrifice their own sense of what is right simply because they lack numbers on their side on any particular occasion?

Our answers to this puzzle often begin by emphasizing that democratic decision-making is essentially about compromise. Majority rule acknowledges our basic equality by publicly affirming the worth of each citizen’s viewpoint. It privileges no single individual’s claim to knowledge or expertise. It grants each citizen the greatest share of political power possible that remains compatible with people’s basic parity. From there, explanations begin to diverge.

Some accounts emphasize the duty to live by the result of the game in which we’ve been a willing participant. Others highlight the opportunity to impact the decision, voice concerns, and engage in reason-giving. A few maintain faith in the majority’s ability to come to the correct decision.

Regardless of the particulars, each of these accounts makes a virtue of reciprocity; individual freedom must be balanced against the equally legitimate claims to liberty by one’s fellows. Refusing to acknowledge this binding force usurps others’ right to equal discretion in shaping our shared world and thus violates our moral commitment to the fundamental equality of people.

These considerations about how best to accommodate deep, and potentially incompatible, disagreement have important implications for our politics today. For example, the ongoing debate over reforming the filibuster is a conversation about, among other things, the appropriate portion of power those in the minority should wield. Different people articulate different visions of the part the opposition party needs to play. But we seemingly all agree that this role must be more robust than one wherein those in the minority simply bide their time until they can rewrite the law and install their own private political vision. Instead, we must continue to articulate the significant demands the concept of loyal opposition makes on all of us. Responsible statesmanship is not solely the burden of those who wear the crown.

The Value of Secrecy in Congress

photograph of C-SPAN floor vote TV coverage

During both of the most recent impeachments, an old argument resurfaced. Many spoke out to advocate a way Republican members of Congress, afraid of retribution, could get rid of Trump and keep their own seats: they suggested that the impeachment votes in the House and the Senate be held in secret. Republican voters would know some Republicans voted to convict, but the blame would be diluted, spread across all 50 or so Republican senators. And so each Republican senator would individually be unlikely to lose his or her seat.

But this raises a question: if it would be good to convict Trump secretly, why not make the votes on all sorts of controversial issues secret? The people would know what laws were passed of course, but no one would be allowed to see committee meetings. No congressional sessions or votes would be broadcast on TV. You would vote in your representatives and then for two, four, or six years, you would simply trust that they voted in the way that was best. Members of Congress could pass legislation that might be unpopular to their constituency, but important for the nation at large. And neither ordinary citizens nor lobbyists could influence the legislative process after election day.

Many, however, are horrified by this idea. Making acts of Congress secret would be akin to government by aristocracy, rule by the elites, not democracy. Transparency is vital because it allows citizens to accurately judge whether their elected representatives are actually representing them instead of simply voting their own interests.

Let’s consider the arguments on both sides here and see if we can develop a better understanding of the issue. What are the benefits of congressional secrecy? And, are the costs to democracy too severe?

The first reason one might think congressional votes should be secret is that secrecy would allow Congress to stop acting only along party lines. Congress is extremely partisan nowadays, and this hasn’t always been the case. Furthermore, this unwillingness to cross the aisle makes it difficult for Congress to achieve popular political ends. For example, nearly 60 percent of Americans supported Trump being convicted and removed from office after the second impeachment trial. Even more Americans, including 64 percent of Republicans, support stimulus checks. But no Republican members of Congress voted for Biden’s stimulus check, despite voting for Trump’s. And finally, a majority of Republicans support increasing the minimum wage, but Republican members of Congress vote against it when the issue is raised by Democrats. Voting against political opponents seems to be more important to members of Congress than passing popular legislation.

The fact of the matter is that Congress isn’t beholden to your average voter. Nor even the average voter from their party. Members of Congress are beholden to the partisans of their party because of the primary system. According to a study from the Social Science Research Council, primary voters tend to prefer politically extreme candidates. And if candidates can’t make it past the primaries, it doesn’t matter how popular they would be in the general election. (Some have suggested primaries are responsible for Trump’s nomination.) In any case, if Congressional votes were more often secret, congresspeople could give lip service to extremism in the primaries while looking to what’s best for the country when they actually vote. Those extreme partisans wouldn’t know who betrayed them. Thus, legislation that is broadly popular, but not popular among extreme partisans, could be passed and perhaps we’d be better off.

But, partisans and primaries aren’t the only reason Congress doesn’t pass popular legislation. Another problem congressional secrecy, especially in committee meetings, could solve is the influence of lobbyists and donors. As I have written elsewhere, money in politics is a seriously corrupting influence. Lobbyists and donors frequently control the legislative agenda. But, again, this hasn’t always been the case. The number of lobbyists skyrocketed in the 1970s with the passage of so-called “Sunshine Laws” meant to improve government transparency. Some of these are good: Freedom of Information Act requests allow the people to have access to a great deal of information about the operation of government that would be otherwise hidden from them. But, they also allowed lobbyists to flow in from the lobby through the previously closed doors of committee meetings. As is argued by the Congressional Research Institute (a think tank, not part of Congress), these laws “enormously enhanced the ability of ‘outside’ lobbyists and powerful entities to influence the legislative process,” and so they claim “all legislative transparency overwhelmingly benefits special interests and the powerful.”

Think of it this way: before, lobbyists and donors could monitor how congressional votes shook out. If particular members of Congress voted how the donors wanted, they would get more campaign donations, and if not, they wouldn’t. This influence has always been around. But since the passage of the Sunshine Laws, lobbyists can monitor the entire legislative process: they can write the legislation, follow along with congressional committee meetings to make sure no revisions are made they don’t like, and display their approval or disapproval to members of Congress throughout the process. Of course, ordinary citizens can do this too, but they tend not to have the resources to lobby as powerfully as massive corporations or billionaires. If the relevant “Sunshine Laws” were reversed, many of these problems would go away, and if congressional votes were made secret too, lobbying would become a very bad investment. Donors could spend money on lobbying and campaign donations and hope that the legislator feels pressured by it, but they would never be sure if it worked. Thus, the influence of money in politics would be diminished.

However, there remains an enormous counter-argument to making the acts of Congress secret. I have been making a very utilitarian case for secrecy. It would achieve better results for the American people. But that may not be the only thing that matters. One might argue that the ends aren’t the only things that matter; the means do too. Making the acts of Congress secret would allow lawmakers to ignore the interests of the people in favor of their own opinions and values. It would allow members of Congress to lie to the people about how they voted with little to no consequence. Perhaps transparency should be considered a virtue such that if maintaining transparency means lobbyists and donors get their way, so be it.

One might say getting something they consider important, like removing Trump from office, or getting stimulus to the people, or raising the minimum wage, isn’t worth the cost of allowing Congress to be unbeholden to voters. Is a democracy led by representatives who can ignore the voters really a democracy at all? Many political philosophers, like John Locke and Jean-Jacques Rousseau, have argued that government derives its power from the consent of the governed. One might hold that doing what the people want, even if it’s wrong, is more important than doing right if doing right means ignoring the will of the people. A government that doesn’t act for the people may not be much of a government at all. And why should we think representatives know better than the population at large? They are only human. More than that, they are an unrepresentative sample of the country, being more white, more male, older, and wealthier than the American population. Thus, on this view, making the acts of Congress secret is untenable: it is valid only according to a consequentialist framework, and anyone who disagrees with such a framework will abhor the fact that legislators would be incentivized toward dishonesty and away from democratic principles. As Aristotle wrote in the Nicomachean Ethics, to act “at the right times, with reference to the right objects, towards the right people, with the right motive, and in the right way, is what is both intermediate and best, and this is characteristic of virtue,” nothing more, nothing less. This is a far higher standard than simply weighing the consequences, and one we should strive for.

Making the acts of Congress secret would be an enormous change and not one to be taken lightly. As I have shown, your thoughts on this issue can vary significantly based on which moral framework you follow. The case, at least in the short term, is clear for the consequentialist. But for the virtue ethicist or deontologist, things are far murkier. Answering this question, as with many moral questions, requires us to consider which of our values cannot be crossed. Which do you value more, if one has to be sacrificed: transparency and democracy, or the people’s welfare? In any case, something needs to change so that the problems of political partisanship and the influence of money in politics are resolved. Making the acts of Congress secret may be one solution, but there are surely others. Perhaps we should reform the primary system. Perhaps we should overturn Citizens United to diminish the power of donors and lobbyists. The number of ethical solutions is only limited by our creativity, something which must be trained by continual practice and reflection.

Climate Services, Public Policy, and the Colorado

photograph of Colorado River landscape

What does the Colorado River Compact of 1922 have to do with ethical issues in the philosophy of science? Democracy, that’s what! This week The Colorado Sun reported that the Center for Colorado River Studies issued a white paper urging reform to river management in light of climate change to make the Colorado River basin more sustainable. They argue that the Upper Colorado River Commission’s projections for water use are inflated, and that this makes planning the management of the basin more difficult given the impact of climate change.

Under a 1922 agreement among seven U.S. states, the rights to use water from the Colorado River basin are divided into an upper division — Colorado, New Mexico, Utah, and Wyoming — and a lower division — Nevada, Arizona, and California. Each division was apportioned a set amount with the expectation being that the upper division would take longer to develop than the lower division. The white paper charges that the UCRC is relying on inflated water usage projections for the upper division despite demand for development in the upper basin being flat for three decades. In reality, however, the supply of water is far lower than projected in 1922, and climate change has exacerbated the issue. In fact, the supply has shrunk so much that upper basin states have taken efforts to reduce water consumption so that they do not violate the agreement with lower basin states. As the Sun reported, “If it appears contradictory that the upper basin is looking at how to reduce water use while at the same time clinging to a plan for more future water use, that’s because it is.”

To see how this illustrates an ethical problem in philosophy of science, we need to first examine inductive risk. While it is a common enough view that science has nothing to do with values, a consensus has formed among philosophers of science in the past decade that not only does science use values, but that this is a good thing. Science never deals with certainty but with inductive generalizations based on statistical modelling. Because one can never be certain, one can always be wrong. Inductive risk involves considering the ethical harms one should be aware of should one’s decisions turn out to be wrong. For example, if there is a 90% chance that it will not rain, you may be inclined to wear your expensive new shoes. On the other hand, if you are wrong about that 90% chance, your expensive new shoes will get ruined in the rain. In a case like this, you need to evaluate two factors at the same time: how important are the consequences of being wrong, and, in light of this judgment, how confident do you need to be in your conclusion? If your shoes cost $1,000 and ruin very easily, you may want a level of confidence close to 95% or 99% before leaving home. On the other hand, if your shoes are cheap and easy to replace, you may be happy to go outside with a 50% chance of rain.

When dealing with what philosophers call socially-relevant or policy-relevant science, the same inductive risk concerns arise. In an inductive risk situation, we need to make value judgments about how important the consequences of being wrong are, and how accurate we thus ought to be. But what values should be used? According to many philosophers of science, when dealing with socially-relevant science, only democratically-endorsed values are legitimate. The reason for this is straightforward; if values are going to be used that affect public policy-making, then the people should select those values rather than scientists, or other private interests, as that would give them undue power and influence in policy-making.

This brings us back to the Colorado River. A new area of climate science known as “climate services” aims to make climate data more usable for social decision-making by ensuring that the needs of users are central to the collection and analysis of data. Typically, such climate data is not organized to suit the needs of stakeholders and decision-makers. For example, Colorado River Basin managers employed climate services from state and national agencies to create model-based projections of Lake Mead’s ability to supply water. In a recent paper, Wendy Parker and Greg Lusk have explored how inductive risk concerns allow for the use of values in the “co-production” of climate services jointly between providers and users. This means that insofar as inductive risk is a concern, the values of the user can affect everything from model creation, the selection of data, and even the ultimate conclusions reached. Thus, if a group wished to develop land in the Colorado basin, and sought the use of climate services, then the values of that group could affect the information and data that is used and what policies take effect.

According to Greg Lusk, however, this is potentially a problem since if any user who pays for climate services is able to use their own values to affect scientifically-informed policy-making, then this would violate the need for the values to be democratically endorsed. He notes:

“Users could refer to anyone, including government agencies, public interest groups, private industry, or political parties …. The aims or values of these groups are not typically established through democratic mechanisms that secure representative participation and are unlikely to be indicative of the general public’s desires. Yet, the information that climate service providers supplies to users is typically designed to be useful for social and political decision making.”

It is worth noting, for example, that the white paper issued by the Center for Colorado River Studies was funded by the Walton Family Foundation, the USGS Southwest Climate Adaptation Science Center, the Utah Water Research Laboratory, and various other private donors and grants. This report could affect policymakers’ decisions. None of this suggests that the research is biased or bad, but to whatever extent values can influence such reports, and such reports can affect policy-making, we should question whose values are playing what roles in information-based policy-making.

In other words, there is an ethical dilemma. On the one hand, climate services can offer major advantages to help users of all kinds prepare for, mitigate, adapt to, or plan development in light of climate change. On the other hand, scientific material designed to be relevant for policy-making, yet heavily influenced by non-democratically endorsable values, can be hugely influential and can affect what we consider to be good data-driven policy. As Lusk notes,

“According to the democratic view, then, the employment of users’s values in climate services would often be illegitimate, and scientists should ignore those values in favor of democratically endorsed ones, to ensure users do not have undue influence over social decision making.”

Of course, even if we accept the democratic view, the problem of defining what “democratically endorsable” means remains. As the events of the past year remind us, democracy is about more than just voting for a representative. In an age of polarization, where the values endorsed may swing radically every four years, or where there is disagreement among various elected governments, deciding which values are endorsable becomes extremely difficult, and ensuring that they are used becomes harder still. Thus, deciding what place democracy has in science remains an important moral question for philosophers of science, but even more so for the public.

When Should We Be Undemocratic?

photograph of the White House at night

I am inclined to think the following two things:

  1. The Senate should have convicted former President Trump and prohibited him from holding future office (as permitted by Article I, Section 3, Clause 7 of the U.S. constitution).
  2. It would have been undemocratic for the Senate to bar President Trump from future office.

Why do I think it undemocratic to bar President Trump from office? Simply because it removes the ability of the democratic populace to select him once again as president. Certainly, I think his behavior should disqualify him from ever holding public office again; but there are a great many people who I believe should never hold public office, and yet it would be undemocratic for my will to be decisive in preventing my fellow citizens from electing them.

Barring a president from future office, then, is actually far more profoundly undemocratic than removing a president who was voted into office. After a president has been elected, it takes four years before the people can vote him or her out. Thus, impeachment and removal is necessary to maintain an interim political check. The problem with barring someone from future office, however, is that future elections already provide this democratic check. The people can choose not to reelect someone! To bar someone from holding office says: even if the people choose to reelect, even then, he or she should not be allowed to take that seat.

I’m tempted to console myself here; to tell myself that President Trump’s behavior made him a threat to democracy, and as such it is not undemocratic to remove his name from the list of potential candidates. This, however, I think would just be a pleasing rationalization. It is, itself, undemocratic for me to unilaterally decide which threats to democracy should (and should not) bar one from future office. For a long time, people thought that there was something essentially undemocratic about electing a Catholic to high office, since that would put U.S. decision-making under the moral control of the Pope. Of course, this was just anti-Catholic bigotry; but who am I to say the argument about Catholics is wrong and the argument about President Trump is right? When I look at the evidence this seems clear, but looking at the evidence I also thought Trump should never be president, and it would clearly have been undemocratic to make that choice for the nation.

To see the worry, note that I think there are many undemocratic aspects of both the Democratic and Republican platforms. But it would clearly be undemocratic to prohibit any Republicans or Democrats from running for office. To decide what undemocratic behavior disqualifies one from office should, in a democracy, be up to the people.

Most arguments I heard against impeachment seemed bad to me, but even I had to admit there was something to the worry that it would be undemocratic to not let the people decide for themselves.

Of course, there are goods other than democracy, and those goods speak in favor of impeaching President Trump. In particular, it seems important that we maintain a credible political threat against lame-duck presidents who have been voted out of office. If the Senate cannot impose a penalty barring future office, if the president is already on the way out the door, and if we want to preserve the norm against criminally prosecuting political enemies, then it is unclear what threat there is to hold a president in line other than impeachment (of course, this problem will still apply to presidents in their second term; so even impeachment is not an altogether adequate solution).

Now, I don’t want to analyze here whether it was right to bar President Trump from office. (I think it is, at least in this case, rather clear that barring him from office would have been the right thing to do, all things considered.)

But I’m still worried, because I have no general principle for how to make these tradeoffs. I have no idea how to make comparisons between the undemocratic nature of barring someone from future office and the importance of the social goods granted by the threat of impeachment. In this case, I have the strong intuition that the limited harm to democracy is unimportant when compared to the gains granted by deterrence. And, in fact, in this case, I’m actually pretty confident in that intuition. If any case is clear, it seems to me that it is this one.

But what if the case were messier; what if the president’s behavior were itself less brazenly undemocratic? How would I go about comparing the good of democracy to other social goods? In a previous Prindle Post piece, I argued that, psychologically, we often make these decisions by intensity matching. How undemocratic does impeachment feel? How terrible do the president’s actions feel? If the president’s actions feel more terrible than impeachment feels undemocratic, then we should impeach and bar from future office. If impeachment feels more undemocratic than the president’s behavior feels terrible, then impeach but don’t bar from future office. As I argued in that piece, however, the problem with intensity matching is that it does not reliably connect with any moral reality. It depends on how one anchors one’s own scale, and it often produces morally bizarre behavior (like a willingness to spend the same amount of money to save one hundred or one hundred thousand birds from oil spills).

So if our gut intuitions don’t tell us how to make this comparison, we need some principle. But right now I don’t see what that principle could be; and I think that should make us all a little more cautious in our calls for political action.

The Cost of Free Speech

cartoon image of excited speech bubble

As 2021 got underway, and the United States was dealing with the fallout from the January 6 insurrection, a much smaller-scale political controversy was blowing through Australia’s sweltering summer. The prime minister was on holiday, his deputy Michael McCormack was in charge, and Craig Kelly, an outspoken member of the leading party who is a notorious climate skeptic, alternative COVID-19 treatment theorist, and vaccine doubter, had a hold of the mic and was getting plenty of attention proffering conspiracy-style views on his social media accounts.

Australia has done exceptionally well in keeping the global coronavirus pandemic at bay with strict lockdowns in response to outbreaks, effective contact tracing, and strict quarantine rules for all international arrivals. The country of 25 million has recorded fewer than 1,000 deaths since the pandemic hit last March. Though the community is generally willing to comply with expert public health advice, there has been some dissent from conspiracy theorists and anti-vaxxers.

As Australia began preparing to roll out its COVID-19 vaccination program, Craig Kelly, that zealous critic of scientific evidence, was hard at work on his personal Facebook page posting in favor of unproven treatments and against vaccines and other public health measures, such as the wearing of masks.

Kelly has a large social media following, and public health officials in Australia, including the Australian Medical Association and the chief medical officer, pushed back hard, expressing concern that his views pose a danger to public health, and calling on senior government figures – the acting Prime Minister Michael McCormack and the Health Minister Greg Hunt – to condemn those views and rebuke Kelly. But no rebuke came. Instead, McCormack had this to say:

“Facts are sometimes contentious and what you might think is right – somebody else might think is completely untrue – that is part of living in a democratic country… I don’t think we should have that sort of censorship in our society.”

Notice how familiar this type of response is becoming: when politicians or pundits are called out for expressing views that are misleading, offensive or wrong, there is a tendency to claim a free speech defense. Notice too that McCormack makes specific reference here to what living in a democratic country involves. It is of course true that democratic legitimacy is one of the functions of free speech, but does free speech include freedom to lie, confabulate, or spread misinformation? And how do these things affect democracy? Can we untangle freedom of speech, as a fundamentally necessary democratic principle, from demagoguery?

Let’s look in a bit more detail at McCormack’s statement, which is problematic for a number of reasons, chiefly its invocation of freedom of speech in defense of views which ought to be rejected because they are wrong, harmful, and generally indefensible. This is a sly move, given the high importance citizens of free, democratic countries place on the right to free speech. It is also a tactic which often has little to do with defending this important right and more to do with evading a subject or shutting down an argument – contra free speech.

As a point of logic, rebuking Kelly for proffering dangerous falsehoods is not censorship. If McCormack’s assertion is that Kelly is free to make these claims then, on that argument, McCormack is free to condemn them.

Furthermore, McCormack’s assertion that facts are contentious appears to imply an ‘everyone is entitled to their own opinion’ kind of defense, which bears a strong resemblance to the free speech defense. But it simply isn’t right. In matters of fact, for example matters of science, as opposed to matters of taste, you are not entitled to your opinion; you are entitled to what you can make a case for, and what you can support through reasoned argument, true premises, and solid inferences. You are not entitled to an opinion that is demonstrably false. Both logic and good faith hold you to a standard which requires you to recognize when a belief is indefensible. Democratic legitimacy depends as much on that as it does on freedom of speech.

Following McCormack’s comments, as public and medical professional pushback grew, no senior member of Kelly’s government – not the Federal Health Minister, nor the Prime Minister himself (now back from his holiday) would bring Kelly into line. Finally, it was Facebook whose moderators intervened and Kelly was required to remove one post proffering COVID-19 misinformation and conspiracy-style rhetoric. Kelly did so, saying: “I have since removed the post… under protest.” He then gave this ominous pronouncement: “We have entered a very dark time in human history when scientific debate and freedom of speech is being suppressed.”

Perhaps Kelly is right that we have ‘entered a dark time in human history’ (if the present can be said to be history) – but not for the reasons he thinks. When we see the right of free speech being used again and again to evade responsibility and excuse lies and falsehoods, it is time to take stock, and look closely at what is at stake in our fundamental beliefs about freedom, democracy, and truth.

One reason this use of the free speech defense is so pernicious is that most people living in open, democratic societies will agree on the importance of free speech and hold it in high regard. This invocation of freedom of speech seems to trade on the hearer not noticing that something they value highly is being used to degrade other things of value.

International law recognizes and protects the right to freedom of speech, which is enshrined in Article 19 of the UN Declaration of Human Rights. The antithesis of freedom of speech is censorship: the intolerance of opposing views. Censorship happens, politically, where the establishment fears or dislikes opposition, or where governments want to suppress information about their activities.

Democratic legitimacy is one of the most important functions of free speech. And free speech is one of the most important mechanisms of democratic legitimacy. Real democratic engagement requires the free exchange of ideas, where forms of dissent are not censored, and where differing or opposing views can be aired, discussed, and considered. In this way the citizenry can be engaged, well-informed, and part of the political process.

Even though the argument from democratic legitimacy holds free speech in high regard, very few people take an absolutist position on freedom of speech. Free speech does not imply a free-for-all. Therefore, protection of free speech always involves judgments on when and why speech might justifiably be regulated or curtailed. The answer to the question of what kind of speech causes harm and is justifiably restricted hinges on the extent to which freedom of speech is valued in itself. In liberal societies its intrinsic value is usually held to be high. If freedom of speech is curtailed, its limits will be decided around the protection of other, countervailing values, like human dignity and equality. In this sense there is a (sometimes unacknowledged) weighing-up of the value of freedom of speech relative to other values. If freedom of speech is, in itself, very highly valued, then other values may be subordinated. It is upon this scale that the right to freedom of speech is, for some, synonymous with the right to give offense.

A quick internet search of “free speech quotes” is instructive here, serving up such ideas as: “free speech is meaningless unless it tolerates the speech that we hate,” from Henry Hyde; “Free speech is meant to protect unpopular speech. Popular speech, by definition, needs no protection,” from Neil Boortz; and “Freedom of Speech includes the freedom to offend,” courtesy of Brad Thor. Add to these offerings the infamous contribution of Senator George Brandis, Australia’s erstwhile attorney general, who, in 2014, while making an argument for winding back Australia’s anti-racial discrimination laws, put it to the parliament that “People do have a right to be bigots, you know.”

All this illustrates which values go down in ranking when free speech goes up. If we take freedom of speech to protect our right to be bigots, that points to something we value. That is, it suggests we value our right to be bigots more than we value equality or human dignity; that we would prefer to be allowed to vilify than to protect people from vilification.

Perhaps we will decide that we do have a right, by virtue of the right to freedom of speech, to be bigots. If that is so, it certainly sheds light on the ethical problems that can arise from constructing our basic moral bearings around defending our rights at the expense of other ways of thinking about what is important in our moral lives. Perhaps we might orient our ethical thinking more toward questions about what we owe one another morally rather than what we can lay claim to. We might, for example, ask ourselves whether, rather than uncritically digging in about our rights, it would be better to reflect on our values in this space.

It comes back to the question of why freedom of speech is so important. If free speech, according to the democratic legitimacy argument, is so important because it allows us to better hold power to account, allows citizens to make informed decisions and engage in reasoned, open debate, then it does not make sense to defend or promote speech which itself undermines these goals — speech like Craig Kelly’s COVID-19 misinformation posts, or any picking from the multiverse of conspiracy theories currently working their way into the marrow of certain sections of society. Americans have recently experienced the very hard consequences of lies and misinformation on democratic society in the twin crises of the January 6 insurrection and the runaway COVID-19 pandemic.

In conclusion, we don’t seem to be paying close enough attention to the way that freedom of speech is being used to justify lies and to push back against demands for accountability from the powerful and privileged. If we can untangle freedom of speech as a fundamentally necessary democratic principle from demagoguery, we must do so by directing more critical attention to how it is invoked and what is at stake when freedom of speech is taken to mean freedom to lie or to further a pernicious ideology. Yes, freedom of speech is fundamentally important, and we should protect it because of its central role in the democratic process. At the same time, truth matters and lies have real consequences. When we stand up for freedom of speech, we should be thinking broadly in terms of why it is valuable, what role it serves, and what our responsibilities are in respect of each other. A broader discussion about our values will serve us better than a narrow focus on rights, no matter what they cost us.

Under Discussion: Undermining a Democratic Response

photograph of protestors with "People over Pipelines" sign

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Combating Climate Change.

On the first day of his administration, President Joe Biden issued an executive order cancelling the permit for the Keystone XL Pipeline. Premier Jason Kenney of Alberta, the province where the oil sands are located, expressed his disappointment, stating that this is “not how you treat a friend and ally,” and indeed Canada’s Premiers apparently “want to go to war” over it. This kind of political posturing recalls recent events. The riot at the U.S. Capitol followed weeks of politicians lying to voters about the election, and as a result people who were frustrated by a reality that did not match what they were being told lashed out violently. If there is one lesson that any democracy should be able to learn from such an episode, it is that misleading the public has consequences: it undermines the capacity of voters to evaluate their options, and thus it undermines democracy. Does this lesson have relevance when it comes to climate change, something which has the capacity to wreak massive economic and social instability?

Was Keystone ever viable? Are Canadian politicians simply spreading false hope that there is a significant economic future for that pipeline? If so, are they on any better moral ground than Republicans who lied about elections? First, it is important to recognize that the province of Alberta is heavily dependent on natural resource development, particularly the extraction of bitumen from the oil sands. In recent years, this sector has been badly hit by floundering oil prices and troubles getting pipelines built, and more recently COVID-19 has caused prices to drop, resulting in even lower oil revenues. The Alberta economy, slowly recovering from a recession, has now been hit even harder. Unemployment is up, and investment is down. As a result, the Alberta government is now running a record-high deficit.

In an effort to push back against these forces over the past few years, the Alberta and Canadian governments have been significant supporters of the Keystone XL pipeline. Despite the troubled history of the project, the Alberta government became an investor in order to move construction along at a cost of 1.5 billion dollars with an additional 6 billion in loan guarantees. It was expected that the pipeline could create tens of thousands of jobs. Despite all of this, the pipeline project has been troubled since its inception. It has been met with opposition from indigenous people for cultural, treaty, and health reasons, and it has been widely protested because of concerns about the climate. While some concerns involve air pollution, and the potential for an oil spill, the potential for increased carbon emissions has been especially problematic politically speaking. Extracting the crude bitumen of the Alberta oil sands involves 17% higher carbon emissions than conventional oil. As a result, then-President Obama was heavily lobbied to deny a permit for construction, ultimately doing so because it was seen as undercutting the credibility of the United States in climate change negotiations. Eventually, President Trump did issue the permit for construction, before this permit was withdrawn by President Biden.

The question is whether Keystone XL was ever a very realistic option for the Alberta government to cling to. Was the writing on the wall? As Canadian journalist Aaron Wherry notes, “The project’s fate seemed sealed years ago, but it haunts us still.” After all, there were years of court challenges and revisions to the design, and permits were granted and taken away. The public soured on the project as well. In just four years, public support in the United States for the project fell from 65% approval to 48%. A 2017 poll of Canadians found that only 48% supported the project, even though 77% of Albertans supported it. Investors were shy about putting money into the project, and thus the Alberta government is now on the hook for billions of dollars. And, with the public and politicians increasingly showing a willingness to act on climate change, this project’s future was always in question.

Despite this, the Alberta government continued to, and continues to, unrealistically give the public the impression that something can be done to change this. Alberta Premier Jason Kenney, for example, was said to be counting on union support in the United States for the project, despite "not understanding American politics well enough to know that that particular ship has sailed; it was as realistic as the company's Onion-esque last minute pledge to power the operation of the pipeline with renewable energy." And as Warren Mabee of Queen's University notes, "While the reaction from Alberta implies Biden's move came as a shock, the truth is that cancelling Keystone XL was a key part of Biden's election platform." Mabee has also suggested that Canadian politicians should get a reality check when it comes to the oil sector.

So, it is worth asking whether there are similarities between what Republicans told their constituents following the election and what Canadian politicians are telling theirs regarding Keystone XL. In both cases you have frustrated citizens, many vulnerable to unemployment and a lack of prospects. In the United States, Republican politicians granted credibility to the claim that there were significant election irregularities despite almost no evidence, and were complicit in unrealistic and long-shot attempts to overturn the election in order to satisfy what their voters wanted to hear. In Alberta, politicians continue to grant credibility to the viability of pipeline projects which promise to restore good times to the province, despite evidence that the project was environmentally risky and increasingly doomed. And even now, Premier Kenney calls for trade sanctions which are considered "unrealistic and unproductive in the extreme" in order to appeal to a base of supporters.

In the case of the United States, the effect of this willingness to entertain lies about the election was the storming of the Capitol and the undermining of democracy. While the Canadian Parliament may be safe for now, the Alberta government has made use of inflammatory language and promises which may also undermine democracy. For example, the governing party of Alberta claimed that their predecessors "surrendered to Obama's veto of Keystone XL" and ran on a promise threatening to hold a referendum over constitutional changes if they could not get a pipeline built. In other words, politicians trying to appeal to their base optimistically attached their hopes to a pipeline that investors had soured on, and invested billions of public money in it despite facing increasing political opposition at home and in the United States. As a result, the people of Alberta will likely be angrier at the Canadian federal government and the rest of Canada. Politicians could not be honest with their voters, and as a result social and democratic cohesion may suffer. Is there a moral difference between the two cases?

It is important to note that this is only a case study to demonstrate a larger moral concern. We have seen in the last year that citizens will accept complete falsehoods if it fits with what they want. Despite over 2 million people dying of COVID-19 in real time over the past year, many still believe that the virus is not real or is no worse than the flu. So, looking forward, what will happen when the effects of climate change become even more prominent? If Florida begins to sink due to rising sea levels, will that be branded as just a fluke or a bad summer? If actual economic and climate problems are facing society, it will be the convenient mistruths that will be exploited to undermine the ability of citizens to make decisions that are in their best interests.

Should News Sites Have Paywalls?

photograph of partial newspaper headlines arranged in a stack

If you’ve read any online article produced by a reputable newspaper in the last ten years, you’ve inevitably bumped into a paywall. Even if you’ve managed to slip through the cracks, you’ve seen a glaring yellow box in the corner, reminding you that this is your last free article for the month. Maybe this gets you thinking about the ethics of pay-to-read journalism, so you seek out articles like Alex Pareene’s piece for The New Republic, only to find that an article about the dangers of paywalls is hidden behind yet another paywall.

If you do manage to read Pareene’s piece, you’ll find that he makes some good points about what he calls “the media wars,” the uphill battle between costly but fact-based journalism (like The New York Times, which erected its paywall back in 2011) and the endless stream of accessible, but factually untrue, stories churned out by the conservative media machine.

How has reputable journalism become so unprofitable? First off, big tech companies like Google and Facebook receive the majority of ad revenue from online content, as Alexis C. Madrigal explains. Local newspapers get lost in the bottomless sea of content, and are ultimately unable to compete. As a 2020 report from the University of North Carolina's Hussman School of Journalism and Media showed, small news sources are disappearing at an alarming rate, creating "news deserts" in online spaces. Conservative propaganda machines, backed by a seemingly endless supply of money, swiftly filled that void, resulting in an increasingly homogeneous and right-leaning landscape of digital journalism.

As Pareene points out, putting up a paywall is "the only model that seems to work, in this environment, for funding particular kinds of journalism and commentary." But if you do this, sites like Stormfront "will set up shop outside the walls, to entertain everyone unwilling to pay the toll." Furthermore, "subscription models by definition self-select for an audience seeking high-quality news and exclude people who would still benefit from high-quality news but can't or don't want to pay for it." In other words, paywalls only perpetuate the divide between fact-based journalism and free propaganda.

But at the same time, paywalls are necessary for papers that value honest reporting. Solid journalism requires training, time, and money, and those who dedicate their life to the pursuit of the truth must be compensated for their labor. Free content is so easy to produce because it doesn’t require much time or effort to disseminate a lie.

It’s a problem without an easy fix. We might just encourage everyone to buy a newspaper subscription, but as the post-pandemic economy worsens, that solution appears less and less viable. A 2019 report released by Reuters Institute for the Study of Journalism found that a measly sixteen percent of people in the United States (the majority of whom tended to be wealthy and well-educated to begin with) pay for their news online. When only the well-off can afford quality journalism, fake news inevitably flourishes.

As Pareene says, this situation is not just a failure on the part of media outlets, but “a democratic problem, in need of a democratic solution.” This sentiment is echoed by Victor Pickard, who argues in his 2019 book Democracy without Journalism? that “Without a viable news media system, democracy is reduced to an unattainable ideal.” As the coronavirus pandemic continues to alter the fabric of everyday life, and conspiracy theories play an increasingly important role in national politics, reliable journalism is more important than ever, and new models for generating profit will have to emerge if anything is to change.


The Day after Election: A Return to Normal?

black-and-white photograph of the Capitol building at night

Much attention and energy is focused on the outcome of the election, but regardless of who wins there is a great deal of work to be done — simply declaring one side the victor won't solve our problems. So what's the next question we should be asking after "Who won?"

Regardless of who wins the Presidential election, it is clear now that Americans are anxious about the election and the future of their democracy. A recent poll found that nine in ten Americans believe that America is not "normal" right now. Between COVID-19, racial tensions, public unrest, and the election, many Americans yearn for a so-called return to "normalcy." Public health experts often speak of what it will take to return to normal from a health perspective. The Biden campaign has heavily focused on returning to normalcy. As described by Glenn Reynolds in USA Today, it is a pitch that "all the Trump craziness will expire, and things will be safe, sane and familiar." The Republican campaign has also been pitching the concept of returning to normal. But the most important morally salient question is: what does "normal" even mean, and why do people want to return to it?

"Normal" carries two important meanings. Normal can signify actions that are consistent with norms such as rules, principles, and standards. If one does not act in a way governed by certain norms, then it is not normal. Normal can also signify what is usual, typical, or to be expected. For example, the Brookings Institution suggests several ideas about what returning to normal might mean after a Trump presidency: a normal president will release their tax returns, a normal president won't associate with dictators, a normal president won't attack democratic norms by refusing to accept the results, a normal president would be more empathetic, etc. In some cases, these may indicate norms that we think a president should follow, such as respecting election results. In other cases, these are simply expectations based on past experiences. It may not be normal for a president to spend so much time on Twitter. However, it becomes problematic when we start to confuse the two, because "normal" in the second sense may mean different things to different people.

Normalcy, in the second sense I have described, is inherently conservative and backward-looking. It is a form of nostalgia, and a tendency to see through rose-colored glasses; an attempt to harken back to the good old days. For example, Ezra Klein of Vox suggests that the Biden campaign “is offering a politics of nostalgia. He is painting a sepia-toned portrait of the Obama era, and reminding voters that he was in that portrait, standing right behind a president they liked and miss.” But if this is the case, then what is “Make America Great Again” if not an appeal to a return to some perceived normalcy? Of a return to the good old days? But psychological studies of fading affect bias remind us that the good old days are not always as good as we remember. After all, President Trump isn’t the first to cozy up to dictators.

Why is a return to some previously "normal" point in time even desirable? Normal is what led to where we are. The victory of Trump in 2016 and everything that has happened since was only made possible by trends and habits that existed before the election. Polarization and fierce partisanship were on the rise well before 2016. The disproportionate shooting of Black people by the police was present long before 2016, as was systemic racism. Normal before the pandemic left most nations unprepared and scrambling to secure the equipment and resources needed to address the crisis.

Conservative media has stressed that much of what the Biden campaign and broad left are proposing is not normal. The proposals to tackle climate change, public health, and racial justice are new, not normal. In some cases, such as responding to climate change, insisting on normalcy would be bizarre. For many on the left eager for change, it is the break from the norm that is desired. For the right, Trump has already ended normalcy by significantly changing the balance on the Supreme Court. It is foolish to insist on norms that developed in the past that are not responsive to the problems of the future.

Yet, as each side seeks reform in the name of restoring normalcy, it is clear that what is "normal" is not a consensus. The rhetoric of insisting on returning to a "normalcy" that half of the country doesn't recognize is inherently exclusionary. To be outside of what is called normal is alienating; this is true regardless of political ideology. The larger problem is whose "normalcy" will prevail? And what are the risks of excluding the other side from the normalcy they seek?

It may not even be possible to return to “normal.” Even if Trump loses the election, even if he loses badly, his success in politics has demonstrated that so many assumptions about our democracy were incorrect. That Trump and the Republicans have been able to attack the media, criticize members of the armed forces, spread misinformation, spread coronavirus, run without a new platform, completely backtrack on their own stated principles regarding court appointments, and still get over 40% support in most opinion polls reveals something more concerning. In 2004 an accusation of flip-flopping could be devastating to a candidate, but now consistency over policy barely matters compared to political affiliation. How can democracy function when almost half of the electorate is willing to overlook facts, principles, and social cohesion? Even if Trump loses, the basic strategy will live on. Voter suppression tactics will only become more subtle. Political conspiracies will continue to spread. Many on the left now embrace the advertising tactics of the Lincoln Project, who are able to run the sort of negative and manipulative messaging that used to be so devastating against Democrats. The distrust and animosity that have swelled over the past decade of American politics and the habits that have followed from this will not disappear after election day.

Third-Party Voting in 2020

photograph of citizens filling out voting ballots with "Vote" sticker on booth

In the weeks leading up to the election, many high-profile celebrities have made last minute political endorsements and pleas for individuals to vote. On October 25, Jennifer Aniston shared an Instagram photo of herself dropping her ballot in the mail. In this post, she shared she had voted for Joe Biden, and in a short PS added "It's not funny to vote for Kanye. I don't know how else to say it. Please be responsible." Kanye West officially announced his presidential bid on Twitter back in July. While he is only on the ballot in 12 states, he has spent over $5 million on his campaign and traveled around the US to give campaign speeches. Perhaps this is part of the reason he did not take kindly to Aniston's comments, facetiously quipping "Friends wasn't funny either" in a now deleted tweet. While many might not consider West a serious candidate, he has spoken at length about his stances on political issues from abortion to police reform.

While it may not have been her intention, Aniston’s post points to a larger moral issue not only about the issues at stake in this election, but about voting in general.

Is it wrong to vote for a candidate you know has no chance of winning? Is it okay to vote third party or to cast a protest vote?

From Ralph Nader to Jill Stein, third-party candidates are treated with extreme hostility by Democrats, especially when elections are a toss-up. It seems that every year, a substantial number of voters on the right or left cast votes for candidates that they know have no chance of winning. For some, these votes are out of 'protest' against the two-party system which does not represent their interests. To others, it is a joke, or perhaps a statement of their apathy toward or lack of faith in our political system as a whole. Five million votes were cast for third-party candidates in the 2016 election. It is fair to say these candidates were not treated as serious contenders, as they were not even given a space on the debate stage. While this might not seem like a lot compared to the overall sum of 138 million votes, some argue that votes for third-party candidates cost Hillary Clinton the election, as the number of votes for Jill Stein was far larger than the margin that Clinton lost by in swing states such as Michigan and Florida. Some have pointed out the flaw in such criticisms: they assume that third-party voters would have voted for Clinton as their second choice.

However, the 2020 election is also very different from the 2016 election. In 2016, barely any major polls predicted Donald Trump’s victory. Those casting third-party votes may have underestimated the consequential power of their actions. Donald Trump was also a wild card back in 2016, because though he made plenty of campaign promises, he had no political record to attest to his potential behavior in the White House. In 2020, both Trump and Biden are established politicians with a record. Though it’s been four years, the lingering effect of the largely unforeseen election upset has left virtually no national poll in a position to underestimate Donald Trump. Those choosing to vote outside of the established norm are well aware of the potential consequences of failing to register a preference for one of the two likely candidates.

While it’s clear that voting for a hopeless candidate in this election will generate a predictable outcome, is it possible that our vote can be morally assessed by more than the consequences we believe it will produce? Principled voting, often as a form of protest, has been labeled negatively as immoral, selfish, and wasteful. Voting as a statement is certainly not widely accepted in American culture, but that does not mean it has no moral basis. Under the “expressive theory” of voting, rather than seeking consequentialist ends, individuals vote in order to express their loyalty to a political party or an ideology. Voting might also be a way to keep in line with our principles and avoid hypocrisy. To go even further, could voting, or refusing to, be a way to keep our hands clean of any ills done by political leaders who will undoubtedly go on to make moral mistakes during their four years?

On the other hand, maybe our decision to cast a protest or principled vote is a reflection of our total alienation from the parties in power. Studies have shown that most of us naturally turn to consequentialist moral decision making when under pressure. Principled stands, such as voting based on values rather than strategy, are often chosen when we perceive there is little at stake.

The perception that little is at stake in a presidential election has been labeled by many as one of inherent privilege, as there is often much more at stake for historically marginalized groups when it comes to which party holds the key to the presidency. Voting is still bafflingly inaccessible to many Americans based on inequities attributable to race, socioeconomic status, and criminal history. In order to combat this lack of access to civic influence, many on the left have appealed to altruistic intuitions. Altruistic voting is the concept that we should vote not for our own selfish interests, but for the welfare of others. Those who advocate for altruistic voting see politics as a method to enhance the collective good. In her aforementioned Instagram post, Jennifer Aniston appealed to altruism by urging her followers to “really consider who is going to be most affected by this election if we stay on the track we’re on right now… your daughters, the LGBTQ+ community, our Black brothers and sisters, the elderly with health conditions.” It is fair to say that for many, this election has come to represent much more than merely who will sit in the Oval Office for four years.

Many critics of altruistic voting point out the fact that its consequential justifications are not consistent with its low probability of consequential change. Regardless of practicality, is altruism a good moral basis for voting? One could see the nobility in choosing to put one's selfish concerns aside for the betterment of society. However, there is often no clear moral choice when it comes to voting, as perfect candidates rarely exist. While you may seek to vote for the candidate who will protect a woman's right to choose, they might also have a questionable record in terms of criminal justice reform. Even if one plans to take an altruistic approach, there is no guarantee, in a system which consistently demands choosing the "lesser of two evils," that one will truly discern who to vote for.

How we moralize voting hinges on what we really believe a vote means. Does it mean we wholeheartedly believe in the candidate on the ballot? Does it mean we think they are the most rational choice? Or is it simply another way to express who we are and what we believe in? How we answer these questions will reveal whether or not we believe voting Kanye 2020 is unethical.

The Day after Election: Democracy and Good Faith

photograph of downtown Washington D.C. with Capitol building in background

Much attention and energy is focused on the outcome of the election, but regardless of who wins there is a great deal of work to be done — simply declaring one side the victor won't solve our problems. So what's the next question we should be asking after "Who won?"

In a recent podcast discussing the state of the American democracy, David Runciman remarked:

"The optimistic view is that democracy is a resilient and flexible form of politics… but there's a deeper fear – which is that something has changed, something over these last three and a half years; [that the Trump presidency] has left not just a stain but a kind of permanent imprint on how people think about the institutions, the values, and the norms [of American democracy]."

America, and the world, will know soon if Trump gets in for a second term. There has been much talk over the past four years about how much damage Donald Trump could do, is doing, and has done to American democracy, and much discussion about the ongoing effects of the stress the Trump presidency has had on the institutions of American democracy.

If Trump loses, it isn’t yet clear how the institutions of American democracy will emerge from the crisis of his presidency. If Trump is returned to office, no one knows what the state of American democracy will be after four more years, but the prognosis would not be good.

When people talk about the ‘institutions of democracy’ they usually mean the balance between legislative and executive power, the checks and balances Congress is supposed to provide, as well as the role of an independent judiciary and a free press. The last four years, compounded by fears that Trump may refuse to concede a lost election, have demonstrated many weaknesses and vulnerabilities in all these areas. But there is another important democratic ‘institution’ rarely mentioned yet vital for a healthy and functional democracy – that of good faith.

When Utah senator Mike Lee said recently that “democracy isn’t the objective” of America’s political system, he confirmed the suspicions of many in appearing to speak out loud the agenda and tactics of the Republican Party. Other Republican figures, including the president, are on record admitting that without voter suppression tactics the Republican party could not retain, or likely ever again attain, the power of the presidency or of Congress.

Good faith means that all sides of politics respect and uphold the central principle of democracy as a system of government formed by and of and for the people. Citizen participation is needed for this. A high degree of trust is needed. For there to be trust in politicians they must be trustworthy. If you trust someone who lies and cheats, that doesn’t make you a trusting person, it makes you gullible. So there has to be the right kind of trust, which is reciprocal and earned and not misplaced.

Good faith, necessary for democracy to function, is derived from the institution itself: from respect for and deference to true democratic principles by those empowered to discharge its duties. Good faith is attached to the principle of fairness, and it is lost when the desire to win at any cost takes hold.

Erosion of good faith between political parties, where there is no recognition of a common good, only the good for one side or another, has been poisoning American democracy since before Trump descended the escalator at Trump Tower to announce his candidacy. So, while it is tempting to think of this election as centrally a test of whether American democracy can withstand authoritarianism, of whether the world's oldest and longest surviving democracy can withstand the stress test of Donald Trump, it would be incorrect to think the era of bad faith began with him, even if he is the unsurpassed master of its theatrics.

Much has been made over the last four years about the Republican Party in general, and particular key figures such as Mitch McConnell, as enabling Trump – but Sarah Churchwell makes the point that the failure of McConnell et al. to rein Trump in has enabled the ideological right. Trump has been utilized by the Republican Party to pursue its arch-conservative and patently antidemocratic agenda.

Heading into the election Trump has not only helped advance the conservative ideologue’s antidemocratic agenda, but taken it to a whole new level. As Sabeel Rahman (president of the thinktank Demos) says: “A set of actors in the Trump administration and the Republican party have made it very clear that their intention is to hold on to political power at the expense of democratic institutions.” This was spelled out (although incorrectly) by Mike Lee: “Democracy isn’t the objective; liberty, peace, and prospefity [sic] are. We want the human condition to flourish. Rank democracy can thwart that.”

It has been clear leading into this election that voter suppression and intimidation is the Republican plan for winning the election. Added to this is the widespread fear that Trump won’t concede, and the uncertainty about what will happen next. Judith Butler tells David Runciman: “…I think if Trump is successful in his efforts to contest, litigate, or otherwise cling on to power, then he is there unless the government is able to act and remove him.”  At this stage, as the election looms, we don’t know how such a scenario would play out.

Democracy, and the institutions and democratic norms it relies on, has at best always been a slow dance toward a better, more inclusive, more progressive, and more just iteration of a political ideal in which the views and interests of the people are represented through various means of direct and indirect choice. The lack of good faith now at the heart of the system has severely impeded this goal. It seems that all but a few, now-powerless members of the GOP are willing to sacrifice good faith for power – and, whatever happens next week, American democracy cannot heal without some restoration of those vital democratic institutions of trust and good faith.

The Day after Election: Procedure and Substance

photograph of US Capitol building at dawn

Much attention and energy is focused on the outcome of the election, but regardless of who wins there is a great deal of work to be done — simply declaring one side the victor won't solve our problems. So what's the next question we should be asking after "Who won?"

No matter who wins the upcoming election, the elected administration will face questions of priority: which policies should be focused on first? Should we focus on COVID or global warming? Should we pass election reform or healthcare reform? Should we deregulate now or first ensure protections for religious liberty?

These questions are always difficult. You need to weigh the ends at stake, your likelihood of success, how immediate the concern is, etc. In this post, though, I want to focus on one particularly tricky question of priority. Should one prioritize the substantive ends of government, or the procedural ends of democracy? Is a government’s first obligation to ensure those internal structures which maintain its democratic legitimacy, or is it right to prioritize lives saved over merely procedural and political rights?

To get at the distinction I’m drawing, it might be useful to think about the substantive policies as policies that any government ought to pursue. Thus whether you are a constitutional monarchy, a democratic republic, an Athenian city-state, or a theocratic oligarchy, you are obligated to promote the common good. The Holy Roman Empire in the 14th century had precisely the same kind of reason to halt the spread of the Black Death that Germany has to halt the spread of COVID today. In contrast, the procedural policies are those policies tied to the internal structure of democratic governance. These include things like ensuring fair representation (perhaps by making Washington D.C. a state) or access to democratic participation (perhaps by passing federal regulations to fight state-level voter suppression).

Now, there are two different questions of priority we need to consider. First, there is priority of sequence: what do we need to do first? Second, there is priority of importance: if we can only do one of these two things, which should we do?

Just because one thing is more important than another does not mean that you should always do the more important one first. Sometimes finishing my work is more important than sleep. A handful of times while in undergrad I faced the question of whether I would sleep or finish my paper. When faced with that choice, I would pull an all-nighter. Finishing the paper on time had greater importance-priority than getting one more night's sleep. All the same, if I expect I can do both, sometimes it makes more sense to do the less important one first. If I will have time to sleep and write, I'll often go to bed and finish writing refreshed the next morning. Sleep has greater sequence-priority because getting a good night's sleep will actually help me write the paper.

So when we look at the sequencing question, what should we prioritize? The first thing to note is that certain kinds of democratic reform might be prerequisite to passing substantive policies. Ezra Klein, for example, has recently argued that unless Democrats eliminate the filibuster, a Biden administration will be unable to pass much meaningful policy. Similarly, perhaps you need to find some way to decrease the power of lobbyists before you will be able to corral enough senators to vote against special interests. On the other side, other democratic reforms might take a back seat to COVID relief. It will be two years until another election, so perhaps deal with the current crises and tackle election reform six months in.

The more interesting questions of priority, however, concern importance-priority. Suppose an administration could either enact healthcare reform or electoral reform, which should it opt for? This is a tricky question because it is not that clear how to compare these substantive and procedural goods.

You might try to sidestep the comparison. Maybe there is no trade-off because electoral reform will lead, in the long run, to the best substantive reform! For instance, perhaps you think that making Washington D.C. and Puerto Rico states will help ensure future Democratic control of the Senate and so, because you think Democratic policies are better, prioritizing electoral reform will actually improve substantive policies in the long run. Of course, there is something distasteful about adopting electoral reform to help your specifically preferred policy. After all, that could equally justify electoral deforming if you thought being less democratic would result in better policies in the long run. There are plenty of reasons, though, for thinking that democracies make better decisions in general. And reasons of that sort might justify giving long-term democratic reform importance-priority even over pressing substantive goals. This is why I think, at least right now, the priority should be on democratic reform. Just as it is important to keep your own body in good shape, even if your goal is to go and help others, so it is imperative for the government to keep its own internal deliberative form in good shape, that it might be rightly accountable to the people.

But suppose, just for the sake of argument, that there really is a trade-off. Suppose we really do face the question of whether to choose a more democratic society in which people are by objective measures worse off, or a less democratic society in which people are happier and more secure. In that situation, what should we choose?

One view, which I do find plausible, is that democratic goods are actually only instrumental goods. Democracy is a better form of government because democracies better secure the common good. As such, if you really do face a trade-off between democratic goods and the common good you should prioritize the common good. I’m sympathetic to this view, but it does require you to defend the counter-intuitive position that democracy has no value in itself — something I cannot possibly defend here in this post.

On the flip side, you might think that democratic goods have lexical priority over substantive goods. Because democracy is the source of a government’s legitimacy, it must always prioritize that democratic structure. The problem with this view, however, is that it leads to a ‘resource black hole.’ It is probably always the case that you could make slight improvements to democratic access. So if any democratic reform takes priority over any substantive reform, then you would never get to the substance of government!

The third option, of course, is somewhere in the middle. Perhaps both of these are important, and major democratic reforms should take precedence over minor substantive reforms, just as major substantive reforms should take precedence over minor democratic ones. The problem, however, is one of incommensurability. What scale are we using when we assess what a ‘major’ democratic reform is in comparison to a ‘major’ substantive reform?

Fascinating work in behavioral economics actually helps us understand how these comparisons are made. It turns out our brains are very good at what we can call ‘intensity matching’. Take an example of Daniel Kahneman’s: if I tell you that “Julie read fluently when she was four years old” and then ask, “How tall is a man who is as tall as Julie was precocious?”, you will probably give me a number in the 6-to-7-foot range. Most people do.

Of course, there is no meaningful question we are answering here. There is no deep sense in which a certain level of precociousness actually maps to a certain height. Rather we have a general sense for how extraordinary something is. But that sense of extraordinariness is scale-dependent. If one person’s scale of democratic norms starts with chattel slavery, and the other starts with voter I.D. laws, then we will get very different answers for what level of democratic failure corresponds to one hundred thousand deaths from COVID.

We feel confident trading off democratic and substantive values, but that confidence seems to rest on a dubious form of intensity matching rather than on anything of real moral import.

Once you recognize how contingent our ‘intensity matching’ is, it really makes you pause and wonder how we go about comparing incommensurable values at all. What does it really mean when I say that mask mandates are a minor violation of liberty, one commensurate with public health crises, but that mandatory vaccinations are not? Sure, I intuitively feel that forcing someone to inject something into their body is a far worse violation of autonomy, but is there anything philosophically real underlying the intuitive scale by which I compare that to public health threats? I don’t think there is.

So if you take the third view, a view on which you need to balance democratic and substantive norms, I think that means you’re just kind of stuck. It is unclear how we can possibly give a principled way to compare one priority to the other because those priorities are, in a very real sense, philosophically incommensurable. This, indeed, seems fundamental to what democracy is. Part of the miracle of democracy is that it provides us a way to collectively compromise on which of our incommensurable values we will prioritize and when. But if that is part of the miracle of democracy, part of the strangeness of democracy is that our prioritization of that miracle is itself something we sometimes need to compromise on.

Is Microtargeting Good for Democracy?

photograph of "protecting america's seniors" sign next to podium with presidential seal

“Suburban women, would you please like me? Please. Please.” This was Donald Trump’s messaging at a campaign event this month. While there are many striking things about a statement like this, what particularly struck me is how transparent Trump is about trying to appeal to specific voting demographics rather than to women, voters, or Americans at large. This is not new, of course; political campaigns have spent decades trying to find the specific target voters they need to win, but what was once the terminology of campaign logistics and pundits has become public campaign rhetoric. A campaign is able to identify and target voters through a process called microtargeting. But what is it, and does it make democratic politics better or worse?

Let’s say that you enjoy a certain television program. What you may not realize is that there may be a significant correlation between your viewing habits and how you are likely to vote. As a result, when that program goes to commercial, you may be bombarded with political advertising for a certain candidate. This actually happened in 2016: the Trump campaign determined that people who watched The Walking Dead were more likely to hold specific views on immigration, and as a result, Trump advertising on immigration was aired during the program. This is microtargeting: through careful statistical analysis of large amounts of data, political campaigns can find and try to reach specific voters in order to improve their chances of winning.

Microtargeting involves the use of a large pool of data, tracking potentially thousands of variables about a person, to determine the political messaging that person will best respond to. How this data is collected is a matter of controversy. Some of it is limited in scope to matters like what precinct you live in or whether you voted in previous elections. Other data can be much more specific, including viewing habits, social media habits, personal details, and more. By running various algorithms over this data, a company or campaign can target you with online and television advertising, door-knocking, and even mailed literature. Because of this, you may get different advertisements for a candidate than your neighbor gets for the same candidate.

The issue carries a whole host of ethical problems and concerns. For example, the Facebook–Cambridge Analytica scandal involved the consulting firm Cambridge Analytica providing such services using data collected from Facebook without users’ permission. How this data is collected and who can access it are major concerns for those who worry about privacy. For my purposes, however, I will focus on the ethical concerns that microtargeting raises as it pertains to democracy and the democratic process.

Proponents of microtargeting argue that it is just a more effective means for a campaign to reach out to potential voters. The Obama campaign made great use of microtargeting techniques in order to mobilize young people, Latinos, and single women in key swing states. Traditional forms of outreach can leave certain voters out if advertising is based only on factors like geography or party registration. Microtargeting also makes advertising more efficient, since there is no longer a heavy reliance on wide-run television spots.

Being able to recognize people who may support a candidate and then figure out what exactly will motivate them to vote isn’t a bad thing. Nor is it necessarily a bad thing that political parties learn more about who their voters are and what kinds of things they care about. This may reveal more about what voters care about than what is typically captured by opinion polling, media coverage, and focus groups. Such tools could be effective at identifying and perhaps re-engaging those who have dropped out or are otherwise ignored in the larger democratic conversations that take place during an election year. Likewise, it is not necessarily a bad thing for a voter to get the kind of advertising that they may wish to see.

On the other hand, microtargeting can harm democracy in several ways. Microtargeting seeks to identify the issues important to you and to feed you advertising that will motivate you to vote. However, democracy should not just be a matter of appealing to the often subjective and idiosyncratic views you already have. Election campaigns are not a mere matter of logistics; they are a national conversation. Microtargeting enables and encourages narcissistic voters.

Voters should be aware of the larger democratic conversation taking place at election time, and they may not understand these issues if they are only receiving targeted advertising that focuses on narrow issues in a narrow way. If gun rights or the environment is the most important issue to you in an election, that’s great; but you should be aware of how that issue affects others and what other issues may require the attention of the public.

Another significant problem lies in the irony of microtargeting: it narrows the focus to the individual while simultaneously lumping that individual into segmented target groups based on correlations of certain variables in other groups. Each target group has its own interests, motivations, and desires (and fears), and campaigns are then free to exploit these as they see fit. This makes it easier to create conflict between these groups, and there is evidence that microtargeting can contribute to polarization. It means that politicians focus more on voting blocs and less on the public at large, which is why even presidential candidates now speak directly to voting blocs. It also means that a campaign doesn’t have to maintain a single consistent message, making it easier to tell different things to different target groups. Political parties choose their voters rather than the reverse. And it isn’t only politicians: media coverage of the election spends an unhealthy amount of time obsessing over which target group will support whom, or how demographics in certain districts have changed over time. The election becomes about the process of electioneering rather than about policy, character, or other issues of public importance.

Even more disturbing, the correlations between variables may be spurious rather than genuine predictors of political preference, and models may build incorrect profiles of the groups they are targeting. Indeed, some have posited that microtargeting is little more than snake oil posing as science. The advertisements are also less accountable: they are targeted rather than seen by the public at large, are often shared on Facebook and other social media, and can contain misinformation. All of this can serve to undermine political trust and transparency.

Microtargeting could bring great benefits for democracy; it could, for instance, be used as part of a massive campaign to encourage voter registration and voting. Experts will often suggest that the technology is neither good nor bad in itself, and that only how it is used is ethically relevant. The larger concern, however, is that we do not yet understand its effects well enough to know in what ways it can be used for good or ill. Thus, while banning its use may not be wise, limiting its use in politics seems prudent, at least until we learn whether it can function as a tool for the improvement of democracy.