
COVID and Climate Change: Taking the Long-Term Seriously

photograph of ripple on lake expanding

Amid the ongoing COVID-19 pandemic, world leaders are assembling in Glasgow for COP26, the UN’s climate change conference. Both the pandemic and global warming are powerful reminders that the choices we make can have consequences that continue to unfurl over decades and centuries. But how much should we care about these hard-to-predict long-term consequences of our actions? According to some, so-called moral “longtermists,” we ought to care a great deal. Others, however, have called longtermism “the world’s most dangerous secular credo.”

COVID, climate change, and the long-term impact of our choices

The coronavirus now appears to be endemic. It is likely to continue to circulate across the globe indefinitely, causing more and more human suffering, economic damage, and disruption to our lives. The total sum of harm an endemic virus can cause is theoretically boundless. And yet, if China had better regulated its meat markets or its bio-labs (depending on your preferred origin theory), it would have likely prevented the outbreak entirely. This failure, in one place at one time, will have significant long-term costs.

The headline ambition of COP26 is for nations to commit to specific plans for achieving net zero (carbon and deforestation) by the middle of the century. Whether or not these talks are successful could have a profound long-term impact. Success could put humanity back onto a sustainable trajectory. We might avoid the worst effects of climate change: biodiversity collapse, flooding, extreme weather, drought, mass famine, mass refugee movements, possible population collapse, etc. Taking effective action on climate change now would provide a huge benefit to our grandchildren.

But the comparison between climate action and inaction does not stop there. Beyond helping our grandchildren and great-grandchildren, the benefits of effective climate action now would likely continue to snowball deep into the next century. Instead of our great-grandchildren needing to devote their resources and efforts to mitigating and reversing the damage of climate change, the twenty-second century might instead be spent in pursuit of other goals: eliminating poverty, making progress on global justice, and deepening our understanding of the universe, for example. Progress on these goals would, presumably, generate positive consequences of its own in turn. The good we can achieve with effective climate action now would continue to accumulate indefinitely.

Commitment to taking the long view

Both COVID and climate change make a strong intuitive case for moral “longtermism.” Longtermists think that how things go in the long-term future is just as valuable, morally speaking, as what happens in the near-term future. If you can either prevent one person from suffering today or two tomorrow, the longtermist says you morally ought to prevent the two from suffering tomorrow. But if you also had the option of preventing three people from suffering in a million years, they say you should do that instead. It doesn’t matter how far events are from us in time; morally, they’re just as significant.

The second part of the longtermist view is that we can influence the long-term future with our choices today. They argue that the long-term future that occurs depends on what humanity does in the next century. And the stakes are high. There are possible futures in which humanity overcomes the challenges we are faced with today: ones in which, over millennia, we populate the galaxy with trillions of wonderful, fulfilled lives. There are also possible futures in which humanity does not even survive this century. There is, in other words, a very valuable possibility — in moral philosopher Toby Ord’s words, a “vast and glorious” version of the future — that’s worth trying to make real.

A catastrophic future for humanity is not a particularly remote possibility. Ord, who studies existential risk, sees the next century as a particularly dangerous one for humanity. The risks that concern him are not just the cosmic ones (meteorites, supernova explosions) or the familiar ones (nuclear war, runaway global warming, a civilization-collapsing pandemic); they also include unintended and unforeseen consequences of quickly evolving fields such as biotech and artificial intelligence. Adding these risks together, he writes, “I put the existential risk this century at around one in six.” Humanity has the same odds of survival as a Russian roulette player.

The cost of failing to prevent an existential catastrophe (and the payoff of success) is incredibly high. If we can reduce the probability of an existential risk occurring (even by a percentage point or two), longtermists claim that any cost-benefit analysis will show it’s worth taking the required action, even if it incurs fairly significant costs; the good future we might save is so incredibly valuable that it easily compensates for those costs.
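The longtermist cost-benefit claim can be sketched as a toy expected-value calculation. A minimal sketch, assuming purely hypothetical numbers: every figure below (the value of the future, the cost, the size of the risk reduction) is invented for illustration; only the one-in-six risk estimate comes from Ord.

```python
# Toy expected-value sketch of the longtermist argument.
# All figures are hypothetical except Ord's 1-in-6 risk estimate.

def expected_value_of_intervention(V, p, delta, c):
    """Change in expected value from paying cost c to cut extinction risk
    from p to p - delta, where the surviving future is worth V."""
    ev_without = (1 - p) * V                  # expected value if we do nothing
    ev_with = (1 - (p - delta)) * V - c       # expected value after paying for the reduction
    return ev_with - ev_without               # algebraically: delta * V - c

V = 1e15       # hypothetical value of the long-term future (arbitrary units)
p = 1 / 6      # existential risk this century (Ord's estimate)
delta = 0.01   # hypothetical one-percentage-point risk reduction
c = 1e12       # hypothetical (very large) cost of the intervention

gain = expected_value_of_intervention(V, p, delta, c)
# Even a huge cost c is dwarfed by delta * V when V is vast, so gain > 0.
```

The point of the sketch is structural: because the gain simplifies to `delta * V - c`, any nonzero risk reduction justifies enormous costs once `V` is taken to be astronomically large, which is exactly the step critics of longtermism find worrying.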

But, for whatever reason, reducing the probability of improbable catastrophes does not rise to the top of many agendas. Ord notes that the Biological Weapons Convention, the body that polices bioweapons around the globe, has an annual budget of just $1.6m, less than the average turnover of a McDonald’s restaurant. As Ord explains this strange quirk in our priorities, “Even when experts estimate a significant probability for an unprecedented event, we have great difficulty believing it until we see it.”

Even short of generating or mitigating existential risks, the choices we make have the potential to put the world on different trajectories of radically different value. Our actions today can begin virtuous or vicious cycles that continue to create ever-greater benefits or costs for decades, centuries, or even millennia. So besides thinking about how we might mitigate existential risks, longtermists also claim we need to give more thought to getting onto more positive trajectories. Examples of this kind of opportunity for “trajectory change” include developing the right principles for governing artificial intelligence or, as COP26 is seeking to achieve, enacting national climate policies that will make human civilization ecologically sustainable deep into the future.

Challenges to longtermism

Last week, Phil Torres described longtermism as “the world’s most dangerous secular credo.” A particular worry about longtermism is that it seems to justify just about any action, no matter how monstrous, in the name of protecting long-term value. Torres quotes the statistician Olle Häggström who gives the following illustration:

Imagine a situation where the head of the CIA explains to the U.S. president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken [the longtermist] Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders. 

Longtermism entails that it’s morally permissible, perhaps even morally obligatory, to kill millions of innocent people to prevent a low-probability catastrophic event. But this can’t be right, say the critics; the view must be false.

But does Häggström’s thought experiment really show that longtermism is false? The president launching such a strike would presumably raise the risk of triggering a humanity-destroying global nuclear war. Other countries might lose faith in the judgment of the president and may launch a preventative strike against the U.S. to try to kill this madman before he does to them what he did to Germany. If this probability of catastrophic global nuclear war would be raised by any more than one-in-a-million, then longtermism would advise against the president’s strike on Germany. This is to say that if the president were a longtermist, it’s at least highly debatable whether he would order such an attack.
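The arithmetic behind this reply can be made explicit with a short sketch. The one-in-a-million figure comes from Häggström's case; the size of the increase in global nuclear-war risk caused by the strike is a hypothetical value chosen for illustration, and both outcomes are treated (simplistically) as equally bad, namely the loss of the entire long-term future.

```python
# Hypothetical sketch of the arithmetic in the reply to Häggström's case.
# Both catastrophes are treated as equally bad: loss of a future worth V.

V = 1e15                # hypothetical value of the long-term future (arbitrary units)
p_madman = 1e-6         # chance the lunatic succeeds (given in Häggström's case)
p_war_increase = 5e-6   # hypothetical rise in global nuclear-war risk caused by the strike

expected_loss_no_strike = p_madman * V
expected_loss_strike = p_war_increase * V   # ignoring even the millions killed directly

# If the strike raises war risk by more than one-in-a-million, it is the
# worse option on longtermist grounds before counting its direct deaths.
strike_is_worse = expected_loss_strike > expected_loss_no_strike
```

Because the same vast value `V` multiplies both probabilities, it cancels out of the comparison: the longtermist verdict turns entirely on whether the strike raises catastrophic risk by more than the one-in-a-million chance it eliminates.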

Of course, we can modify Häggström’s case to eliminate this complication. Imagine the chance of the madman succeeding in blowing up the world is much higher — one-in-two. In such a case, longtermism would likely speak in favor of the president’s nuclear strike to protect valuable possible futures (and the rest of humanity). But it’s also a lot less clear that such an act would be morally wrong compared with Häggström’s original case. It would be terrible, tragic, but perhaps it would not be wrong.

Maybe the real risk of longtermism is not that it gives us the wrong moral answers. Maybe the criticism is, instead, that humans are flawed. Even if it were true that longtermism would rule out Häggström’s nuclear attack on Germany, the view still seems to place us in a much riskier world. Longtermism is an ideology that could theoretically justify terrible, genocidal acts whenever they seem to protect valuable long-term possible futures. And, ultimately, flawed human minds are more likely to perform unconscionable acts if they have an ideology like longtermism with which to justify them.

This last criticism does not show that moral longtermism is false, exactly. The criticism is simply that it’s dangerous for us humans to place such immense faith in our ability to anticipate possible futures and weigh competing risks. If the criticism succeeds, a longtermist would be forced to embrace the ironic position that longtermism is true but that we must prevent it from being embraced. Longtermists would have to push the view underground, hiding it from those in power who might make unwise and immoral decisions based on faulty longtermist justifications. Ironically, then, it might be that the best way to protect a “vast and glorious” possible future is to make sure we keep thinking short-term.

The Ethics of Protest Trolling

image of repeating error windows

There is a new Trump-helmed social media site being developed, and it’s been getting a lot of attention from the media. Called “Truth Social,” the site and associated app initially went up for only a few hours before being taken offline due to trolling. It turns out the site’s security was not exactly top-of-the-line: users were able to claim handles that you would have thought would be reserved for others – including “donaldjtrump” and “mikepence” – and then used their new accounts to post a variety of images that few people would want associated with their name.

This isn’t the first time a far-right social media site has been targeted by internet pranksters. Upon its release, GETTR, a Twitter clone founded by one of Trump’s former spokespersons, was flooded with hentai and other forms of cartoon pornography. While a defining feature of far-right social media thus far has been a fervor for “free speech” and a rejection of “cancel culture,” it is clear that such sites do not want this particular kind of content clogging up their feeds.

Those familiar with the internet will recognize posting irrelevant, gross, and generally not-suitable-for-work images on sites in this manner as acts of trolling. So, here’s a question: is it morally permissible to troll?

The question quickly becomes complicated when we realize that “trolling” is not a well-defined act and potentially encompasses many different forms of behavior. There has been some philosophical work on the topic: for example, in the excellently titled “I Wrote this Paper for the Lulz: The Ethics of Internet Trolling,” philosopher Ralph DiFranco distinguishes five different forms of trolling.

There’s malicious trolling, which is intended to specifically harm a target, often through the use of offensive images or slurs. There’s also jocular trolling, actions that are not done out of any intention to harm, but rather to poke fun at someone in a typically lighthearted manner. While malicious trolling seems to be generally morally problematic, jocular trolling can certainly also risk crossing a moral line (e.g., when “it’s just a prank, bro!” videos go wrong).

There’s also state-sponsored trolling, which was a familiar point of discussion during the 2016 U.S. elections, wherein companies in Russia were accused of creating fake profiles and posts in order to support Trump’s campaign; concern trolling, wherein someone feigns sympathy in an attempt to elicit a genuine response, for which the target is then ridiculed; and subcultural trolling, wherein someone again pretends to be authentically engaged, this time in a discussion or issue, in order to elicit genuine engagement by the target. Again, it’s easy to see how many of these kinds of acts can be morally troubling: intentionally interfering with elections and feigning sincerity to provoke someone else generally seem like the kinds of behaviors that one ought not perform.

What about the kinds of acts we’re seeing being performed on Truth Social, and that we’ve seen on other far-right social media apps like GETTR? They seem to be a form of trolling, but do they fall into any of the above categories? And what should we think about their moral status?

As we saw above, trolling captures a wide variety of phenomena, and not all of them have been fully articulated. I think that the kind of trolling I’m focusing on here – i.e., that which is involved in snatching up high-profile usernames and clogging up feeds with irrelevant images – doesn’t neatly fit into any of the above categories. Instead, let’s call it something else: protest trolling.

Protest trolling has a lot of the hallmarks of other forms of trolling – it often involves acts that are meant to distract a particular target or targets, and involves what the troll finds funny (e.g., inappropriate pictures of Sonic the Hedgehog). Unlike other forms of trolling, however, it is not necessarily done in “good fun,” nor is it necessarily meant to be malicious. Instead, it’s meant to express one’s principled disagreement with a target, be it an individual, group, or platform.

Compare, for example, a protest of a new university policy that involves a student sit-in. A group of students will coordinate their efforts to disrupt the activities of those in charge, an act that expresses their disagreement with the institution, governance, and/or authority figure. The act itself is intentionally disruptive, but is not itself motivated by malice: they are not acting in this way because they want others to be harmed, even though some harm may come about as a result.

While the analogy to the case of online trolling is imperfect, there are, I think, some important similarities between a student sit-in and the flooding of right-wing social media with irrelevant content. Both are primarily meant to disrupt, without specifically intending harm, and both are directed towards a perceived threat to one’s core values. For instance, we have seen how right-wing media has perpetuated violence, both by inciting violent acts and through its treatment of members of marginalized groups. One might thereby be concerned that a whole social network dedicated to the expression of such views could result in similar harms, and is thus worth protesting.

Of course, in the case of online trolling there may be other intentions at play: for example, the choice of material that’s been used to disrupt these services is clearly meant to shock, gross-out, and potentially even offend its core users. Furthermore, not every such action will have principled intentions: some will simply want to jump on the bandwagon because it seems fun, as opposed to actually expressing a principled disagreement.

There are, then, many tangled issues surrounding the intentions and execution of different forms of protest trolling. However, just as many cases of real-life protesting are disruptive without being unethical, so, too, may cases of protest trolling be potentially morally unproblematic.

Truth and Reconciliation Day

black-and-white photograph of Native American soldiers in the Canadian Expeditionary Force

On September 30th, Canada recognized its first National Day for Truth and Reconciliation following a year where the bodies of hundreds of First Nations children were discovered in mass graves on the sites of former residential schools. Across the country, the day has been considered an important step forward in addressing many of the historical wrongs perpetrated on First Nations people within Canada. However, since then most of the media and larger public attention on the day has been preoccupied with Prime Minister Justin Trudeau’s decision to forgo meetings with First Nation leaders on September 30th so that he could instead go to the beach with his family. But while many have chosen to take this opportunity to point out the moral failings of the Prime Minister, is it possible that this represents a larger moral failure of the country?

The adoption of September 30th as a statutory holiday to allow for public commemoration of the history of residential schools followed recommendations made in the final report of the Truth and Reconciliation Commission. The report called for a statutory holiday to be created to “honour Survivors, their families, and communities, and ensure that public commemoration of the history and legacy of residential schools remains a vital component of the reconciliation process.” Beyond this, however, the exact meaning and intention of this day isn’t exactly clear to everyone.

Even before the day had come, First Nations critics had noted that this was an extremely small step: the holiday is just one of only eight recommendations implemented out of a total of 94. Critics also note that, in light of this, the lack of any concrete plan of action for the day makes the federal statutory holiday ring hollow. According to the Canadian Heritage Minister, there were no details for any federal plans to mark the day, as commemorations should be led by Indigenous people. Canadian Senators were also concerned about what this day was supposed to represent. One noted again that this was only one of 94 recommendations, rushed to adoption following the discovery of mass graves, and questioned whether a statutory holiday will simply be “a day to stay home and put up our feet and watch TV.”

Of course, the day is supposed to mean more than that. Given that this is a day of reconciliation between Canadians and First Nations, and that this was declared a national holiday, it presumably should carry meaning for the whole Canadian public as well as First Nations. But what meaning is it supposed to have exactly? Was this a symbolic gesture meant to convey a sentiment or was this a policy decision meant to change public relations when it comes to First Nations? What goals does the government hope to achieve? According to the Heritage Minister the government hopes it will be a day for Canadians to “reflect.”

This theme of “reflection” is one that the government often likes to bring up when it comes to discussing reconciliation issues. On Canada Day, when the national conversation questioned the merits of national celebration in light of the discovery of mass graves, Prime Minister Trudeau again stressed reflection. “Many, many Canadians will be reflecting on reconciliation, on our relationship with Indigenous Peoples and how it evolved and how it needs to continue to evolve rapidly.” But reflect how? In what ways? When 1/8th of your strategy for reconciliation is to create a holiday and your plan is just to tell people to “reflect,” it doesn’t inspire much faith that you’ve taken the idea that seriously.

This is not to say, of course, that serious public commemoration or reflection did not take place or that such a day of commemoration should not take place. Broadcasts took place honoring Indigenous people, articles were written suggesting different ways to meaningfully recognize the day, and across the country various events took place, including flag-raising, drum performances, prayers, protests, and commemorations for the children who were victims of residential schools. But what this does begin to suggest is that the Canadian Government wasn’t treating this day with the seriousness it should have.

What’s more, this is a federal statutory holiday, and federal holidays only apply to a limited number of industries in Canada. Meanwhile, the provinces of Ontario, Quebec, Alberta, New Brunswick, Saskatchewan, and the Yukon have refused to recognize the day as a statutory holiday. This means that more than 60% of Indigenous people in Canada will not be allowed to take the day off. While critics have voiced their objections to provinces declining to declare September 30th a statutory holiday, the counterargument is that a statutory holiday lowers productivity. Of course, the Ontario Government still insists that it will observe the 30th as a day to, you guessed it, “reflect.”

Nevertheless, there are no massive outcries from the public for governments to change their minds about this, at least none big enough to make a government actually change course. My point is that both at the level of the Federal Government and at the level of the Canadian public at large, there seems to be a lack of serious commitment to make this day mean something beyond symbolic gestures. Contrary to the idea that this day should be Indigenous-led, Eagleclaw Thom notes, “This day as a holiday isn’t for the Indigenous peoples who have chosen to share their land with Canadians. It’s intended for settler Canadians, so they can recognize the pain and hurt they’ve caused.” But neither the Canadian public nor the Canadian Government seems very willing to elevate the meaning of National Truth and Reconciliation Day beyond symbolism anyway.

This brings us back to Prime Minister Justin Trudeau and the fact that he chose to spend the day at the beach. The Prime Minister had been invited by several First Nations groups to attend various functions but declined. There has been much public and media outrage about this incident, but what is the Prime Minister guilty of that most Canadians aren’t? Why the outrage that the Prime Minister didn’t take the day more seriously when most Canadians weren’t willing to either? Of course, there are obvious answers. He’s not just anyone, he’s the Prime Minister; his government created the day in question, he prides himself on his focus on reconciliation, and he recently won re-election promising to do more. Trudeau’s decision to go to the beach was not only politically inept, but represents a moral failure of leadership.

But had Trudeau attended a few functions that day instead of going to the beach, what might have happened? It would have made the news and the next day it would have faded from the public mind and the media’s consciousness. The main reason this topic is still in the public mind for most Canadians and media outlets is because Trudeau didn’t attend. They are outraged at the political optics when they should be outraged at the glacial pace of Trudeau’s government. His is a moral failing, to be sure, but Trudeau’s problem is representative of the larger moral failure to make National Truth and Reconciliation Day a more significant effort to affect social change.

Individual Rights, Collective Interests, and Vaccine Mandates

Despite popular support, Biden’s recent policy – requiring vaccinations for all government employees and mandatory testing for businesses with more than 100 employees – is attracting the attention of a small but vocal minority. These voices question the very notion of public health and challenge the basis for the state to supersede individuals’ fundamental claim to bodily autonomy. Given these objections, how are we to justify the policy to those who remain opposed? How are we to adjudicate between the claims of individual liberty and the demands of collective interest?

Are vaccine mandates legal? The relevant precedent is the Supreme Court’s 7-2 ruling in Jacobson v. Massachusetts, which determined that the local government could enforce mandatory vaccinations to fight a smallpox outbreak. In the decision, Justice Harlan argued that

in every well ordered society charged with the duty of conserving the safety of its members, the rights of the individual in respect of his liberty may at times, under the pressure of great dangers, be subjected to such restraint, to be enforced by reasonable regulations, as the safety of the general public may demand.

In fact, tyranny could just as easily come from government failing to take action and allowing individual freedom to trump collective interests. “Real liberty for all,” Harlan wrote, “could not exist under the operation of a principle which recognizes the right of each individual person to use his own [liberty], whether in respect of his person or his property, regardless of the injury that may be done to others.” In a state of nature where everyone is free to pursue his or her own interest to the furthest extent, there can be no security, no rights, and no peace.

But even if such measures have legal history on their side, can these current vaccination mandates be morally justified? As with all things these days, it depends on who you ask. Red state governors have been quick to seize on these policies as obvious government overreach. Big brother is determined to interfere with average Americans’ daily lives and tell them what they can and can’t do with their bodies. These critics claim that the directives go far beyond what is reasonably required for ensuring public safety. These invasive measures are part of a crude, ham-fisted, one-size-fits-all approach to a fairly isolated problem. Big government is making a foot-long incision to get at the issue when a couple of tiny, strategic punctures might do.

So what makes these emergency orders “unreasonable”? Despite over 725,000 deaths from COVID-19 in the U.S. alone, we’re still squabbling over whether workers are in “grave danger.” Folks like Governor Ron DeSantis claim that the choice of whether to get vaccinated “is about your health and whether you want that protection or not. It really doesn’t impact me or anyone else.” And these sentiments resonate with a not insignificant swath of the population that bristle at being told what to do and who pride themselves on being “more worried about herd instinct than herd immunity.” The trouble, as they see it, is that all the bleeding hearts fail to recognize the basic fact that “life is always a risk.” 38,000 Americans die every year in car crashes, but no one is lining up in favor of a ban on driving. “We live with these risks,” these voices contend, “not because we’re indifferent to suffering but because we understand that the costs of zero drowning or zero electrocution would be far too great. The same is true of zero Covid.” In the end, the right balance between personal liberty and public safety is always to be found in letting the people decide for themselves.

But part of our disagreement stems from misunderstanding the science. Contrary to DeSantis’s claims, vaccination is not a private choice without practical consequences for anyone else. The vaccine does not make one invulnerable to infection and having a large unvaccinated population creates a breeding ground for variants. That’s why the unvaccinated represent the greatest threat to pandemic recovery. Leaving it up to individuals won’t do; we can’t simply agree to go our own ways.

As others have noted, the current conversation resembles the standoff over smoking bans in the not-so-distant past. We’re arguing over the answer to a large and complicated question: at what point do one’s private choices about their health encroach on the rights of others to be free from having risks imposed on them by their neighbors’ behavior?

Given the deep disagreement about the predicament we’re in, finding a trustworthy authority has become paramount. One body which might seem especially well-positioned to rule on the matter is the ACLU – the American Civil Liberties Union, which is devoted to protecting people’s basic rights enshrined in the Constitution.

Instead of undermining individuals’ civil liberties, ACLU officials David Cole and Daniel Mach argue that vaccination mandates “actually further civil liberties. They protect the most vulnerable among us, including people with disabilities and fragile immune systems, children too young to be vaccinated and communities of color hit hard by the disease.” Echoing Harlan’s sentiments, the ACLU reminds us that liberties and duties are two sides of the same coin; a right’s very existence imposes corresponding obligations. Making a space for others to exercise their basic freedoms means recognizing the limits of one’s individual liberty: the freedom to swing my fist ends where your nose begins. While much attention has been paid to the coercive leverage in demanding vaccination as a condition of continued employment, we fail to appreciate the situation of those who must daily weigh the risk of exposing their immunocompromised family members against the necessity of putting a roof over their heads. While the number of folks faced with this second scenario may be smaller, surely we can appreciate that the injustice in these two situations is not equivalent.

We have a tendency to speak of rights as guaranteeing individuals’ absolute freedom of choice in pursuing whatever might make them happy — rights without obligations and without bounds. We speak in reverence of individual autonomy as the fundamental basis for human dignity. When I am impeded from doing what I want to do, or (worse) made to do something which I would otherwise not, I have been disrespected and harmed. We equate being free with being unconstrained.

But this kind of autonomy fits poorly within our philosophical traditions. Hobbes encouraged us to lay down our sword in order to enjoy the benefits of neighbors who are more than obstacles to our private interests. Kant argued that only by acting from duty can one be truly free. Showing sufficient respect for others means more than simply making space for their unimpeded desiring, willing, and choosing. No one can claim absolute license to pursue their private ambitions, come what may.

Where does this leave us? We find ourselves once again at the intersection of a number of related issues. We’re bad at conceptualizing disease; we’re addicted to the anecdotal, allergic to authority, and eternally unsure of who to trust. Matthew Silk has investigated the media’s troubles in relaying vaccination information; Martina Orlandi and Ted Bitner have explored our failure to change people’s hearts and minds; Marshall Bierson has pointed out how conflicting federal, state, and local legislation is complicating the picture; and Daniel Burkett has explained why we’re upset by others’ free-riding.

So, how should we respond? Megan Fritts recently raised the question of whether doctors are justified in refusing to admit unvaccinated patients to their overbooked and especially vulnerable waiting rooms. Much like we might penalize alcoholics on a donor list for liver transplants, there is at least one line of thought that suggests that those choosing to expose themselves to greater risk should be asked to bear the cost of that choice rather than forcing others to live with the consequences of that decision. Given the scarcity of medical resources and need for emergency assistance, some form of triage is inevitable. And the mantra of personal responsibility has always proven an efficient tool for separating the “undeserving” from the rest of us.

But this solution is too neat; it neglects to investigate who exactly the unvaccinated are. Over the weekend, The New York Times attempted to put a face to this broad label, and the stereotype of the obstinate "Don't Tread on Me!" die-hard doesn't always track reality. From young mothers to the various outcasts of the healthcare system, there are at least some not-so-unreasonable anxieties expressed by the "vaccine-willing." And there are, no doubt, a number of the unvaccinated who deserve our compassion and should inspire us to show a modicum of humility. Unfortunately, those folks with a legitimate medical complication or sincerely-held religious conviction constitute a collective nowhere near as large as it purports to be. You know who you are.

Why You Should (Almost) Always Give a 5-Star Rating

photograph of grocery bag delivered on doorstep

One type of business that has not only survived but thrived during the pandemic is home grocery delivery. In addition to many grocery stores themselves offering delivery, there are app-based services, like Instacart, which work on a system much like other gigging apps like Uber or Lyft. People can sign up as a contract worker – or “shopper,” as they’re known in the Instacart world – where they can accept orders, pick them up from the relevant store, and deliver them to the customer. The service is convenient, tends not to be very expensive, and, at its outset, provided shoppers with a good source of income.

Things have since changed. Many shoppers have recently reported that their earnings have dropped drastically, sometimes by as much as 50%, due to new policies that the company has implemented. Because shoppers are contract employees, they are not protected by minimum wage laws in the U.S. or Canada, and many found, after accounting for time and expenses, that they were now making less than minimum wage. The result has been a call for strikes by the Gig Workers Collective, a group that claims to represent a significant number of Instacart shoppers (along with gig workers in other industries). Several of their demands center on greater transparency from Instacart in determining how much jobs pay, as well as rolling back some of the new policies. One policy of note concerns how customer ratings of Instacart shoppers affect their earning potential.

The rating system can be found everywhere in the gig economy. If you’ve ever taken an Uber or a Lyft or whatever other rideshare program you like best, chances are you’ve been asked to assign your driver a rating out of 5 stars once you arrived at your destination. Chances are you’ve also realized that the star system is crude, at best: while having a 4/5 rating on pretty much anything else in life means that you’re doing great, it’s easy to look at ratings in the low 4’s on these kinds of apps and wonder what someone did wrong.

The rating system used by Instacart is potentially the most punitive among the gigging apps. As one employee describes, workers who have the highest ratings get the first choice of deliveries that come in on the app. That means that the deliveries that pay the most go to the highest-rated, and that even a small dip in one’s rating could mean a loss of a significant amount of money. “Even though shoppers in the, let’s say, 4.9- to five-star range provide virtually the same quality service,” the employee said in an interview, “those even slightly below a perfect five-star rating can slip to orders that pay significantly differently.”
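To make the stakes concrete, here is a quick illustration of the arithmetic behind that "slight dip." The numbers are hypothetical (Instacart's actual rating window and pay formula are not public), but they show how a single 4-star rating among otherwise perfect reviews is enough to pull a shopper below the 4.9-to-5 band the employee describes:

```python
def average_rating(ratings):
    """Mean star rating, rounded to two decimal places."""
    return round(sum(ratings) / len(ratings), 2)

# A hypothetical shopper whose last 100 deliveries were all rated 5 stars.
perfect = [5] * 100

# The same shopper after one customer gives 4 stars for a minor mishap
# (a slightly late delivery, a smooshed loaf of bread).
one_slip = [5] * 99 + [4]

print(average_rating(perfect))   # 5.0
print(average_rating(one_slip))  # 4.99
```

Under a system where only perfect-5 shoppers get first pick of the best-paying orders, that 0.01 drop is the whole story: the service delivered was virtually identical, but the shopper's access to income is not.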

There seems to be a clear need for Instacart to change its policies. But what does this say about what you, the Instacart user, ought to do? Let's say you place an order through Instacart, and something goes slightly wrong. Maybe the shopper is a little late, or maybe your bread gets smooshed a bit. Nothing major, but it's not perfect. It certainly doesn't seem like 5-star service, so you give it 4 stars.

While this may seem like a fair rating from the user's perspective, the overly punitive nature of Instacart's rating system means that docking the shopper a star can significantly hinder their earning potential. It would certainly seem like an overreaction to, say, dock someone's pay by thousands of dollars a year just because they broke a couple of eggs. Yet given the current rating system, giving anything less than 5 stars can have exactly this consequence.

Here, then, is a suggestion for a norm of rating gig workers: when it comes to companies like Instacart that excessively punish workers for average ratings that dip even slightly below 5-stars, one ought to forgive all minor mistakes and assign 5-star ratings in almost every circumstance.

There will be exceptions: one need not forgive all mistakes. For example, if your shopper shows up at your house 8 hours late, dumps all of your groceries on your lawn, and then knocks over your mailbox as they peel out of your driveway, assigning a rating lower than 5 stars would be justified. But in most circumstances, given the disconnect between the user's sense of how well a job was done and the consequences of imperfection, a rating that seems fair to the user will not translate into a proportionate penalty for the worker.

One might think that the onus for making a rating system proportional should fall to the company, and not the user. Indeed, the Gig Workers Collective’s demand to adjust the system is motivated at least in part by the fact that users of Instacart will most naturally be inclined to assign ratings that they think are fair. In the same interview mentioned above, the Instacart employee is all too aware of this, noting that “the urge to rate a delivery service four stars or lower makes sense on the surface,” and that it seems that if “the service did not deliver on its promise, the customer has the right to report and penalize this service.”

In the interim, however, an employee's livelihood seems a far more important concern than one's need to express frustration over small issues. Given the way the system is currently set up, then, it will almost certainly be unfair to give an Instacart shopper anything lower than a 5-star rating.

A Squid Meta-Game Rule

photograph of Squid Game game board

[SPOILER WARNING: This article discusses several plot details of Netflix’s Squid Game.]

At one point in Squid Game, a competitor, Deok-su, finds himself at a decision point: should he jump onto the right or left pane of glass in front of him? One will break under his weight and he will fall to his death. The other will hold his weight, carrying him forward to the game's ultimate goal: crossing the bridge without dying. Instead of choosing, he throws a rival onto one of the panes, sending the competitor crashing through. Many will regard Deok-su's actions as morally wrong, but why?

Is our disapproval based merely on the fact that a competitor is being thrown to their death? This is horrific, no doubt, but in context, it is arguably not morally reprehensible. Surely we can agree that the game itself is morally reprehensible because of the stakes involved as well as its exploitative nature. The series, however, asks us to put this moral concern aside. The players have all voluntarily agreed to participate. The rules of this game have been presented to the participants, and there isn’t any reason to think Deok-su’s strategy falls outside these lines. Consider the game of poker: a player may choose to lie to their fellow players in order to win the pot through a strategy known as bluffing. Normally, lying to steal your friend’s money is not morally acceptable, but in the context of this game, where everyone knows the rules and the consequences of the game, it is a legitimate strategy and we wouldn’t morally judge a person engaging in bluffing. Likewise, Deok-su has found a legitimate strategy that isn’t strictly prohibited by the rules of the game, yet our moral condemnation still feels appropriate. How do we square these competing intuitions?

I think there is still a good reason to judge Deok-su wrong, and it has to do with the nature of what a game is. I believe that in all true games, the individual players have the ability to help determine the outcome. A "game" like Chutes and Ladders, for example, is not a game at all, as the players have no agency in determining the outcome; it is decided by chance and chance alone. When Deok-su throws his competitor onto the next pane of glass, he strips his opponent of their agency, removing that player's ability to choose. He's effectively broken a meta-rule of games: a rule that applies not to any specific game but to all games, in order to maintain their integrity as games. (I don't think that this is the only meta-rule of games, but I'll only be examining this particular one here.)

If the meta-rules of games can help us make moral judgments, then we should see similar results in other cases. We can apply this to a moment earlier in the series. Sang-woo has an advantage in the second game that the contestants are forced to play. He has a strong suspicion that the game will be Honeycomb and chooses the easiest shape to win, the triangle. He doesn’t share this information with his allies, but only watches silently as the show’s protagonist, Gi-hun, chooses the umbrella, the hardest shape. While this may not be in the spirit of the alliance that they have formed, he has not removed Gi-hun’s agency in the game. Sure, he’s violated the trust of his alliance, but given the stakes of the games, it might be considered simply good strategy to create false alliances. It is a more complex version of a bluff. But, imagine that Sang-woo, upon completing his task, went to all the other players that had yet to finish their tasks and shattered their honeycombs by kicking them. They would be eliminated from the game, but not by their own agency. The game would be taken from them. This would be morally reprehensible in the same way as a player slapping down the cards of their opponents in order to reveal them to the table in a poker game.

Let’s consider another moment from the Glass Bridge game. One of the players, a former glass maker, thinks that he can determine which plate is tempered and thus will not break. The Host turns off the lights to stop him from being able to tell which is which. In the show, Sang-woo removes the glass maker’s agency in the same way that Deok-su does, by forcing the glass maker onto an arbitrary glass plate because he is taking too long to decide. Are these two instances morally equivalent?

Let us suppose that Sang-woo acts differently and the Host leaves the lights on. The former glass maker could conceivably win the game at this point. He could simply stall, not making a decision until the last second, and then jump onto the correct plate in order to win the game. The other players would run out of time and lose the game. In this scenario, did the glass maker remove the agency of the players? If we understand the rules of the Glass Bridge game, no. Sang-woo could still go on to the same plate as the glass maker is on, exercise his autonomy, and choose without waiting for the glass maker to reveal the correct choice. Much like Sang-woo is not obligated to reveal the game was Honeycomb, the glass maker is not obligated to reveal to the other players the correct decision. It would be unfortunate that the players behind Sang-woo and the glass maker, Gi-hun and Sae-byeok, couldn’t advance safely. The game for them has ceased to be a “game” as they are prevented from making any meaningful choices. But would this be wrong? That is, is the glass maker blameworthy in the same way we seem to hold Deok-su responsible? Of course not. The manner in which the agency is lost in the game makes a moral difference. Direct removal of a player’s agency is fundamentally different from agency being removed by the circumstances of game play.

It isn’t only in fiction that we see such actions. We can see similar strategies in professional sports where a team or player actively aims to remove the agency of a player from a game. The most morally egregious case would be aiming to injure a player to remove them from the game. However, we can see a legitimized version of removing agency of a player in baseball. When a hot batter comes up to the plate in baseball, pitchers can choose to deliberately walk the batter so as to minimize their potential impact. This practice is so cemented into the rules of the game that now the actual throwing of the pitches isn’t required. The coach of the defending team can simply signal the umpire that they would like to intentionally walk the batter and the player will advance to first base. The intentional walk strategy, and now rule, has generated strong feelings about its “sportsmanship.” However, I suspect the actual frustration that fans are experiencing is that the strategy fundamentally takes the game out of the player’s hands. The batter has been intentionally stripped of their agency, and so the game ceases to be. Fans came to see a game played and this, momentarily, is not a game. This non-game event could have a significant impact on the outcome, and that can make it feel unjust or unfair. Fans who defend the intentional walk strategy may argue that the rules of baseball don’t disallow it, and in fact now explicitly support it. I will concede that this is the case. But while it may not break the stated rules of the game, it breaks a meta-rule of games, and thus generates a justified sense of moral unfairness.

There are many games that we play where we suspend the normal rules of morality for the sake of the game and adopt a new set of moral rules that apply to the game. Consequently, we can’t simply make moral judgments about a player’s strategies in relation to normal morality. Sang-woo is often a cunning and brutal player in Squid Game, but at least he isn’t an immoral one in the Honeycomb game. In the Glass Bridge game, both Deok-su and Sang-woo show their moral colors not because they break any stated rules of the game they are playing, but because they undermine an aspect of what it means to play a game. Violating a meta-rule of games is at the very least dissatisfying, as we see in baseball, and allows us to label strategies that break these rules as morally wrong, in the same way as breaking the stated rules of any game.

Making the Best of a Bad Situation: Russia and the Energy Crisis

photograph of electric power pylons in winter landscape

Europe is facing a crisis (I know, another one?!). This crisis, however, isn’t viral, ecological, economic, or migratory – although it is influenced by, and influences, these phenomena. No, I’m referring to the European energy crisis. Since the beginning of this year, the wholesale price of gas has increased by 250%. This, in turn, has caused similar price rises down the energy production and consumption chain. As a result, businesses and domestic consumers have seen their energy bills rise phenomenally, increasing the numbers of people facing fuel poverty and forcing EU leaders to call emergency meetings.

The reason for this price rise is hard to pin down because it isn’t attributable to any single cause. Instead, multiple factors – such as a shortfall in renewable energy production, an increase in demand as the global economy resurges post-COVID, and a steady phasing out of energy from coal production – have led to the crisis. However, to oversimplify it, there’s not enough energy to meet demand, causing prices to rise. And, while the situation is at its worst in Europe, there’s no reason to think that it will not eventually spread. Indeed, prices have already begun to rise in other parts of the globe.

While this is a dilemma for those countries who import all or some of their energy (be that gas, coal, oil, or electricity), it is also an opportunity for exporters. Higher prices mean greater profits as individuals, institutions, and even states become increasingly willing to part with funds to secure essential resources. On a small scale, prices being dictated by supply and demand isn’t too much of an issue (provided you’re on board with capitalism). It’s how your local shop decides how much to charge for toilet roll – the more people want it, the more that shop can charge. But, when it comes to nation-states’ selling and purchasing power, things can become tricky, as scarcity confers additional political power on resource-rich countries, which they can leverage against the resource-poor.

It is precisely this politicization, and even weaponization, of energy supplies that several countries fear will take place within Europe. More specifically, concerns are being raised that Russia, one of Europe’s largest natural gas suppliers, is going to capitalize on the European energy crisis, using it as an opportunity to solidify its already significant bargaining position or even refuse to export energy as a means of weakening its (perceived) rivals. Of course, this is something that Russian authorities have denied, with Vladimir Putin going so far as to not only deny Russian involvement but also blame Europe for the whole affair.

This concern raises an interesting point, however. While fears have been expressed about Russia’s intentions during the crisis, it’s not entirely clear what would be wrong with them making the best of it. Why shouldn’t Russia, as one of Europe’s largest gas suppliers, take advantage of the crisis to better its fortunes, even if this does lead to an increase in gas prices?

Now, the answer might seem obvious – people are going to suffer without gas. If people can’t afford to heat their homes during winter, this will cause suffering and even death – things which we typically class as undesirable. Thus, one can argue, from a moral and political cosmopolitanism, that Russia shouldn’t act in a manner that causes harm to people regardless of their nationality. Consequently, it should do what it can to help minimize gas prices and thus minimize harm.

Yet, it’s not entirely clear why Russia should care about the suffering of individuals beyond its borders, or at least, what it owes those people. After all, pretty much every person already has a political entity that exists to protect their interests – their own nation-state. Why should Russia pass up an opportunity to better its fortunes and act in a way that benefits the well-being of individuals for whom it holds little to no responsibility? What concern is it of Putin’s if people in the U.K. are cold because they can’t pay their gas bills? After all, those people have the U.K. government to care for them. Why should the Russian government miss out on an opportunity to better its standing and that of its citizens?

This attitude may seem callous or even cruel (indeed, I would be inclined to say it is). But a government’s failure to act in the best interests of those to whom it holds no obvious bond is arguably not a dereliction of duty. After all, it would seem uncontroversial to claim that the purpose of government is to secure the well-being of its citizens. If it fails in this purpose, that is when its legitimacy can be called into question. But disregarding the well-being of citizens of other states, while potentially distasteful and even unethical, doesn’t seem to contradict a government’s function. For the Russian government, then, if it can act in a manner that solidifies its position and thereby (in)directly betters the lives of its citizens, it would seem acceptable, even necessary, that it take advantage of the unfolding crisis. The Russian government should look out for the Russian people, and passing up an opportunity to do this, simply for the benefit of those to whom it owes no duty of protection, would seem antithetical to its very purpose.

Now, that is not to say that Russia would be off the hook if it did take advantage of the current situation. There is still plenty of scope for condemnation if it did drive up energy prices, resulting in suffering, simply as a means of increasing its political power (cosmopolitanism has already been alluded to as a potential basis for such criticism). But, to find fault with Russia for taking advantage of the crisis simply because it’s acting in a way that will give it political leverage over its peers or competitors seems to criticize the nation for doing its job, one which every government holds. After all, if the positions were reversed, how do you think your government would act? In the best interests of its citizens or the interests of others?

‘Squid Game’, Class Struggle, and the Good Life

image of Korean Squid Game logo

[SPOILER WARNING: This article discusses several plot details of Netflix’s Squid Game.]

Throughout the fall months of 2021, the Korean series Squid Game was a top ten listing on Netflix. It shares elements in common with movies such as the 2005 Eli Roth film Hostel and the entire Hunger Games franchise — the suffering of the poor and downtrodden serves as perverted entertainment for the incomprehensibly and unconscionably wealthy. By situating the class struggle in a 9-episode hypothetical thought experiment, the series distances the viewer from the reality behind the metaphor and prevents their analysis from being clouded by pre-existing political commitments.

The main idea of the series is that participants compete for a growing pile of cash, contained in a giant transparent piggy bank, hanging over the room in which contestants spend most of their time. Every time one of the players dies, more money is added to the bank. They participate in a variety of traditional children’s games. The winners live another day to compete for the whole pot, while the losers are exterminated and become for the others simply more money in the pile. Often the contestants are put in a position to kill one another and are frequently more than eager to do so.

Hundreds of players choose to participate in the Squid Game, all of them down and out in some way or another. The word “choose” is used loosely here. The candidates enter the competition, are allowed to leave, and then, when given the option to participate again, almost all of them do. The common line of reasoning is that life is worse outside of the game — intense suffering is bound to happen, but at least in the game that suffering is more ordered and predictable.

In the world outside, a person can follow all of the “rules” or, in any case, the set of norms that we’ve come to expect will point the direction of their lives away from misery and toward happiness. They can do all that and still be hit in the face with the absurdity of lived experience — with the machinations of an indifferent universe that doesn’t care about the rules and deals out misery, suffering, and death indiscriminately to rule followers and rule breakers alike. In the game, players don’t know who will go first or last, nor do they know which skills and abilities will be useful for success in the highly contingent circumstances in which they find themselves. The recognition of the absurdity of their condition is clear to the viewer from the very beginning. As the series highlights throughout and stresses in the final episode, the condition of the human person surviving in the real world is different only in the respect that it is worse while masquerading as better.

We have no control over the circumstances into which we are born: whether our parents are kind and supportive or cruel and destructive, whether they have wealth to pass along, whether we are born into environments with stable and fair political systems, whether those environments have sufficient resources, whether we are born a member of an oppressed group, or whether we have skills and abilities that will make us well positioned to survive in the environments into which we are born (to name just a few). If this is what we can expect out of life, why not sign up for a game one stands a fighting chance of winning?

The idea that the characters “choose” to participate in the game motivates reflection on the nature of coercion. To how much misery and manipulation can a person be subjected before their decisions no longer count as truly free? If you think playing a game is your only way to survive another day, or your only chance to protect your mother or your child, odds are that you will end up playing. To do otherwise is to select an alternative that is not a reasonable second option. The viewer knows what is at stake in the game, and we can empathize with the fact that the players end up back inside. No one is likely to think that the characters that finance and run the competition are heroes — they are exploiting the dire circumstances of desperate people. In the real world, the losers of life’s socioeconomic lottery, like the players in the Squid Game, are often trapped in a state of unfreedom. While powerful people wearing the masks of representatives and leaders enact policies to make the rich richer on the backs of the poor, the least well off are often left, through no fault of their own, to “choose” between only bad options. Then we blame them for it. Rather than recognizing the contingency of all of the facts of our existence, we tend to treat those that suffer as if they do so purely as a result of their own life choices.

There is no justice in the game — wrongdoers engage in selfish and harmful acts with impunity. Far from being punished, such people are actually rewarded. The kindest and most empathetic people gain nothing from their good works. If people choose compassion and fellow-feeling, they’ll have to do so in recognition of the intrinsic value of those things rather than because of what they hope to get out of them. In this way, Squid Game is another manifestation of Glaucon’s challenge from Plato’s Republic. In Book Two of this most famous of Plato’s dialogues, the conversants attempt to answer the question “why be moral?” Glaucon makes the argument that, if people could get away with it and avoid the consequences, they would behave selfishly to the point of doing terrible things. He provides the fictional case of a man who is given a ring — the Ring of Gyges — that renders him invisible. Glaucon claims that the man would use it to steal all of the king’s riches and to rape his wife. Why should he care, if he will never be caught? Similarly, participants in the Squid Game either die or live to tell the tale exactly as they prefer with no one to correct them on the more gruesome details. Why shouldn’t participants behave in exactly the way they think will help them win?

Socrates’s rejoinder is that being good is valuable for its own sake, and the main character of Squid Game — Seong Gi-hun — is a Socratic hero. With one notable exception, he refuses to harm or kill other participants and seems to keep the humanity of others in full view throughout the proceedings. When he feels an impulse to deviate from this norm, he is quickly reminded by a friend, “that’s not you.” Though he seems blind to his own virtuous character, his behavior demonstrates an unwillingness to give up on virtue for virtue’s sake or on the inherent value of life and friendships. The game concludes with the Socratic hero as the winner; all of the money is now his and all he wants to do is use it to improve the lives of the people he cares about. Unfortunately, when he emerges from the game, they are all gone. His mother lies dead on the floor of the squalid apartment that they once shared. His daughter has moved to the United States with her mother and stepfather. He is left alone with more money than he ever imagined having in his wildest dreams. Under these conditions, it’s all worthless. What constitutes the good life? Even if we allow (as we should) for a pluralism of views on this topic, most well-considered accounts will agree that it involves delight in knowledge, awe in beauty, joy in hobbies, and the contentment that comes with spending substantial and meaningful time with the people we care about.

Material comfort is not identical to the good life, but economic stability is a necessary condition for people to have the freedom to participate in the goods of life. We can’t spend time with our loved ones if we’re constantly pushing a rock up a hill or, what amounts to the same thing, working for exploitation wages. Squid Game provides us with a hypothetical thought experiment to help us to recognize that what’s true in this fictional universe is no less true in the actual world. If we think just conditions of human life require providing a structure in which everyone has reasonable access to the basic goods of life, then we desperately need to make modifications to our current socioeconomic systems. Otherwise, we’re all just playing a rigged game.

Environmental Impacts of the Fashion Industry

photograph of Louis Vuitton storefront

While the designer for Louis Vuitton was probably hoping their iconic looks would steal the fashion hearts of the internet, it was not the powerhouse brand’s upcoming line that was posted all over the news. During the finale of Paris Fashion Week, one of the biggest fashion events in the world, while Louis Vuitton’s models were mid-runway, an environmental activist, Marie Cohuet, joined them holding a sign reading “OVERCONSUMPTION = EXTINCTION.” Outside, more environmental activists from three different organizations were staging their own protest against the fashion industry’s harmful impact on the environment. Louis Vuitton was targeted specifically for its influence in the fashion industry, as well as for the brand’s recent pledge to reduce its environmental impact. The environmental group behind the protest claims Louis Vuitton is not living up to its promises — the brand has committed to 100% renewable energy in its production and logistics sites, and to LED lighting in its stores, by 2025. Are these commitments enough, however, to make a consequential impact on an environment that is becoming increasingly uninhabitable every year?

For one thing, Louis Vuitton is basing these objectives on the 2015 Paris Climate Agreement, which settled on keeping global warming below 1.5 to 2 degrees Celsius. This range marks the difference between weathering the inclement conditions we currently face and experiencing massive climate disasters that impose unheard-of burdens on countries and peoples. These two worlds look very different, especially depending on where one lives. Even at 1.5 degrees Celsius, many island nations will cease to exist; the agreement was largely shaped by the concerns of economic powerhouses, such as the U.S., that need not worry about their entire populations being swallowed by rising sea levels, just their coastlines. Beyond ignoring the potential extinction of smaller island nations, the goals of the Paris Climate Agreement are almost certainly unreachable at this point. The few targets Louis Vuitton has set are not due to be met until 2025, far later than what the climate actually requires of the industry. But Louis Vuitton is only one brand among many, so what is the total impact of the entire fashion industry on the environment? And why should the fashion industry be at the forefront of industries limiting their environmental impacts?

Making clothes is, in fact, an extremely resource-intensive process, which consumes massive amounts of water, releases dangerous levels of carbon emissions, and depends on a wasteful consumerist business model. Every year, the fashion industry uses enough water to meet the needs of five million people. This is in a world in which 2.2 billion people currently lack safe access to clean drinking water. Furthermore, the industry depends largely upon synthetic materials, which put microplastics into the oceans, wreaking havoc on an already vulnerable marine ecosystem. In terms of carbon emissions, the industry is responsible for ten percent of global emissions, a figure that may rise by 50% by 2030 if it continues at the same pace. Fast fashion, a quickly growing pocket of the industry, relies on a consumerist model in which one posts an outfit on social media but then must buy a different outfit for the next post. Its clothes, therefore, are cheaply made and cheaply bought, and eventually end up in a landfill; many are instead incinerated, releasing large amounts of poisonous gases and toxins into the air. Despite these statistics, the consumption of clothing is expected to rise from 62 million metric tons in 2019 to 102 million metric tons over the next decade. These environmental impacts undoubtedly affect human health; there is, however, an even more direct connection between the fashion industry and the endangerment of human life.

Part of the reason fast fashion can sell its clothes so cheaply is that the people making those clothes in warehouses are not paid a livable wage. This has essentially led to modern-day slavery practices in fashion production. Women make up the majority of the 40 million people worldwide trapped in modern slavery networks, and the fashion industry, from the workers in the warehouses to the collection of the raw materials, contributes to this network. The complicated supply chains that the fashion industry depends on make it difficult to track where raw materials have come from, and make it easier to hide the connection between a cute top on an Instagram model and an enslaved woman, or even child, in a dangerous factory. These factories and warehouses are often located in countries that already struggle economically, and whose populations are therefore vulnerable to cheap wages and dangerous working conditions because of the risk of poverty. This present-day situation can undoubtedly be traced back to the roots of colonialism and the imperialist missions of the “Global North” against countries in the “Global South.” At the root of the fashion industry’s ethical issues lie not only environmental problems but also complex race and gender issues. After all, the impacts of climate change will be felt first by the most vulnerable populations in the most vulnerable countries, both geographically and economically.

To address the mounting problems facing the fashion industry, some brands have turned toward more sustainable methods of making, packaging, and transporting clothes. For example, technology has allowed companies to use recyclable fibers, which lack the toxins found in other sources and require far less water than conventional cotton. Oftentimes, however, these sustainable brands can be extremely expensive, carrying a price tag of $550 for a simple white cotton t-shirt, which is simply unattainable for most of the population. One brand, CHNGE, has managed to center its ideology around sustainability, ethical practices, and activism. Its clothing is 100% carbon neutral, it protects hundreds of thousands of trees, it uses an organic cotton that saves 500 gallons of water, and it packages its clothes in recycled materials that can then be recycled again. The company also owns the factory that produces its clothing and guarantees fair and safe working conditions for its employees. It manages to do all of this while keeping the price of its shirts around $30.

Whereas brands like CHNGE seem to be taking active and important steps toward offsetting the impacts of their clothing production, other brands like Louis Vuitton are failing to recognize the precarious place the world finds itself in. While individual fashion brands, and ideally the fashion world as a whole, can pledge and promise to decrease their environmental impacts, the impending climate doom does not rest solely upon the shoulders of fashion CEOs. Surely they bear a great responsibility given the impact of the fashion world, but our continued survival depends largely upon world leaders making and enforcing the real and necessary changes needed to prepare for the future. While the 2015 Paris Climate Agreement may have been historic in the global community’s acceptance of the need for climate action, that agreement is failing. World leaders from every corner of the globe need to work together in a way the world has never seen before in order to prepare for the worst that climate change is sure to bring.

October’s Harvest: Threats to Academic Freedom

photograph of narrow wood bridge surrounded by woods leading to open water

With the month of October barely underway, we have already seen two incidents at elite institutions of higher education that underscore the continuing threats to academic freedom from both the right and left. A Twitter mob convinced MIT to disinvite a distinguished professor of geophysics from speaking at the school due to his views about Diversity, Equity, and Inclusion (DEI) policies. And at Yale, a prominent history professor stepped down from leadership of a prestigious program when right-wing donors insisted on selecting members of a “board of visitors” that would advise on the appointments of program instructors.

After publicly announcing earlier this year that Professor Dorian Abbot, a geophysical scientist at the University of Chicago, would be delivering the prestigious John Carlson Lecture, MIT rescinded his invitation and cancelled the event. The reason? Abbot is a harsh critic of DEI policies, which encourage representation and participation of diverse groups of people in higher education, including through preferential hiring of faculty and evaluation of student applicants. In a recent Newsweek column, Abbot wrote that DEI “violates the ethical and legal principle of equal treatment” and “undermines the public’s trust in universities and their graduates.” Abbot proposed an alternative framework he called Merit, Fairness, and Equality whereby “university applicants are treated as individuals and evaluated through a rigorous and unbiased process based on their merit and qualifications alone.” Apparently, graduate students and faculty at both MIT and Chicago were so affronted by Abbot’s words that they organized a disinvitation campaign, which ultimately convinced the chair of MIT’s Department of Earth, Atmospheric and Planetary Science to de-platform Abbot.

For MIT’s part, the school says that it merely disinvited Abbot from giving the Carlson Lecture, a public outreach talk aimed, in part, at engaging local high school students. The university says it invited Abbot to campus to address fellow climate scientists about his research instead. Apparently, Abbot’s views about DEI make his climate science research unfit for consumption by the general public, but not by his fellow academics.

There are a number of troubling aspects to this episode. First, Abbot’s views about DEI are decidedly mainstream. According to a recent Gallup poll, 74% of U.S. adults oppose preferential hiring or promotion of Blacks. The Republican Party’s platform includes this line: “Merit and hard work should determine advancement in our society, so we reject unfair preferences, quotas, and set-asides as forms of discrimination.” If the nation’s institutions of higher education are to remain effective as providers of civic education, forums for political debate, and incubators of novel policy ideas, the views of most Americans and one of the two major political parties cannot be made verboten. Note carefully that in saying Abbot’s views are mainstream, I am not saying they are right. Rather, I am claiming that if universities want to make a significant epistemic contribution to the larger society, they cannot seal themselves off from views that have wide currency in the general public.

Second, having determined that Abbot’s scholarship would make a valuable contribution to MIT and the local community — something which they have a plenary right to do — faculty and administrators should not have allowed objections to his political views to outweigh or override that initial determination. When the free exchange of ideas is obstructed by political actors — be they government officials or political activists — academic life suffers. The political views of a vocal minority are no justification for suppressing scholarly exchange. Those who object to Abbot’s ideas have every right to strenuously protest them, but not to try to exclude him from an academic community that has already validated his worth as a scholar.

Finally, rescinding the invitation will undoubtedly embolden activists who seek to harness the power of social media to silence speakers whose views they deem harmful or offensive. It would have been better never to have invited Abbot at all than to truckle to the heckler’s veto.

That’s the view from the left. But recent events amply demonstrate that academia has something to fear from the political right, as well. The Brady-Johnson Program in Grand Strategy at Yale University takes a select group of two dozen students and immerses them in classic texts of history and statecraft while also introducing them to a raft of high-profile guest instructors. The program was until recently led by historian Beverly Gage, and is underwritten by large donations from Nicholas Brady, a former U.S. Treasury secretary under presidents Reagan and H.W. Bush, and Charles Johnson, a mutual fund billionaire and leading Republican donor. A week after the 2020 presidential election, a professor who teaches in the program published an opinion article titled “How to Protect America From the Next Donald Trump.” According to Gage, this led Brady and Johnson to demand the creation of a five-member “board of visitors” that would advise on the appointments of instructors, pursuant to a 2006 donor agreement that had until then not been followed. Worse, the donors insisted that they could choose the board. Again according to Gage, Yale president Peter Salovey and Pericles Lewis, vice president for global strategy and vice provost for academic initiatives, ultimately caved to these demands. This caused Gage to resign, effective at the end of the year.

The day after The New York Times reported the story, Salovey released a letter to the faculty affirming Yale’s commitment to academic freedom and promising that he will give “new and careful consideration to how we can reinforce” that commitment. No word yet about plans for the board of visitors.

It is a foundational principle of academic freedom that scholars should be insulated from, to quote Fritz Machlup, those “fears and anxieties that may inhibit them from freely studying and investigating whatever they are interested in, and from freely discussing, teaching or publishing whatever opinions they have reached.” One source of such fears and anxieties is left-wing Twitter mobs; another is powerful donors who seek to steer teaching and research in a particular direction, often for ideological reasons. Freedom from political interference entails that faculty ought to be free to choose, in the absence of outside interference or pressure, both who gets to do teaching and research in the academic community and what they can research and teach. A board of visitors of the kind envisioned by Brady and Johnson, with members appointed by them and whose “advice” would be backed by the threat of pulling the fiscal plug on the program, is anathema to these principles.

Despite these stories, there is reason for optimism. As Matthew Yglesias pointed out, some surveys seem to indicate broad, and indeed increasing, American support for free speech, particularly among college graduates. This suggests that threats to free speech mostly stem from vocal or powerful minorities. But such compact, determined groups can wreak havoc. For example, the cause of prohibition was never supported by the majority of Americans, but the Anti-Saloon League and the voters it galvanized nevertheless managed to amend the Constitution to forbid the “manufacture, sale, or transportation of intoxicating liquors.” As the weather turns cold, faculty and administrators at our institutions of higher education must commit to thwarting a profounder chill.

Praise and Resentment: The Moral of ‘Bad Art Friend’

black-and-white photograph of glamorous woman looking in mirror

The story of the “Bad Art Friend” has taken social media by storm. For those who have yet to brave the nearly 10,000-word New York Times article, here is a summary of the tale: Dawn Dorland, a writer, decided to donate one of her kidneys after completing her M.F.A. She kept her social media friends abreast of her donation and surgery, and noticed, some time later, that one of her friends had failed to comment on it. Dorland wrote to the friend (Sonya Larson, herself a writer) asking why she hadn’t said anything about Dorland’s altruistic act. They exchanged pleasantries, Sonya praised her for her sacrifice, and all seemed well. Several months later, however, Sonya published a short story inspired by Dorland’s kidney donation, which set off a barrage of legal and relational blows involving multiple lawsuits and, potentially, ruined careers.

There are a slew of ethical issues and questions embedded in the text and subtext of this story: questions about the differences between plagiarism and inspiration, questions about appropriate boundaries in friendships and acquaintanceships, and questions about the legality and propriety of lawsuits. But a majority consensus has seemed to emerge about the protagonist of this story: almost universally, readers are not on the side of Dawn Dorland.

Elizabeth Bruenig, in an op-ed for The Atlantic, describes Dorland as the “patron-saint” of our “social-media age,” emphasizing that the description is not a compliment. She characterizes Dorland’s initial behavior towards Larson as follows:

“Dorland, in particular, went looking for [victimhood], soliciting Larson for a reason the latter hadn’t congratulated her for her latest good deed, suspecting—rightly—a chillier relationship than collegial email etiquette would suggest. She kept seeking little indignities to be wounded by—and she kept finding them. Her retaliations quickly outpaced Larson’s offenses, such as they were.”

Bruenig is right that Dorland considered herself to be wronged by Larson’s apparent apathy. And insofar as we find it implausible that Larson really did wrong her in this way, it is understandable why Bruenig might analyze the situation as one in which Dorland sought out a kind of victimhood status. This may explain part of why Dorland’s behavior immediately turns us off — looking for victimhood, or claiming it too quickly, seems like a kind of injustice to those who genuinely are victims of terrible actions or circumstances. In diverting attention to extremely mild wrongs (if they were wrongs at all) done to herself, Dorland distracts people from truly awful situations that merit their consideration. Human attention is zero-sum: if I am paying attention to you, then I am not paying attention to something else. So, there is a consequentialist argument to be made that I should not seek out “victimhood” status, and thereby attention, if the public’s attention would be better spent elsewhere.

Yet, Bruenig’s analysis does not consider the fact that our mild disgust at Dorland begins even before she voices her complaints to Larson. It begins even before she speaks to Larson at all. It begins when Dorland seeks out praise and attention for her (admittedly very brave) act of donating her kidney. But did Dorland actually do anything wrong in seeking out praise for her praiseworthy act? Does our disgust stem from genuine moral assessment, or from a deeper kind of resentment of people who act more selflessly than we do?

The philosopher Immanuel Kant theorized that it was morally impermissible to treat others as a mere means to our own ends — we must always consider them to be intrinsically valuable creatures themselves, and our actions must reflect this. We may, therefore, think that Dorland’s seeking of praise for her donation indicates that she was using the kidney recipient as a mere means to gaining praise, popularity, or notoriety.

Still, it is not clear that Kant’s concepts would apply in this case. Dorland’s donation of her kidney indicates that, while she may have used the opportunity as a means to other social ends, she was not using the recipient merely as a means — in saving his life, she acted toward him in acknowledgement of his value as a person. There is nothing in Kant’s moral philosophy which prohibits us from using people to attain our ends, so long as we respect them as persons while doing so.

From a utilitarian perspective, seeking praise for your good works may even maximize happiness, meaning that it would be the morally correct thing to do. For example, by seeking praise for your honorable deeds, you may draw attention to what you did, encouraging others to display the same amount of selflessness and charity. Additionally, you yourself would derive happiness from the praise, and it doesn’t seem that anybody would lose happiness by praising you. Therefore, it seems that seeking such accolades may benefit everyone and harm no one.

A virtue ethical approach to the issue may seem to yield different results. After all, surely there is something unvirtuous about someone who seeks out praise for supposedly altruistic actions? Many consider humility to be a virtue, and Dorland’s constant social media updates and attention-seeking behavior seem to indicate a lack of humility in her character. Perhaps we are turned off by the desire for praise because it indicates a character vice: pompousness, perhaps, or neediness.

And yet, historically, virtue ethicists have praised the (appropriate) seeking of praise. In book four of his Nicomachean Ethics, Aristotle identifies a virtue concerned with “small honors,” which we might more simply understand as the virtue of seeking to do, and be rewarded for, honorable things. Of course, Aristotle still holds that I should not seek praise for things that are not praiseworthy, nor should I act in praiseworthy ways purely for the praise. Still, seeking honor (and the praise that arguably ought to go with it) in moderate amounts is a virtue. At least for Aristotle.

There is a case to be made that our distaste for those who seek praise has a distinctly Christian origin. In Christian scriptures — specifically, the Gospel of Matthew, chapter 6 — Jesus preaches against seeking recognition for acts of charity:

“Be careful not to practice your righteousness in front of others to be seen by them. If you do, you will have no reward from your Father in heaven. So when you give to the needy, do not announce it with trumpets, as the hypocrites do in the synagogues and on the streets, to be honored by others. Truly I tell you, they have received their reward in full. But when you give to the needy, do not let your left hand know what your right hand is doing, so that your giving may be in secret. Then your Father, who sees what is done in secret, will reward you.”

In the Christian tradition, the idea is that those who seek recognition from others in the here and now eliminate their opportunity to build character and, perhaps, gain other spiritual rewards. One may have earthly, social rewards, or longer-lasting spiritual rewards, but one may not have both.

Yet, I suspect there are many who would not claim Christianity but who are nevertheless repelled by the idea of someone asking for praise for donating a kidney. Those familiar with Friedrich Nietzsche’s writings will recall his extensive critique of Christian moral thought which, he wrote, “has waged deadly war against this higher type of man; placed all the basic instincts of his type under ban” (The Anti-Christ, p. 5). Nietzsche argued that traditional Christian morality — which he referred to as “slave morality” — served only to make humans weak, powerless, and full of resentment at those who were powerful and flourishing. One can imagine a Nietzschean critique of our distaste for those announcing their good deeds in the public square: perhaps, rather than a kind of virtuous disgust, what we are truly experiencing is resentment toward someone acting with more courage than we have.

No matter your opinion on Bad Art Friend and all the drama that story contains, it is worth reflecting on how we respond when someone announces their good deeds to the public. Why do we prefer discretion? What is wrong with desiring praise and honor? These questions may be worth investigating more deeply, lest we act out of ordinary human resentment rather than careful moral consideration.

Vaccine Hesitancy as Free-Riding

photograph of masked passengers on subway

As the pandemic rages on, attention is beginning to turn to the moral status of those who refuse the COVID-19 vaccine. Some of these individuals have succumbed to outlandish conspiracy theories concerning microchips and magnetic implants. But for most, vaccine hesitancy is instead the expression of a genuine concern regarding the safety of the vaccine. It was, after all, developed using a novel mRNA approach to vaccines, and approved in what seemed like an exceedingly short period of time. For these individuals, their hesitancy to receive the vaccine is not based on bad-faith conspiracies, but in a sincere — if scientifically unfounded — fear of the unknown.

There are many arguments we might make regarding those who are hesitant to take the vaccine. Some of these focus on the risk the unvaccinated pose to others who, for whatever medical reason, are unable to be vaccinated. Most of us agree that it is morally wrong of us to unnecessarily put others in harm’s way — particularly when that harm is as serious as hospitalization and death. Given this — and given the importance of ‘herd immunity’ to protecting the vulnerable — we might argue that it is morally wrong for those who can receive the vaccine to refrain.

But the argument I wish to consider here is different. It’s not based on the moral wrongness of failing to protect others, but instead on the unfairness of being a free-rider. What’s a free-rider? Put simply, it’s someone who affords themself a special privilege that they don’t allow for others. More specifically, free-riding occurs when someone receives a benefit without contributing towards the cost of its production. Suppose that my town runs a phenomenal public transport system. Suppose, further, that I frequently make use of this system — commuting to work via bus, and utilizing public transport to run all other kinds of errands. Because I’m particularly stingy, however, I refuse to ever pay a fare — instead sneaking onto buses and expertly avoiding those who would check my ticket. What I’m doing, it seems, is unfair to those who do pay their fare. Why? Because I’m carving out a special exception for myself; an exception that I don’t extend to others. I clearly value the public transport system, and therefore value the contributions of those who pay their fare (since, without those contributions, the system would cease to exist). At the same time, however, I refuse to make any contribution myself. This is deeply inconsistent. If I were asked why I can ride for free when others cannot, I would struggle to provide a good answer.

We might argue that the same is true of vaccine hesitancy. Mass vaccination is directed towards a clear public good — that is, the attainment of herd immunity. As such, we each must be willing to contribute towards the cost of its production. And that cost is receiving the vaccine.

But there’s one potential problem with this argument. As we’ve seen, someone is only a free-rider if they refuse to contribute to the cost of something from which they will benefit. In the case of mass vaccination, the benefit is the protection of those who are unvaccinated. But there’s the problem. As soon as someone contributes to this project by receiving the vaccine, they are no longer eligible to receive the benefit. Herd immunity doesn’t help those who are already vaccinated.

But this is to take an unnecessarily narrow view of the benefits of mass vaccination. Even if I am vaccinated, herd immunity might benefit me by protecting those who I care about — such as loved ones who are unable to receive the vaccine. Further, mass vaccination limits the opportunities for the virus to mutate into newer, more virulent strains (such as the Delta variant that has seen renewed breakouts around the world). And the benefits of mass vaccination extend even further than this. As a result of the pandemic, many of us have been — and continue to be — unable to work, unable to attend classes, unable to travel, and unable to reunite with loved ones. Our ability to do these things will continue to be limited to varying degrees until we find a way to end this pandemic.

All of us can agree that the world returning to normal is an unequivocal good, and the scientific data suggests that mass vaccination (around 80-90% of the population) is the most effective way of achieving this. Of course, more conspiratorially-minded individuals will disagree with this assertion. But this argument isn’t for those people. It’s for those who recognize that vaccination is required, but who — contrary to the evidence — still harbor concerns about its safety.

Essentially, it boils down to this: If a vaccine hesitant individual both (1) wants the world returned to normal, and (2) accepts that mass vaccination is the most effective way of doing this, then they must be willing to contribute to the cost of its production — namely, by receiving the vaccine. If not, then they need to provide a convincing reason as to why they get to be among the 10-20% of individuals who needn’t pay the cost of getting vaccinated. Some — like those who cannot receive the vaccine for medical reasons — will have good reason. But those who are merely hesitant will not. Many of us would love to “wait and see” what happens with the vaccine rollout, or avoid the inherent unpleasantness of an injection altogether. But we don’t have that luxury. The vulnerable must be protected, and the world must return to normal. By failing to contribute to this project, we are free-riding, and — like the fare-dodging bus passenger — treating those around us in a way that’s grossly unfair.

In Defense of Eating Dogs

photograph of stray dogs on street

In Western societies, dogs are regarded as our companions. As such, the idea that one might ever eat a dog would strike us as abhorrent. This view stands in stark contrast to that of many Asian countries, in which the consumption of dog meat is a regular part of their culture. However, attitudes appear to be shifting. As younger generations increasingly regard the practice as taboo, the president of South Korea has recently suggested that the time has come for the practice to be prohibited.

But should it be? I suspect that many people would regard the consumption of dogs as not only taboo, but morally wrong. However, this attitude seems to be inconsistent with our attitudes towards other animals.

I want to suggest that if there is nothing wrong with eating cows, chickens, and pigs, then there is nothing wrong with eating dogs. Conversely, if it is wrong to eat dogs, then it is also wrong to eat cows, chickens, and pigs. Regardless of what direction one goes with the reasoning, my point is that there is an inconsistency in how most people view dogs, cows, chickens, and pigs.

Can We Draw a Line?

Why might it be wrong to eat a dog? One answer is that dogs are companion animals. They are honorary members of our family, so to speak. Indeed, some dog owners refer to themselves as “dog moms” or “dog dads.” As such, it would be wrong to eat a dog because of the special status that we have given them.

The problem is that this association is contingent. Perhaps you might view your dog as part of your family, but that doesn’t mean everyone else views dogs in that way. Indeed, that is not how they are viewed by people who consume them as food and in societies where this practice is prevalent. If dogs only have significant value because we give it to them, then they don’t have it inherently. If that’s the case, then while eating dogs might be revolting or disgusting, it isn’t wrong. And just because something is offensive to one’s own tastes doesn’t mean it should be legally banned for everyone.

Another answer might be that dogs are what animal rights philosopher Tom Regan called “subjects of a life.” Dogs are conscious: they can experience pain, pleasure, and other aspects of consciousness. These qualities generate moral value which makes it wrong to kill them purely for the sake of consumption. While this argument shows that dogs have inherent value, it also applies equally to cows, chickens, and pigs — animals that we commonly consume. After all, all of these animals can feel pain and other aspects of consciousness. So why wouldn’t it be wrong to eat them? It seems that any property we think of is going to be a property that these other animals have.

As such, someone who accepts this line of reasoning must also be committed to stopping the consumption of these other animals. But that’s a tough bullet to bite, as many people who are opposed to dog consumption engage in other forms of meat consumption.

The point is that it’s arbitrary to draw a moral line at dogs but not at, say, cows. Consistency demands that we either embrace the permissibility of eating cows, chickens, and pigs (and therefore the permissibility of eating dogs), or embrace the wrongness of eating dogs (and therefore the wrongness of eating other animals).

Which Direction Should Consistency Take Us?

There are arguments to be made for either. I have argued that since it is not wrong to eat cows, chickens, and the like, it is not wrong to eat dogs. On the other side, Alastair Norcross has argued that since it’s wrong to eat dogs, most other kinds of meat consumption are therefore also wrong.

It’s worth taking a deep dive into the literature to build an informed view, but let’s table these arguments for a second. Most people lack the expertise, time, or willpower to confidently explore the academic literature. Indeed, unless you’re a professional philosopher, you likely haven’t taken deep dives into many of the beliefs you hold. In the absence of that, the next best thing is to work from our background knowledge and engage in critical and reflective deliberation on our beliefs. How might we do that in this case?

Suppose that you’re opposed to eating dogs. Ask yourself this: which is stronger – your intuition that it’s morally permissible to eat chicken, cows, and pigs, or your intuition that there is something wrong with eating dogs? I suspect that most people would answer the former — after all, even those who are opposed to eating dogs are generally OK with eating other kinds of meat. So if that intuition is stronger, perhaps consistency should weigh in favor of that intuition.

That is to say, if we are faced with a dilemma where both horns are counterintuitive (in this case, either we say that eating dogs is morally permissible, or we say that most meat consumption is morally impermissible), then we should go with the horn that preserves our strongest intuition. Our moral common sense is generally reliable, so if we are going to deviate from it, the smaller the deviation the better. In other words, if we are going to bite a bullet, we should bite the smaller bullet. Based on that rule of thumb, we should go with the view that it is morally permissible to eat dogs.

Of course, this isn’t the final say. We are just weighing intuitions, and intuitions and heuristics are defeasible. There are other factors we might need to consider. One might give an independent argument against meat consumption that is strong enough to override intuitions in favor of meat-eating that were not formed reflectively. On the other hand, one might enhance these intuitions by giving independent arguments to shore them up.

Note that I am not saying that someone who thinks it is OK to eat cows, chickens, and pigs must also be OK with personally eating a dog. There is no inconsistency in being willing to eat a cow but refusing to eat a dog, so long as the different attitude is not justified by an appeal to different moral status. The point is one about intellectual consistency.

Ethics and Job Apps: Why Use Lotteries?

photograph of lottery balls coming out of machine

This semester I’ve been 1) applying for jobs, and 2) running a job search to select a team of undergraduate researchers. This has resulted in a curious experience. As an employer, I’ve been tempted to use various techniques in running my job search that, as an applicant, I’ve found myself lamenting. Similarly, as an applicant, I’ve made changes to my application materials designed to frustrate those very purposes I have as an employer.

The source of the experience is that the incentives of search committees and the incentives of job applicants don't align. As an employer, my goal is to select the best candidate for the job. As an applicant, my goal is to get a job, whether I'm the best candidate or not.

As an employer, I want to minimize the amount of work it takes for me to find a dedicated employee. Thus, as an employer, I'm inclined to add ‘hoops’ to the application process: by requiring applicants to jump through those hoops, I make sure I only look through the applications of those who are really interested in the job. But as an applicant, my goal is to minimize the amount of time I spend on each application. Thus, I am frustrated with job applications that require me to develop customized materials.

In this post, I want to do three things. First, I want to describe one central problem I see with application systems — what I will refer to as the ‘treadmill problem.’ Second, I want to propose a solution to this problem — namely the use of lotteries to select candidates. Third, I want to address an objection employers might have to lotteries — namely that it lowers the average quality of an employer’s hires.

Part I—The Treadmill Problem

As a job applicant, I care about the quality of my application materials. But I don’t care about the quality intrinsically. Rather, I care about the quality in relation to the quality of other applications. Application quality is a good, but it is a positional good. What matters is how strong my applications are in comparison to everyone else.

Take as an analogy the value of height while watching a sports game. If I want to see what is going on, it’s not important just to be tall, rather it’s important to be taller than others. If everyone is sitting down, I can see better if I stand up. But if everyone stands up, I can’t see any better than when I started. Now I’ll need to stand on my tiptoes. And if everyone else does the same, then I’m again right back where I started.

Except, I’m not quite back where I started. Originally everyone was sitting comfortably. Now everyone is craning uncomfortably on their tiptoes, but no one can see any better than when we began.

Job applications work in a similar way. Employers, ideally, hire whichever candidate's application is best. Suppose every applicant spends just a single hour pulling together application materials. The result is that no application is very good, but some are better than others. In general, the better candidates will have somewhat better applications, but the correlation will be imperfect (since the skills of being good at philosophy only imperfectly correlate with the skills of writing good application materials).

Now, as an applicant, I realize that I could put in a few hours polishing my application materials — nudging out ahead of other candidates. Thus, I have a reason to spend time polishing.

But everyone else realizes the same thing. So, everyone spends a few hours polishing their materials. And so now the result is that every application is a bit better, but still with some clearly better than others. Once again, in general, the better candidates will have somewhat better applications, but the correlation will remain imperfect.

Of course, everyone spending a few extra hours on applications is not so bad. Except that the same incentive structure iterates. Everyone has reason to spend ten hours polishing, then fifteen. Everyone has reason to ask friends to look over their materials, then to hire a job application consultant. Every applicant is stuck in an arms race with every other, but this arms race does not create any new jobs. So, in the end, no one is better off than if everyone had just agreed to an armistice at the beginning.

Job applicants are left on a treadmill: everyone must keep running faster and faster just to stay in place. If you ever stop running, you will slide off the back of the machine. So, you must keep running faster and faster, but like the Red Queen in Lewis Carroll's Through the Looking-Glass, you never actually get anywhere.

Of course, not all arms races are bad. A similar arms race exists for academic journal publications. Some top journals have a limited number of article slots. If one article gets published, another article does not. Thus, every author is in an arms race with every other. Each person is trying to make sure their work is better than everyone else’s.

But in the case of research, there is a positive benefit to the arms race. The quality of philosophical research goes up. That is because while the quality of my research is a positional good as far as my ability to get published, it is a non-positional good in its contribution to philosophy. If every philosophy article is better, then the philosophical community is, as a whole, better off. But the same is not true of job application materials. No large positive externality is created by everyone competing to polish their cover letters.

There may be some positive externalities to the arms race. Graduate students might do better research in order to get better publications. Graduate students might volunteer more of their time in professional service in order to bolster their CV.

But even if parts of the arms race have positive externalities, many other parts do not. And there is a high opportunity cost to the time wasted in the arms race. This is a cost paid by applicants, who have less time with friends and family. And a cost paid by the profession, as people spend less time teaching, writing, and helping the community in ways that don’t contribute to one’s CV.

This problem is not unique to philosophy. Similar problems have been identified in other sorts of applications. One example is grant writing in the sciences. Right now, top scientists must spend a huge amount of their time optimizing grant proposals. One study found that researchers collectively spent a total of 550 working years on grant proposals for Australia’s National Health and Medical Research Council’s 2012 funding round.

This might have a small benefit in leading research to come up with better projects. But most of the time spent in the arms race is expended just so everyone can stay in place. Indeed, there are some reasons to think the arms race actually leads people to develop worse projects, because scientists optimize for grant approval and not scientific output.

Another example is college admissions. Right now, high school students spend huge amounts of time and money preparing for standardized tests like the SAT. But everyone ends up putting in the time just to stay in place. (Except, of course, for those who lack the resources required to put in the time; they just get left behind entirely.)

Part II—The Lottery Solution

Because I was on this treadmill as a job applicant, I didn't want to force other people onto a treadmill of their own. So, when running my own job search, I decided to adapt a solution to the treadmill problem that has been suggested for both grant funding and college admissions: I ran a lottery. I had each applying student complete a short assignment, and then ‘graded’ the assignments on a pass/fail system. I then chose my assistants at random from all those who had demonstrated they would be a good fit. I judged who was a good fit; I didn't try to judge, of those who were good fits, who fit best.

This allowed students to step off the treadmill. Students didn’t need to write the ‘best’ application. They just needed an application that showed they would be a good fit for the project.

It seems to me that it would be best if philosophy departments similarly made hiring decisions based on a lottery. Hiring committees would go through and assess which candidates they think are a good fit. Then, they would use a lottery system to decide who is selected for the job.

The details would need to be worked out carefully and identifying the best system would probably require a fair amount of experimentation. For example, it is not clear to me the best way to incorporate interviews into the lottery process.

One possibility would be to interview everyone who looks like a good fit. This, I expect, would prove logistically overwhelming. A second possibility, and the one I think I favor, would be to use a lottery to select the shortlist of candidates, rather than the final candidate. The search committee would go through the applications and identify everyone who looks like a good fit. They would then use a lottery to narrow the pool down to a shortlist of three to five candidates who come out for an interview. While the shortlisted candidates would still be placed on the treadmill, far fewer people would be subject to the wasted effort. A third possibility would be to use the lottery to select a single final candidate, and then use an in-person interview merely to confirm that the selected candidate really is a good fit. There is a lot of evidence that hiring committees systematically overweight the evidential weight of interviews, and that this creates tons of statistical noise in hiring decisions (see chapters 11 and 24 of Daniel Kahneman's book Noise).

Assuming the obstacles could be overcome, however, lotteries would have an important benefit in going some way towards breaking the treadmill.

There are a range of other benefits as well.

  • Lotteries would decrease the influence of bias on hiring decisions. Implicit bias tends to make a difference in close decisions. Thus, bias is more likely to flip the ranking of a first and second choice than it is to keep someone off the shortlist in the first place.
  • Lotteries would decrease the influence of networking, and so go some way towards democratizing hiring. At most, an in-network connection will get someone into the lottery, but it won't increase your chance of winning it.
  • Lotteries would create a more transparent way to integrate hiring preferences. A department might prefer to hire someone who can teach bioethics, or might prefer to hire a female philosopher, but not want to restrict the search to people who meet such criteria. One way to integrate such preferences more rigorously would be to explicitly weight candidates in the lottery by such criteria.
  • Lotteries could decrease intra-departmental hiring drama. It is often difficult to get everyone to agree on a best candidate. It is generally not too difficult to get everyone to agree on a set of candidates, all of whom are considered a good fit.
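The weighting idea in the third bullet can be sketched in code. This is a toy illustration only, not a proposal for real hiring software: the function, candidate names, and weights are all hypothetical, and the sampling scheme (probability proportional to weight, without replacement) is just one way to implement a weighted lottery.

```python
import random

def weighted_shortlist(candidates, weights, k):
    """Draw k distinct candidates, each picked with probability
    proportional to its remaining weight (sampling without replacement)."""
    pool = list(zip(candidates, weights))
    picks = []
    for _ in range(min(k, len(pool))):
        total = sum(w for _, w in pool)
        r = random.uniform(0, total)
        upto = 0.0
        for i, (cand, w) in enumerate(pool):
            upto += w
            if upto >= r:
                picks.append(cand)
                pool.pop(i)  # remove so no one is drawn twice
                break
    return picks

# Hypothetical example: candidates who can teach bioethics get double weight.
candidates = ["A", "B", "C", "D", "E"]
weights = [1, 2, 1, 2, 1]
print(weighted_shortlist(candidates, weights, 3))
```

Every candidate who passes the "good fit" screen still has a real chance of making the shortlist; the weights only tilt the odds toward departmental preferences rather than hard-filtering on them.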

Part III—The Accuracy Drawback

While these advantages accrue to applicants and the philosophical community, employers might not like a lottery system. The problem for employers is that a lottery will decrease the average quality of hires.

Under a lottery system, you should expect to hire the average candidate among those who meet the ‘good fit’ criteria. Thus, as long as trying to pick the best candidate yields a hire who is at least above average, the average quality of hires goes down with a lottery.

However, while there is something to this point, it is weaker than most people think. That is because humans tend to systematically overestimate the reliability of their judgment. When you look at the empirical literature, a pattern emerges: human judgment has a fair degree of reliability, but most of that reliability comes from identifying the ‘bad fits.’

Consider science grants. Multiple studies have compared the scores that grant proposals receive to the eventual impact of the research (as measured by future citations). Scores do correlate with research impact, but almost all of that effect is explained by the worst-performing grants getting low scores. If you restrict the comparison to the good proposals, reviewers are terrible at judging which of them are actually best. Similarly, while there is general agreement about which proposals are good and which are bad, evaluators rarely agree about which proposals are best.

A similar sort of pattern emerges for college admission counselors. Admissions officers can predict who is likely to do poorly in school, but can’t reliably predict which of the good students will do best.

Humans are fairly good at judging which candidates would make a good fit. We are bad at judging which good fit candidates would actually be best. Thus, most of the benefit of human judgment comes at the level of identifying the set of candidates who would make a good fit, not at the level of deciding between those candidates. This, in turn, suggests that the cost to employers of instituting a lottery system is much smaller than we generally appreciate.

Of course, I doubt I'll convince people to immediately use lotteries for major decisions. Thus, for now I'll suggest trying a lottery system for smaller, less consequential decisions. If you are a graduate department, select half your incoming class the traditional way, and half by a lottery of those who seem like a good fit. Don't tell faculty which are which, and I expect that several years later it will be clear the lottery system works just as well. Or, like me, if you are hiring undergraduate researchers, try the lottery system. Make small experiments, and let's see if we can't buck the status quo.

The Moral Dimension of Literary Translation

photograph of Amanda Gorman on stage

Twenty-two-year-old Amanda Gorman achieved literary stardom when she recited “The Hill We Climb,” a moving and optimistic reflection on American identity, at the 2021 presidential inauguration. Meulenhoff, a Dutch publishing house keen on taking advantage of Gorman's popularity, announced in March of 2021 that Marieke Lucas Rijneveld, a non-binary Booker Prize-winning novelist, would be translating the poem into Dutch with Gorman's approval. Though translators seldom make headlines, the choice sparked outrage, and Rijneveld stepped down from the role three days after the initial announcement. Many criticized the choice of a white translator when, as journalist Janice Deul pointed out, there were dozens of Black Dutch poets and writers who could have been selected for the job. These overlooked candidates are, in Deul's words, “spoken-word artist[s], young, female and unapologetically Black,” just like Gorman. This debate over who should be allowed to translate what reveals the philosophical underpinnings of translation and the moral conundrums translators often face.

As translator Mark Polizzotti explains in Sympathy for the Traitor: A Translation Manifesto, there are basically two schools of thought when it comes to what makes a “good” translation. The first holds that translators have a responsibility to retain the linguistic and emotional texture of the original work, however unpalatable that might be to their target audience: one should shoot for accuracy over clarity. The second holds that translations, like all literary works, should be enjoyable (or at least possible) to read, so it's acceptable to play with the source material for the sake of legibility. Susan Sontag remarked that translation is inherently “an ethical task, and one that mirrors and duplicates the role of literature itself, which is to extend our sympathies . . . to secure and deepen the awareness (with all its consequences) that other people, people different from us, really do exist.” Every choice a translator makes, no matter how granular, works towards that awareness, but translators must decide whether they want their audiences to be aware of the similarities or the differences between their culture and the one being translated.

One example demonstrates the difficulties both approaches run up against, and reminds us that even static and “safe” ancient texts can easily lead the unwary translator into a minefield. In Homer's epic poem The Iliad, the hair of the Greek hero Achilles is described as xanthos, translated as “yellow” by 19th-century poet Samuel Butler. But as Tim Whitmarsh, a professor of classical studies at Cambridge, explains, there is a wide gap between how we write about color and how the Greeks wrote about color. Words that we take to indicate color rarely denote a single shade in the original Greek; such words can imply movement, texture, depth, and even emotion. Achilles may be blonde, but xanthos can also denote a muddy brown. In the 20th-century Fagles translation of The Iliad, “yellow” becomes “fiery,” a very different word that gestures towards Achilles' famously irritable temperament as well as his hair's hue. Whitmarsh asks, “Behind this apparently simple question – how do we translate a single word from Greek into English – lies a huge debate, both philosophical and physiological, that has exercised scholars for more than a century: do different cultures perceive and articulate colours in different ways?” On the one hand, translators have to pay attention to these implications; there's something lazy at best and unethical at worst about misrepresenting or flattening a different culture to make it more palatable to modern readers. On the other hand, as many translators point out, too much strangeness puts us off. If Achilles' hair were described as “fiery-muddy-gold,” we might be taken out of the story for a moment; yet with “yellow” we lose all those expressive connotations of xanthos.

Translating, like writing, can never be neutral, regardless of what school you side with. In an essay responding to the Gorman controversy, Mridula Nath Chakraborty argues that translators have historically enabled the machinations of imperialism and colonialism, eliding or mangling cultural differences in service of domination. At the same time, Chakraborty reminds us that it is

“the essential element of unknowingness that animates the translator’s curiosity and challenges her intellectual mettle and ethical responsibility. Even when translators hail from—or belong to—the same culture as the original author, the art relies on the oppositional traction of difference . . . The act and the art of translation requires the permission to transcend borders, the permission to make mistakes, and the permission to be repeated, by anyone who feels the tempestuous tug, and the clarion call, of the unfamiliar.”

Both Sontag and Chakraborty agree that translation thrives on cultural difference, so how can we commit to social and political equality without losing that essential friction?

It may be impossible to strike a perfect balance. The American Literary Translators Association (ALTA) released an illuminating statement on the Gorman incident, saying,

“The question of whether identity should be the deciding factor in who is allowed to translate whom is a false framing of the issues at play. ALTA believes that if translators felt authorized to translate only those with whom they share an identity, it would be damaging to literary translation as a practice and as a profession.”

The real problem, the ALTA put forth, is the dearth of Black translators, who make up a small fraction of an already minuscule community; given structural inequalities that prevent people of color from accessing resources and career opportunities, it's inevitable that non-white translators will be rare. Furthermore, there's room for doubt over how desirable a perfect match would be, and Chakraborty's point about intercultural differences is especially salient here. While people of color from different nations are subjected to many of the same structural oppressions, it's shallow to assume that they share an identical outlook by virtue of their skin color, or that the experience of being Black in the Netherlands is perfectly comparable to being Black in America. Trying to find a translator who aligns perfectly along demographic lines with the original author is as quixotic as the quest for an objectively “perfect translation,” in other words.

Writer and translator Chris Fenwick commented that “The moral panic over these Amanda Gorman translations in part originates in a blindness to the transactional nature of translation. Someone is being paid to do a job, and a publisher has to decide who they give money to and whose career they potentially promote. It's not about who has the right to translate whom.” In other words, white Dutch writers can still translate Gorman if they wish; they just won't receive professional recognition for it. And although white translators may find it harder to get work translating non-white authors in the future, Fenwick questions whether such “commissions are central to anyone's livelihood. Most literary translators already have to do non-literary work.” Most working in the field are academics or writers who publish original work and take up translation as a side gig, so denying one translator a commission hardly condemns them to poverty.

The negative backlash Rijneveld faced over their decision to step down demonstrates a deeply ingrained discomfort with race, as well as a general ignorance of the logistics of translation. Though it may be impossible to find an ideal translator for any given work (and as Chakraborty suggests, such an ideal is as unproductive as it is unachievable), this incident cast light on an oft-neglected corner of the publishing world, and reminds us that all parts of the literary sphere have more work to do when it comes to structural inequality.

Moral Lessons from the Meng Wanzhou Affair?

airplane boarding on Xi'an airport runway

Now that Meng Wanzhou has finally returned home to China and Canadians Michael Kovrig and Michael Spavor have been released from Chinese custody, a situation that incited a great deal of moral controversy has been brought to a close. The two Canadians were widely believed to have been detained in retribution for the arrest of Meng, and given the state of relations between Canada and China, many wondered whether releasing Meng in exchange for the release of the two Michaels would simply be the better alternative. Last year, I covered some of the ethical concerns involved with this situation. But in light of the fact that the affair has now been settled, what is the status of these ethical issues in hindsight?

To briefly recap the situation: Meng was arrested by the RCMP in December 2018 at the request of the United States, which charged her with conspiracy to commit fraud. After the U.S. requested extradition, the matter was placed before Canadian courts. That same month China detained two Canadians, Michael Spavor and Michael Kovrig, who were later charged with espionage. The move has widely been taken to be retaliation for the arrest of Meng, coming after China threatened “grave consequences” for Canada, even though China insists that the arrests are unrelated. And, while Meng was placed on house arrest and forced to wear an ankle monitor while living in a Vancouver mansion, the two Michaels were subjected to hours of interrogation every day, were not permitted to go outside, and were limited in their ability to talk to their families.

As this situation stretched from weeks into years, many Canadians took the position that Canada should have resisted American calls for the arrest and extradition of Meng, or should have released her in exchange for the release of the Canadians. This proposal created a great deal of moral debate about the rule of law, arbitrary detention, and the potential precedents such a move might set in the world of “hostage politics.”

But now the situation has been resolved. On September 24th, the Department of Justice announced that a deferred prosecution agreement had been reached with Meng, which led to the withdrawal of its extradition request against her. That day, Meng boarded a plane and arrived in China after spending more than 1,000 days under house arrest. On the same day, in an apparently unrelated sequence of events, China released the two Michaels “for medical reasons,” and they were then flown home to Canada.

It is worth noting that many believe this situation was sparked by the United States as a politically motivated tool in its trade war with China. This is supported by remarks made by then-President Trump and Secretary of State Mike Pompeo, who suggested that they could intervene in the case for the sake of securing trade, and by the fact that the arrest was unprecedented. China's position, in response, is that this was politically motivated and that Canada conspired with the United States. Legal experts on extradition have called the case against Meng a “silly” “political type of enterprise.” Former Prime Minister Jean Chretien claimed that the “United States played a trick on Canada by forcing Ottawa to arrest Ms. Meng,” and many more prominent Canadians either called for Meng to be released and a prisoner exchange arranged, or shared the view that this was a political matter and not a legal one. Thus, Canadians were faced with the dilemma of either releasing Meng and angering the United States, or holding Meng and endangering their own citizens.

As I noted in my previous article on the subject, the Government of Canada’s position has always been that this is a legal matter falling under an independent judiciary, even accusing China of failing to understand such a concept. Meng, so the claim goes, had been charged with a crime and the extradition and trial process must be followed to preserve “the rule of law.” Thus, it would be a violation of such principles to offer to release Meng arbitrarily in order to secure a “hostage exchange.” A second argument was made that agreeing to release Meng in exchange for the two Michaels would set a bad precedent. Justin Trudeau argued that such an exchange would send a message to China that all they or anyone else had to do was arrest Canadians in order to pressure the Canadian Government and that this would put millions at risk.

So, how did this situation resolve itself? After several months of court proceedings, the Justice Department offered Meng a deferred prosecution deal on the condition that she admit guilt in misrepresenting Huawei's efforts to circumvent sanctions against Iran. According to the Americans, there was “no link” between the deferral agreement and the desire to secure the release of the Michaels. Meng was then released in Canada and sent back to China. Simultaneously, after having secured the “guilt” of the two Michaels for espionage weeks prior, China decided to release them on bail for “health reasons,” and they were subsequently sent back to Canada. Canada, the U.S., and China have all insisted that there was no deal, despite the entire affair seeming “highly choreographed.”

Indeed, many see the entire affair as nothing but a prisoner exchange or “hostage swap.” Of course, we may not know for sure what exactly happened. The U.S. claims that the decision was reached by the Department of Justice free from political tampering. China claims that they too were following the rule of law in finding the two Michaels guilty after their confession and later releasing them. But if this just was a prisoner swap in the end, what does this mean for those who wished to stand on principle or prevent the establishment of a precedent?

First, let’s consider that each side is being truthful: the resolution to this case was purely a legal matter, and that this was, as some believe, a triumph for the rule of law. It is difficult to see how. There is no legal consensus that the case against Meng wasn’t politically motivated to begin with. So, the fact that the issue was settled in a manner consistent with legal procedure doesn’t support the idea that this was a victory for the rule of law. If anything, we are still left with questions about whether the law is being used in an arbitrary way for political ends. But, there is also the public perception of the affair to consider as well. Given the seemingly suspicious nature of the exchange, one wonders whether the public will see this as a success for the rule of law?

On the other hand, if there was some sort of coordination, if, in the end, this situation was only settled by an exchange, then what was the point of standing on principle, whether for the rule of law or to avoid setting a bad precedent? What end did insisting on such principles serve if we were just going to engage in an exchange anyway? Could a great deal of suffering have been avoided to achieve a similar result? Did, at the end of the day, the detention of Meng and the two Michaels actually achieve anything as morally important as what it ultimately cost in moral terms? As my previous article covered, it was always a murky argument that the rule of law would not permit the Canadian Government to facilitate Meng's release, so the notion that Canada's sticking it out until legal proceedings could conclude was thereby a victory for the rule of law is not certain either.

Either way, this doesn’t seem like a great principled victory for the rule of law. Perhaps if there is a moral lesson to be learned for Canada it is that principles can be great ideals, but that their application must factor in the situation they are applied to. This is particularly true if, as in this case, it seems that following the rule of law in the way the Canadian government chose to conceive of such a principle only served to deny Canadians their rights for years.

Background Checks for Alcohol: A Response

photograph of gun with bullets and glass of alcohol on table

The other night, my wife and I went to our local brewery. They had posted on Facebook that their Double IPA is back. It is, and this is no exaggeration, one of the best beers I have tasted – strong, without being overpowering, and a smoothness rarely found at such an ABV. I had two pints of it. Tonight, I’ll have a couple of glasses of wine with a few friends.

None of this should sound particularly controversial. But, in a thought-provoking piece in this venue, Tim Hsiao argues we should treat this like buying a firearm, and if there should be background checks for firearms then there should be background checks for buying alcohol (and if there shouldn’t be checks for alcohol, there shouldn’t be checks for firearms). I want to probe his argument by looking at some of the background assumptions in place.

Every few months, there seems to be a piece on Americans’ relationship with alcohol (sometimes sponsored by companies with a vested interest in stoking some fear). A recent piece by Kate Julian in The Atlantic is badly titled “America Has a Drinking Problem.” It’s a bad title because it falls into the trope of always writing about drink in terms of a problem, but the piece is much more nuanced: “Am I drinking too much? And: How much are other people drinking? And: Is alcohol actually that bad? The answer to all these questions turns, to a surprising extent, not only on how much you drink but on how and where and with whom you do it.”

The conclusion is that the sort of drinking I spelled out in the opening paragraph is good. Summarizing Edward Slingerland’s Drunk, Julian notes how drinking helps us be more creative and enhances social bonding. And she points out that, especially after the asocial years of pandemic living, being sociable is positively good for us and supports our health.

But not all social drinking sees the benefits outweigh the cost. Drinking can make us aggressive, damage our livers, and can be addictive. But what is key, and what Julian stresses, is that there are different forms of drinking. And there is a large class of drinking – moderate social drinking – that has a substantial benefit.

To recognize this undermines Hsiao’s argument that we should treat firearms and drinking the same. His argument is that alcohol abuse causes more deaths than firearm use and is involved in many more crimes. But this blunt comparison runs all forms of drinking together and ignores the benefits. The fact that a large class of drinking plausibly is a net social good means that Hsiao must do much more work to reach his conclusion. He needs to show that firearm ownership is as beneficial as drinking and that the costs of background checks are similarly proportionate. Otherwise, the analogy falls apart.

But what are the net benefits of firearm ownership? For one, there is hunting, which provides both a source of nutrition and an important social activity for many. But a 2013 study found that around half of gun owners own a firearm for self-defense purposes. There is an argument, albeit a contested one, that owning a gun for self-defense actually increases your risk of harm, because it increases the risk of accidents, misuse, and even suicide. Further, the U.S. has a much higher rate of gun violence than other wealthy countries, many of which have stricter controls on gun ownership.

So, we have seen a plausible argument that alcohol consumption is (in general, or at least in a major set of cases) good, and a plausible argument that owning a gun, given the risk of misuse, accidents, suicide, or violence, may well be a net negative. Plausibly, then, mandating background checks raises the likelihood that firearms are put to their beneficial uses: appropriate self-defense or hunting, say.

Perhaps this sets up an argument that some firearms and some drinking should not face background checks, but others should. The other side of the coin, though, is that background checks on any form of alcohol consumption would be much more onerous than checks on firearms. For one, alcohol is consumed far more immediately than firearms are used. After all, few people buy a firearm for immediate self-defense or a last-second hunting trip, but we buy beers for immediate consumption or a bottle of wine to take to a dinner party.

Further, there are many more individual transactions involving alcohol. Americans buy around 40 million firearms a year, and there are around 400 million firearms in the U.S. According to one estimate, the average drinking-age adult drinks around 200 pints of beer a year (to say nothing of cider, wine, or liquor), and there are, conservatively, 200 million drinking-age adults in the U.S. I've struggled to find any precise statistic on the number of alcohol transactions per year, but 200 million adults drinking 200 pints each comes to 40 billion pints; even bought in packs of 16 cans, that is on the order of 2.5 billion transactions a year, dwarfing the 40 million annual firearm sales.

Even if drinking alone or binge drinking is a net loss, it would be onerous to require everybody either to undergo a background check or somehow to prove that they are drinking socially. For Hsiao's argument to go through, he would have to show that the costs of problematic drinking so outweigh the benefits of social drinking that they justify treating alcohol purchases like firearm purchases, and this must take account of the extra cost involved: people buy alcohol far more often than they buy firearms.