
Gas Stoves: A Kitchen Culture Clash

photograph of gas burner being lit

Progressive and conservative media flared up last month over an issue tucked into the side of your kitchen: gas stoves. This surprise episode in America’s culture wars aired after a Biden administration official, the chairman of the Consumer Product Safety Commission, suggested that the commission was considering restricting or even banning gas stoves in the wake of a new study published in the International Journal of Environmental Research and Public Health alleging that gas stoves threaten public health and damage the environment. This kitchen-equipment drama featured conservative media lambasting the administration for its latest show of “paternalism” and “green extremism,” progressive media rushing both to deny that the administration was moving to ban gas stoves and to defend the soundness of bans already passed in states like New York, and the Biden administration itself denying that it wants to ban them entirely while supporting the states that have.

In the midst of all this political hubbub, many are left to wonder: why does it matter whether gas stoves are banned?

The pro-ban crowd shares a few reasons for its case. First, gas stoves pose a risk to public health. The study that prompted the ban debate alleges that gas stoves emit enough detrimental fumes that children who inhale them risk developing asthma. Second, gas stoves damage the environment. The fumes from gas stoves contain enough greenhouse gasses to contribute to climate change. Thus, to slow the rate of climate change, one small but meaningful change we can make to our lives is to switch out our gas stoves for electric ones. Even if the change might pale in comparison to other solutions — like moving away from fossil fuels in our electricity supply — we should do it anyway because, as some say, we must treat climate change as a World War II-esque threat and mobilize all of our available resources to fight it. Thus, to protect public health and the environment, the pro-ban team says we should ban gas stoves.

The anti-ban crowd shares a few reasons for its case. First, they allege that the study used to justify public health and environment concerns lacks scientific merit and is only being touted for aligning with the partisan motives of the Biden administration. They say that the study’s findings are misdirected; if true, this would not only undermine the case for banning gas stoves but would erode trust in the Biden administration: surely it is wrong to distort science to further one’s political agenda, an especially nefarious type of virtue signaling.

Second, they allege that even if there were some slight detrimental effects of gas stoves concerning public health and the environment, the cost of keeping gas stoves is surely lower than the cost — to consumers’ wallets and freedom — of replacing all gas stoves with electric stoves. Thus, it would be imprudent to ban gas stoves; this side may recognize that climate change is real, but they also recognize that unchecked, militaristic zeal to “fight climate change” might create graver problems than it solves. Such a climate crusade might keep the Earth’s average temperature less than 1.5 degrees Celsius above pre-industrial levels, but it could spark inflation, arrest economic growth, and thus also cause chronic unemployment.

Third, along similar lines, the anti-ban crowd alleges that there is a glaring inconsistency in the principles behind support for banning gas stoves: if equipment that risks harm to public health and the environment should be banned, then shouldn’t we ban cars, trains, ships, and planes? Thus, according to the anti-ban crowd, we should reject the mentioned principle that underlies the logic of the pro-ban crowd, for, if we followed it to its logical conclusions, we would have to commit ourselves to policies that we cannot undertake.

Fourth, perhaps more obviously, the anti-ban crowd fears that banning gas stoves would violate the principle of consumer autonomy through excessive government oversight in the kitchen. In short, the anti-ban crowd objects to gas stove bans on the grounds that they are motivated by the wrong things, are imprudent, and derive from an untenable principle.

Thus, there are two main camps on the gas stove issue, and neither seems willing to budge. Yet, the responsible citizen should resist the temptation to turn to tribalism and deny that the other side has good points. Although an anti-ban zealot might claim that the pro-ban crowd represents the side of green hysteria and government nannying, no one can disagree that public health and environmental care are important. Likewise, although a pro-ban zealot might claim that the anti-ban crowd is motivated by feigned outrage, fanned by the specious reasoning and spicy rhetoric of conservative media, no one can deny that honesty in science, prudence in policy, and soundness in principle are noble aims toward which we should all strive.

Ultimately, each side takes its respective stance with admirable intentions, and the responsible citizen should authentically engage with each side, listening to its reasoning and judging the issue for themselves. In doing so, they should ask themselves: what is the key ingredient of a healthy, environmentally clean kitchen — individual responsibility or government intervention?

On “Just Asking Questions”

photograph of journalist raising hand

From the “why” stage of toddlerhood to Socratic questioning, from scientific inquiry to question-based religious traditions, questions play an important role in our understanding of the world. Some believe that no questions should be off-limits in a free society, but that idea has recently received significant push-back.

Take, for example, the February 15 open letters criticizing The New York Times for its coverage of trans issues. One letter, co-signed by numerous LGBTQ advocacy groups and public figures, calls out NYT for “just asking questions” about trans healthcare in a way that has negative real-world consequences. Note the scare quotes around the accusing phrase, which suggest that the questioning is irresponsible, misleading, or inauthentic.

The charge of “just asking questions” does not primarily concern the legal status of these questions or their protection under the First Amendment. The issue is, rather, a moral one. Are some lines of questioning irresponsible — even immoral? And what makes them so? (I’ll assume those two questions are permitted.) Let’s start with a brief discussion of how one might defend inquiry without limits, and where that defense might go wrong.

The defense of no-limits questioning might go broadly like this:

Statements make claims about the world, so they are the sort of things that can be right or wrong. But questions don’t make any claims; they’re just requests for information. So, asking a question is never wrong. In fact, asking questions is the way to learn more about the world.

There are at least two problems with this reasoning. The first problem is that, while questions technically don’t make claims, they do affirm claims in a subtler way through the assumptions embedded in them. In philosophy of language, these assumptions are called presuppositions, many of which are innocuous. For example, “What classes are you taking this fall?” presupposes that the person you’re asking is taking classes this fall. In many contexts, that claim is harmless enough.

Other presuppositions are not so innocent. Consider the following question: “When we measure human intelligence, which race comes out as genetically superior?” This question, researched numerous times in recent decades, makes a number of dubious assumptions, including that human intelligence can be measured by our tools, that our tools measure it accurately, and that intelligence has a genetic basis. Sure, a question cannot be false. But it can presuppose claims that are dubious or outright false.

Asking a question in a certain context also has implications beyond the claims it presupposes. One important implication is made whenever a question is posed non-rhetorically in a public forum: the implication that the question is an open question.

An open question is one whose answer has not been definitively settled. “Have you eaten yogurt today?” is probably not an open question for you. You know what the answer is, and outside of a philosophy class you don’t have much reason to doubt that answer. Similarly, “Is the earth flat?” is not an open question. The answer has been known for millennia.

So, when a column in The New York Times asks, “Could some of the teenagers coming out as trans today be different from the adults who transitioned in previous generations?”, that wording implies that these differences might be significant — significant enough to potentially overrule decades of well-established and evidence-based medical practices. The article does mention the precedent for positive outcomes with respect to these practices, but in a way that invites speculation that the precedent no longer applies — crucially, without providing support for why these differences would be significant enough to undermine the precedent. Asking a question can thus be irresponsible, when it relies on false or dubious presuppositions or when it treats a question as open without — or in opposition to — evidence.

There’s another problem with the defense of no-limits questioning above: the argument equivocates on “right” and “wrong.” A question itself cannot be false the way a statement can (though, of course, its presuppositions can be false), but that doesn’t settle the issue of whether or not asking a question can be wrong morally. Let’s briefly consider two moral issues: asking a question in bad faith and asking a question with harmful consequences.

Asking a question in bad faith means asking inauthentically — without a willingness to accept the answer, with a purpose to obscure the truth, or without a desire to learn.

One example might be someone in a class who plays devil’s advocate, asking questions that are purposely contrary simply because they enjoy challenging others’ ideas. This behavior, beyond being personally frustrating, can also inhibit learning. When someone takes up time asking fruitless questions, they leave less time for honest inquiry.

In some cases of bad-faith inquiry, the questioner is simply not interested in the answer at all. Consider a recent video (released on Twitter) in which former President Trump asks a House committee to investigate specific questions regarding the possibility of interference in the 2016 presidential election. As Washington Post analyst Philip Bump points out, these questions have already been answered in federal investigations. But finding out the answers isn’t the point. The rhetorical effect of garnering support is achieved just by asking them.

Beyond the issue of authenticity, asking questions irresponsibly can have harmful consequences. Some of those consequences occur on a personal level. For example, when people from privileged social groups ask people from marginalized social groups to explain the history of their oppression, that can unjustly burden them. Regardless of the intent behind asking these questions, marginalized people can end up doing extra educational and emotional work to make up for others’ poor education.

Some questions, such as those asked in major news outlets, have far-reaching effects. As GLAAD (co-author of one of the open letters mentioned at the start of this article) notes, multiple New York Times articles have been directly cited in defense of a law criminalizing providing gender-affirming care to minors in Alabama. Put simply, the questions asked in public venues make a difference in the world, and not always for good.

These considerations make it clear that questions are subject to both factual and moral evaluation. Faulty presuppositions, bad-faith motives, and harmful consequences can all contribute to making a question problematic. “Just asking questions” isn’t always an innocent enterprise.

ChatGPT: The End of Originality?

photograph of girl in mirror maze

By now, it has become cliché to write about the ethical implications of ChatGPT, and especially so if you outsource some of the writing to ChatGPT itself (as I, a cliché, have done). Here at The Prindle Post, Richard Gibson has discussed the potential for ChatGPT to be used to cheat on assessments, while universities worldwide have been grappling with the issue of academic honesty. In a recent undergraduate logic class I taught, we were forced to rewrite the exam when ChatGPT was able to offer excellent answers to a couple of the questions – and, it must be said, completely terrible answers to a couple of others. My experience is far from unique, with professors rethinking assessments and some Australian schools banning the tool entirely.

But I have a different worry about ChatGPT, and it is not something that I have come across in the recent deluge of discourse. It’s not that it can be used to spread misinformation and hate speech. It’s not that its creators OpenAI drastically underpaid a Kenyan data firm for a lot of the work behind the program only weeks before receiving a $10 billion investment from Microsoft. It’s not that students won’t learn how to write (although that is concerning), nor the potential for moral corruption, nor even the incredibly unfunny jokes. And it’s certainly not the radical change it will bring.

It’s actually that I think ChatGPT (and programs of its ilk) risks becoming the most radically conservative development in our lifetimes. ChatGPT risks turning the classic-FM-radio formula into a framework for societal organization: the same old hits, on repeat, forever. This is because, in order to answer prompts, ChatGPT essentially scours the internet to predict

“the most likely next word or sequence of words based on the input it receives.” -ChatGPT

At the moment, with AI chatbots in their relative infancy, this isn’t an issue – ChatGPT can find and synthesize the most relevant information from across the web and present it in a readable, accessible format. And there is no doubt that the software behind ChatGPT is truly remarkable. The problem lies with the proliferation of content we are likely to see now that essay writing (and advertising-jingle writing, and comedy-sketch writing…) is accessible to anybody with a computer. Some commentators are proclaiming the imminent democratization of communication while marketers are lauding ChatGPT for its ability to write advertising script and marketing mumbo-jumbo. On the face of it, this development is not a bad thing.

Before long, however, a huge proportion of content across the web will be written by ChatGPT or other bots. The issue with this is that ChatGPT will soon be scouring its own content for inspiration, like an author with writer’s block stuck re-reading the short stories they wrote in college. But this is even worse, because ChatGPT will have no idea that the “vast amounts of text data” it is ingesting is the very same data it had previously produced.

ChatGPT – and the internet it will engulf – will become a virtual hall of mirrors, perfectly capable of reflecting “progressive” ideas back at itself but never capable of progressing past those ideas.
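To see the worry in miniature, consider a toy experiment — an illustration only, not a description of how ChatGPT is actually built or trained. A tiny bigram language model is fitted to a small corpus, asked to generate new text, and then refitted on nothing but its own output, generation after generation. The corpus and function names below are invented for the sketch.

```python
import random

def fit_bigrams(words):
    """Maximum-likelihood bigram table: word -> list of observed next words."""
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng):
    """Sample a new 'corpus' by walking the bigram table from a start word."""
    out = [start]
    for _ in range(length - 1):
        options = table.get(out[-1])
        if not options:          # the walk dead-ends if a word has no known successor
            break
        out.append(rng.choice(options))
    return out

rng = random.Random(0)
corpus = ("the cat sat on the mat while the dog slept by the warm fire and "
          "the bird sang a quiet song above the sleeping cat").split()

# Each generation trains only on the previous generation's output.
for gen in range(6):
    print(f"generation {gen}: {len(set(corpus))} distinct words")
    corpus = generate(fit_bigrams(corpus), corpus[0], 40, rng)
```

Because the model can only reproduce word-pairs it has already seen, any phrase that fails to appear in one generation’s output is gone from every generation after it, and the vocabulary typically shrinks toward a few well-worn loops: the same old hits, on repeat. Real systems are vastly more sophisticated, but the one-way direction of the loop is the point of the analogy.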

I asked ChatGPT what it thought, but it struggled to understand the problem. According to the bot itself, it isn’t biased, and the fact that it trains on data drawn from a wide variety of sources keeps that bias at bay. But that is exactly the problem. It draws from a wide variety of existing sources – obviously. It can’t draw on data that doesn’t already exist somewhere on the internet. The more those sources – like this article – are wholly or partly written by ChatGPT, the more ChatGPT is simply drawing from itself. As the bot admitted to me, it is impossible to distinguish between human- and computer-generated content:

it’s not possible to identify whether a particular piece of text was written by ChatGPT or by a human writer, as the language model generates new responses on the fly based on the context of the input it receives.

The inevitable end result is an internet by AI, for AI, where programs like ChatGPT churn out “original” content using information that they have previously “created.” Every new AI-generated article or advertisement will be grist for the mill of the content-generation machine and further justification for whatever data exists at the start of the cycle – essentially, the internet as it is today. This means that genuine originality and creativity will be lost as we descend into a feedback loop of increasingly sharpened AI-orthodoxy; where common-sense is distilled into its computerized essence and communication becomes characterized by adherence. The problem is not that individual people will outsource to AI and forget how to be creative, or even that humanity as a whole will lose its capacity for ingenuity. It’s that the widespread adoption of ChatGPT will lead to an internet-wide echo chamber of AI-regurgitation where chatbots compete in an endless cycle of homogenization and repetition.

Eventually I was able to get ChatGPT to respond to my concerns, if not exactly soothe them:

In a future where AI-generated content is more prevalent, it will be important to ensure that there are still opportunities for human creativity and original thought to flourish. This could involve encouraging more interdisciplinary collaborations, promoting diverse perspectives, and fostering an environment that values creativity and innovation.

Lofty goals, to be sure. The problem is that the very existence of ChatGPT militates against them: disciplines will die under the weight (and cost-benefits) of AI; diverse perspectives will be lost to repetition; and an environment that genuinely does value creativity and innovation – the internet as we might remember it – will be swept away in the tide of faux-progress as it is condemned to repeat itself into eternity. As ChatGPT grows its user base faster than any other app in history and competitors crawl out of the woodwork, we should stop and ask the question: is this the future we want?

Elizabeth Holmes & the Right to a Trial by Jury

photograph of empty jury box

Elizabeth Holmes’s treatment by the criminal justice system was substantially different from the treatment most criminal defendants receive. Was she the beneficiary of white privilege, or perhaps of what anthropologist David Graeber called “the communism of the rich”?

Holmes had been widely hailed as the world’s youngest self-made female billionaire, but in January of 2022, the founder and former CEO of Theranos – a medical technology company – was convicted on one count of conspiracy to defraud and three counts of fraud. In total, she was convicted of defrauding investors of over one hundred forty million dollars.

Prosecutors argued that Theranos’s technology, meant to dramatically reduce the amount of blood drawn for tests, did not work – and she knew it. Theranos actually lost more like six hundred million dollars of investor funds, but the jury deadlocked on three of the charges and she was acquitted on four others.

The prosecutors could have retried Holmes on those three charges, but, even before sentencing, they announced they were dropping them. The judge had the option of sentencing her to twenty years on each count, and letting the counts run consecutively, which would have meant an eighty-year sentence. She was sentenced to eleven years.

The trial was delayed to allow Holmes to give birth, though an estimated 58,000 women are sent to prison while pregnant every year. Now that she’s been sentenced, she is still free on bail awaiting an appeal even though she was recently caught booking a one-way plane ticket to Mexico. (Her bail, by the way, is $500,000.)

Some legal commentators argue that her treatment has not been that unusual, at least for “someone like Holmes, who is not a danger to society.” Which might make one wonder who, in general, constitutes a danger to society and who does not.

That question seems especially salient since, in California (where Holmes was tried), the potential punishment for stealing anything over $950 is three years in jail and a $10,000 fine. The bail in felony theft cases is typically between $20,000 and $100,000.

On the other hand, concurrent sentences are not unusual and the maximum sentence is rarely the one applied. Delays and postponements of various types are not atypical. Arguably, the most striking discrepancy is in the amount of bail.

However, even if Holmes’ treatment has been unfairly lenient, nothing mentioned so far touches on the most significant inequity. The fundamental unfairness that now underlies the criminal justice system of the United States is invisible to most of us, most of the time.

What Elizabeth Holmes received, because of her wealth and privilege, that almost no one else in America receives, is a jury trial.

Arguably, the right to trial by jury is the oldest and most essential right. The Magna Carta declared, “No man (sic) shall be taken, outlawed, banished, in any way destroyed, nor will we proceed against or prosecute him (sic), except by the lawful judgments of his peers…” The importance of jury trials to liberty was one of the few things on which Thomas Jefferson and James Madison agreed. Jefferson regarded trial by jury “as the only anchor yet imagined by man, by which a government can be held to the principles of its constitution,” while Madison, the author of the Constitution, wrote that it “is as essential to secure the liberty of the people as any one of the pre-existent rights of nature,” including freedom of religion and speech.

Yet, only 2% of the 80,000 people charged in federal courts in 2018 received a jury trial. The states do a little better, but consistently less than 5% of defendants in state courts get a jury trial.

We often speak very abstractly about rights and freedoms. But it is in the application of criminal and civil law that the power of the state – to take your money, your freedom, even your life – becomes terrifyingly concrete. Yet, modern political philosophers have had very little to say about the moral and democratic significance of juries. They have focused almost exclusively on the epistemic value of juries; that is, on how well juries succeed at getting at the truth.

The most influential epistemic argument, Marquis de Condorcet’s “jury theorem,” shows, in a rigorous way, that (i) on an issue with two alternatives, (ii) where a decision is made independently by each participant, (iii) and there exists an objectively right decision, (iv) assuming each decision-maker has an even slightly greater than 50% chance of making the right call, a group of 5 or more people has a high likelihood of making the correct decision – and a group of 12 an even higher one.
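To make the theorem’s arithmetic concrete, here is a minimal sketch of the calculation it describes. The 60% individual accuracy below is an illustrative assumption, not a figure from this article, and the coin-flip rule for ties is a common convention, since the classical theorem is stated for odd-sized groups.

```python
from math import comb

def condorcet_majority(n: int, p: float) -> float:
    """Probability that a simple majority of n independent voters, each correct
    with probability p, picks the right one of two options. Even-sized groups
    can tie; a tie is resolved here by a fair coin flip."""
    strict = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))
    tie = comb(n, n // 2) * p**(n // 2) * (1 - p)**(n // 2) if n % 2 == 0 else 0.0
    return strict + 0.5 * tie

# Illustrative figures only; p = 0.6 is an assumption.
for n in (1, 5, 12):
    print(n, round(condorcet_majority(n, 0.6), 3))  # -> 0.6, 0.683, 0.753
```

On these assumed numbers, a lone decision-maker is right 60% of the time, a group of 5 about 68% of the time, and a group of 12 about 75% of the time.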

The trouble is that not all these conditions hold. For example, jurors do not make their calls independently. They deliberate. Nor is it clear that the typical juror has a better than 50% chance of getting the verdict right. The empirical evidence on the epistemic value of juries is mixed, although some studies suggest that juries seem to do not too bad (to put it scientifically).

Accuracy, however, is not the issue. Juries are a requirement of democracy.

“Rule by the people” requires that a representative sample of your fellow citizens stand between you and the state as a buffer on the application of state power in its most literal form: the power to use violence to arrest and/or detain you or deprive you of property.

Without juries, the state becomes the sole arbiter, not just of the law, but of the facts. Even if juries don’t succeed at getting the facts right, at least in a jury trial the facts are not controlled solely, and entirely, by the state.

Why so few jury trials then? Well, 97% of federal cases and 94% of state trials are settled instead by plea bargaining. Among the tools deployed to obtain a plea while avoiding a jury trial are holding people in indefinite detention with an unaffordable level of bail, charging the accused with more, and more serious, charges than their actual conduct would merit (“overcharging”), and threatening onerous mandatory-minimum sentences.

Why is it this way? Cost is undeniably a factor. A trial in federal court costs over half a million dollars. If every felony charged in 2021 resulted in a jury trial, the cost to the state would exceed twenty-eight billion dollars. On the other hand, twenty-eight billion dollars is only one third of one percent of the federal budget in 2021. If we don’t want to pay that much for jury trials, maybe we should look for ways to deal with social problems that don’t involve imprisoning people.

In any case, whether Elizabeth Holmes “bought,” or received, special treatment is debatable, but that she is better off than most of us because she could afford a jury trial is not.

Whole Body Gestational Donation

photograph of pregnant belly

In her recent paper, “Whole Body Gestational Donation,” Oslo University-based ethicist Anna Smajdor proposed a thought experiment in which the bodies of brain-dead women were used as biological incubators to gestate humans from conception to birth. Her argument proceeds along the lines of traditional posthumous organ donation: if we’re comfortable with the regulatory and ethical systems underlying the gifting of individual body parts (hearts, kidneys, livers, eyes, etc.), then we should allow consenting women to donate their entire body to act as a deceased surrogate. And if we have some discomfort with the latter prospect, and we are committed to the idea of treating like for like, then perhaps there is something wrong with the more traditional form of donation. But, conversely, if we’re happy with the former, we should be satisfied with the latter.

Unsurprisingly, given the controversial subject matter, her paper blew up. Both curious and indignant responses have come from broadcasters and outlets across the spectrum, including Fox News, Cosmopolitan, BioEdge, and Women’s Health, to name just a few. Smajdor received such vitriol because of this coverage that she wrote a follow-up piece for The Progress Educational Trust, providing some context to her thoughts and defending the work, emphasizing that it was not a policy suggestion but, rather, a way of highlighting a potential inconsistency in how we understand postmortem donation.

Now, much could be written about how media outlets have covered (and, as Smajdor suggests in her response, deliberately misconstrued) her argument. Instead, however, what I want to do here is engage with the work itself. Specifically, I want to discuss the best use of donated organs.

But, before doing so, I feel it’s important to acknowledge that the prospect of women being used as tools for gestation after brain death is bleak. Rather than being taken off ventilation and allowed to die promptly (and maybe with some dignity), the idea that doctors could keep these women artificially alive simply so their reproductive organs can work to grow a fetus for a third party needing a surrogate is, on the face of it, horrifying. It, not unjustly, conjures up intense emotional discomfort for many. But, as Smajdor notes in her paper and response, simply finding something unpleasant isn’t a sufficient justification to consider it immoral or impermissible.

Many things that we now think acceptable, maybe even good, were at one point lambasted because of their seemingly clear immorality (heart transplants, for example). Ultimately, the “wisdom of repugnance,” as Leon Kass terms it, may give us reason to pause for thought but is not a good enough reason to outright disregard a proposal.

What, then, is the problem (or at least one of the problems) with Smajdor’s proposal? For this article’s purposes, the answer comes down to a numbers game. Specifically, how many people can the organs from a single cadaver help?

In the right conditions – that is, if the cause of death isn’t something that makes the organs unusable – a single deceased organ donor can save up to eight lives. Each kidney can be donated to a different individual, freeing them from dialysis (on average, for someone on dialysis, life expectancy is five to ten years). A single liver can be split into two and donated to two more people. Each lung can go to a different individual, helping another two people. Finally, the pancreas and the heart can each help one more person. It is not just life-saving body parts like these that clinicians can harvest after death: corneas, skin, tendons, ligaments, blood, bone, bone marrow, and even the hands and face can be donated to those who need them. In fact, according to the U.S.’s Health Resources & Services Administration, a single deceased donor can save eight lives and help another seventy-five.

Not everyone who signs up to be a deceased donor can donate the full range of body parts. There are multiple reasons why this may be the case, from medical to social to religious. Even with this acknowledgement, however, each person who agrees to donate their organs and other biological materials does something which can fundamentally change many people’s lives for the better.

Each part of the body that is donated is a gift of immeasurable worth, one that we must think carefully about how best to use. To waste such organs or consolidate them so that they help only a tiny few is to do a great disservice to the person who, by donating their body after death, undertakes an act of immense selflessness and beneficence.

It is here that whole body gestational donation runs into a problem.

Using someone’s body for gestation means that those organs and tissues cannot be relocated and used for another purpose or help another person. Instead, the life-saving or life-enhancing organs and tissues will be occupied for the nine months during which the donor’s reproductive organs are used to grow a human. For example, you can’t harvest the heart from a brain-dead person if the cadaver is already using that heart to pump blood around the body during gestation. The same is true of other organs, which will need to remain in the body to ensure that pregnancy can occur and delivery is successful.

A potential counterargument is that not all organs are required for persons to gestate or even live. Living organ donation happens regularly and doesn’t result in that person’s untimely demise. You can donate part of your liver or pancreas, an entire kidney or lung and keep on living, albeit with some health implications. It seems theoretically possible that the same could be true for whole body gestational donation. Some organs and tissues would need to remain for the pregnancy to occur, while others could be harvested and donated to those in need. In effect, splitting the donation allocation into those required for gestation and those not.

Beyond the unpleasantness of such a proposition (which, again, isn’t sufficient to rule out the proposal), there may likely be practical reasons why this isn’t possible.

As Smajdor herself notes, pregnancy isn’t a benign process. On the contrary, it carries severe dangers and takes a not-inconsiderable toll on the human body. This is as likely to be the case for a dead body as it is for a living one.

As such, harvesting multiple organs and tissues while simultaneously expecting the brain-dead body to gestate successfully might simply be asking too much. Ultimately, the body may be unable to handle the biological load of pregnancy without relying upon the full range of life-sustaining organs.

In traditional, post-donation pregnancies, this usually doesn’t appear to be the case. For example, the U.K.’s NHS notes that “many women have had babies after donating a kidney without any impact on the pregnancy from the kidney donation.” However, we’re not talking about normal pregnancies here. The brain-dead body could be vulnerable to various complications and negative impacts because it’s dead. And while this wouldn’t be a risk to the pregnant body (after all, they’re already dead), it could jeopardize the efficacy of whole body gestational donation if it means that successful gestation is unfeasible when combined with traditional organ donation.

So then, if faced with a choice between whole body gestational donation, which could help bring one person into the world, or traditional forms of organ and tissue donation, which could save eight lives and help a further seventy-five, the latter seems like the obvious choice. This, in turn, may help us explain (or perhaps justify) our differing intuitions when it comes to the apparent equivalence of organ donation and gestational donation.

Can We Justify Non-Compete Clauses?

photograph of a maze of empty office cubicles

In the State of the Union, President Joe Biden claimed that 30 million workers have non-compete clauses. This followed a proposal by the Federal Trade Commission to ban these clauses. Non-compete clauses normally contain two portions. First, they prohibit employees from using any proprietary information or skills during or after their employment. Second, they forbid employees from competing with their employer for some period of time after their employment. “Competing” normally consists of working for a rival company or starting one’s own business in the same industry.

Non-compete agreements are more common than one might expect. In 2019, the Economic Policy Institute found that about half of employers surveyed use non-compete agreements, and 31.8% of employers require these agreements for all employees.

While we often think of non-compete agreements as primarily being found in cutting-edge industries, they also reach into less innovative industries. For instance, one in six workers in the food industry have a non-compete clause. Even fast-food restaurants have used non-compete clauses.

Proponents of non-compete clauses argue that they are important to protect the interests of employers. The Bureau of Labor Statistics estimates that, in 2021, the average turnover rate (the rate at which employees leave their positions) in private industry was 52.4%. Although one might think the pandemic significantly inflated these numbers, rates from 2017-2019 were slightly below 50%. Businesses spend a significant amount of money training new employees, so turnover hurts the company’s bottom line  – U.S. companies spent over $100 billion on training employees in 2022. Additionally, the transfer of skilled and knowledgeable employees to competitors, especially when those skills and knowledge were gained at their current position, makes it more difficult for the original employer to compete against rivals.

However, opponents argue that non-compete clauses depress the wages of employees. Being prohibited from seeking new employment leaves employees unable to find better paying positions, even if just for the purposes of bargaining with their current employer. The FTC estimates that prohibiting non-compete clauses would cause yearly wage increases in the U.S. between $250 and $296 billion. Further, for agreements that extend beyond their employment, departing employees may need to take “career detours,” seeking jobs in a different field. This undoubtedly affects their earnings and makes finding future employment more difficult.

It is worth noting that these arguments are strictly economic. They view the case for and against non-compete clauses exclusively in terms of financial costs and benefits. This is certainly a fine basis for policy decisions. However, sometimes moral considerations prevail over economic ones.

For instance, even if someone provided robust data demonstrating that child labor would be significantly economically beneficial, we would find this non-compelling in light of the obvious moral wrongness. Thus, it is worthwhile to consider whether there’s a case to be made that we have moral reason to either permit or prohibit non-compete agreements regardless of what the economic data show us.

My analysis will focus on the portion of non-compete clauses that forbids current employees from seeking work with a competitor. Few, I take it, would object to the idea that companies should have the prerogative to protect their trade secrets. There may be means to enforce this without restricting employee movement or job seeking, such as through litigation. Thus, whenever I refer to non-compete agreements or clauses, I mean those which restrict employees from seeking work from, or founding, a competing firm both during, and for some period after, their employment.

There’s an obvious argument for why non-compete clauses ought to be permitted – before an employee joins a company, they and their employer reach an agreement about the terms of employment which may include these clauses. Don’t like the clause? Then renegotiate the contract before signing or simply find another job.

Employers impose all sorts of restrictions on us, from uniforms to hours of operation. If we find those conditions unacceptable, we simply turn the job down. Why should non-compete agreements be any different? They are merely the product of an agreement between consenting parties.

However, agreements are normally unobjectionable only when the parties enter them as equals. When there’s a difference in power between parties, one may accept terms that would be unacceptable between equals. As Evan Arnet argues in his discussion of a prospective right to strike, a background of robust workers’ rights is necessary to assure equal bargaining power and these rights are currently not always secure. For most job openings, there are a plethora of other candidates available. Aside from high-level executives, few have enough bargaining power with their prospective employer to demand that a non-compete clause be removed from their contract. Indeed, even asking for this could cause a prospective employer to move on to the next candidate. So, we ought to be skeptical of the claim that workers freely agree to non-compete clauses – there are multiple reasons to question whether workers have the bargaining power necessary for this agreement to be between equals.

One might instead appeal to the long-run consequences of not allowing non-compete agreements. The argument could be made as follows. By hiring and training employees, businesses invest in them and profit from their continued employment. So perhaps the idea is that, after investing in their employees, a firm deserves to profit from their investment and thus the employee should not be permitted to seek exit while still employed. Non-compete clauses are, in effect, a way for companies to protect their investments.

Unfortunately, there are multiple problems with this line of reasoning. The first is that it would only apply to non-compete agreements in cases where employees require significant training. Some employees may produce profit for the company after little to no training. Second, this seems to only justify non-compete clauses up to the point when the employee has become profitable to the employer – not both during and after employment. Third, some investments may simply be bad investments. Investing is ultimately a form of risk taking which does not always pay off. To hedge their bets, firms can instead identify candidates most likely to stay with the company, and make continued employment desirable.

Ultimately, this argument regarding what a company “deserves” lays bare the fundamental moral problem with non-compete agreements: they violate the autonomy of employees.

Autonomy, as a moral concept, is about one’s ability to make decisions for oneself – to take an active role in shaping one’s life. To say that an employee owes it to her employer to keep working there is to say that she does not deserve autonomy over what she does with a third of her waking life. It says that she no longer has the right to make unencumbered decisions about what industry she will work in, and who she will work for.

And this is, ultimately, where we see the moral problem for non-compete clauses. Even if they do not suppress wages, non-compete agreements restrict the autonomy of employees. Unless someone has a large nest egg saved up, they may not be able to afford to quit their job and enter a period of unemployment while they wait for a non-compete clause to expire – especially since voluntarily quitting may disqualify them from unemployment benefits. By raising the cost of exit, non-compete clauses may eliminate quitting as a viable option. As I have discussed elsewhere, employers gain control over our lives while we work for them – non-compete agreements aim to extend this control beyond the period of our employment, further limiting our ability to choose.

As a result, even if there were not significant potential financial benefits to eliminating non-compete agreements, there seems to be a powerful moral reason to do so. Otherwise, employers may restrict the ability of employees to make significant decisions about their own lives.

The Right to Strike

photograph of teacher's strike signs

2023 has already been an eventful year for strikes and labor.

On January 10th the United States Supreme Court heard Glacier Northwest v. International Brotherhood of Teamsters. The court is now deliberating whether to strip certain legal protections from striking workers. On January 19th France erupted in strikes and demonstrations over plans to raise the retirement age. On January 30th the U.K. House of Commons voted in favor of a controversial anti-strike bill. It now moves to the upper chamber of parliament for further deliberation. On February 5th an agreement was reached in Woburn, Massachusetts after teachers went on strike in violation of a state law that prohibits public employees from striking. On February 8th Temple University administration informed striking graduate workers it would be cutting their tuition benefits. The administration had already begun to cut their health insurance.

This raft of labor-related actions foregrounds a fundamental question: Should there be a right to strike?

Broadly speaking, a strike is a deliberate work stoppage in pursuit of some aim or goal. Strikes are often the last tactic employed by unions and workers, and occur after other pathways have failed, after negotiations have broken down, or, in the most serious cases, because of intolerable or inhumane labor conditions. Going on strike is mentally and physically exhausting, and workers face pay loss and potential retaliation. Strikes can involve violation of contracts, public inconvenience, and the occupation of employer property. Despite this, the history of labor organizing testifies to the significance of striking as a tactic for improving working conditions and securing workers’ rights. In short, strikes are worth careful ethical consideration.

Currently, the National Labor Relations Act of 1935 grants U.S. private sector workers the legal right to strike, although there is a thicket of riders, limitations, and provisos surrounding this core legal right. The right to strike in the United States can also be signed away in collective bargaining agreements (contracts between unions and employers) through no-strike clauses. The situation becomes yet more complex at the state level as, in many states, public employees such as teachers cannot legally strike at all.

But my aim here is not to debate whether specific strikes are justifiable (some surely are and some are not). Rather, the query is whether it should be, at a minimum, legal for most workers to go on strike in most situations. (There is considerable room for nuance regarding what a right to strike should look like and what kind of regulation should accompany it.)

Courts in democratic countries have long held that workers should enjoy freedom of association and accordingly a right to form organizations (such as unions) – they should possess the liberty to engage in activities to change and improve their workplace. This alone, however, does not obviously lead to a right to strike. For unions could be permitted and yet some range of potential activities, such as strikes, prohibited. If anything, strikes – causing intentional (if perhaps justified) economic harms to employers and often community inconvenience – appear notably distinct from actions such as petitions and demonstrations.

The ability to strike shares a complex relationship with bargaining and negotiations. At first glance, a right to strike seems out of step with the freedom of employers and employees to make agreements as they see fit. If workers want to be able to strike, then they can (presumably) elect to work for employers that allow strikes under certain conditions. The implicit argument is that the state should not be interfering in an agreement freely made between an employee and their employer, and therefore the state should not grant a right to strike over and above what is explicitly agreed to in a contract.

The counterargument is that a right to strike is an important pre-condition to a fair contract. Employers usually have significantly more material and economic power than their employees, as well as easier access to expert advice and legal counsel. Consequently, negotiations between workers and their employers occur from a position of inequality. This has been explicitly acknowledged in U.S. Supreme Court decisions such as National Labor Relations Board v. Jones & Laughlin Steel Corporation (1937). Countering this structural imbalance is a major reason workers join together into unions and engage in collective bargaining (see my previous article on Unions and Worker Agency).

From this perspective, it is only on the backdrop of robust workers’ rights, including the right to strike, that legitimate negotiations between employees and their employers can occur at all.

The philosopher David Borman has expanded this line of thinking, arguing that the right to strike derives from a more fundamental right to self-determination. He holds that “by striking, workers declare their right to self-determination within economic life, the right to cooperatively determine the rules and conditions of labor which affect them in essential ways, materially and psychologically.”

An alternative approach is to look at the impacts of striking and decide if there is a public interest in enshrining a right to strike. Not every strike is won, but strikes can assuredly help striking workers secure better pay and conditions. But what about the broader public? Anti-strike legislation and policy often argue the public harms of strikes justify restricting or limiting the right to strike. For example, new proposed legislation in the U.K. which would enforce minimum service levels is supposed to “ensure the safety of the public and their access to public services.”

Undoubtedly, strikes can be inconvenient. Strikes are also, perhaps unsurprisingly, associated with short-term harm. Transportation strikes increase car accidents as more people drive and patients admitted during nursing strikes have worse medical outcomes.

What this does not tell us is whether there is a longer-term salutary effect. The very same labor conditions that drive nurses to strike, such as understaffing, can have documented negative effects on patient outcomes. Striking workers — such as the 7000 New York City nurses who went on strike in early January — often stress the ability to do their job effectively as a motivating factor.

Granting that working conditions matter to job performance, and strikes can improve working conditions, a right to strike may be, on balance, worth it for the broader public.

There is also a problematic underlying assumption here, namely, that hospital administrators care about patient outcomes, but nurses don’t; that Tories in the U.K. parliament care about public transit, but transportation workers don’t. This assumption fails to take employees seriously as people who can take pride in their work.

Unfortunately, there is little long-term research on the broader societal impacts of strike policy. Nonetheless, the historical impact of organized labor including diminished economic inequality, the 8-hour workday, and workplace safety legislation, creates at least a suggestive case that a legal framework that supports labor organizing, presumably including the right to strike, facilitates the public good.

The effects of strikes and strike regulation deserve thorough empirical analysis, but the initial case for a right to strike on public policy grounds is compelling.

One last consideration. It is often said that freedom of speech does not mean freedom from consequences (this statement has been critically analyzed by The Prindle Post). By this same token, there could be a legal right to strike, but no protections for striking workers. So striking workers would not be jailed or fined by the state, but they could be fired, replaced, sued, and otherwise interfered with. The problem with this stance is that it makes striking enormously burdensome and risky for workers, and many of the arguments for a right to strike depend on its occurrence being a live possibility. If there is to be a right to strike, it must be supported by enough legal protections that the right can be meaningfully exercised.

Pathogenic Research: The Perfect Storm for Moral Blindness?

microscopic image of virus cells

In October, scientists at Boston University announced that they had created a COVID-19 variant as contagious as omicron (very) but significantly more lethal. “In K18-hACE2 mice [engineered mice vulnerable to COVID],” their preprint paper reported, “while Omicron causes mild, non-fatal infection, the Omicron S-carrying virus inflicts severe disease with a mortality rate of 80%.” If this beefed-up Omicron were released somehow, it would have had the potential to cause a much more severe pandemic.

The National Science Advisory Board for Biosecurity has now released new guidelines which seek to strike a significantly more cautious balance between the dangers and rewards of risky research involving PPPs — potential pandemic pathogens. The previous standards, under which the Boston University research was allowed to be conducted without any safety review, were, according to the NSABB, reliant on definitions of a PPP that were “too narrow” and likely to “result in overlooking… pathogens with enhanced potential to cause a pandemic.” (The researchers at Boston University claimed their enhanced COVID-19 variant was marginally less deadly than the original virus, and hence that they were not conducting risky “gain of function” research requiring oversight. But this argument is flawed, since the danger posed by a virus with pandemic potential is a function of the combination of infectiousness and deadliness. Since the novel variant combined close-to-original-COVID-19 deadliness with omicron infectiousness, it is likely significantly more dangerous than the original strain.)

Experiments like these are not merely a question of public policy. Apart from the legal and regulatory issues, we can also ask: is it morally permissible to be personally involved in such research? To fund it, administer it, or conduct it?

On the positive side, research with PPPs, including some forms of the heavily politicized “gain-of-function” research, promises valuable insight into the origins, risks, and potential treatment of dangerous pathogens. We may even prevent or mitigate future natural pandemics. All of this seems to give us strong moral reasons to conduct such research.

However, according to Marc Lipsitch and Alison Galvani, epidemiologists at Harvard and Yale, these benefits are overblown and achievable by safer methods. The risks of such research, on the other hand, are undeniable. Research with dangerous pathogens is restricted to the safest rated labs. But even top safety-rated BSL-3 and BSL-4 research labs leak viruses with regularity. The COVID-19 lab leak theory remains contentious, but the 1977 Russian flu pandemic was very likely the result of a lab leak. It killed 700,000 people. Anthrax, SARS, smallpox, zika virus, ebola, and COVID-19 (in Taiwan) have all leaked from research labs, often with deadly results. One accident in a lab could cause hundreds of millions of deaths.

Given the scale of risk involved, you might ask why we don’t see mass refusals to conduct such research. Why do the funders of such work not outright reject contributing to such risk-taking? Why does this research not spark strong moral reactions from those involved?

Perhaps part of the reason is that we seem particularly vulnerable to flawed moral reasoning when it comes to subjects like this. We often struggle to recognize the moral abhorrence of risky research. What might explain our “moral blindness” on this issue?

Stalin supposedly said, “One death is a tragedy. A million deaths is a statistic.” Morally, he was wrong. But psychologically, he was right. Our minds are better suited to the small scale of hunter-gatherer life than to the modern interconnected world where our actions can affect millions. We struggle to scale our moral judgments to the vast numbers involved in a global pandemic. Moral psychologists call this effect “scope neglect” and I discuss it in more detail here.

When a lab worker, research ethics committee member, or research funder thinks about what might go wrong with PPP research, they may fail to “scale up” their moral judgments to the level needed to consider the moral significance of causing a worldwide pandemic. More generally, research ethical principles were (understandably) built to consider the risks that research poses to the particular individuals involved in the research (subjects and experimenters), rather than the billions of innocents that could be affected. But this, in effect, institutionalizes scope neglect.

To compound this clouding effect of scope neglect, we tend to mentally round up tiny probabilities to “maybe” (think: lottery) or round them down to “it will never happen” (think: being hit by a meteorite while sleeping, the unfortunate fate of Ann Hodges of Alabama). Lipsitch and Inglesby’s 2014 study estimates a 0.01-0.6% probability, per lab worker per year, that gain-of-function research on virulent flu viruses causes a pandemic.

But rounding this probability down to “it won’t happen” would be a grave moral error.

Because a severe pandemic could cause hundreds of millions of deaths, even the lower-bound 0.01% risk of causing a global pandemic each year would mean that a gain-of-function researcher should expect to cause an average of 2,000 deaths per year. If that math is even remotely close to right, working on the most dangerous PPPs could be the most deadly job in the world.
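The reasoning behind that figure is a simple expected-value calculation: multiply the annual probability of sparking a pandemic by the number of deaths such a pandemic would cause. A minimal sketch follows; the pandemic death toll it uses is back-solved from the figures above and is an assumption of this illustration, not a number stated in the text.

```python
# Back-of-the-envelope expected-value sketch. The 20-million-death pandemic
# scale below is an assumption, chosen only because it is consistent with the
# 0.01% lower bound and the 2,000-deaths-per-year figure cited above.
p_per_worker_year = 0.0001       # lower-bound 0.01% annual chance of causing a pandemic
pandemic_deaths = 20_000_000     # assumed death toll of such a pandemic

expected_deaths_per_year = p_per_worker_year * pandemic_deaths
print(expected_deaths_per_year)  # -> 2000.0
```

On the upper-bound 0.6% estimate, or with a larger assumed death toll, the expected figure would be far higher still.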

Of course, we don’t act like it. Psychologically, it is incredibly hard to recognize what is “normal” as morally questionable, or even profoundly wrong. If your respected peers are doing the same kind of work, the prestigious scientific journals are publishing your research, and the tenure board is smiling down from above, it’s almost impossible to come to the disturbing and horrifying conclusion that you’re doing something seriously unethical. But if the risks are as severe as Lipsitch and Co. claim (and the benefits as mediocre), then it is difficult to see how working with PPPs could be ethically defensible. What benefit to the world would your work have to provide to justify causing an expected 2,000 deaths each year?

Even putting the ethical debate to one side, extreme caution seems warranted when debating the morality of lab research on PPPs. It is a topic that could create the “perfect storm” of flawed moral reasoning.

Ethics Laid Bare

Part of the reason why HBO’s new series The Last of Us is so impactful — and the reason why the video game it is based on won dozens of awards — is because it shows how quickly it can all fall apart.

Early in the first episode, we find ourselves in the middle of an unfolding catastrophe. A fungus has evolved a way to invade the human nervous system and cause those it infects to become manically violent; and our protagonist, Joel, along with his brother Tommy, has just narrowly saved his young daughter Sarah from a neighbor who was among the first to be infected. With a shocked Sarah still processing the violence which she had just witnessed — her father, armed with a wrench, killing their infected neighbor to save her — the group scrambles into a truck and frantically tries to flee.

With Tommy at the wheel and police cars screaming by, Sarah pleads for information which neither Joel nor Tommy can provide. The radio and cell towers are out, and the military, believing that the outbreak began in the city, has blocked off the main highway. But for all of the uncertainty, our protagonists do know that something is horrendously wrong: having seen their infected neighbor, they speed down country roads, past homes engulfed in flames, trying to get to a highway.

As they drive, our perspective shifts to that of Sarah, who sees a family come into view: a man, woman, and small child. With their van halfway into the ditch beside the road, the man, hands raised, runs into the street pleading for help. Tommy begins to downshift, and as we hear the brakes screech, Joel protests; Tommy mentions the child in the woman’s arms, but Joel reminds him that they have a child to consider too, and without allowing space for further discussion, tells Tommy to keep driving. Sarah tries to interject, saying that the family could ride in the back of the truck, but Tommy acquiesces to Joel’s request, and Sarah watches the still-pleading family through the rear windshield.

As we hear the pleading fade, Joel says that someone else will come along. The camera turns to Sarah, whose eyes well with tears.

* * *

It’s easy, in the study of ethics, to abstract oneself from the context of lived decision-making: for the vast majority of moral decisions, we do not make our choice with a robust understanding of the utility at stake, or of whether our will could simultaneously become universal law. When philosophers write papers on ethics, we have a luxury which many do not: the time to think deeply and clearly.

This is part of why, among many other reasons, dystopian stories like The Last of Us can be so emotionally compelling. The world is crashing down around our protagonists, and over the next 15 minutes of runtime, the devastation and violence which they witness is suffocating. There isn’t time to breathe or think — only to act. Shortly after encountering the family on the road, our protagonists try to navigate a town center, where panicked people have flooded out of burning buildings and into the street among mobs of the infected. Yet, even as the apocalypse arrives in one of the most chaotic and frenzied forms imaginable, we still see desperate gasps of moral virtue: through the windshield of the truck, we still see people carrying the injured and tending to the wounded.

This is why, even in all of the madness of the show’s first episode, the encounter on the road can be so deeply challenging to our conscience. This is ethics laid bare: when there is no one to call and no time to think, what do you do? Is your inclination like Tommy’s — to stop the truck at the risk of your loved ones? Do you think like Joel, choosing to protect those loved ones while hoping that someone else will take the risk? Or perhaps you’re like Sarah, desperately trying to find a third path and break free from the chokehold of the choice. Maybe you relate to those who stopped to help the wounded; maybe you relate to those who ran. Maybe you relate to both.

I do not believe that there is a clear answer to the question of whether or not our protagonists should have helped the family. I can see the deontologist’s argument that to not render aid to those in need violates some form of moral responsibility, but I can also see that the deontologist has room for Joel’s view, holding that responsibilities to one’s family supersede. The utilitarian would likely point out that, had they taken the family into the nearby town, their chances of surviving the chaos were dim to pitch dark. Or, maybe things would have turned out differently.

The encounter on the road is not challenging because of the various approaches which could be used to assess its moral dimensions; rather, it’s challenging because it strips us of the information and time required to think clearly about what’s at stake — and reveals what is underneath. We have no sense of the ethical lives of these characters prior to this moment, but their reactions to it show their dispositions, and lay their values and virtues bare. The encounter on the road speaks to a fundamental insight: that, at times, ethics is less what you think and more who you are, and how that is reflected in what you do.

It is not likely, I admit, that we will find ourselves in the middle of a zombie apocalypse. But the lesson of this encounter is nonetheless valuable, especially when extrapolated to our everyday reality. Sometimes the challenges we face are extraordinary: no matter our role — healthcare provider, freight yard worker, beach-goer, or bystander — we may all find ourselves in the position one day to make a choice resembling Joel’s, and have our own virtues laid bare. But even in the quotidian, in the ways in which you treat those around you, in those who you choose to acknowledge, in your actions and your omissions, in your choice to consume, in the choice to stand up when others won’t — in each of these choices is a reflection of who you are.

* * *

The choice presents itself: as you become conscious of the dilemma, precious seconds pass. The need may be great, but so is the risk — and the decision is yours. With no time to think, what remains is you.

What do you do?

We Are Running Out of Insults

photographs of different people yelling

When it comes to speech, kindness is often the best policy, but those who need a sharp word may find themselves in a predicament: how to express what they mean without using language that is demeaning towards marginalized groups.

Perhaps unsurprisingly, many words used as insults have historically been used to oppress. One prominent example is the trio “idiot,” “imbecile,” and “moron,” which were codified as scientific classifications for mental disability in the 19th century and then popularized by the eugenicist psychologist Henry Goddard in the early 20th century. A word you might call your little brother when he is being — well, frustrating — is deeply ableist and tied historically to eugenics.

Language has long been used as a tool of oppression. But in many cases, the link to a word’s more explicit oppressive use is lost to many contemporary users of that word.

Awareness of many words’ oppressive histories is growing, thanks in large part to efforts from scholars and members of marginalized communities, such as disability activist Hannah Diviney, who called out Beyoncé and Lizzo for use of an ableist slur in their lyrics. However, many people are simply unaware that the words they are using are tied historically to ideologies and practices they themselves would find immoral.

Is it permissible to use words that are historically tied to oppression, especially when many people are unaware of the link? In many cases, a person’s ignorance of facts relevant to a situation can absolve that person of moral responsibility. For example, if I give you a ticket for a flight that, unbeknownst to me, will end in a crash, I will not be morally responsible for the harm that comes to you when you board that flight. I could not have known. But we are morally responsible for ignorance born of negligence, and higher stakes increase our responsibility to educate ourselves.

Because the terms in question have been used to suppress entire groups of people, there’s ample opportunity for collateral damage when they are used.

For example, if someone calls their friend a misogynist slur, that slur not only aims its disdain at its direct target (the friend), but also at women and girls in general. In fact, many misogynist insults work only via their denigration of women. When someone is called a sissy, for example, the insult just is the identification of the target with femininity — and therefore, by implication, weakness. This point of view is communicated to anyone who reads or hears the insult.

Philosophers call a term “leaky” when it can cause damage to those to whom it is not directed. A leaky term is one which, even when it is merely quoted or mentioned rather than used, is still felt as damaging or offensive. The n-word is a paradigm case of a leaky term, and it is so-called because, for many, saying the full term in any context is racist.

Not only are insults tied to the oppression of marginalized groups potentially leaky and prone to cause collateral damage (often by design, as in the “sissy” case); people’s speech also reaches further now than at any time in history. Gone are the days of cursing out a politician for a small audience consisting only of one’s family in one’s own living room. Now, a simple @ of the public figure’s username will direct one’s insult right to the screens of hundreds, thousands, or millions of users of social media. More is at stake, because more people are affected.

Generally, when the repercussions for an action are greater, the moral responsibility for carefulness increases.

It may not be a big deal for a parent to serve their family food that seems unspoiled without first consulting its expiration date; the same cannot be said of a restaurant. Similarly, the wider one’s speech reaches, the less one’s ignorance of its meaning is morally exculpatory. Put simply, the leakiness of insults and the far reach of social media increase the stakes for considering the meaning of one’s speech, even beyond speech that is very obviously ableist, racist, or misogynist.

We can also question whether the connections with past uses are really lost, even in cases less commonly thought of as full-on slurs. The current use of a word can reveal a tacit commitment to the immoral ideals the word represented more explicitly in the past. Consider “idiot” with its historical tie to eugenics. It was an ableist term meant to designate the so-called mental age of the person being evaluated, marking them as having low intelligence. It’s still used to designate low intelligence today, as an insult rather than a clinical diagnosis. And as such, it still carries an ableist dismissiveness of people with cognitive disabilities.

So what’s a non-ableist, anti-racist, non-misogynist to do? The answer may be, of course, that one could simply refrain from insulting another person. This is an excellent suggestion, and it has much to recommend it as a general policy. However, some cases will require the use of an insult, and some people, at least, will find it easier to change their terms than to quit the habit of insulting others.

More generally, people need the ability to transgress a little with language — as when reserving taboo words for special expressions of outrage. Wouldn’t it be helpful for a language to have terms of insult that hit their intended target and no one else?

Perhaps we should invent new insults. Media aimed at children often does. The problem with novel insults is that they often come across as unserious. Like fictional profanity, such as “frak” in place of “f***” on the TV show Battlestar Galactica, novel terms often lack the bite of the original. The attempt to replace the original term, which is damaging beyond its intended meaning, results in a term that is often not strong enough. “Dillweed,” with its surprisingly illustrious career, may get a laugh on TV, but real life may require something stronger.

Numerous lists offer examples of alternative words, but as with much of language, change comes slowly. It may be that change in social consciousness surrounding terms that are ableist, racist, misogynist, etc., must precede widespread use of insults that avoid these pitfalls.

Private Reasons in the Public Square

photograph of crowd at town hall

The recent Dobbs decision induced a tidal wave of emotions and heated discourse from both sides of the political aisle as well as from all corners of the American cultural landscape. Some rejoiced at what they saw as a significant move towards establishing a society that upholds the sanctity of human life, while others mourned the loss of a basic liberty. The Dobbs ruling overturned the historic Roe v. Wade verdict, and it has the practical consequence of relegating decisions about the legality of abortion to individual states. Abortion access is no longer a constitutionally protected right, and thus where and when abortion is legal will be determined by the democratic process.

The legal battle at the state level over abortion rights will continue over the coming months and years, giving voters the chance to share their views. Many of these citizens take their most deeply held moral, religious, and philosophical commitments to have obvious implications for how they vote.

But should all of these types of reasons affect how one votes? If other citizens reject your religion or moral framework, should you still choose political policies based on it?

Political philosophers offer a range of responses to these questions. For simplicity’s sake, we can boil down the responses to two major camps. The first camp answers “no,” arguing that only reasons which are shared or shareable amongst all reasonable citizens can serve as the basis for one’s vote. This seems to rule out religious reasons, experience-based reasons, and reasons that are based on controversial moral and philosophical principles, as reasonable people can reject these. So what kinds of reasons are shareable amongst all reasonable citizens? Candidates for inclusion are general liberal ideals, such as a commitment to human equality, individual liberty, and freedom of conscience. Of course, what these general ideals imply for any specific policy measure (as well as how these reasons should be weighed against each other when they conflict) is unclear. Citizens can disagree about how to employ these shared reasons, but at least they are appealing to reasons that are accepted by their fellow reasonable citizens instead of forcing their privately held convictions on others.

The other camp of political philosophers answers “yes,” arguing that so long as one’s reasons are intelligible or understandable to others, they can be used in the public square. This approach lets in many more reasons than the shareable reasons standard. Even if one strongly opposes Catholicism, for example, it is nevertheless understandable why their Catholic neighbor would be motivated to vote according to church teaching against abortion rights. Given the neighbor’s faith commitments, it is intelligible why they vote pro-life. Similarly, even if one accepts the controversial claim that personhood begins at conception, it is easy enough to understand why other reasonable people reject this belief, given there is no consensus in the scientific or philosophical communities. This intelligibility standard will also allow for many citizens to appeal to personal experiences, as it is clear how such experiences might reasonably shape one’s political preferences, even if these experiences are not shared by all reasonable citizens.

Of course, one might notice a potential pitfall with the intelligibility standard. What if a citizen wishes to support a certain policy on the basis of deeply immoral grounds, such as racist or sexist reasons? Can the intelligibility standard keep out such reasons from public discourse?

Defenders of the intelligibility standard might respond that it is not intelligible how a reasonable person could hold such beliefs, blocking these reasons from the public square. Of course, there may also be disagreement over where exactly to draw this line of reasonableness. Advocates of the intelligibility standard hope that there is enough consensus to distinguish between reasonable belief systems (e.g., those of the major world religions and cultures) and unreasonable ones (e.g., those of racist sects and oppressive cults). Naturally, proponents of the shareable reasons standard tend to be dubious that such an intuitive line in the sand exists, doubling down on placing tight restrictions on the types of reasons that are acceptable in the public square.

What is the relevance of this shared vs. intelligible reasons distinction when it comes to the average citizen? Regardless of where one falls in the debate, it is clearly beneficial to reflect on our political beliefs. Appreciating the reasons of other thoughtful citizens can prompt us to take the following beneficial steps:

1. Recognize that your privately held belief system is not shared by every reasonable, well-intentioned citizen. Our political culture is constituted by a wide array of differing opinions about abortion and many other issues, and people often have good reasons for holding the viewpoints they do. Recognition of this empirical fact is a crucial starting point for improving our political climate and having constructive democratic debate.

2. Reflect on why your friends, neighbors, and co-workers might disagree with you on political issues. Morality and politics are complicated matters, and this is reflected by surveys which indicate the depth of disagreement amongst professional experts in these fields. Given this complexity, individuals should be open to potentially revising their previously held beliefs in light of new evidence.

3. Engage with those who do not share your belief system. Inter-group contact has been shown to decrease harmful political polarization. In the wake of the Dobbs decision, this looks like a willingness to engage with those on both the pro-choice and pro-life sides of the aisle.

Regardless of where they fall in the shared reasons versus intelligible reasons debate, citizens have a responsibility to recognize that their political opponents can be reasonable as well. Embracing this idea will lead to more productive democratic discourse surrounding difficult political issues like those bound up in the Dobbs ruling.

Are Voters to Blame for the Polarization Crisis?

drawing of political protest crowd

Even before Donald Trump was elected president, political polarization was on the rise. In his final State of the Union address, Barack Obama said that one shortcoming of his time in office was his failure to curb polarization. He acknowledged that “the rancor and suspicion between the parties has gotten worse instead of better,” calling it “one of the few regrets of [his] presidency.”

Even though Obama was aware of the growing political divide, it still would have been difficult to anticipate what would happen next. During the Trump presidency, the United States became one of the most, if not the most, polarized democracies on Earth. In the summer of 2020, 76% of Republicans thought that the U.S. government was doing a good job dealing with the pandemic, while only 29% of Democrats agreed. Across the nations surveyed, this was the largest such divide. Americans strongly distrust those who vote for the other party, a trend that only worsens amongst younger voters.

Who is responsible for this growing political impasse? To many, the answer is obvious: irrational voters are to blame.

To begin with, polarized citizens are less likely to listen to opposing views, looking primarily to party identification to guide how they vote. Hearing less and less from those who disagree, they grow ever more confident that they are in the right, further deepening the political impasse.

In this way, extreme polarization can become a vicious cycle. Voters increasingly pay attention only to what supports their existing political views. The news stories they hear, and the policy proposals that they take most seriously, come from those with whom they already agree. Whether this is because they are uncomfortable with the uncertainty, or simply like belonging to their political tribe, avoiding the opposition only exacerbates the political divide.

Even when voters do hear things that challenge their views, they often engage in motivated reasoning – evaluating information in a way that favors their prior beliefs.

Suppose I believe that district-based public schooling leads to better results than school choice programs. I am presented with two studies, one that provides evidence in favor of mandatory public schools and the other that supports school choice vouchers. Because of my previous beliefs, I engage in some motivated reasoning. I more closely scrutinize the second study, identifying several methodological flaws. I walk away more convinced than ever that school choice programs are inferior to mandatory public schooling.

So there you have it. Not only do citizens listen primarily to their own political party, but when faced with information that might challenge their political opinions, they reason in a way that merely confirms their beliefs. Because of their irrationality, it seems voters are at fault for our descent into toxic partisanship.

While this assessment might seem straightforward, some question whether the blame should be laid so squarely at the feet of voters. Citizens face a number of challenges in deciding how to vote. Political issues are complex, requiring competence in things like history, sociology, and economics to fully understand.

Voters have limited time and resources. They cannot become experts in every area necessary to make good political choices. Inevitably, they must rely on others to sort through all the relevant information.

Because they depend on others to help them determine how to vote, citizens must decide who to trust. Should they trust members of the other political party, politicians and pundits that they already think are mistaken about many issues? Or should they trust those that share their perspective, those that they already think advocate for better policies? From this perspective, the answer should be clear. Surely, they should trust those they believe to be more reliable. In fact, it is rational for them to do so. Why trust someone who you think makes a lot of mistakes rather than someone who you think makes good choices?

What about motivated reasoning? Voters, as we have discussed, do not have the time to evaluate all the relevant evidence. When their evidence conflicts, they may have to make a choice about what to inspect more closely. From their perspective, the evidence that challenges their beliefs is more likely to be misleading, so it makes sense to scrutinize that evidence more closely. Using our example from before, it is rational for me to take a closer look at the study that supports school choice, which leads me to find more flaws in it than in the study that supports district-based public schools.

If all this is right, then political polarization is a predictable consequence of the challenges that voters face. Due to their limited time and expertise, it is rational for citizens to trust their political party and be more critical of evidence that challenges their beliefs.

But if voters are not to blame for political polarization, who is?

Perhaps polarization is more a product of our current political culture than individual voter behavior. Democrats and Republicans each have their own newspapers, magazines, and cable news networks, creating conditions where citizens only hear the same views and opinions parroted. To make matters worse, both liberals and conservatives are quick to vilify those who disagree with them, undermining trust in anyone that doesn’t stick to the party line. Even when voters are behaving rationally, these dynamics encourage citizens to become more and more entrenched.

Regardless of who is to blame, recognizing the challenges that voters face makes it clear that cultural change is needed to turn the tide of political polarization. Helping citizens productively navigate the complex political arena will require reestablishing social trust and focusing on our shared values. As it stands, voters might be doing the best they can with what they have. They are simply being set up to fail.

Aging Empires

photograph of lawmakers voting

The 2020 presidential election saw 77-year-old Joe Biden overcome 74-year-old Donald Trump, with Michael Bloomberg (78), Bernie Sanders (79), and Elizabeth Warren (71) all competitive in the Democratic primaries. Biden, who will be 82 when the next presidential term begins in 2025 (and 86 when it ends), has not yet decided whether he will run again, while Trump’s campaign is already in full swing. Nancy Pelosi recently lost her speakership at the age of 82, while the oldest sitting senator, Dianne Feinstein, is a remarkable 89 years old and has held her position for over 30 years. Last year, septuagenarian Democrats Carolyn Maloney (76) and Jerry Nadler (75) battled it out over New York’s 12th congressional district, with 38-year-old Suraj Patel finishing a distant third.

All this means that the current American government is the oldest in history. Recent suggestions to reverse this trend include term limits for senators and age limits for judges like we have in Australia, where the mandated age of retirement is 70. Business Insider has published a whole series of articles – Red, White, and Grey – on the dangers of gerontocracy, arguing that a country run by the elderly will be uniquely unrepresentative. In the U.S. as a whole, about 17% of people are over the age of 65. In Congress, this rises to a whopping 50%. Elderly people tend to be whiter and wealthier than their younger counterparts.

A recent poll found that more than half of Americans support an upper age limit for public officials. But even considering a limit of this sort raises a serious ethical question: is it justifiable to exclude people from public office based on nothing more than their age?

One concern with gerontocracy is that elderly people will avoid long-term planning in favor of short-term gain. Why plan for 50 years in the future when you won’t be around to see the consequences? Elderly people are less concerned about climate change, for example, comfortable in the belief that it won’t “pose a serious threat” in their lifetime. But accusations of short-termism have their limits. Consider, for example, 80-year-old Mitch McConnell’s decades-long plot to capture the Supreme Court, overturn Roe v. Wade, and, in doing so, cement his legacy as a champion of the pro-life movement. Liberal opposition to his efforts has less to do with the short-term consequences of legislative flux than the long-term future of a country where women’s bodily autonomy is heavily restricted. McConnell has even suggested, without a hint of irony, that the Roe v. Wade decision is outdated. So, for better or worse, elderly leaders can retain a focus on the future.

Michael Clinton, writing for Esquire, argues that accusations of gerontocracy are simply ageism: a surprisingly socially-acceptable form of discrimination against elderly people. Ageism itself is nothing new; indeed, even Aristotle showed a prejudice against both the youthful and the aged. Philosopher Adam Woodcox presents a version of Aristotle’s approach to the elderly:

The elderly character… is pessimistic and cynical. He is distrustful since he has been let down many times; he covets wealth since he knows how difficult it is to acquire and how easy it is to lose; and he lacks confidence in the future, because things have so often gone wrong.

Age limits on public service would not only institutionalize discrimination but also risk losing the experience and “gravitas” that can only come with age. But Clinton is quite happy to endorse a minimum age on important positions – indeed, he is in favor of raising the minimum age for the presidency from 35 to 40.

Age discrimination against younger people is widespread and generally uncontroversial. Voting, drinking alcohol, driving, serving in the military, even riding on rollercoasters – all sorts of activities are locked behind age barriers. These restrictions are accepted because humans don’t fully mature – physically or mentally – until about the age of 25. But evidence also suggests that cognitive capacity declines later in life: although speech and language functions remain largely intact, executive functions – including decision-making and problem-solving abilities – decline with age.

If we have a minimum age for the presidency, why shouldn’t we have a maximum?

It is also worth considering the personal element. Loneliness and social isolation pose huge health risks for the elderly, and restricting their political involvement might have serious consequences in this regard. But this is not so much an argument against age limits on public service as a note of caution about their introduction: any initiative that restricts the ability of the elderly to run for office would have to be matched by an initiative to get them involved in other ways.

Notoriously in-touch, spacefaring, tweeting ego-machine Elon Musk thinks that the elderly are simply out of touch. He might have a point. But it remains unclear whether age limits are the best remedy. Perhaps efforts to increase active political participation among the youth might be more fruitful, as they are consistently outvoted – and out-represented – by their elders. Indeed, there are signs of change on the horizon, with a drastic increase in the youth vote at the 2020 presidential election. Unfortunately, their only options were to vote for their (great) grandparents. With Biden poised to decide on his future, the age question is certainly not going anywhere.

Curfews and the Liberty of Cats – A Follow-Up

photograph of cat on fence at night

In April 2022, the city of Knox, Australia, imposed a “cat curfew” requiring pet cats to be kept on their owners’ premises at all times. The main reason behind this policy was a simple one: our cuddly domestic companions are filled with murderous urges.

On average, a well-fed free-roaming domestic cat will kill around 75 animals per year. As a result, pet cats are responsible for the deaths of around 200 million native Australian animals annually. But the hunting practices that are directly attributable to pet cats are only part of the problem.

The refusal of negligent cat owners to spay or neuter their pets has led to an explosion of the feral cat population (currently estimated to be somewhere between 2.1 million and 6.3 million) in Australia.

A feral cat hunts at a much higher rate than a domestic cat, killing around 740 animals per year. Because of this, feral cats are responsible for the deaths of an additional 1.4 billion native Australian animals annually.

And no, this isn’t the “circle of life.” In Australia – as in many parts of the world – cats are an invasive species, dumped by humans into an ecosystem that is ill-prepared to accommodate them. As a result, cats have already been directly responsible for the extinction of 25 species of mammal found only in Australia. This accounts for more than two-thirds of all Australian mammal extinctions over the past 200 years. Cats are currently identified as a further threat to an additional 74 species of mammals, 40 birds, 21 reptiles and four amphibians.

At the time, I argued that the Knox curfew was morally justified on both a consequentialist and a rights-based analysis. On the consequentialist analysis, any harm suffered by a cat kept under curfew will be vastly outweighed by the protection of individual native animals and the preservation of entire species (not to mention the curbing of other undesirable behaviors like spraying, fighting, and property damage). Alternatively, from a rights-based perspective, it seems acceptable to limit a cat’s right to liberty in order to better respect other animals’ right to life. What’s more, such limitations also better respect the cat’s own right to life.

Free-roaming cats are vulnerable to all kinds of risks, including everything from getting hit by a car, to feline leukemia, to wild animal attacks. As a result, the life expectancy of an outdoor cat is only 2-5 years, while indoor cats live for an average of 10-15 years.

Now, a new study is adding further support to this moral obligation to keep our cats indoors. Conducted over a three-year period, the study analyzed the behaviors of free-roaming domestic cats in the greater Washington, D.C. area. Predictably, cats were found to engage widely with a number of native mammals – including those notorious for harboring zoonotic diseases. The increased risk of contracting such a disease – causing harm not only to the cats themselves, but also to their human families – is just one more reason to keep cats indoors. As author Daniel J. Herrera notes:

While a human would never knowingly open their doors to a rabid raccoon, owners of indoor-outdoor cats routinely allow their cats to engage in activities where they might contract rabies, then welcome them back into their homes at the end of the day.

Such a finding should be particularly troubling during a global pandemic. Cats are, after all, capable of spreading the virus that causes COVID-19. But other diseases are cause for concern too. Cats are the primary vector for Toxoplasma gondii, a parasite that infects 30-50% of the human population. Toxoplasmosis – the condition resulting from infection with the T. gondii parasite – is of particular concern for expectant mothers, who can pass it on to their unborn child. Around half of babies who are infected with toxoplasmosis will be born prematurely, and may later develop symptoms such as an enlarged liver and spleen, hearing loss, jaundice, and vision problems. Problems occur for adults, too. One in 150 Australians is thought to have ocular toxoplasmosis – the result of an eye infection by the parasite. Around half of these individuals will experience permanent vision loss.

Cats have now also become the top domestic animal source of human rabies – a disease that is 100 percent fatal once symptoms appear. While dogs are traditionally seen as the most likely domestic source of rabies infections, more stringent canine regulation and vaccination programs – coupled with lax attitudes towards cats – have seen felines awarded this questionable accolade.

Ultimately, cats with outdoor access are 2.77 times more likely to be infected with a zoonotic disease (like toxoplasmosis or rabies) than a cat that is kept indoors.

This revelation adds new weight to the arguments in favor of cat curfews. For one, the significant risk to human health provides further support for the consequentialist argument against allowing cats outside. In simple terms: whatever enjoyment cats might have received outdoors is far outweighed by the increased risk of disease that their free-roaming lifestyle creates for their human families. Similar support can be found for the rights-based approach, with it seeming appropriate to prioritize a human family’s rights to good health over a cat’s right to liberty.

What’s more, similar reasoning applies even if we ignore the increased risk to humans. Zoonotic diseases are harmful – and sometimes fatal – for the cats themselves. Given this, it seems reasonable to impose limitations on a cat’s freedom in order to promote its continued health. This argument is made all the more persuasive when we remind ourselves that such limitations come at virtually no cost to a cat’s quality of life. Experts agree that cats do not need to go outside for their mental health, and that it’s possible for a cat to be just as happy indoors. Even the American Veterinary Medical Association recommends that pet cats be kept indoors or in an outdoor enclosure at all times.

Of course, the mental well-being of indoor cats isn’t automatic. As I noted last time, it requires careful, attentive pet-ownership with a focus on indoor enrichment. And this involves a lot more work than simply allowing a cat to roam freely and seek its thrills through property destruction and the decimation of wildlife. This, then, might be what fuels so much of the resistance towards cat curfews like the one in Knox: not genuine concern for the liberty of cats, but simple laziness on the part of the owners.

Justice and Climate Change

photograph of oil pumps in the desert

A recent analysis by Harvard researchers vindicated the work of ExxonMobil’s internal scientists. As far back as the 1970s, Exxon not only knew about human-caused climate change, but had impressively accurate predictions regarding the global temperature over the next several decades. This analysis adds to the pile of documentation concerning the oil giant’s role in climate change misinformation.

Since the story broke in 2015, it has been clear that Exxon was aware of the threat of global warming – to the planet and to company profits – and shifted gears from open discussion to the seeding of doubt. The new research establishes just how good their early internal science was, and accordingly, just how deliberately obfuscatory their tactics were.

And so we come to “climate justice.” This slogan signals that anthropogenic climate change is not merely an ethical issue in the abstract; it is a matter of justice. Typically the concern is one of fairness, and how the burden of fixing climate change should be distributed. What is each country’s responsibility? How does this intersect with their wealth, their historical benefit from fossil fuel consumption, and their expected impact from climate change? Nonetheless, this does not exhaust the space of justice. Lady Justice holds a sword as well as scales.

What should climate justice, in the sense of holding people accountable, look like?

One approach is “atmospheric trust litigation,” championed by the organization Our Children’s Trust. They are currently involved in a number of lawsuits against the United States government, exemplified by the ongoing case Juliana v. United States, which alleges that the government’s actions have violated a general right to an atmospheric system capable of sustaining life, and which seeks injunctions that would curb the use of fossil fuels.

The core idea is that the atmosphere is a public trust held by the state for public use, and that the government has failed as trustee, that is, failed to protect and maintain the atmosphere adequately for the public.

It remains to be seen whether this is a good approach to seeking legal remedy for anthropogenic climate change, but if one wants to penalize or deter the actions of specific individuals or companies, this approach will not work, for ultimately it is the United States government, broadly understood, that is the target of the lawsuits. An alternative is not just to pursue civil cases against the government, but to hold corporations, or select actors within them, directly accountable for their actions in either causing climate change or deliberately deceiving politicians, shareholders, or the public about climate change.

William Tucker, a lawyer for the U.S. Environmental Protection Agency, has argued (in an individual capacity, not on behalf of the EPA) that fraud is another possibility for holding certain actors civilly and criminally liable. This is nearer to a retributive or deterrent understanding of climate justice. A number of cases have been filed against companies like ExxonMobil for their deceptive business practices, including one in New York (which ended in failure) and an ongoing case in Massachusetts.

A different tack is found in the work of lawyer and activist Polly Higgins. She sought to make ecocide, or the deliberate destruction of ecosystems, a crime under international law. The campaign is ongoing, but if successful, people could be criminally prosecuted for gross environmental harms in countries that accepted ecocide as a crime. The notion is increasingly prevalent in European debates on climate change. A similar proposal by political theorist Catriona McKinnon is the inclusion of postericide in international criminal law, covering those who engage in “reckless conduct fit to bring about the extinction of humanity.” These proposals both focus on the harms of human-caused climate change, not just the coverup. The United Nations has also indicated its intent to see criminal prosecution for environmental and climate harms, although its approach is not yet clear.

Especially for Americans, a more crime-based approach to dealing with climate change is unfamiliar.

Narratives of climate change, partially spread by fossil fuel companies, tend to focus on who is responsible for the burning of fossil fuels and often hold a mirror to the modern Western lifestyle. Beef eating, plane travel, and even having children emerge as climate sins. (See the Prindle Post’s ongoing discussion regarding Procreative Autonomy and Climate Change.)

And yet, while climate change has literally billions of contributors, it is only a small number of people in select positions that truly could have acted differently and made an appreciable difference in the current climate crisis. Almost all of us have left a light on, but few of us knew about climate change in the 1970s and chose to launch a multidecade campaign to obscure the truth.

One potential concern is that treating climate change as a crime is inherently retrospective. It may feel good to get the bad guys, but it will not solve the fundamental problem. This concern is not unique to climate change, though – it applies just the same in cases of murder. The dead cannot be brought back to life simply by punishing the wrongdoer.

Different approaches to justice answer this challenge differently. A retributive stance holds that criminals deserve their due even if punishment does not undo the harms they committed, whereas a deterrent perspective focuses instead on discouraging similar behavior in the future.

ExxonMobil’s climate change disinformation campaign is not an outlier; it is a textbook example of a very modern – and challenging – form of immoral action. The public relations strategy of big oil to cast doubt on climate change was exactly the same as that pursued by big tobacco to cast doubt on the health effects of cigarettes. Even the same public relations firm, Hill and Knowlton Strategies, was involved.

People placed high-up in the vast institutional structures erected during the 20th century, like the modern corporation, make decisions – perhaps for profit, perhaps for power – that can have the downstream effect of harming many, many people. Such actions are ill-fit to our intuitions of crime and punishment. A potential crime such as spreading climate change misinformation with the resources of a multibillion-dollar company is characterized by the incredible scale of potential harms, but also by a complete lack of viscerality. It is not gruesome, it is not personal. The causal connection between an executive decision to cast aspersions on climate science and someone dying in an extreme weather event is complicated by the convolutions of corporate structure and the mercuriality of weather. (Although there are scientific methods to tie climate change to specific weather events.)

Nonetheless, a societal answer to this question – of how to ground meaningful accountability for actions mediated by giant institutions – is vitally important. For the harms caused by a lone murderer pale in comparison to the damage that can be done with the resources at the hands of governments and corporations.