
‘Locked Down’: Representing the Pandemic on Screen

photograph of empty London street at night

I have consumed a lot of media over the past year, not all of it great. Spending so much more time at home has resulted in burning through all the shows and movies I wanted to watch pretty quickly, leaving me rapidly approaching the bottom of the proverbial media barrel. Given that the pandemic has been on everyone’s minds pretty much every day for the past 365, it’s been nice to be able to turn my brain off for a minute by watching Parks and Recreation for the dozenth time, or by finally getting around to checking out The Witcher on Netflix (official review: meh). However, we have now been doing this so long that new movies and shows have been made during the pandemic itself, some of which incorporate pandemic life into their plots.

While it’s certainly true that media in the form of the news has an obligation to present accurate representations of the reality of the pandemic, what about TV and movies? Should these, too, present as realistic a picture as possible, or does creative license allow them to distort the situation somewhat?

Given that I am a paragon of research integrity (combined with my need to find something new to watch), I decided to consume a couple of these pandemic-centric shows and movies. One was a critically unsuccessful and potentially morally problematic movie called Locked Down.

HBO Max’s 2021 Locked Down is described as follows:

“Just when Linda and Paxton have chosen to get separated, they get to hang on to each other due to forced lockdown. It’s hard to live together, but poetry and lots of wine bring them closer together in surprising ways.”

It’s a romantic comedy of a sort, which also involves a heist so that something will actually happen during its 2-hour runtime. The opening minutes of the film show something very familiar: a Zoom call between family members discussing their respective woes. While Paxton is living in “total lockdown” in London and has just been furloughed, his brother is mostly upset that the NBA season has been cancelled (or at least cut short). Paxton laments that he will spend the next two weeks in total isolation with his now ex-girlfriend in an impossibly well-appointed and no doubt outrageously expensive townhouse in London. The isolation is getting to both Paxton and Linda: it exacerbates the former’s anxieties, while the latter just wants to get away from her ex.

So far, so uninspired. I am certain that I speak for many when I say that the last thing I want to watch after a day of awkward, choppy Zoom calls is a movie about two rich, beautiful people complaining about how awful it is to spend their days on awkward, choppy Zoom calls. But the potentially morally problematic aspects of the movie are not limited to low-hanging comedic fruit and a bad case of failing to read the room: as some outlets have commented, the way the characters interpret the lockdown restrictions risks sending the wrong kind of message to viewers. Specifically, the characters seem to treat the lockdown as nothing more than a burden, rather than as a necessity to stop the spread of a deadly virus. Here, for example, are some of the more questionable moments:

  • Paxton reads a poem in the middle of the street, loudly, waking up his neighbors. When Linda asks what he’s doing, he says that he’s “entertaining our fellow inmates.”
  • On a Zoom call, Linda’s Swedish coworker brags that “you can go to bars here!” while Linda asks if anyone is actually obeying all the lockdown rules.
  • Paxton, having previously spent some time in jail, remarks: “People like me who have spent time in actual prison are thriving in this new reality.”
  • Paxton needs to go to the store, but doesn’t have a mask. Linda finds his old bandana in her drawer. While he’s initially surprised and excited to see it, she says: “It’s no longer a symbol of rebellion, it’s now government advice.”
  • When someone starts banging pots in the street shouting for everyone to “make noise for the NHS,” a tipsy Linda hurries outside with pots and pans of her own, not to show genuine appreciation, but in an almost sarcastic manner that seems the product of conditioned obligation.
  • At one point, a character describes the situation as an “insane fucking lockdown.”

The worry, then, is that portraying lockdown procedures in these ways reinforces a narrative that conceives of such procedures as overblown and unnecessary, and might encourage those who are watching to feel the same way.

Of course, one can also sympathize with these characters. Spending a significant amount of time cooped up inside can make anyone feel a little stir crazy, perhaps even like a prisoner in their own home. At the same time, it’s important to interpret these feelings against the backdrop of the bigger picture, namely why people are locking down at all. In this way, Locked Down might just be “too soon”: it may be that, one sweet day in the future when the coronavirus is no longer a significant problem, we can look back and commiserate about the comparatively minor inconveniences. Until then, though, it does not seem like the best idea to glamorize rebellion against lockdown orders.

The Ethics of Vaccination Passports

photograph of couple presenting passport

The light at the end of the tunnel finally appears to be approaching after a year of the COVID-19 pandemic. Now that multiple vaccines are available in most countries and roll-out plans are ongoing, albeit at a slow pace in the United States, questions about getting back to “normal” are starting to be asked. Chief among these concern the activity that COVID-19 has restricted most, and that people miss most: international travel. After many countries restricted their borders with the United States due to COVID-19, Americans are itching to fly across oceans to enjoy the vacations that were cancelled in 2020. Countries must now ask how travel can occur safely, or at least how to limit the cross-border spread of the coronavirus that fueled the pandemic in the first place. One possible solution being considered by multiple countries is a “vaccination passport.” Certifying those who have received full vaccination would streamline travel, giving the inoculated privileged access to enter countries, board airplanes, and potentially even use gyms or enter bars.

The concept of only allowing entry to persons with certain vaccines is not foreign. The World Health Organization issues the Yellow Card to people who have been vaccinated against certain deadly diseases in order to prevent outbreaks of those diseases in certain countries. The Yellow Card, then, is very similar to the suggested vaccine passport, except that COVID-19 raises a number of pressing questions concerning accessibility. Throughout this pandemic, minorities have been disproportionately affected by the virus. Facing systemic racism in the U.S., minorities have been less likely to receive adequate healthcare, to possess the housing needed for quarantining, and to enjoy employment opportunities that might offer work-from-home options. Now that vaccines are rolling out, the situation is the same: the neighborhoods that faced the worst consequences of the coronavirus are the last to be vaccinated. While some countries have very strong vaccine roll-out programs, the United States fell about 17 million short of the Trump administration’s goal of 20 million vaccinations by the end of 2020. Now, the Biden administration has committed to 100 million vaccinations in its first 100 days. Unfortunately, it first has to patch together a tremendous nation-wide effort in a country with a very complex and privatized healthcare system — a system which has created many issues for Americans trying to get the vaccine.

It is, however, not only a question of access and who can get the vaccine, but also of considering the situation of those who can’t. At the very beginning of vaccine roll-out, it appeared that some Americans with allergies would simply not be able to get the vaccine because of the risk of anaphylactic shock, which can be deadly. If vaccine passports were required to enter some countries, then some people would simply be unable to enter them for an uncertain amount of time. It could take years before countries loosen restrictions or manufacturers offer an alternative formulation with different ingredients than those that currently make up the dose. There is also the question of what form the vaccine passport would take. Many countries are interested in a digital card that people could access through their phones. While many people have access to a smartphone capable of holding documentation of a vaccination, plenty of people still do not, either by choice or because it is not an affordable option. Technology then becomes just another barrier to international travel.

The main motivation behind these passports is an understandable desire to return to the feeling of living in a “normal” society, where people can move fairly freely throughout the world. The desire to travel is one that many people across the world share, as it allows them to form meaningful relationships and connections with people both different from and similar to themselves — a good the pandemic has stripped from us. Before we can get back to a sense of “normal,” however, it is important to remember that this pandemic is far from over, especially in the United States. It would make sense, therefore, to have some sort of system set up to prevent people from spreading the virus across countries and continents. But these passports raise important concerns about equality in access to medical, technological, and human goods. Many people would be left behind if these passports were implemented without addressing the fact that different populations do not have the same access to those goods. Vaccine passports would effectively create a two-tier citizenship hierarchy, granting those lucky enough to receive full vaccination the freedom to move about the world and take advantage of unique offerings, even including public facilities. A great many people, and more importantly those already vulnerable and marginalized, would continue to be restricted in their movements and would lack access to the same opportunities that those with the vaccine would enjoy. This pandemic has already aggravated many inequalities and injustices between populations around the world, and a vaccine passport threatens to further codify this unjustified unequal treatment.

On an Imperative to Educate People on the History of Race in America

photograph of Selma anniversary march at Edmund Pettus Bridge featuring Barack Obama and John Lewis

Many people don’t have much occasion to observe racism in the United States. This means that, for some, knowledge about the topic can only come in the form of testimony. Most of the things we know, we come to know not by investigating the matter personally, but on the basis of what we’ve been told by others. Human beings encounter all sorts of hurdles when it comes to forming beliefs through testimony. Consider, for example, the challenges our country has faced in controlling the pandemic. The testimony and advice of experts in infectious disease are often tossed aside, and even vilified, in favor of the viewpoints and advice of people on YouTube telling their audiences what they want to hear.

This happens often when it comes to discussions of race. From the perspective of many, racism is the stuff of history books. Implementation of racist policies is the kind of thing that it would only be possible to observe in a black and white photograph; racism ended with the assassination of Martin Luther King Jr. There is already a strong tendency to engage in confirmation bias when it comes to this issue — people are inclined to believe that racism ended years ago, so they are resistant and often even offended when presented with testimonial evidence to the contrary. People are also inclined to seek out others who agree with their position, especially if those people are Black. As a result, even though the views of these individuals are not the consensus view, the fact that they are willing to articulate the idea that the country is not systemically racist makes these individuals tremendously popular with people who were inclined to believe them before they ever opened their mouths.

Listening to testimonial evidence can also be challenging for people because learning about our country’s racist past and about how that racism, present in all of our institutions, has not been completely eliminated in the course of fewer than 70 years, seems to conflict with their desire to be patriotic. For some, patriotism consists in loyalty, love, and pride for one’s country. If we are unwilling to accept American exceptionalism in all of its forms, how can we count ourselves as patriots?

In response to these concerns, many argue that blind patriotism is nothing more than the acceptance of propaganda. Defenders of such patriotism encourage people not to read books like Ibram X. Kendi’s How to be an Anti-racist or Ta-Nehisi Coates’ Between the World and Me, claiming that this work is “liberal brainwashing.” Book banning, whether implemented by public policy or strongly encouraged by public sentiment, has occurred so often and so nefariously that if one finds oneself on that side of the issue, there is good inductive evidence that one is on the wrong side of history. Responsible members of a community, members who want their country to be the best place it can be, should be willing to think critically about various positions, to engage and respond to them rather than simply avoid them because they’ve been told that they are “unpatriotic.” Our country has such a problematic history when it comes to listening to Black voices that when we’re told we shouldn’t listen to Black accounts of Black history, our propaganda sensors should be on high alert.

Still others argue that projects that attempt to understand the full effects of racism, slavery, and segregation are counterproductive — they only lead to tribalism. We should relegate discussions of race to the past and move forward into a post-racial world with a commitment to unity and equality. In response to this, people argue that to tell a group of people that we should just abandon a thoroughgoing investigation into the history of their ancestors because engaging in such an inquiry causes too much division is itself a racist idea — one that defenders of the status quo have been articulating for centuries.

Dr. Martin Luther King Jr. beautifully articulates the value of understanding Black history in a passage from The Autobiography of Martin Luther King, Jr.:

Even the Negroes’ contribution to the music of America is sometimes overlooked in astonishing ways. In 1965 my oldest son and daughter entered an integrated school in Atlanta. A few months later my wife and I were invited to attend a program entitled “Music that has made America great.” As the evening unfolded, we listened to the folk songs and melodies of the various immigrant groups. We were certain that the program would end with the most original of all American music, the Negro spiritual. But we were mistaken. Instead, all the students, including our children, ended the program by singing “Dixie.” As we rose to leave the hall, my wife and I looked at each other with a combination of indignation and amazement. All the students, black and white, all the parents present that night, and all the faculty members had been victimized by just another expression of America’s penchant for ignoring the Negro, making him invisible and making his contributions insignificant. I wept within that night. I wept for my children and all black children who have been denied a knowledge of their heritage; I wept for all white children, who, through daily miseducation, are taught that the Negro is an irrelevant entity in American society; I wept for all the white parents and teachers who are forced to overlook the fact that the wealth of cultural and technological progress in America is a result of the commonwealth of inpouring contributions.

Understanding the history of our people, all of them, fully and truthfully, is valuable for its own sake. It is also valuable for our actions going forward. We can’t understand who we are without understanding who we’ve been, and without understanding who we’ve been, we can’t construct a blueprint for who we want to be as a nation.

Originally published on February 24th, 2021

Color Blindness and Cartoon Network’s PSA

photograph of small child peeking through his hands covering his face

Cartoon Network’s latest anti-racist PSA is undeniably clever. “See Color” takes place on the set of a PSA, where Amethyst, a Crystal Gem from the show Steven Universe (don’t ask me what this means), leads a couple of tots in a song about color blindness.

“Color blindness is our game, because everyone’s the same! Everybody join our circle, doesn’t matter if you’re white or black or purple!”

Amethyst isn’t buying it. “Ugh, who wrote this?” she says. “I think it kinda matters that I’m purple.” The children register their agreement.

“Well, I’m not an alien,” says the Black child, “but it definitely matters to me that I’m Black.”

“Yeah, it makes a difference that I’m white,” the white child chimes in. “The two of us get treated very differently.”

The Black child explains further: “My experience with anti-Black racism is really specific…But you won’t see any of that if you ‘don’t see color.’”

The idea that color blindness is deficient as a means of extirpating racism — because it blinds people to existing discrimination and invalidates legitimate race-based affirmative action — is not new. Indeed, the rejection of the philosophy and practice of color blindness has by now become the new orthodoxy in academic and left-leaning circles. That this rejection has trickled down to kids’ shows is surely a powerful measure of its success.

Conservative critics complain that the new anti-color blindness position is antithetical to Dr. Martin Luther King, Jr.’s dream of a society in which people are judged by the content of their character rather than the color of their skin. This is a mistake. To see this, it is useful to understand the distinction in political philosophy between ideal theory and non-ideal theory. 

The distinction was first introduced by John Rawls in his classic A Theory of Justice. According to Rawls, ideal theory is an account of what society should aim for given certain facts about human nature and possible social institutions. Non-ideal theory, by contrast, addresses the question of how the ideal might be achieved in practical, permissible steps, from the actual, partially just society we occupy.

Those who reject color blindness can see the color blindness envisioned by King as a property of an ideal society, a society in which racism does not exist. In that society, the color of a person’s skin really does not matter to how they are in fact treated; hence, it is something we can and ought to ignore in our treatment of them. Unfortunately, we don’t live in that society, and we ought not pretend that we do. Instead, we ought to recognize other people’s races so that we may treat them equitably, taking into account the inequitable treatment to which they have been, and continue to be, subjected.

But just as the norms we must follow in a non-ideal society are perhaps different from those we ought to follow in an ideal society, so the norms we ought to teach our children should perhaps be different from the ones adults ought to follow. And there is a danger in teaching children to “see color” while also asking them, as we still do, to embrace King’s vision: it may very easily lead to confusion, or worse, a rejection of color blindness as an ideal. After all, how many children are equipped to understand the distinction between ideal and non-ideal theory? Imagine white children criticizing King as a racial reactionary because of his insistence that in his ideal society, judgments of people’s merits would not take their race into account.

On the other hand, perhaps risking this outcome is better than the alternative: another generation of white children who believe that because race shouldn’t matter in some ideal society, it therefore ought not matter to us. Can we really afford to risk another generation of white people who believe that the claim that Black lives matter is somehow antithetical to the claim that all lives matter? Perhaps not.

There are good reasons to reject color blindness as a philosophy and practice for the real world: it leads us to ignore actual discrimination and vitiates the justification for race-based affirmative action. But there are limits to what children can be asked to understand, and ensuring that they are neither led astray nor confused requires careful thought.

The Ethics of Cancelling Student Loan Debt

image of graduation cap icon filled with shiny coins

Since Joe Biden’s election, the public has been waiting to see how the new administration plans to tackle one of the biggest campaign issues: student debt. Biden’s platform contained some measures for debt relief for students, and he has pledged $10,000 in student loan forgiveness. However, that promise has not sat well with members of his own party, including Representative Alexandria Ocasio-Cortez and even Senate Majority Leader Schumer, who have called on the president to forgive a much greater amount of student debt. This debate over differing policy proposals has exposed a number of moral issues and concerns regarding debt forgiveness.

It is estimated that about 65% of all jobs in the United States require a post-secondary degree. About 43 million people owe $1.6 trillion in federal student loans. The amount of debt has skyrocketed by over 600% since 2004, following years of rising education costs. Student loans are now the second largest form of household debt after mortgages. Since the Great Recession, these growing debts have become harder to pay off. More than 30% of student loan borrowers are in default, and that number has risen steadily compared to default rates in the 1990s. Now, with the pandemic, it is likely to become even more difficult for recent graduates to find work and pay down their loans.

Part of the problem with this massive amount of student loan debt is its larger social impact. Students who take on debt are more likely to put off larger purchases later in life. Millennials, who have the lowest credit scores of any generation, are reportedly more likely to delay life decisions that can affect debt, such as having children, getting married, or buying a house. Many who struggle to pay also experience significant mental health problems. With less disposable income, indebted graduates can create a drag on the economy, which means the issue affects everyone, not just students. This has led many to call the debt issue a crisis. As Daniel M. Johnson of the Harvard Business Review reports, “By almost any definition, this is a crisis: It is certainly a crisis for those with student loan debt…it is also a crisis for lenders…a crisis for the federal government…[and] many argue that it is also a crisis for our nation’s economy.”

This brings us to the various proposals made to ameliorate the problem. In addition to pledging to cancel $10,000 per person in response to the COVID crisis, Biden has taken steps to pause loan payments until October. His Democratic colleagues have called on him to cancel $50,000 of debt per person using the Higher Education Act of 1965. (Biden’s initial opposition to this proposal was based on his belief that he lacked the authority to do so without Congress.) However, several experts, including 17 state attorneys general, believe otherwise and have called on Biden to act (Biden has referred the matter to the DOJ). More recently, Biden has shifted his opposition to the proposal on the grounds that not everyone deserves the relief, arguing that people who went to Harvard, Yale, and Pennsylvania do not need that kind of help.

Indeed, one of the arguments against cancelling student debt is that it rewards people who will be better off and earn more with their degree. People with at least a bachelor’s degree earn, on average, over $25,000 more per year than someone with a high school diploma (a PhD holder earns $59,000 more per year). So, the argument goes, why should taxpayers, particularly those who did not pursue post-secondary education, be on the hook for other people’s debts, particularly when those people are likely to earn more money anyway?

Of course, if there is one thing that the pandemic should teach us, it is that things that might not obviously affect us can eventually have a huge impact on everyone. Despite their qualifications, grads often settle for lower-paying jobs in order to start paying off their debt faster. Large debts also disincentivize many from pursuing higher education at all. A sluggish economy post-COVID will also exacerbate the issue of finding employment. If student debt continues to be a problem, it could be a source of economic drag, eventually limiting the revenue from future taxpayers.

According to Kate Padgett Walsh of Iowa State University, another argument against debt forgiveness is that it would seem to violate the Kantian deontological principle that one should keep one’s promises. Reneging on such promises is disrespectful both to oneself and to others. On the other hand, one doesn’t renege on a promise if the promisor is released from it by the promisee. Also, it is in the nature of Kant’s moral thinking to disregard consequences and context for the sake of universality. The massive surge in debt, the potential risk to the economy, and the state of things following COVID become irrelevant to a Kantian moral take on the situation. The moral question that remains for us is whether they should be relevant.

There are additional moral issues pertaining to justice and equity which are pertinent as well. Studies have shown that Black and brown Americans face a racial pay gap and are likely to owe more, making it harder for them to service their debt. According to Elissa Nadworny of NPR, “Many Black and Latino families have missed out on ways to build wealth in the past…due to racist policies. Researchers who study and talk to student loan borrowers say student debt is a primary factor holding them back now.”

This is in addition to the physical and mental health issues, mentioned above, that carrying so much debt can cause. However, even if we accept that there should be some debt relief, there is still the question of how much. While some may argue that $10,000 is not sufficient, the facts suggest that not only is a significant amount of student debt less than $10,000, but that these smaller loans are the ones most difficult to pay off. For example, many of these borrowers went to college but did not finish; not only do they now owe money, but they likely make less than they would have with a degree. Meanwhile, more than a third of student debt is owed by the top 20% of income earners. This suggests $10,000 of forgiveness may have a more significant impact than cancelling another $40,000 on top of it, since much of the additional relief would go to those who do not need it. Several factors like these affect the relative equity of each debt relief figure, making the choice between $10,000 and $50,000 more complicated than it may first appear.

There is also the larger moral concern about tackling the cost of post-secondary education in the first place. Education cuts, rising tuition at public schools, and the growth of private schools have all resulted in a higher average debt burden. While debt relief may be a treatment, it doesn’t address the underlying sickness. Given that more jobs than ever require post-secondary education, and that enrollment has skyrocketed as a result, perhaps we should recognize that access to post-secondary education is a social need, and that the current university system as a whole, created and developed as it was before such need was felt, needs to be rethought.

Can Hyperfemininity Be Radical?

image of black lipstick kiss on white background

In late November of 2020, Rolling Stone published a controversial article on “Bimbo TikTok,” a nascent subculture that has found a home on the popular video sharing platform. Young twenty-something women are reclaiming the word bimbo, and as EJ Dickson puts it in the Rolling Stone article, they aim “to transform the bimbo into an all-inclusive, gender-neutral leftist icon.” These women, along with the handful of gay men and non-binary people who also embrace the aesthetic, signal their bimbo-ness through Barbie-pink clothing, glittery eyeshadow, and a willingness to appear ditzy.

So how is this revolutionary? Kate Muir, one of the young women interviewed by Dickson, explains that “you become everything men want visually whilst also being everything they hate (self aware, sexually empowered, politically conscious, etc.).” In other words, coupling a hyperfeminine (and as Dickson acknowledges, explicitly sexual) aesthetic with a radical political sensibility creates cognitive dissonance in viewers with a habit of objectifying women. There isn’t really a political manifesto for this nebulous movement, so Muir’s statement is about as close as you can get to a bimbo philosophy.

However, there are a few glaring problems with “reclaiming the bimbo” under the banner of progressive politics. The biggest is that while some women, like Muir, claim to be anti-consumerist, it isn’t really possible to reject consumerism and be hyperfeminine at the same time. Name-brand clothing, makeup, and hair products don’t appear out of thin air; they require a significant investment of both time and money.

Furthermore, the idea that women can fight misogyny by performing hyperfemininity is problematic in itself. Griffin Maxwell Brooks, an engineering student at Princeton and another person Dickson interviewed, said that “The modern bimbo aesthetic is more about a state of mind and embracing, ‘I want to dress however I want and look hot and not cater to your expectations.’” Bodily autonomy is a deeply important feminist issue, but when we stop interrogating why we choose to look a certain way, it becomes hollow as an analytical framework. In other words, women should be allowed to dress however they want, but that doesn’t mean we shouldn’t ask what cultural forces shape our desires. In a society where women are still expected to perform femininity, how is leaning into femininity not catering to men’s expectations? Why is the bimbo’s version of “hot” revolutionary, despite the fact that it’s just what straight men find attractive? All women, regardless of how they present, suffer from misogyny in one way or another, but it’s ludicrous to propose that women who are conventionally attractive and do buy into hyperfemininity are more maligned than the women who refuse to play ball.

To be clear, the problem isn’t that women are dressing provocatively and wearing fake eyelashes. The problem is that an inherently consumerist aesthetic is being reframed as politically radical. For years, make-up companies have used feminism to sell their products, equating personal expression with the freedom to buy lipstick. But now, companies no longer have to spend millions on market research and insidious campaigns; consumers are doing the work for them. It has become almost impossible to escape consumerism, so young people with radical impulses can only direct their energy into an empty aesthetic, into products rather than activism.

Moreover, these young women are putting time and money into an aesthetic that remains rooted in deeply misogynistic views about women’s intellectual capacity. Is the bimbo really the best figure to reclaim in the name of sexual liberation? A more truly radical action would be to expand our collective imagination beyond tired and harmful stereotypes, and to imagine a new way to be free without the help of old templates.

There is a fun element of campiness to the bimbo aesthetic, and it might have the potential to become a sophisticated parody of misogynistic expectations. Glittery makeup is harmless, but it’s a waste of time to frame wearing it as a political statement. The fact that we feel the need to reframe blatant consumerism as radical is, in itself, deeply troubling.

Mexico City’s Tampon Ban

collection of feminine hygiene products

On February 9, tampons disappeared from store shelves across Mexico City. The ban was borne out of a larger crusade to eliminate single-use plastics across the city, which, since January 1, has prohibited plastic bags, straws, and cutlery. Tampons were targeted because the most popular brands include single-use plastic applicators. The city decided not to phase tampons out gradually, but instead imposed a ban on their sale practically overnight. Though government officials claim the ban was announced far in advance, that did not ease the tension felt by many who awoke to find the city devoid of tampons.

Was it ethical for Mexico City to ban single-use plastic tampons? Do people who menstruate have an obligation to prioritize reusable products? How can we weigh environmental issues against health and autonomy?

Though evidence of single-use tampons has been documented since at least 3,000 BCE, it was not until the 1920s that the first single-use tampons and pads were mass produced. Tampons first began incorporating plastic in the 1970s, which some argued made application easier than the cardboard alternatives. The preference for plastic applicators persists in North America and Europe, but in many other parts of the world, tampons are devoid of plastic. The advent of accessible menstruation products marked an advance in women’s health and autonomy: access to menstrual health management has been linked to women’s health outcomes and general well-being.

Many criticisms of Mexico City’s tampon ban concern its implementation. For many who menstruate, being able to stop and pick up products when caught off guard is a necessity. As roughly 26% of the world’s population menstruates regularly, menstrual hygiene has been recognized as a critical human rights issue by prominent organizations such as Human Rights Watch and UNICEF. Though menstruation is often painted as a women’s rights issue, it is also fundamentally an issue of health. Lack of access to hygiene products has been linked to an increased risk of reproductive and urinary tract infections and urogenital diseases. In many countries, menstruation remains highly stigmatized even where products are accessible. Additionally, very few countries subsidize menstruation products, instead placing the financial burden on menstruating individuals. With all of these challenges already facing individuals who menstruate, Mexico City’s decision to eliminate affordable menstrual products stands to exacerbate existing class and sex inequities.

One defense of Mexico City’s ban is that, in the long term, reusable menstrual products are cheaper in addition to being better for the environment. Alternatives such as menstrual cups or reusable pads save menstruating individuals from having to buy new products every month. However, while these products may be cheaper in the long term, for many poor individuals the one-time cost is simply too high, as it demands the ability to invest financially in the long term. Those barely able to afford the cheaper disposable products month to month, living paycheck to paycheck or worse, simply do not have the means to buy these reusable products. The pandemic has also taken a toll on the financial well-being of Mexico, a country with roughly 10 million people in poverty. Additionally, reusable products are not simply a financial investment in the product itself; they also demand clean water and cleaning products to properly sanitize them during use. The role of clean water becomes even more important when one realizes that more than 260,000 homes in Mexico City lack running water. For these reasons, it is neither practical nor ethical to oblige menstruating individuals with very little disposable income or access to clean water to switch to reusable hygiene products.
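The arithmetic behind this tradeoff is easy to make concrete. A minimal sketch (all prices here are hypothetical, chosen purely for illustration) shows both why a reusable product eventually wins and why the up-front cost is the real barrier:

```python
# Toy break-even comparison between a reusable product with a one-time
# cost and disposables bought every month. All prices are illustrative
# assumptions, not real market figures.

def breakeven_months(reusable_cost, monthly_disposable_cost):
    """Months of use after which the reusable product becomes cheaper."""
    months = 0
    spent_on_disposables = 0.0
    while spent_on_disposables < reusable_cost:
        months += 1
        spent_on_disposables += monthly_disposable_cost
    return months

# e.g., a $30 cup vs. $6/month on disposables pays for itself in a few
# months — but only for someone who can part with $30 today.
months = breakeven_months(30, 6)
```

Whatever numbers one plugs in, the savings only materialize after the break-even point, which is exactly why the argument fails for those living paycheck to paycheck.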

However, putting aside class-based concerns, are those who are capable obligated to switch to reusable menstrual hygiene products? As people had been menstruating for hundreds of thousands of years before the invention of single-use plastics, it is clearly possible to menstruate without these products. Depending on one’s access to alternatives, it is arguable that switching to more sustainable menstrual products is part of one’s larger obligation to cut down on plastic consumption generally, especially considering the growing plastic pollution crisis. However, choosing which menstrual product to use is arguably more than a simple consumer choice. Menstruation is fundamentally a matter of health, and how one approaches it can be deeply personal. There is also something to be said for comfort and preference: every menstruating individual has different preferences, and a given product might not be comfortable or feasible for their body. With this in mind, menstrual health management is a matter of bodily autonomy. Limiting the types of products available restricts an individual’s ability to choose which product works best for them. For this reason alone, some might argue that Mexico City’s decision to outright ban tampons was unethical, as it robbed millions of individuals of the right to choose a menstruation tool they often rely upon.

The continued stigmatization of menstruation only further demands sensitivity when determining an individual’s obligations in relation to it. While it might be difficult to determine whether reusable menstruation products are obligatory, it might be fair to say that single-use plastic products should be avoided if possible, considering the growing environmental crisis. In many parts of the world, tampons without plastic applicators are widely used. Reusable products have also taken off, and are lauded by many women, as reflected both in reviews and in their growing market share. Notwithstanding health and class-based concerns, individuals should explore ways to decrease their consumption impact generally, and one’s choice of menstrual products represents one way to do so.

While an individual may hold an obligation to decrease their waste and consumption, it is another step to justify the government’s decision to outright ban hygiene products without subsidized alternatives. As I have discussed in a previous article, there is a strong argument that governments and corporations bear the brunt of responsibility in addressing the plastic crisis. From this angle, Mexico City’s decision might be viewed as a progressive and necessary step in eliminating its plastic pollution. However, the decision to ban these products without comparably convenient or affordable alternatives could be said to be an injustice in itself, as it greatly impacts the health and autonomy of millions of people who menstruate in Mexico City. And perhaps this tradeoff is not necessary. Those leading the charge against plastic tampons have also largely pushed for government responsibility to provide alternatives.

In what some may see as a great irony, the government officials who led the charge to ban tampons — Mexico City’s mayor, Claudia Sheinbaum, and the director general for environmental regulation, Lilian Guigue — are both women. In a statement to the Financial Times, Guigue justified the decision by explaining that she had attempted to negotiate with tampon producers to develop alternatives. The decision to ban tampons immediately, however, reflects her belief that the crisis demanded immediate action.

Environmental issues from climate change to plastic pollution will undoubtedly demand radical consumption changes. Governments around the world, including that of Mexico City, should be aware that a commitment to rapid changes devoid of socioeconomic considerations will unquestionably lead to negative consequences for already marginalized people.

A New Role for an Old Rule: Usury and Speculation

photograph of GameStop store exterior sign

Can Aristotle tell us anything about short selling GameStop stock? I think so! But to understand what, we need to turn to one of the strangest parts of ancient and medieval ethics: the prohibition on loaning money at interest.

Part 1: An Old Argument Against Usury

Dante Alighieri, in Canto XI of his Inferno, places usurers in the 3rd ring of the 7th circle of hell. Those who lend money at interest, then, are placed lower in hell than the greedy, the gluttonous, and the lustful. In fact, they are placed in the same circle as (and a lower ring than) murderers!

Nor was Dante unusual in his condemnation of usury. Plato and Aristotle each condemned loaning money at interest, as did the Catholic Church for most of its history. Indeed, usury is still widely condemned by Islamic thinkers in light of passages in the Qur’an. There was reasonable, if not complete, uniformity amongst ancient and medieval ethicists in the condemnation of loaning money at interest.

I want to look at this old prohibition against usury, not because I think it applies today, but because I think part of the underlying principle might have new applications in the modern economy.

There tend to be two elements to the old arguments against usury. First, there is a confusing worry that there is something iffy about profiting off money as such. Second, there is a worry that usury contributes to inequality. Thus, Thomas Aquinas, in his discussion of usury, starts his objection by saying that to “take usury for money lent is unjust in itself, because this is to sell what does not exist, and this evidently leads to inequality which is contrary to justice.” Some philosophers emphasize one of these two points more than others (thus Plato seems to focus on inequality while Aristotle focuses on the unnaturalness of profiting off the medium of exchange); but I think to really understand the ancient worry we need to put them together.

Part 2: An Old Argument in New Terms

Economic activity should be contributive, not extractive. That is, we should profit by taking a proper subset of our contribution to the common good, not by simply shifting money from others to ourselves.

If I’m a farmer, I make money by growing crops that contribute to the wellbeing of others. I then sell those crops at a profit. My profit is a subset of the total good that I produce. If I’m a painter, I make money by improving the appearance of the buildings we live in. If I’m an academic, I make money by increasing the store of human knowledge or by educating others. In any of these jobs, even were I not to get paid, the world is a better place for my efforts. And because the world is a better place for my efforts, it is good that I be remunerated for what I produce.

In contrast, if I make a profit not by taking some share of the extra good I produce, but just by extracting goods from others and accruing them to myself, then that is not a legitimate way to make money. Such behavior is sometimes called ‘rent-seeking.’ The problem with extractive activities is that your good always comes at someone else’s expense. You are benefited only to the extent that others are hurt. (Note that not all redistribution is rent-seeking; mere redistribution from the rich to the poor can actually produce increased value, since, given the principle of diminishing marginal utility, a dollar is worth more to a poor person than to a rich person.)

Now, it is, of course, possible to make the world a better place by the furnishing of money. Thus, suppose you come to me with a business idea but need an investment to start the business. I think it is a good idea, so I give you $10,000 in exchange for part ownership of the business. You put in the work, I put in the capital. Now, if this business thrives, say by selling pasta, then the world will be better off (people have access to pasta) and we will both make some profit.

But that is not what happens when I loan money at interest. Elizabeth Anscombe points out this distinction in her article “Two Philosophers’ Objections to Usury” where she discusses the “distinction between investing in a capital venture, with a view to sharing the profits if it should succeed, and demanding interest on the mere strength of a loan.” If I invest in your company, my profit scales with the amount of pasta you sell (namely the amount of good that you do). But if instead I simply loan you money at interest, then you owe me the same return whether the business succeeds or not. That is, I make a claim to the money whether or not you successfully contribute to the common good.

This, then, is the worry about profiting off money as such. It is not that you cannot use your current money to make more money (of course you can, by investing), the problem comes when the investment just is the money, because then your profiting is shorn off from any contribution to the common good.

Now, how is this connected with equality? Well, if profiting off just having money is a form of rent-seeking behavior, it is a particularly pernicious form because it is a rent-seeking behavior which siphons money from the poor to the rich. If the rich invest money with a poor person, the rich person will profit only if the poor person profits as well. But if instead the poor person takes out a loan with the rich, then the rich person profits regardless of the good that accrues to the poor person.

Part 3: An Argument in New Form

I don’t think this argument against usury still applies today. It does not, as far as I can tell, quite make sense. In particular, I think it fails to take seriously how, in a continually expanding economy, having capital now rather than later really does create value. However, I do think there is a way to apply some of this reasoning to a lot of what goes on in various forms of speculative investing. Consider three different investments (I’m not an expert on this subject, so take my descriptions with a grain of salt).

Currency Speculation. Suppose that I think the value of the dollar will increase relative to the euro in the future. Thus, I trade some of my euros with you in exchange for dollars as a form of investment. If the value of the dollar then goes up relative to the euro, I can pocket a nice little profit. But note that if my dollars go up in relative value, your euros must have gone down. If we trade dollars for euros as an investment, then one of us can make money only if the other loses it.

Shorting Stocks. I think a company is currently valued too highly. Because of that, I borrow a share that you own, sell it to someone else, and then, when the price falls, buy a share back at the lower price to repay you. I make some money, you still have your original share (plus whatever fee I paid to borrow it), but we did not actually create any value for the common good. So where did the value come from? It came from the person who bought the shorted share and lost money. Any dollar that I make by shorting a stock, someone else must have lost by buying it.

Bitcoin. Suppose I expect the value of bitcoin to increase in the future, while you worry it will decrease. Thus, I buy some of your bitcoin; the price increases, and I pocket a profit. I was able to profit here, but only because you lost out on the trade. Bitcoin does not, itself, contribute to the common good. Thus those who profit off bitcoin, it seems to me, might be profiting off money in precisely the way Aristotle was critical of.
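The zero-sum bookkeeping in the short-selling case can be made concrete with a toy sketch. The function name and every number below are hypothetical, chosen purely to illustrate the accounting:

```python
# Toy accounting for a one-share short sale. All prices and the borrow
# fee are illustrative assumptions, not real market data.

def short_sale(sell_price, buyback_price, borrow_fee):
    """Return (short seller's profit, buyer's paper loss) for one share."""
    # The short seller sells a borrowed share high, buys it back low,
    # and pays a fee to the share's lender.
    seller_profit = sell_price - buyback_price - borrow_fee
    # The buyer paid sell_price for a share now worth only buyback_price.
    buyer_loss = sell_price - buyback_price
    return seller_profit, buyer_loss

profit, loss = short_sale(sell_price=100, buyback_price=60, borrow_fee=5)
# Apart from the fee (which merely moves to the lender), the seller's
# gain exactly equals the buyer's loss: no new value is created.
assert profit + 5 == loss
```

Whatever parameters one plugs in, the seller’s profit plus the fee equals the buyer’s loss; the trade redistributes money rather than creating it, which is exactly the essay’s worry.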

These forms of financial speculation, then, really are in a deep sense like gambling. People often describe them as gambling in the sense that they are risky, but they are like gambling in another sense as well: the money you make does not come from the creation of new wealth, but rather from extracting wealth from a loser. You win only to the extent that someone else loses; and that is no way to make a living.

Now, I don’t actually think all gambling is morally troubling. When my friends and I play bridge, I often like playing for a ‘penny a point.’ But we play for money, not because we actually want to make money, but because the game is more fun and the strategy more balanced when some money is on the line. If I win one night, and my friend wins the next, the actual financial exchange might cancel out, but we are all the richer at least in having played better games. I don’t think there is any problem with gambling, in this sense; rather I think gambling is wrong when one’s final end really is to take the money of someone else.

Now, economists do tell me that my examples are oversimplified and that there actually is real value created by these financial maneuvers, perhaps in a way that parallels my bridge example. The person who buys my euros might not be saving them, they might be planning to use them for something that contributes to the common good (though, again, I will make my money whether they do so or not). Likewise, economists tell me that short selling stocks plays an important role in preventing the development of financial bubbles.

And maybe that is right. But it is at least worth noting that, even if some value is created by these practices, the profit one pockets is not a percentage of that value. I profit because someone else loses money; were no one else to lose money, my profit would be far less. I expect that most who engage in such financial speculation would not do so unless they thought they were net pocketing the money of others (whereas I would play bridge for money even if I expected everything to equal out in the end).


There is something morally troubling about ‘making money off money.’ That worry may no longer apply to all forms of usury (though it surely does still apply to exploitative loans like those provided by many payday lenders), but I think it may apply to new financial transactions that Plato and Aristotle had never conceived of.

Of course, I doubt many of my readers are hedge fund managers. So does this advice apply to you?

I’m not sure, but I think it might. Just as the hedge fund managers did something wrong, potentially so too did those redditors squeezing the short. The squeeze succeeded (to the extent it did) not because investment in GameStop contributed to the common good, but because others lost money.

Now I’ll be the first to admit I don’t feel much sympathy for hedge fund managers who lost money; and I already mentioned the possible positive good created by redistribution. But all the same, it doesn’t quite feel right. If these hedge fund managers really have rights to that money, then I’m not sure I should profit even at their expense. And if they don’t have rights to the money, I’d rather the redistribution come by taxation rather than playing a game they might still win.

My practical advice is this: Whenever you see yourself making a profit, ask yourself where the total value is being produced in the common good, and make sure, as a first step, that you are not profiting more than the value you create.

‘Malcolm & Marie’ and the Politics of Representation

image of ripped paper on white background

At a glance, Sam Levinson’s 2021 film Malcolm & Marie has all the components of a critically acclaimed drama. It’s shot in black and white (which, besides being beautiful, reminds the audience that this is a “serious” film), stars two very talented actors with promising careers (John David Washington as Malcolm, Zendaya as Marie), and is a film with something to say about filmmaking. Malcolm, a director who gets into an argument with his long-suffering girlfriend Marie after an awards ceremony, weaves his problems with contemporary cinema and film criticism into their fight.

Stories with something to say about the film industry usually play well with critics, but Malcolm & Marie has been almost universally panned. One review described it as “a very talk-y movie that takes aim at film criticism and its relationship to Black art in the most muddled and perplexing of ways: through the convoluted dialogue of a white director (who also happens to be the son of another famous director), filtered through two black characters,” resulting in “a sudsy, exhausting drama about a couple that probably shouldn’t be together, and is only just now admitting the quiet part aloud.”

Reviewers are divided over the quality of the actors’ performances, but one thing nearly everyone agrees on is the main problem at the film’s core: Levinson. As the review above explains, much of Malcolm’s tirade against film critics (in particular, a “white lady from the L.A. Times” who reviewed his last movie poorly) seems lifted directly from Levinson’s personal issues with the industry. Note that Levinson’s last film, Assassination Nation, was poorly reviewed by Katie Walsh, a white lady from the L.A. Times.

Even worse, Levinson’s ire towards negative reviews of his own work is expressed by a black character. As one critic for The Independent put it, “there are many moments where it feels as if Malcolm, who is a Black Hollywood director, serves as a mouthpiece for Levinson’s own opinions on race and filmmaking – making them harder to disagree with. The points made about reviewers are far from anti-racist or even progressive . . . but because they’re coming out of Malcolm’s mouth, we’re tempted to believe they are grounded in his experiences as a Black man.”

The problems with Malcolm & Marie as a film are perhaps less interesting than this question: is it alright for white writers to write non-white characters? It’s certainly not a new question, as this 2016 article from The New Yorker on the anxieties of writing outside one’s ethnicity demonstrates. On the one hand, the idea that we should limit fiction in any sense is troubling. If fiction is supposed to cultivate empathy, then writers should not only be allowed but encouraged to write characters unlike themselves. Otherwise, we end up with white writers only writing about white characters, contributing to an already homogenous artistic landscape. At the same time, white writers can easily fall into traps when they appropriate the voices and experiences of non-white characters. White writers can become defensive when this is brought up, and accuse non-white writers of attempting to silence or muffle art. But as writer Viet Thanh Nguyen explains, “It is possible to write about others not like oneself, if one understands that this is not simply an act of culture and free speech, but one that is enmeshed in a complicated, painful history of ownership and division.”

When asked, in an article for Esquire, about writing black characters as a white man, Levinson responded, “I have faith in the collaborative process and in my partners that if I write something that doesn’t feel true, that JD or Z [John David Washington and Zendaya] don’t respond to or feel to be honest, that they are going to say something and we’ll work it out. I didn’t have anxiety in that sense because I have too much respect for the collaborative nature of filmmaking.” Levinson is perhaps overstating the power actors have on set; while filmmaking is a collaborative process, at the end of the day he still holds power as director and sole screenwriter.

It’s very easy to make Levinson into a symbol of everything wrong with white male directors, but obviously the problem goes beyond just him. While Malcolm & Marie was written with the intent to prod film critics, it has provoked a larger conversation about the ethics of race and representation, a conversation as contentious as (though much less exhausting than) the one at the heart of Levinson’s film.

When Should We Be Undemocratic?

photograph of the White House at night

I am inclined to think the following two things:

  1. The Senate should have convicted former President Trump and prohibited him from holding future office (as permitted by Article I, Section 3, Clause 7 of the U.S. Constitution).
  2. It would have been undemocratic for the Senate to bar President Trump from future office.

Why do I think it undemocratic to bar President Trump from office? Simply because it removes the ability of the democratic populace to select him once again as president. Certainly, I think his behavior should disqualify him from ever holding public office again; but there are a great many people who I believe should never hold public office, and yet it would be undemocratic for my will to be decisive in preventing my fellow citizens from electing them.

Barring a president from future office, then, is actually far more profoundly undemocratic than removing a president who was voted into office. After a president has been elected, it takes four years before the people can vote him or her out. Thus, impeachment and removal are necessary to maintain an interim political check. The problem with barring someone from future office, however, is that future elections already provide this democratic check. The people can simply choose not to reelect someone! To bar someone from holding office says: even if the people choose to reelect him or her, even then, he or she should not be allowed to take that seat.

I’m tempted to console myself here: to tell myself that President Trump’s behavior made him a threat to democracy, and that as such it is not undemocratic to remove his name from the list of potential candidates. This, however, I think would just be a pleasing rationalization. It is, itself, undemocratic for me to unilaterally decide which threats to democracy should (and should not) bar one from future office. For a long time, people thought that there was something essentially undemocratic about electing a Catholic to high office, since that would put U.S. decision-making under the moral control of the Pope. Of course, this was just anti-Catholic bigotry; but who am I to say the argument about Catholics is wrong and the argument about President Trump is right? When I look at the evidence this seems clear, but looking at the evidence I also thought Trump should never be president, and it would clearly have been undemocratic to make that choice for the nation.

To see the worry, note that I think there are many undemocratic aspects of both the Democratic and Republican platforms. But it would clearly be undemocratic to prohibit any Republicans or Democrats from running for office. To decide what undemocratic behavior disqualifies one from office should, in a democracy, be up to the people.

Most arguments I heard against impeachment seemed bad to me, but even I had to admit there was something to the worry that it would be undemocratic to not let the people decide for themselves.

Of course, there are goods other than democracy, and those goods speak in favor of impeaching President Trump. In particular, it seems important that we maintain a credible political threat against lame-duck presidents who have been voted out of office. If the Senate cannot impose a penalty barring future office, if the president is already on the way out the door, and if we want to preserve the norm against criminally prosecuting political enemies, then it is unclear what threat other than impeachment there is to hold a president in line (of course, this problem still applies to presidents in their second term, so even impeachment is not an altogether adequate solution).

Now, I don’t want to analyze here whether it was right to bar President Trump from office. (I think it is, at least in this case, rather clear that barring him from office would have been the right thing to do, all things considered.)

But I’m still worried, because I have no general principle for how to make these tradeoffs. I have no idea how to compare the undemocratic nature of barring someone from future office against the importance of the social goods granted by the threat of impeachment. In this case, I have the strong intuition that the limited harm to democracy is unimportant when compared to the gains granted by deterrence. And, in fact, in this case, I’m actually pretty confident in that intuition. If any case is clear, it seems to me that it is this one.

But what if the case were messier; what if the president’s behavior were itself less brazenly undemocratic? How would I go about comparing the good of democracy to other social goods? In a previous Prindle Post piece, I argued that, psychologically, we often make these decisions by intensity matching. How undemocratic does impeachment feel? How terrible do the president’s actions feel? If the president’s actions feel more terrible than impeachment feels undemocratic, then we should impeach and bar him from future office. If impeachment feels more undemocratic than the president’s behavior feels terrible, then impeach but don’t bar him from future office. As I argued in that piece, however, the problem with intensity matching is that it does not reliably connect with any moral reality. It depends on how one anchors one’s own scale, and it often produces morally bizarre behavior (like a willingness to spend the same amount of money to save one hundred or one hundred thousand birds from oil spills).

So if our gut intuitions don’t tell us how to make this comparison, we need some principle. But right now I don’t see what that principle could be; and I think that should make us all a little more cautious in our calls for political action.

Of Pajamas and Self-Care

photograph of masked businessmen in tie and pajamas working at laptop at home

If you’re like me, then you’ve been spending a lot of time working from home as of late. This has its benefits – e.g., no need to commute into work, snacks are abundant, my cat is here – and its detriments – e.g., I spend hours and hours sitting in the same place and looking at the same computer monitor, day-in, day-out. One additional benefit comes in the form of what you wear: without having to be around coworkers you’re pretty much free to wear whatever you want (except on days with Zoom calls, of course). A popular pandemic choice has been pajamas: comfortable and easy to just throw on, you can sit in comfort as you watch the barriers between your work life and non-work life slowly dissolve away into nothingness.

While pajamas and/or sweatpants have become the de facto lockdown uniform, numerous news outlets have recently reported some worrying news: a recent study showed that wearing your pajamas all day while you work correlates with reports of declining mental health. “Academics who are tempted to remain in pajamas during the work day should think again,” says Inside Higher Ed, warning that “those who stay in bedroom attire are twice as likely to report a worsened state of mental health.” While the study in question did not show any effect of pajama-wearing on one’s productivity, other outlets have warned that all-day pajama wearing could be part of a larger set of behaviors that would result in such a decline, urging those who work from home to “create a routine and structure that you force yourself to stick to” (a routine that involves, presumably, changing into something other than pajamas).

Let’s say that one has some minimal obligations to one’s own well-being; in other words, you have a duty to take care of yourself, and part of that duty will involve your mental health. If it is, in fact, the case that pajama-wearing correlates with decreased mental health, then it would seem that one should throw on something less comfortable, at least during the workday.

Still, one might not be convinced. While pajama wearing may correlate with decreased mental health, one might think that it is surely any number of the many additional variables – such as being isolated, not being able to see or interact with one’s friends and family face-to-face, not being able to do many of the activities one did in the pre-pandemic world, etc. – that are causing this reported decrease in mental health. If anything, one might think, wearing pajamas all day might be one of the few things that makes you feel just a little bit better.

Hence we see another side to the pajama issue: instead of providing warnings about mental health and establishing routines, some businesses have begun to cater to the work-from-home lifestyle by offering a range of “home loungewear.” “As working from home becomes the new normal, many are finding that changing out of pajamas can be quite a daunting task,” says Celia Fernandez at Insider. Rather than advising that we find something else to wear, she instead suggests that we lean into it, helpfully providing some “perfect loungewear options that feel like pajamas without looking like them.” If wearing pajamas helps you feel better, then you might as well exercise this little bit of self-care with style.

We are thus being presented with conflicting messages: on the one hand, wearing pajamas all day may be indicative of having fallen into a rut, and thus it seems that one ought to make changing into non-pajamas part of one’s daily routine. On the other hand, working from home all the time takes a mental toll, and so you should do whatever little things you can to make this time a little less terrible. And if wearing pajamas all day will help with that, so be it.

This conflicting advice represents the difficulty in balancing what might seem to be competing duties one has towards self-care: on the one hand, it seems that we have short-term duties to ourselves to allow us to best cope with the problems that we are dealing with here and now; on the other hand, it seems that we also have long-term duties to our future selves, to make sure that we are able to cultivate habits that allow us to be happy and healthy in the long run. While we may always face this conflict to a certain extent, pandemic-times have brought it front and center: one feels both that one should do everything one can to get “back to normal,” and that one just needs to get through another day.

While there’s no solution to this problem that can be applied universally, it is worth considering what is more pressing on an individual basis. For instance, if things are feeling particularly tough on any given day and wearing some stylish pajamas will make a significant difference in how you’re feeling, then by all means go for it. If, however, you find that staying in your pajamas all day does not so much bring you comfort as it just feels like a normal part of pandemic living, then you might instead consider working towards improving your long-term well-being.

What Good Is Ignorance?

photograph of single person with flashlight standing in pitch darkness

Most of us think knowledge good, and ignorance bad. We justify this by pointing to all the practical goods that knowledge affords us: we want the knowledgeable surgeon and legislator, and not the ignorant ones. The consequences of having the latter are potentially dire. And so, from there, many people blithely assume ignorance is bad: if knowing is good, not knowing should be avoided.

What’s striking, though, is that people’s actions often don’t match their words: they will pay lip service to the value of knowledge, yet choose to remain ignorant despite having relatively easy access to know more or know better. The actions of these folks suggest that there is something they must value about ignorance — or, perhaps, they think gathering knowledge is more trouble than it’s worth. Part of the explanation here is no doubt that people are lazy — they are, to put the point more precisely, cognitive misers. However, we should be suspicious of one-factor explanations of complicated behavior. And knowledge looks like it is subject to the Goldilocks principle: we don’t want too little knowledge, but we don’t want too much knowledge either. Do you really want to know everything there is to know about the house you bought? Of course you don’t. While you want to know, say, whether the roof is in good condition, and the foundation is sound, you don’t care exactly how many specks of dust are in the attic. And just as we can oftentimes overstate the value of knowledge, we can understate the value of ignorance too: it turns out, there are some benefits to knowing less. We should canvass several of them.

First, consider the value of flow states: flow states are states of intense focus and concentration on the task at hand in the present moment; the merging of action and awareness, and the loss of self-reflection — what people often describe as ‘being in the zone.’ Flow states allow us to achieve amazing things whether in the corporate boardroom, the courthouse, or the basketball court, and many other tasks in-between. We may wonder how flow states are related to ignorance. Here we must understand what is required to be in a flow state: intensive and focused concentration on what one is doing in the present moment; the loss of awareness that one is engaging in a specific activity, among other things. When we’re in a flow state, while writing, say, we focus to the point of immersion into the writing process, inhibiting knowledge of what we’re doing. We do not focus on the keystrokes necessary to produce the words on the page or think too much about the next sentence to come. Athletes often describe how it feels to be in a flow state in similar terms.

Next, consider the value of privacy, where we value the ignorance of others. We often value privacy — others’ ignorance of our words and actions — in practice, even if we say things dismissive of it. When the issue of state surveillance is broached, some retort that they don’t fear the state knowing their business since they’ve done nothing wrong. The implication here is that only criminals, or folks up to no good, would value their privacy; whereas law-abiding citizens have nothing to fear from the state. Yet their actions belie their words: they password-protect their accounts, use blinds and curtains to prevent snooping into their homes, and so on. They, in other words, intuitively understand that privacy is valuable for leading a normal life having nothing to do with criminality. The fact that they would be reticent to forgo their privacy says volumes about what they really value, despite their expressed convictions to the contrary. We can think about the value of privacy by thinking about a society where privacy is absent. As George Orwell masterfully put the point:

“There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live—did live, from habit that became instinct—on the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.”

And finally, sometimes we (rightly) value our ignorance of other people, even those closest to us. Would you really want to know everything about people in your life — every thought, word, and deed? I’m guessing for most folks the answer is no. As the philosopher Daniel Dennett nicely explains:

“Speaking for myself, I am sure that I would go to some lengths to prevent myself from learning all the secrets of those around me—whom they found disgusting, whom they secretly adored, what crimes and follies they had committed, or thought I had committed! Learning all these facts would destroy my composure, cripple my attitude towards those around me.”

We thus have a few examples where ignorance — in different forms — is actually quite valuable, and where we wouldn’t want knowledge. This is some confirmation for the Goldilocks principle applied, not just to knowledge, but to ignorance too (stated in reverse): we don’t want too much ignorance, but we don’t want too little ignorance either.

Pandemic Life and Duties of Self-Improvement

image of woman meditating surrounded by distractions

I haven’t really done that much since last March. I mean, I’ve done the minimum number of activities required to keep myself alive, tried to stay connected to friends and family as much as possible, have tried to stay productive at work, and have stretched my legs outside every once in a while. But it feels like I could have done more. I have, for instance, been spending a lot of time doomscrolling, catching up on those shows in my Netflix queue, and losing all sense of time, but I could have been doing things that would have resulted in some degree of self-improvement. I’ve always wanted to learn how to play the guitar, and I need to improve my Danish (min dansk er forfærdelig — my Danish is terrible!), and there are just a whole bunch of other hobbies and skills that I’ve wanted to pick up or improve that I never seemed to have the time for in the past. But now I’ve had the time, and I haven’t done any of these things.

Does this make me a bad person? I’ve had the opportunity for self-improvement and didn’t really do much with it. Do I have a duty to use my time to try to be better?

These questions only arise in my case because the pandemic has, in fact, presented me with means and opportunity for self-improvement in the form of extra time. A lot of people have not had the luxury of facing this particular problem. Instead of having a lot more free time, many people have found what little free time they had prior to the pandemic disappear: perhaps you have obligations to take care of family members that you didn’t before, or you’ve gotten sick or had to help out with someone else who’s been sick, or work has become much more strenuous, or you’re out of work and have had more pressing matters. If you’ve been in this position, picking up the guitar is probably pretty low down on your list of priorities.

So it’s not as though I think I’m deserving of sympathy. But, if you’re like me and have found yourself with a lot of extra hours that you can’t keep track of, you might find yourself feeling guilty that the last nearly-full year could have been spent better. Is this guilt deserved?

There has, as one might suspect, been philosophical debate over whether one has any particular moral obligation towards self-improvement. Kant, for instance, argued that we do possess a certain kind of duty of this sort: it is not that we always, in every circumstance, must work towards improving ourselves, but we should definitely strive to when given the chance. Certainly, then, Kant would consider wasting a lot of time absentmindedly on the internet as something worthy of moral condemnation.

On the other hand, you might wonder if we really have any duties towards self-improvement at all: while I certainly have duties to help other people, we might think that when it comes to ourselves we are pretty much free to do what we want. So if I want to learn a lot of new skills and become a better version of myself, that’s great; but if I want to sit on the couch and do nothing all day, I should be free to do so without facing any kind of negative judgment. Telling me to do otherwise can feel overly moralizing, a kind of self-righteous judgment in which you seem to be saying that you know how I should live my life better than I do.

Something seems wrong about both of these positions. For instance, it’s hard to determine how we’re supposed to adhere to a duty of self-improvement: does this mean that whenever I have free time I have an obligation to, for instance, practice vocabulary in a language I want to learn? If so, this feels too demanding: I don’t seem to be doing anything wrong if I spend an evening now and then just doing nothing in particular. At the same time, if I spend all my free time doing nothing in particular, that feels like a waste. We sometimes make these kinds of judgments of other people: that it’s a shame they’re wasting their time, or their potential to develop their talents. And while this can sometimes feel moralizing, sometimes it also just feels apt: sometimes people really do waste too much time, and sometimes people’s efforts really could be put towards self-improvement.

Being in the midst of a global pandemic also muddies the water a bit. While one might have, on the one hand, a lot more time freed up by not having to commute to work, socialize, or really do anything else outside, one also has to deal with new challenges in the form of a dizzying amount of news, anxiety about the current and future state of the world, and a whole host of extra mental burdens that can quickly drain one’s energy and motivation to be a better version of oneself. If these matters become too distressing, then they can quickly become a burden that needs to be dealt with.

It’s not clear how best to use the extra free time afforded by a world that feels like it’s falling apart. Maybe, then, when it comes to duties to oneself these days, focusing on self-maintenance is more important than self-improvement: while it’s never a bad thing to work towards improvement, if you’re managing to check off the mandatory items on your to-do list in month 10 of a pandemic, then you’re doing pretty well. Meaningful self-improvement can wait for another day.

Zoom, Academic Freedom, and the No Endorsement Principle

photograph of empty auditorium hall

It was bound to be controversial: an American university sponsoring an event featuring Leila Khaled, a leader of the U.S.-designated terrorist group Popular Front for the Liberation of Palestine (PFLP), who participated in two hijackings in the early 1970s. But San Francisco State University’s September webinar has gained notoriety for something else: it was the first time that the commercial technology company Zoom censored an academic event. It would not be the last.

In November, faculty at the University of Hawaii and New York University organized webinars again featuring Khaled, ironically to protest the censoring of her September event. But Zoom deleted the links to these events as well.

Zoom has said that the webinars violated the company’s terms of service, which prohibit “engaging in or promoting acts on behalf of a terrorist organization or violent extremist groups.” However, it appears that the real explanation for Zoom’s actions is fear of possible legal exposure. Prior to the September event, the Jewish rights group Lawfare Project sent a letter to Zoom claiming that giving a platform to Khaled would violate a U.S. law prohibiting the provision of material support for terrorist groups. San Francisco State gave assurances to Zoom that Khaled was neither being compensated for her talk nor in any way representing the PFLP, but a 2009 Supreme Court decision appears to support Lawfare’s broad interpretation of the law. In any case, the Khaled incidents highlight the perils of higher education’s coronavirus-induced dependence upon private companies like Zoom, Facebook, and YouTube.

The response to Zoom’s actions from academia has been unequivocal denunciation on academic freedom grounds. San Francisco State’s president, Lynn Mahoney, released a statement affirming “the right of faculty to conduct their scholarship and teaching free of censorship.” The American Association of University Professors sent a letter to NYU’s president calling on him to make a statement “denouncing this action as a violation of academic freedom.” And John K. Wilson wrote on Academe magazine’s blog that “for those on the left who demand that tech companies censor speech they think are wrong or offensive, this is a chilling reminder that censorship is a dangerous weapon that can be turned against progressives.”

How do Zoom’s actions violate academic freedom? Fritz Machlup wrote that,

“Academic freedom consists in the absence of, or protection from, such restraints or pressures…as are designed to create in minds of academic scholars…fears and anxieties that may inhibit them from freely studying and investigating whatever they are interested in, and from freely discussing, teaching or publishing whatever opinions they have reached.”

On this view, academic freedom is not the same as free speech: instead of being the freedom to say anything you like, it is the freedom to determine what speech is valuable or acceptable to be taught or discussed in an academic context. By shutting down the Khaled events, the argument goes, Zoom violated academic freedom by usurping the role of faculty in determining what content is acceptable or valuable in that context.

While there is surely good reason for Zoom to respect the value of academic freedom, it is also understandable that it would prioritize avoiding legal exposure. As Steven Lubet writes, “as [a] publicly traded compan[y], with fiduciary duties to shareholders, [Zoom was]…playing it safe in a volatile and unprecedented situation.” Businesses will inevitably be little inclined to take to the ramparts to defend academic freedom, particularly as compared to institutions of higher education explicitly committed to that value and held accountable by their faculty for failing to uphold it. The relative reluctance of technology companies to defend academic freedom is one important reason why in-person instruction must remain the standard for higher education, at least post-COVID.

A less remarked upon but equally important principle underlying the objections to Zoom’s actions is that giving speakers an academic platform is not tantamount to endorsing or promoting their views. Call this the “no-endorsement” principle. It is this idea that underwrites the moral and, perhaps, legal justifiability of inviting former terrorists and other controversial figures to speak on campus. It was explicitly denied in a letter signed by over eighty-six Pro-Israel and Jewish organizations protesting SFSU’s September event. The letter rhetorically asks, “what if an invitation to speak to a class—in fact an entire event—is an endorsement of a point of view and a political cause?” As Wilson noted, if that’s true, then freedom of expression on campus will be destroyed: “if every speaker on a college campus is the endorsement of a point of view by the administration, then only positions endorsed by the administration are allowed.”

Quite recently, the philosopher Neil Levy has added some intellectual heft to the denial of the “no-endorsement” principle. Levy writes that “an invitation to speak at a university campus…is evidence that the speaker is credible; that she has an opinion deserving of a respectful hearing.” Levy argues that in some cases, this evidence can be misleading, and that “when we have good reason to think that the position advocated by a potential speaker is wrong, we have an epistemic reason in favor of no-platforming.” Levy makes a good point: inviting a speaker on campus means something — it sends a message that the university views the speaker as worth listening to. But Levy seems to conflate being worth listening to and being credible. Even views that are deeply wrong can be worth listening to for a variety of reasons. For example, they might contain a part of the truth while being mostly wrong; they might be highly relevant because they are espoused by important figures or groups or a large proportion of citizens; and they might be epistemically useful in presenting a compelling if wrongheaded challenge to true views. For these reasons, the class of views that are worth listening to is surely much larger than the class of true views. Thus, it is not necessarily misleading to invite onto campus a speaker whose views one knows to be wrong.

The use of Zoom and similar technology in higher education contexts is unlikely to completely cease following the post-COVID return of some semblance of normalcy. But the Khaled incidents should make us think carefully about using communications technology provided by private companies to deliver education. In addition, the notion that giving a person a platform is not tantamount to endorsing their views must be defended against those who wish to limit academic discourse to those views held to be acceptable by university administrators.

Is the Future of News a Moral Question?

closeup photograph of stack of old newspapers


In the face of increasing calls to regulate social media over monopolization, privacy concerns, and the spread of misinformation, Australia might become the world’s first country to force companies like Google and Facebook to pay to license Australian news articles featured in those sites’ news feeds. The move comes after years of declining revenue for newspapers around the world as people increasingly got their news online instead of in print. But is there a moral imperative to make sure that local journalism is sustainable, and if so, what means of achieving this are appropriate?

At a time when misinformation and conspiracy theories have reached a fever pitch, the state of news publication is in dire straits. From 2004 to 2014, revenue for U.S. newspapers declined by over 40 billion dollars. Because of this, several local newspapers have closed and news staff have been cut. In 2019 it was reported that 1 in 5 papers had closed in the United States. COVID has not helped with the situation. In 2020 ad revenue was down 42% from the previous year. Despite this drop, the revenue raised from digital advertising has grown exponentially and estimates suggest that as much as 80% of online news is derived from newspapers. Unfortunately, most of that ad revenue goes to companies like Facebook and Google rather than news publishers themselves.

This situation is not unique to the United States. Newspapers have been in decline in places like the United Kingdom, Canada, Australia, certain European nations, and more. Canadian newspapers recently published a blank front page to highlight the disappearance of news. In Australia, for example, circulation has fallen by over two-thirds since 2003. Last year over 100 newspapers closed down. This is part of the reason Australia has become the first nation to pursue legislation requiring companies like Google and Facebook to pay for the news that they use in their feeds. Currently for every $100 spent on advertising, Google takes $53 and Facebook receives $28. Under the proposed legislation, such companies would be forced to negotiate commercial deals to license the use of their news material. If they refuse to negotiate, they face stiff penalties of potentially 10 million dollars or more.

The legislation has been strongly opposed by Google and Facebook, who have employed tactics like lobbying legislators and starting campaigns on YouTube to get content creators to oppose the bill. They have also threatened to block Australians from Google services, telling the public, “The way Aussies search everyday on Google is at risk from new government regulation.” (Meanwhile, they have recently been taking some steps to pay for news.) Facebook has also suggested that they will pull out of Australia; however, the government has stated that it will not “respond to threats” and has said that paying for news will be “inevitable.” Australia is not the only jurisdiction that is moving against Google and Facebook to protect local news. Just recently, several newspapers in West Virginia filed a lawsuit against Google and Facebook for anti-competitive practices relating to advertising, claiming that they “have monopolized the digital advertising market, thereby strangling a primary source of revenue for newspapers.”

This issue takes on a moral salience when we consider the relative importance of local journalism. For example, people who live in areas where the local news has disappeared have reported only hearing about big things like murders, while stories on local government, business, and community issues go unheard. Likewise, “As newsrooms cut their statehouse bureaus, they also reduced coverage of complex issues like utility and insurance regulation, giving them intermittent and superficial attention.” Without such news it becomes more difficult to deal with corruption and there is less accountability. Empirical research suggests that local journalism can help reduce corruption, increase responsiveness of elected officials, and encourage political participation. The importance of local journalism has been sufficient to label the decline of newspapers a threat to democracy. Indeed, studies show that when people rely more on national news and social media for information, they are more vulnerable to misinformation and manipulation.

Other nations, such as Canada, have taken a different approach by having the federal government subsidize local news across the country with over half a billion dollars in funding. Critics, however, argue that declining newspapers are a matter of old models failing to adapt to new market forces. While many newspapers have tried to embrace the digital age, these steps can create problems. For example, some news outlets have tried to entice readers with a larger social media presence and by making the news more personalized. But if journalists are more focused on getting clicks, they may be less likely to cover important news that doesn’t already demand attention. Personalizing news also plays to our biases, making it less likely that we will encounter different perspectives, and more likely that we will create a filter bubble that will echo our own beliefs back to us. This can make political polarization worse. Indeed, a good example of this can be found in the current shift amongst the political right in the U.S. away from Fox News to organizations like NewsMax and One America News because they reflect a narrower and narrower set of perspectives.

Google and Facebook – and others opposed to legislation like that proposed in Australia – argue that both sides benefit from the status quo. They argue that their platforms bring readers to newspapers. Google, for example, claims that they facilitated 3.44 billion visits to Australian news in 2018. And both Google and Facebook emphasize that news provides limited economic value to the platforms. However, this seems like a strange argument to make; if the news doesn’t matter much for your business, why not simply remove the news feeds from Google rather than wage a costly legal and PR battle?

Professor of Media Studies Amanda Lotz argues that the primary business of commercial news media has been to attract an audience for advertisers. This worked so long as newspapers were one of the only means to access information. With the internet this is no longer the case; “digital platforms are just more effective vehicles for advertisers seeking to buy consumers’ attention.” She argues that the news needs to get out of the advertising business; save journalism rather than the publishers. One way to do this would be by strengthening independent public broadcasters or by providing incentives to non-profit journalism organizations. This raises an important moral question for society: has news simply become a necessary public good like firefighting and policing, one that is not subject to the free market? If so, then the future of local news may be a moral question of whether news has any business in business.

The Ethics of Cringe

photograph of upset audience members in a movie theater

Much has already been said about the ethical morass of “cancel culture,” but very rarely is the internet phenomenon of “cringe culture” given serious intellectual attention. Cringe culture is, on the surface, very straightforward. People deliberately seek out content online that makes them cringe, that gives them a visceral reaction of discomfort and secondhand embarrassment. This content can be screenshotted and shared on the subreddit r/cringe – a gallery of humiliation that currently boasts over one million members – or slotted into a “cringe compilation” video on YouTube – videos which consistently pull in hundreds of thousands of views. While cringe content was a more visible component of online culture in the mid-2010s, it certainly hasn’t disappeared. Google Trends shows that the frequency of searches for “cringe” has been surprisingly steady since it peaked in 2016.

Cancel culture attempts to uphold morality in online spaces; when someone says or does something racist, sexist, or otherwise problematic, they are publicly shamed and their reputation in the community is blemished. Cringing at someone, on the other hand, just feels like an evolved form of middle-school bullying. But the two practices aren’t completely worlds apart; both attract large audiences, and both involve a spectacle of public humiliation (justified or otherwise). Despite their similarities, cringe culture is cancel culture’s more vulgar twin, the trashy daytime reality show to cancel culture’s CNN.

It’s worth examining why voluntarily cringing at strangers online is so popular, and whether or not this activity can be morally justified. It’s also important to further specify what exactly is meant by “cringe,” because this genre of content is surprisingly specific; something can be deeply embarrassing without being categorized as cringey. As Melissa Dahl says in her book Cringeworthy: A Theory of Awkwardness, we cringe at ourselves whenever “we’re yanked out of our own perspective, and we can suddenly see ourselves from someone else’s point of view.” When we cringe at ourselves, the experience involves a sudden onset of self-awareness, the realization that our internal image of ourselves is different from how others perceive us.

But that isn’t exactly the kind of content you find on r/cringe. The three most popular posts of all time involve Trump blundering his way through speeches and interview questions (like his inability to name his favorite book, or an awkward attempt to plant a kiss on a clearly unwilling young woman). Another top post is a video of a group of white girls joyously singing along to a song containing the n-word. The camera pans to the only black man in the room, who looks deeply uncomfortable. (To be clear, the girls are framed as the cringey ones here, not the man.) None of these subjects, neither the group of girls nor the former president, is visibly experiencing shame, or a sudden realization of how they are being perceived. Our nation’s perverse fascination with Trump stems, in part, from his famous inability to care about or be embarrassed by what others think of him, apparent in the almost cartoonish bravado he displays in every speech and television appearance. While plenty of people post their own embarrassing teenage memories on r/cringe, the most popular content focuses on those who are unable to feel shame, though we feel that they should.

In his book Humiliation, Wayne Koestenbaum explains that “Humiliation involves a triangle: (1) the victim, (2) the abuser, and (3) the witness. The humiliated person may also behold her own degradation, or imagine someone else, in the future, watching it or hearing about it. The scene’s horror — its energy, its electricity — involves the presence of three.” But in these scenarios, we can’t imagine the cringeworthy subjects beholding their own flaws, because they seem oblivious to them. Their lack of self-awareness, coupled with the general immorality of their actions, almost allows us to laugh at their embarrassment without feeling cruel. They’re getting their comeuppance. Furthermore, there is no clear “abuser” or humiliator in these situations; the singing girls have humiliated themselves, not the person filming them. In the absence of a humiliator, the witnesses (or viewers) assert themselves more strongly in the situation, further expanding the distance between us and the humiliated subject. Most of us are repelled by humiliating moments, especially our own, but when the triangle is distorted, humiliation can exert a magnetic force.

But it’s also worth noting that what the internet considers cringey has changed wildly since its heyday in 2016. Back then, the targets of cringe compilation videos were clearly autistic or otherwise on-the-spectrum children, “angry feminists” (search “feminism cringe” to see many dishearteningly popular compilation videos), the poor, people of color who behaved in a “ghetto” way, and fat people. Those with social and political capital were completely absent. Anything that fell outside of white middle-class neurotypical values was considered embarrassing simply for existing.

There’s something self-indulgent, even soothing, about watching other people fail in spectacular ways. We may feel guilty about it, but we take comfort in knowing at least we aren’t the ones in the pillory. Public shaming reinforces which behaviors (racism, political chauvinism) are socially unacceptable, and reminds us (in more mundane cases) that no one is perfect, and that everyone does embarrassing things. But when it becomes a spectacle, as it often does online, cringe content can be a kind of moral junk food. It allows us to feel a burst of superiority, and demands no reflection in return. When we only focus on spectacularly tone-deaf examples of racism, we can easily lose sight of the more insidious forms of social inequality. Framing something as cringey allows us to distance ourselves from it, to disown it, which, as the earliest phase of cringe content reveals, has the potential to do more harm than good.

The Cost of Free Speech

cartoon image of excited speech bubble

As 2021 got underway, and the United States was dealing with the fallout from the January 6 insurrection, a much smaller-scale political controversy was blowing through Australia’s sweltering summer. The prime minister was on holiday, his deputy Michael McCormack was in charge, and Craig Kelly, an outspoken member of the leading party who is a notorious climate skeptic, alternative COVID-19 treatment theorist, and vaccine doubter, had a hold of the mic and was getting plenty of attention proffering conspiracy-style views on his social media accounts.

Australia has done exceptionally well in keeping the global coronavirus pandemic at bay with strict lockdowns in response to outbreaks, effective contact tracing, and strict quarantine rules for all international arrivals. The country of 25 million has recorded fewer than 1,000 deaths since the pandemic hit last March. Though the community is generally willing to comply with expert public health advice, there has been some dissent from conspiracy theorists and anti-vaxxers.

As Australia began preparing to roll out its COVID-19 vaccination program, Craig Kelly, that zealous critic of scientific evidence, was hard at work on his personal Facebook page posting in favor of unproven treatments and against vaccines and other public health measures, such as the wearing of masks.

Kelly has a large social media following, and public health officials in Australia, including the Australian Medical Association and the chief medical officer, pushed back hard, expressing concern that his views pose a danger to public health, and calling on senior government figures – the acting Prime Minister Michael McCormack and the Health Minister Greg Hunt – to condemn those views and rebuke Kelly. But no rebuke came. Instead, McCormack had this to say:

“Facts are sometimes contentious and what you might think is right – somebody else might think is completely untrue – that is part of living in a democratic country… I don’t think we should have that sort of censorship in our society.”

Notice how familiar this type of response is becoming: when politicians or pundits are called out for expressing views that are misleading, offensive or wrong, there is a tendency to claim a free speech defense. Notice too that McCormack makes specific reference here to what living in a democratic country involves. It is of course true that democratic legitimacy is one of the functions of free speech, but does free speech include freedom to lie, confabulate, or spread misinformation? And how do these things affect democracy? Can we untangle freedom of speech, as a fundamentally necessary democratic principle, from demagoguery?

Let’s look in a bit more detail at McCormack’s statement, which is problematic for a number of reasons, chief among them its invocation of freedom of speech in defense of views that ought to be rejected because they are wrong, harmful, and generally indefensible. This is a sly move, given the high importance citizens of free, democratic countries place on the right to free speech. It is also a tactic which often has little to do with defending this important right and more to do with evading a subject or shutting down an argument – contra free speech.

As a point of logic, rebuking Kelly for proffering dangerous falsehoods is not censorship. If McCormack’s assertion is that Kelly is free to make these claims then, on that argument, McCormack is free to condemn them.

Furthermore, McCormack’s assertion that facts are contentious appears to imply an ‘everyone is entitled to their own opinion’ kind of defense, which bears a strong resemblance to the free speech defense. But it simply isn’t right. In matters of fact, for example matters of science, as opposed to matters of taste, you are not entitled to your opinion; you are entitled to what you can make a case for, and what you can support through reasoned argument, true premises, and solid inferences. You are not entitled to an opinion that is demonstrably false. Both logic and good faith hold you to a standard which requires you to recognize when a belief is indefensible. Democratic legitimacy depends as much on that as it does on freedom of speech.

Following McCormack’s comments, as public and medical professional pushback grew, no senior member of Kelly’s government – not the Federal Health Minister, nor the Prime Minister himself (now back from his holiday) – would bring Kelly into line. Finally, it was Facebook whose moderators intervened, and Kelly was required to remove one post proffering COVID-19 misinformation and conspiracy-style rhetoric. Kelly did so, saying: “I have since removed the post… under protest.” He then gave this ominous pronouncement: “We have entered a very dark time in human history when scientific debate and freedom of speech is being suppressed.”

Perhaps Kelly is right that we have ‘entered a dark time in human history’ (if the present can be said to be history) – but not for the reasons he thinks. When we see the right of free speech being used again and again to evade responsibility and excuse lies and falsehoods, it is time to take stock, and look closely at what is at stake in our fundamental beliefs about freedom, democracy, and truth.

One reason this use of the free speech defense is so pernicious is that most people living in open, democratic societies will agree on the importance of free speech and hold it in high regard. This invocation of freedom of speech seems to trade on the hearer not noticing that something they value highly is being used to degrade other things of value.

International law recognizes and protects the right to freedom of speech, which is enshrined in Article 19 of the UN Declaration of Human Rights. The antithesis of freedom of speech is censorship: the intolerance of opposing views. Censorship happens, politically, where the establishment fears or dislikes opposition, or where governments want to suppress information about their activities.

Democratic legitimacy is one of the most important functions of free speech. And free speech is one of the most important mechanisms of democratic legitimacy. Real democratic engagement requires the free exchange of ideas, where forms of dissent are not censored, and where differing or opposing views can be aired, discussed, and considered. In this way the citizenry can be engaged, well-informed, and part of the political process.

Even though the argument from democratic legitimacy holds free speech in high regard, very few people take an absolutist position on freedom of speech. Free speech does not imply a free-for-all. Therefore, protection of free speech always involves judgments on when and why speech might justifiably be regulated or curtailed. The answer to the question of what kind of speech causes harm and is justifiably restricted hinges on the extent to which freedom of speech is valued in itself. In liberal societies its intrinsic value is usually held to be high. If freedom of speech is curtailed, its limits will be decided around the protection of other, countervailing values, like human dignity and equality. In this sense there is a (sometimes unacknowledged) weighing-up of the value of freedom of speech relative to other values. If freedom of speech is, in itself, very highly valued, then other values may be subordinated. It is upon this scale that the right to freedom of speech is, for some, synonymous with the right to give offense.

A quick internet search of “free speech quotes” is instructive here, serving up such ideas as: “free speech is meaningless unless it tolerates the speech that we hate,” from Henry Hyde; “Free speech is meant to protect unpopular speech. Popular speech, by definition, needs no protection,” from Neil Boortz; and “Freedom of Speech includes the freedom to offend,” courtesy of Brad Thor. Add to these offerings the infamous contribution of Senator George Brandis, Australia’s erstwhile attorney general, who, in 2014, while making an argument for winding back Australia’s anti-racial discrimination laws, put it to the parliament that “People do have a right to be bigots, you know.”

All this illustrates which values go down in ranking when free speech goes up. If we take freedom of speech to protect our right to be bigots, that points to something we value. That is, it suggests we value our right to be bigots more than we value equality or human dignity; that we would prefer to be allowed to vilify than to protect people from vilification.

Perhaps we will decide that we do have a right, by virtue of the right to freedom of speech, to be bigots. If that is so, it certainly sheds light on the ethical problems that can arise from constructing our basic moral bearings around defending our rights at the expense of other ways of thinking about what is important in our moral lives. Perhaps we might orient our ethical thinking more towards questions about what we owe one another morally rather than what we can lay claim to. We might, for example, ask ourselves whether, rather than uncritically digging in about our rights, it would be better to reflect on our values in this space.

It comes back to the question of why freedom of speech is so important. If free speech, according to the democratic legitimacy argument, is so important because it allows us to better hold power to account, allows citizens to make informed decisions and engage in reasoned, open debate, then it does not make sense to defend or promote speech which itself undermines these goals — speech like Craig Kelly’s COVID-19 misinformation posts, or any picking from the multiverse of conspiracy theories currently working their way into the marrow of certain sections of society. Americans have recently experienced the very hard consequences of lies and misinformation on democratic society in the twin crises of the January 6 insurrection and the runaway COVID-19 pandemic.

In conclusion, we don’t seem to be paying close enough attention to the way that freedom of speech is being used to justify lies and to push back against demands for accountability from the powerful and privileged. If we can untangle freedom of speech as a fundamentally necessary democratic principle from demagoguery, we must do so by directing more critical attention to how it is invoked and what is at stake when freedom of speech is taken to mean freedom to lie or to further a pernicious ideology. Yes, freedom of speech is fundamentally important, and we should protect it because of its central role in the democratic process. At the same time, truth matters and lies have real consequences. When we stand up for freedom of speech, we should be thinking broadly in terms of why it is valuable, what role it serves, and what our responsibilities are in respect of each other. A broader discussion about our values will serve us better than a narrow focus on rights, no matter what they cost us.

Ethical Concerns About Space Mining

image of asteroid in deep space

It was a big news day in Canada last month as it was announced that, thanks to an agreement on the Artemis program, a Canadian astronaut will join the United States on the first crewed mission to the moon, scheduled to take place in just a few years. While the Canadian government was happy to note that this will make Canada only the second country to have an astronaut in deep space, the efforts to return to the moon are not driven only by the interests of science and discovery. This may be only a first step towards mining in space, and that prospect raises ethical concerns.

The Artemis program followed the signing of Space Policy Directive 1, which calls for the United States to return humans to the moon for “long-term exploration and utilization.” Since then, in collaboration with private companies and international partners, crewed missions have been scheduled for just under three years from now. While much of the program is scientific in nature and is being led by NASA, part of the program includes the Artemis Accords, which have been signed by several nations and which outline some guidelines for the mining of space resources. According to the accords, the signatories affirmed various guidelines for the “extraction and utilization of space resources.”

According to the Outer Space Treaty of 1967, no nation may claim ownership of the moon or another celestial body; space shall be free for exploration and use by all signatories. The Moon Treaty of 1979, drafted by the United Nations, states that the Moon is the common heritage of humankind and that harvesting its resources is forbidden except by an international regime. It also bans any ownership of extraterrestrial property by a private organization. However, the United States, Russia, and China have not ratified it. Instead, in 2015 the United States passed the Commercial Space Launch Competitiveness Act, which explicitly allows private corporations to engage in commercial exploitation of space resources even while avoiding asserting sovereignty over a celestial body. In April 2020, Trump signed an executive order encouraging space mining, noting that “it shall be the policy of the United States to encourage international support for the public and private recovery and use of resources in outer space.”

The moon, in addition to several other space objects, contains minerals which can be extremely valuable in space and on Earth. Some of these are difficult to get on Earth and are located mostly in places like China, Russia, and Congo. For example, the moon is estimated to have more helium-3 than Earth, a material with several uses, including possible use as fuel for nuclear fusion. The moon also contains lithium, titanium, aluminum, cobalt, silicon, and other important minerals. Materials like these can be useful for building everything from medical equipment to electric cars, which can help with environmental problems. In space, such materials can be used for making things like rocket fuel and solar panels.

There are several reasons why such mining can be beneficial. As mentioned, there are plenty of materials that could be enormously helpful for further space exploration. Bringing material like rocket fuel from Earth is expensive, and while costs have dropped in recent years, it will be cheaper if materials can be sourced from space itself for projects like the planned Lunar Gateway and for missions to Mars. A big reason why the costs of space travel have come down is the investment of private companies. Greater private investment in mining may make the costs even cheaper over time, and this may allow for more efforts at scientific exploration and experimentation. It can also be of benefit on Earth, as many of these materials can be used in medical technology.

Another important reason for the Artemis Accords is that the laws and treaties governing space were outdated. The nations that have signed on believe that it is a necessary step to establish guidelines for lunar exploration and to avoid conflict between different parties from Earth. The accords stress that while no one owns the Moon, parties that send equipment do own it and are liable for any damage they may cause. Setting out clear expectations for commercial interests now may help facilitate standards regarding issues like waste and prevent the moon from becoming an industrial dump.

On the other hand, there are several concerns that could arise with mining either the lunar surface or other celestial bodies. For example, there are several legal and political concerns. Russia and China are not part of the Artemis Accords. Russia has been critical of the accords and the Artemis program for being too U.S.-centric and for being a step back from the Outer Space Treaty whose central provision is that all nations of the world should benefit from space exploration. Indeed, one main criticism is that this is not being governed by the United Nations. A recent article for the journal Science argues “NASA’s actions must be seen for what they are—a concerted, strategic effort to redirect international space cooperation in favor of short-term U.S. commercial interests, with little regard for the risks involved.”

There are also the larger ethical concerns about who owns space, whether it should be mined, and what kinds of problems this could create. For example, in their article “How much of the Solar System should we leave as wilderness?” Martin Elvis and Tony Milligan suggest that we should already be concerned about “super-exploitation,” where a growing space economy could lead to exhaustion of the finite resources of the solar system “surprisingly soon.” They note, “Approaching a point of super-exploitation is something that we ought to be concerned about if we assume that we ought to be concerned, at this point in time and in action-guiding ways, not only about ourselves but about future generations of humans.” They suggest that so long as economic growth is exponential, we should limit ourselves to 1/8th of the exploitable materials of the solar system, with the rest being left “wild.”
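
The force of that one-eighth threshold lies in the arithmetic of exponential growth: once an eighth of a finite stock has been consumed, only three doubling periods separate us from total exhaustion (1/8 → 1/4 → 1/2 → all of it). A minimal sketch of that arithmetic — the function names and the illustrative 3.5% annual growth rate are my assumptions for the example, not figures drawn from the article:

```python
import math

def doublings_remaining(fraction_used):
    """Doublings of cumulative use left before a finite stock is exhausted."""
    return math.log2(1 / fraction_used)

def years_remaining(fraction_used, annual_growth):
    """Years to exhaustion, on the simplification that cumulative use
    grows exponentially at the given annual rate."""
    doubling_time = math.log(2) / math.log(1 + annual_growth)
    return doublings_remaining(fraction_used) * doubling_time

print(doublings_remaining(1 / 8))            # 3.0 doublings left
print(round(years_remaining(1 / 8, 0.035)))  # 60 years at 3.5% growth
```

In other words, a seemingly generous "leave seven-eighths wild" rule buys only about six decades of breathing room once growth compounds, which is precisely why Elvis and Milligan urge concern well before the limit is reached.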

The economic concerns may not stop there either. Efforts to mine material on the moon may negatively impact the national economies that rely on selling those resources. There is also the potential worry about how much could be brought back. A single platinum asteroid has been valued at possibly 50 billion dollars. One-eighth of the iron in the Asteroid Belt amounts to more than a million times all of Earth’s current reserves. Such dramatic changes in resource extraction and refinement have the potential to dramatically harm an economy as well.

There is also a concern about possible future militarization. While current treaties and laws regarding space prohibit many forms of military operations, if mining begins and not everyone agrees to the same rules, then future conflicts in space may require militarization to support commercial interests. The United States Space Force Guardians may be called upon to secure U.S. interests in space. Lunar mining, and the mining of celestial objects in general, will carry with it a large host of ethical problems and concerns, and these will likely become better known to us sooner rather than later.

Under Discussion: Global Warming and the Right to Risk Wrong

photograph of industrial chimney stacks polluting air over natural landscape

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Combating Climate Change.

There is an increasing call to use climate engineering as a solution to global warming. Rather than simply try to decarbonize the economy, some think we should work to develop new technology that will allow us to prevent global warming even while fossil fuels are used. Some think we can use carbon sequestration to leach carbon out of the atmosphere even as we continue to burn fossil fuels. Others think that even if carbon continues to build up in the atmosphere, we can counteract the greenhouse effect by reflecting more sunlight away from Earth. (For a great introduction to the questions surrounding climate engineering, check out the Pushkin podcast Brave New Planet.)

Some support the use of climate engineering because they think the global coordination required for decarbonization is politically unfeasible; some because they think global warming is already too far gone and we need to buy time; and some because they think the real costs to decarbonization are too high.

There are, of course, also compelling objections to climate engineering. In particular, many worry about the inevitable unintended consequences of messing with the environment even more to fix our initial mistake (remember the old lady who swallowed a fly?). (Though for myself, I think it unlikely that the negative impacts of carefully studied intentional environmental intervention are as bad as the uncoordinated and unintended effects of carbon industrialization.)

However, I don’t want to spend this post investigating the prospects of climate engineering. I’m not nearly expert enough to do that. Instead, I want to talk about an odd sort of moral obstacle to climate engineering.

Here is a simple question: who has the right to run a massive program to change the earth’s climate? Would it be right, for instance, for the United States to unilaterally decide that the risks of global warming are great enough to justify a massive cloud seeding project? Any such decision will affect every other country, but of course the citizens of those other countries do not get a vote in U.S. politics. (You might worry, then, that this is profoundly undemocratic, because those deeply affected by a policy should have a say in its shaping; for an overview of these questions of democracy, see Robert Goodin’s paper on the ‘all affected interests’ principle.) So perhaps the United Nations should make the decision? But, of course, many nations are not voting members of the UN, nor is the UN a particularly democratic institution.

Even if geoengineering is the right solution to climate change, it is not altogether clear who should make that final determination. If I, Marshall, personally decide climate engineering is the way to go, and also come into a lot of money, do I then have the moral right to change the climate for everyone else (even if I’m trying to counteract what was already a negative artificial change)? Or, to make the scenario more realistic, if the Bill and Melinda Gates Foundation decided it was time to act unilaterally, would it be right for them to do so?

Now, here is where things get puzzling. How could we have had the power to mess up the environment, and yet not be morally empowered to fix it?

There are two possibilities here. One, it might be that countries were acting wrongly when they messed up the environment. Perhaps we are all blameworthy for the amount we have contributed to global warming; but just because we did damage does not mean we are thereby entitled to find our own way to clean it up.

Second, it might be that actually many did not act wrongly in using carbon. There is something of a collective action problem here. Perhaps each person only produced a small amount of carbon, such that no one person really impacted the climate of anyone else.  It is only in aggregate that the bad effect occurred. However, we cannot fix the climate in a similarly disaggregated way. It might be that each of us could plant some trees, but it would require systematic and careful coordination to adopt a more aggressive climate engineering strategy (and no one has the right to act as the global enforcement coordinator).

Global warming, then, is an instance of an annoying type of moral problem. Sometimes we do things which could be fixed, but which we are not morally empowered to fix. Sometimes we say something cruel and want to apologize, but the person we hurt wants nothing to do with us and we have no right to impose on them even to apologize. Sometimes we spill stuff on a carpet at a party, and the host waves us out of the way and insists that they will fix the problem. Sometimes we do wrong things, things we’d like to make up for, but which we cannot make up for acting on our own. While often unfortunate, it remains a fascinating problem.

Under Discussion: Conspiracy Theories, Climate Change, and the Crisis of Trust

photograph of several snowballs at the bottom of hill with tracks trailing behind

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: Combating Climate Change.

On February 26th, 2015, Republican Senator James Inhofe carried a plastic bag filled with snow into the Capitol Building; in his now-infamous “Snowball Speech” criticizing the Democrats for their focus on climate policy, the senior senator from Oklahoma said “In case we have forgotten — because we keep hearing that 2014 has been the warmest year on record — I ask the chair: you know what this is [he holds up a softball-sized snowball]? It’s a snowball that’s just from outside here. So, it’s very, very cold out.”

Of course, Inhofe’s snowball disproved the reality of climate change no more than a heat wave in January disproves the reality of winter (at least for now). But that didn’t stop Inhofe from chuckling through his hasty generalization of what’s proper to conclude about historical trends in temperature and other metrics from a random snow sample he happened to see on his way to work. The difference between climate and weather is a basic distinction that Inhofe simply ignored for the sake of a quip.

Given Inhofe’s career of expressed skepticism towards the science supporting climate change (something about which Inhofe himself said he “thought it must be true until I found out what it would cost”), we might think this was just a political stunt. However, it was one that resonates with a not-insignificant chunk of our society. While popular consensus still technically leans towards recognizing the threat posed by anthropogenic climate change (something about which expert consensus overwhelmingly agrees), there remains a stubborn minority of Americans who are convinced (to varying degrees and for various reasons) that climate change either does not warrant significant political or financial attention or that it is simply a hoax — just one more example of so-called “fake news.”

The Prindle Post has spent the past week exploring the complicated issue of how to address climate change — a thorny problem that interweaves questions of political risk, economic uncertainty, and genuine danger for both present and future generations. But the hope of successfully coordinating our efforts in the ways necessary to shift current climate trends seems particularly unrealistic when climate change deniers (who make up between 10 and 15% of the population) continue to spin conspiracy theories about the scientists, the science, and the “real” schemes secretly motivating both.

For example, in a video created last year by the conservative media production company PragerU, Alex Epstein (author of The Moral Case for Fossil Fuels) argues that “climate change alarmism” exaggerates the threat of the “genuine” science while intimating that such distortions are actually motivated by a desire to justify “an unprecedented increase in government power.” For another, prior to taking office, former President Donald Trump claimed that global warming was “created by and for the Chinese in order to make U.S. manufacturing non-competitive” — a sentiment he echoed during his first presidential campaign when he explicitly called it a “hoax.” And as recently as this month, Fox News host Sean Hannity criticized President Joe Biden’s aggressive climate plan as something designed to benefit “hostile [foreign] regimes”: “Mark my words,” said Hannity, “this will not end well.” In different ways, each of these suggests that the real story about climate change is some terrible secret (often involving corrupt or otherwise evil agents), so the “official” story (about how human activity has provoked wildly unprecedented global temperature shifts) should be doubted.

At least some forms of climate change denial are easy to explain, such as ExxonMobil’s well-documented, decades-long disinformation campaign about the evidence for a link between human activity (in particular, activity related to things like carbon emissions) and global temperatures; given that ExxonMobil’s nature as an energy company depends on carbon-emitting practices, it has always had good reason to protect its operations by deceiving the public about matters of scientific fact. In a similar way, politicians hungry for votes can use the rhetoric of climate skepticism to signal to their supporters in return for political capital; when Ted Cruz said recently that the Biden administration’s decision to rejoin the Paris Climate Agreement (PCA) prioritizes the “views of the citizens of Paris” over the “jobs of the citizens of Pittsburgh,” the junior senator from Texas was clearly more concerned about scoring partisan points than accurately representing the nature of the PCA (which, for example, received no substantive input from the people of Paris).

But conspiracy theories about climate change — like conspiracy theories about anything — don’t require elite figures like Cruz or Hannity to be maintained (however helpful celebrity endorsements might be); much of their viability stems from the naturally enjoyable experience of the cognitive processes that underlie conspiratorial thinking. For example, in his book Conspiracy Theories, Quassim Cassam explains how the story-like nature of conspiracy theories (especially grandiose ones that posit particularly complicated connections or conclusions) provides a kind of cognitive pleasure for the person who entertains them; as he says towards the end of chapter two, conspiracy theories “invest random events with a deeper significance, which they wouldn’t otherwise have” in a way that can satisfy apophenic desires of all stripes. Moreover, conspiracy theories allow the conspiracy theorist to imagine themselves as superior to others, either for cleverly figuring out a puzzling truth or for being a hero “who doggedly takes on the forces of the deep state or the new world order in the interests of making sure that the public knows what’s really going on beneath the surface.” The ease with which we can access and disseminate information online only exacerbates this problem (for just one example: consider the recent spread of the QAnon slogan #SaveTheChildren).

Similarly, Tom Stafford discusses the biases at play when we take the time to think through things for ourselves (or when we “do our own research” about an already much-researched topic); at the end of that process, we might well be loath to give up our conclusions because “we value the effort we put in to gathering information” and “enjoy the feelings of mastery” that result from insight (even if that “insight” is targeting nothing true). In short: if you build it yourself, you’re more apt to experience feelings of loss aversion about it — and this apparently applies to mental states or beliefs just as much as to other things in the world. Furthermore, given the web of suspicion about many different agencies, studies, scientists, and data points that is required to maintain doubts about something like climate change, Stafford’s “epistemic IKEA effect” seems useful for explaining not only the phenomenon of climate change skepticism, but how climate skeptics are more likely than most to believe in conspiracy theories about other topics as well.

So, importantly, contrary to the stereotypical image, conspiracy theorists are not just half-crazed hermits with walls of photographs connected by string; careful thought, reasoned argument, and even the citation of evidence are common elements of a conspiracy theorist’s case for their position — the problem is simply that they’re applying those tools towards objectively invalid ends. Sometimes, conspiracy theorists (such as those who believe that JFK, Princess Diana, or Jeffrey Epstein were killed by various complicated networks of culprits) might be relatively harmless. But when conspiracy theories have political consequences, such as in the case of climate change denial, they have ethical consequences as well.

Of course, what to do about conspiracy theories regarding climate change is far from clear. Although various proposals have been put forth for how to deal with conspiracy theories in general, researchers currently seem to agree mainly on one practical thing: straightforward confrontation of conspiracy theorists’ beliefs is almost certainly a bad move. An attempt to debunk an interlocutor, particularly in public, will (perhaps understandably) tend to trigger a backfire effect and simply provoke them into a defensive posture, rather than preserve the common ground of trust from which conversations can proceed. While some might find the sarcastic ridiculing of climate deniers entertaining, those jokes also feed a standard component of the kind of echo chambers that fuel conspiratorial thinking: distrust of outsiders who believe things that contradict the conspiracy theory.

In his work on echo chambers, C. Thi Nguyen has highlighted the role of trust for breaking through the epistemic barriers around conspiracy theories that end up fueling (and being fueled by) political and other social divisions. Though we often take it for granted, trusting strangers to tell us the truth is a fundamental component of living in and contributing to the collective project of society together. In a very real way, our collective scientific processes — and, hopefully, the governmental policies based on them — depend on the presumption that the people involved are trustworthy. But by rejecting that starting point, conspiracy theories (about climate change or anything else) reject one of the fundamental elements that makes public cooperation possible.

This crisis of trust cannot be fixed simply by shoehorning legislation through committees, regulating social media posts, encouraging companies to deploy trendy, green-themed advertising campaigns, or shaming relatives who roll their eyes at the near-unanimous consensus of climate scientists — indeed, however commendable (and, in some cases, necessary) such tactics may be for quickly slowing climate change, they also encourage the continued entrenchment of climate skepticism and denial. If we wish to make comprehensive headway on tackling climate change together, we must at least pragmatically attend to even the most anti-science perspectives for the sake of promoting respectful discourse that can help repair the broken relationships which have rent our social fabric into its hyperpartisan state. Such a project might even serve to mitigate the effects of other echo chambers along the way; an ebbing tide calms all conspiracy theories, as it were.

How to implement such a policy at an effective scale is a problem for a different expert (what would a “trust-promotion campaign” even look like?). In the end, destabilizing echo chambers might well be the kind of thing that governmental (or otherwise “official”) action can’t accomplish: the respectful discourse required to manifest what Nguyen calls a “social-epistemic reboot” might well fall to individuals building relationships with other individuals, enriching the soil of our social lives so that our epistemic lives can collectively grow strong.

But one thing is clear: the deep roots of conspiratorial theorizing in America about climate change must be considered and addressed if we hope to untangle this knotty existential problem. Without doing so, any substantive attempt to take action on climate policy stands a snowball’s chance on the rapidly-warming Earth.