
Trump v. Facebook, and the Future of Free Speech


On July 7th, former President Donald Trump announced his intention to sue Facebook, Twitter, and Google for banning him from posting on their platforms. Facebook initially banned Donald Trump following the January 6th insurrection and Twitter and Google soon followed suit. Trump’s ban poses not only legal questions concerning the First Amendment, but also moral questions concerning whether or not social media companies owe a duty to guarantee free speech.

Does Trump have any moral standing when it comes to his ban from Facebook, Twitter, and Google? How can we balance the value of free expression with the rights of social media companies to regulate their platforms?

After the events of January 6th, Trump was immediately banned from social media platforms. In announcing the initial ban, Facebook CEO Mark Zuckerberg offered a brief justification: "We believe the risks of allowing the President to continue to use our service during this period are too great." Following Trump's exit from office, Facebook decided to extend Trump's ban to two years. Twitter opted for a permanent ban, and YouTube has banned him indefinitely.

Though this came as a shock to many, some argued that Trump’s ban should have come much sooner. Throughout his presidency, Trump regularly used social media to communicate with his base, at times spreading false information. While some found this communication style unpresidential, it arguably brought the Office of the President closer to the American public than ever before. Trump’s use of Twitter engaged citizens who might not have otherwise engaged with politics and even reached many who did not follow him. Though there is value in allowing the president to authentically communicate with the American people, Trump’s use of the social media space has been declared unethical by many; he consistently used these communiques to spread falsehoods, issue personal attacks, campaign, and fund-raise.

But regardless of the merits of Trump’s lawsuit, it raises important questions regarding the role that social media platforms play in modern society. The First Amendment, and its protections regarding free speech, only apply to federal government regulation of speech (and to state regulation of speech, as incorporated by the 14th Amendment). This protection has generally not extended to private businesses or individuals who are not directly funded or affiliated with the government. General forums, however, such as the internet, have been considered a “free speech zone.” While located on the internet, social media companies have not been granted a similar “free speech zone” status. The Supreme Court has acknowledged that the “vast democratic forums of the Internet” serve an important function in the exchange of views, but it has refused to extend the responsibility to protect free speech beyond state actors, or those performing traditional and exclusive government functions. The definition of state actors is nebulous, but the Supreme Court has drawn hard lines, recently holding that private entities which provide publicly accessible forums are not inherently performing state actions. Recognizing the limits of the First Amendment, Trump has attempted to bridge the gap between private and state action in his complaint, arguing that Facebook, Twitter, and Google censored his speech due to “coercive pressure from the government” and therefore their “activities amount to state action.”

Though this argument may be somewhat of a stretch legally, it is worth considering whether social media platforms play an important enough role in our lives to hold them responsible for providing an unregulated forum for speech. Social media has become such a persistent and necessary feature of our lives that Supreme Court Justice Clarence Thomas has argued that platforms should be considered "common carriers" and subjected to heightened regulation, much like planes, telephones, and other public accommodations. And perhaps Justice Thomas has a point. About 70% of Americans hold an active social media account, and more than half rely upon social media for news. With an increasing percentage of society not only using social media but relying upon it, perhaps social media companies would be better treated as providers of public accommodations than as private corporations with the right to act as gatekeepers to their services.

Despite Americans' growing dependence on social media, some have argued that viewing social media as a public service is ill-advised. In an article in the National Review, Jessica Melugin argues that there is neither a strong legal nor a strong practical basis for considering social media entities common carriers. First, Melugin argues that exclusion is central to the business model of social media companies, which generate their revenue by choosing which advertisements to feature. Second, forcing social media companies to allow any and all speech to be published on their platforms may be more akin to compelling speech than to preventing its suppression. Lastly, social media companies, unlike other common carriers, face consistent market competition. Though Facebook, Instagram, and Twitter appear dominant for now, companies such as Snapchat and TikTok represent growing and consistent competition.

Another consideration weighing against applying First Amendment duties to social media companies is the widespread danger of propaganda and misinformation made possible by their algorithmic approach to boosting content. Any person can post information, whether true or false, with the potential to reach millions of people. Though an increasing number of Americans rely on social media for news, studies have found that those who do tend to be less informed and more exposed to conspiracies. Extremists have also found a safe haven on social media platforms to connect and plan terrorist acts. With these considerations in mind, allowing social media companies to limit the content on their platforms may be justified as a way of combating the harmful tendencies of an ill-informed and conspiracy-laden public, and perhaps even of preventing violent attacks.

Despite the pertinent moral questions posed by Trump's lawsuit, he is likely to lose. Legal experts have argued that Trump's suit "has almost no chance of success." However, the legal standing of Trump's claims does not necessarily dictate their morality, which is equally worthy of consideration. Though Trump's lawsuit may fail, the role that social media companies play in the regulation of speech and information will only continue to grow.

In-Groups, Out-Groups, and Why I Care about the Olympics


We all, to some extent, walk around with an image of ourselves in our own heads. We have, let’s say, a self-conception. You see yourself as a certain kind of person, and I see myself as a certain kind of person.

I bring this up because my own self-conception gets punctured a little every time the Olympics roll around. I think of myself as a fairly rational, high-brow, cosmopolitan sort of person. I see myself as the sort of person who lives according to sensible motives; I don’t succumb to biased tribal loyalties.

In line with this self-conception, I don't care about sporting events. What does it matter to me if my university wins or loses? I'm not on either team, and I don't gain anything if FSU wins a football game. So yes, I am indeed one of those obnoxious and self-righteous people who a) does not care about sports and b) has to fight feelings of smug superiority over sports fans who indulge their tendencies to tribalism.

This is not to say I don’t have my loyalties: I’m reliably on team dog rather than cat, and I track election forecasts with an obsessive fervor equal to any sports fanatic. But I tell myself that, in both cases, my loyalty is rational. 

"I'm on team dog because there are good reasons why dogs make better pets."

“I track elections because something important is at stake, unlike in a sports game.”

These are the sorts of lies I tell myself in order to maintain my self-conception as a rational, unbiased sort of person. By the end of this post, I hope to convince you that these are, in fact, lies.

The Olympic Chink

The first bit of evidence that I'm not as unbiased as I'd like to think comes from my interest in the Olympics. I genuinely care about how the U.S. does in the Olympics. For example, I was disappointed when, for the first time in fifty years, the U.S. failed to medal on day one.

Nor do I have any clever story for why this bias is rational. While I think there is a strong case to be made for a certain kind of moral patriotism, my desire to see the U.S. win the most Olympic medals is not a patriotism of that sort. Nor do I think that the U.S. winning the most medals will have important implications for geopolitics; it is not as though, for example, the U.S. winning more medals than China will help demonstrate the value of extensive civil liberties.

Instead, I want the U.S. to win because it is my team. I fully recognize that if I lived in Portugal, I'd be rooting for Portugal.

But why do I care if my team wins? After all, everything I said earlier about sports is also true of the Olympics. Nothing in my life will be improved if the U.S. wins more medals.

Turning to Psychology

To answer this question, we need to turn to psychology. It turns out that humans are hardwired to care about our in-group. Perhaps the most famous studies demonstrating the effects of in-group bias come from the social psychologist Henri Tajfel.

In one study, Tajfel brought together a group of fourteen- and fifteen-year-old boys. Tajfel wanted to know what it would take to get people invested in ‘their team.’ It turns out, it barely requires anything at all.

Tajfel first had the boys estimate how many dots were flashed on a screen, ostensibly for an experiment on visual perception. Afterwards the boys were told that they were starting a second experiment and that, to make it easier to code the results, the experimenters were dividing the boys into subgroups based on whether they tended to overestimate or underestimate the number of flashed dots (in fact, the subgroups were assigned randomly). The boys were then given the chance to distribute rewards anonymously to other participants.

What Tajfel found was that the mere fact of being categorized into a group of ‘overestimators’ or ‘underestimators’ was enough to produce strong in-group bias. When distributing the reward between two members of the same group, the boys tended to distribute the reward fairly. However, when distributing between one member of the in-group and one member of the out-group, the boys would strongly favor members in their same group. This was true even though there was no chance for reciprocation, and despite participants knowing that the group membership was based on something as arbitrary as “overestimating the number of dots flashed on a screen.”

Subsequent results were even more disturbing. Tajfel found that not only did the boys prioritize their arbitrary in-group, but they actually gave smaller rewards to people in their own group if it meant creating a bigger difference between the in-group and the out-group. In other words, it was more important to treat the in-group and out-group differently than it was to give the biggest reward to members of the in-group.

Of course, this is just one set of studies. You might think that these particular results have less to do with human nature and more to do with the fact that lots of teenage boys are jerks. But psychologists have found plenty of other evidence for strong in-group biases. Our natural in-group bias seems to explain phenomena as disparate as racism, parental love, sports fandom, and political polarization.

Sometimes this in-group bias is valuable. It is good if parents take special care of their children. Parental love provides an extremely efficient system to ensure that most children get plenty of individualized attention and care. Similarly, patriotism is an important political virtue: it motivates us to sacrifice to improve our nation and community.

Sometimes this in-group bias is largely benign. There is nothing pernicious in wanting your sports team to win, and taking sides provides a source of enjoyment for many.

But sometimes this in-group bias is toxic and damaging. A nationalistic fervor that insists your own country is best, as opposed to just your own special responsibility, often leads people to whitewash reality. In-group bias leads to racism and political violence. Even in-group sports fandom sometimes results in deadly riots.

A Dangerous Hypocrisy

If this is right, then it is unsurprising that I root for the U.S. during the Olympic games. What is perhaps much more surprising is that I don’t care about the results of other sporting games. Why is it then, if in-group bias is as deep as the psychologists say it is, that I don’t care about the performance of FSU’s football team?

Is my self-conception right? Am I just that much more rational and enlightened? Have I managed to, at least for the most part, transcend my own tribalism?

The psychology suggests probably not. But if I didn’t transcend tribalism, what explains why I don’t care about the performance of my tribe’s football team?

Jonathan Haidt, while reflecting on his own in-group biases, gives us a hint:

“In the terrible days after the terrorist attacks of September 11, 2001, I felt an urge so primitive I was embarrassed to admit it to my friends: I wanted to put an American flag decal on my car. . . . But I was a professor, and professors don’t do such things. Flag waving and nationalism are for conservatives. Professors are liberal globetrotting universalists, reflexively wary of saying that their nation is better than other nations. When you see an American flag on a car in a UVA staff parking lot, you can bet that the car belongs to a secretary or a blue-collar worker.”

Haidt felt torn over whether to put up an American flag decal. This was not because he had almost transcended his tribal loyalty to the U.S. Rather, he was being pulled between two different tribal loyalties. His loyalty to the U.S. pulled him one way; his loyalty to liberal academia pulled the other. Haidt's own reticence to act tribally by putting up an American flag decal can itself be explained by another tribal loyalty.

I expect something similar is going on in my own case. It's not that I lack in-group bias. It's that my real in-group is 'fellow philosophers' or 'liberal academics' or even 'other nerds,' none of whom get deeply invested in FSU football. While I conceive of myself as "rational," "high-brow," and "cosmopolitan," the reality is that I am largely conforming to the values of my core tribal community (the liberal academy). It's not that I've transcended tribalism, it's that I have a patriotic allegiance to a group that insists we're above it. I have an in-group bias to appear unbiased; an irrational impulse to present myself as rational; a tribal loyalty to a community united around a cosmopolitan ideal.

But this means my conception of myself as rational and unbiased is a lie. I have failed to eliminate my in-group bias after all.

An Alternative Vision of Moral Education

But we seem to face a problem. On the one hand, my in-group bias seems to be so deep that even my principled insistence on rationality turns out to be motivated by a concern for my in-group. But on the other hand, we know that in-group bias often leads to injustice and the neglect of other people.

So what is the solution? How can we avoid injustice if concern for our in-group is so deeply rooted in human psychology?

We solve this problem, not by trying to eliminate our in-group bias, but rather by bringing more people into our in-group. This has been the strategy taken by all the greatest moral teachers throughout history.

Consider perhaps the most famous bit of moral instruction in all of human history, the parable of the Good Samaritan. In this parable, Jesus is attempting to convince the listening Jews that they should care for Samaritans (a political out-group) in the same way they care for Jews (the political in-group). But he does not do so by saying that we should not have a special concern for our in-group. Rather, he uses our concern for the in-group (Jesus uses the word ‘neighbor’) and simply tries to bring others into the category. He tells a story which encourages those listening to recognize, not that they don’t have special reasons to care for their neighbor (their in-group), but to redefine the category of ‘neighbor’ to include Samaritans as well.

This suggests something profound about moral education. To develop in justice, we don’t eliminate our special concern for the in-group. Instead we expand the in-group so that our special concern extends to others. This is why language like ‘brotherhood of man’ or ‘fellow children of God’ has proven so powerful throughout history. Rather than trying to eliminate our special concern for family, it instead tries to get us to extend that very same special concern to all human beings.

This is why Immanuel Kant’s language of the ‘Kingdom of Ends’ is so powerful. Rather than trying to eliminate a special concern for our society, instead we recognize a deeper society in which all humans are members.

The constant demand of moral improvement is not to lose our special concern for those near us, but to continually draw other people into that same circle of concern.

On Patriotism


As a child, I savored July weekends at the carnival in my grandparents' town of Wamego, KS. Nowhere on earth was Independence Day, and the lingering celebration of American freedom, taken more seriously and celebrated with more enthusiasm. But today, these holidays and traditions draw as much criticism as they do excitement. Recent events (crises, national shames, and national triumphs) make it difficult to know what to do or how to feel during the summer holidays when most Americans spend their weekends in flag-adorned swimming trunks, celebrating the land of the free and the home of the brave. A new question confronts us during the summer holiday season: is it wrong to participate in celebrating a nation so rife with inequality, racial and gender injustice, and environmental degradation? Are these celebrations and traditions merely an attempt to put an optimistic gloss on a nation that we ought to feel anything but optimistic about? And more cynically, does participating in these activities serve to normalize the harsh and unjust conditions that many Americans still face?

G.K. Chesterton, a philosopher, theologian, and fiction writer from the early 20th century, considered similar questions regarding whether we should love the world — for, after all, the world contains many deeply terrible and unlovable things! Should we be optimists about the world, he asks, because it contains so many things of deep value? Or ought we to be pessimists about the world because there is so much suffering, and evil, and injustice, with seemingly no end? Chesterton ends up endorsing a third view in his book Orthodoxy:

“[There] is a deep mistake in this alternative of the optimist and the pessimist. The assumption of it is that a man criticises this world as if he were house-hunting, as if he were being shown over a new suite of apartments. […] A man belongs to this world before he begins to ask if it is nice to belong to it. He has fought for the flag, and often won heroic victories for the flag long before he has ever enlisted. To put shortly what seems the essential matter, he has a loyalty long before he has any admiration.”

Chesterton suggests here that loyalty is not something we choose to exhibit based on likeable features, but rather is something that we automatically display whenever we do work to make things better. Through this work, Chesterton argues, we show love and loyalty to a world that, yes, is probably quite bad. Conversely, by refusing to participate in this kind of labor of love, we resign the world to a quickly-worsening fate. So, loving a bad world can actually be a good thing, if Chesterton is right, because this sort of love leads to loving improvement.

There are problems with applying this view straightforwardly to our attitude on national pride — namely, while we cannot choose loyalty toward some other planet, we could choose loyalty toward another country. One obvious response to this objection is that there are no perfect countries! As we have seen in the past couple of years, other nations have followed the U.S. in forming Black Lives Matter groups and holding demonstrations protesting local instances of racially motivated police brutality. Additionally, following Chesterton, we may wonder what the world would look like if everyone poured their loyalties and efforts into the very "best" countries (whatever they take them to be): without a people willing to love a place despite its deep flaws, is there any hope of improving conditions from within?

Chesterton suggests that the love we feel for the place we live need not lead to negative effects. But not everybody agrees that there is no harm in showing such naive loyalty. The philosopher David Benatar, in his book Better Never to Have Been, argues that, given the insufferable nature of human existence, humankind ought not to participate in perpetuating the cycle of life. His position, called "anti-natalism," holds that procreation is impermissible and that we should instead work to reduce suffering for those who are already born. In support of this conclusion, he emphasizes two points: 1) even in a very good life, the pain and suffering one must endure will always outweigh the pleasure and happiness one enjoys, and 2) there is no greater meaning or purpose to give a life of suffering any value.

Benatar echoes strains of French existentialist philosopher Albert Camus's "The Myth of Sisyphus" and The Plague in his description of human life as "absurd" — short and full of meaningless labor on the way to ultimate annihilation. If life in the world truly is this bad, even for the people for whom it is "best," then why allow it to continue? Ultimately, Benatar does not endorse hastening death for oneself or others — while life is overall a negative experience, in virtue of pain and suffering overwhelming the happy moments, death (especially the process of dying) is even worse than life. But we should allow humanity to die out by refusing to procreate. This, then, is the opposite of what Chesterton calls an attitude of "loyalty" toward life on earth. Benatar sees this loyalty as blind faith and a cruel refusal to try to halt the long chain of suffering that human existence has wrought.

This perspective on earthly existence can help shed light on the position of those who choose not to participate in celebrations and traditions of national pride. Analogous to the anti-natalist, those against participation in such celebrations may see this kind of unconditional national pride as a mechanism for the continuation of the sufferings, injustices, and inequalities that mar the current state of the nation. Understandably, many may see this as an unacceptable price to pay for showing even the kind of self-sacrificial patriotic love that Chesterton discusses. Perhaps patriotic celebrations of national love or pride are themselves cruel refusals to fully grieve the ways in which citizens continue to face severe hardships and injustices.

So, what should we do? Should we join in the celebrations, ensuring that we include voices of criticism alongside voices of praise as equally important aspects of patriotic love? Or should we opt out of the celebrations, allowing our silence to send a message to others that the pain of discrimination, poverty, brutality, and other injustices makes our nation one that is not worth fighting for? Regardless of whether we choose to participate in specific forms of national traditions and celebrations, it may be worth taking to heart an insight from Chesterton and an insight from Benatar. Chesterton brings our attention to the fact that things are rarely made better without people willing to love them despite terrible flaws. We might remember President Joe Biden's response earlier this year when asked by reporters about his son's struggles with drug and alcohol addictions, stating simply, "I'm proud of my son."

Benatar, on the other hand, shows us that it is important to be discerning about who and what are worth loving and improving. While Benatar thinks that human life on earth is not worth furthering, loving and improving the lives of those humans who already exist is of supreme importance. And he argues it is perfectly consistent to reject loving “human life” while continuing to love individual living humans. Likewise, perhaps it is perfectly consistent to reject pride in a nation while loving and serving the individual people of that nation.

Both of these thinkers draw our attention to the fact that “pride” is more complex than we, or our national celebrations, have tended to realize. Is it possible to see the value both in participation and in abstention from celebrations of national pride? Alternatively, how can these celebrations incorporate a deep awareness of the ways in which we still struggle with discrimination, poverty, brutality, and injustice? Is our love for our country strong enough to weather the acknowledgment of these criticisms? Is our love for our fellow citizens deep enough to inspire us to take up a kind of love for our country, if that love could be transformative?

The Ethics of Policing Algorithms


Police departments throughout the country are facing staffing shortages. There are a number of reasons for this: policing doesn’t pay well, the baby boomer generation is retiring and subsequent generations have reproduced less, and recent occurrences of excessive use of force by police have made the police force in general unpopular with many people. Plenty of people simply don’t view it as a viable career choice. In response to shortages, and as a general strategy to save money, many police departments throughout the country have begun relying on algorithms to help them direct their efforts. This practice has been very controversial.

The intention behind policing algorithms is to focus the attention of law enforcement in the right direction. To do this, they take historical information into account. They look at the locations in which the most crime has occurred in the past. As new crimes occur, they are added to the database; the algorithm learns from the new data and adjusts accordingly. These data points include details like the time of year that crimes occurred. Police departments can then plan staffing coverage in a way that is consistent with this data.
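The approach described above can be sketched in a few lines of Python. This is a toy illustration, not any vendor's actual model; the class name, the data layout, and the ranking rule are all assumptions made for the sake of the example:

```python
from collections import Counter

class HotspotModel:
    """Toy predictive-policing model: ranks areas by historical crime counts."""

    def __init__(self):
        # Maps (area, month) pairs to the number of crimes recorded there.
        self.counts = Counter()

    def record_crime(self, area, month):
        # New incidents are added to the database, shifting future rankings.
        self.counts[(area, month)] += 1

    def staffing_priority(self, month, top_n=3):
        # Rank areas by how much crime has historically been recorded
        # there during this month (capturing seasonal patterns).
        totals = Counter()
        for (area, m), n in self.counts.items():
            if m == month:
                totals[area] += n
        return [area for area, _ in totals.most_common(top_n)]

model = HotspotModel()
for _ in range(5):
    model.record_crime("downtown", "july")
model.record_crime("suburb", "july")
print(model.staffing_priority("july"))  # → ['downtown', 'suburb']
```

Real systems are far more elaborate, but the core loop is the same: past incident reports go in, a ranked list of places to patrol comes out, and each new report updates the rankings.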

Proponents of policing algorithms argue that they make the best use of taxpayer resources; they direct funds in very efficient ways. Police don’t waste time in areas where crime is not likely to take place. If this is the case, departments don’t need to hire officers to perpetually cover areas where crime historically does not happen.

There are, however, many objections to the use of such algorithms. The first is that they reinforce racial bias. The algorithms make use of historical data, and police officers have, historically, aggressively policed minority neighborhoods. In light of the history of interactions in these areas, police officers may be more likely to deal with members of these communities more severely than with members of other communities for the same offenses. Despite comprising only 13% of the population, African Americans account for 27% of all arrests in the United States and are twice as likely to be arrested as their white counterparts. This is unsurprising if policing algorithms direct officers to focus their attention on communities of color precisely because that is where they have always focused their attention. If two young people are in possession of marijuana, for example, a young person of color is more likely to be arrested than a young white person if police are omnipresent in a community of color and absent from an affluent white community. This serves to reinforce the idea that different standards apply to different racial and socioeconomic groups. For example, all races commit drug-related crimes at roughly equal rates, but African Americans are far more likely to be arrested and sentenced harshly than white people are.
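The feedback loop this objection describes can be made concrete with a toy simulation (all the numbers are illustrative assumptions, not real crime data): suppose two neighborhoods, A and B, have identical true offense rates, but patrol hours are allocated in proportion to recorded arrests, and arrests are only recorded where patrols are present.

```python
TRUE_OFFENSE_RATE = 0.1            # identical in both neighborhoods
recorded = {"A": 60.0, "B": 40.0}  # historical arrest counts; A was policed more
PATROL_HOURS = 1000

for year in range(10):
    total = sum(recorded.values())
    # Patrol hours are allocated in proportion to recorded arrests...
    patrols = {n: PATROL_HOURS * c / total for n, c in recorded.items()}
    # ...and arrests are only recorded where patrols are present.
    for n in recorded:
        recorded[n] += patrols[n] * TRUE_OFFENSE_RATE

share_A = recorded["A"] / sum(recorded.values())
print(f"Neighborhood A's share of recorded arrests: {share_A:.0%}")  # → 60%
```

Because each year's new arrests are proportional to the existing counts, the initial 60/40 disparity in the data never washes out, even though the two neighborhoods' underlying behavior is identical. The historical record looks like confirmation of the algorithm's predictions, when it is really an artifact of where the patrols were sent.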

In addition, some are concerned that while police are busy over-policing communities of color, other communities in which crime is occurring will be under-protected. When emergencies happen in these communities, there will be longer response times. This can often make the difference between life and death.

Many argue that policing algorithms are just another example of an institution attempting to provide quick, band-aid fixes for problems that require deeper, more systemic change. If people are no longer choosing to pursue law enforcement careers, that problem needs to be addressed head-on. If people aren't choosing careers in law enforcement because the job has a bad reputation for excessive force, then that is just one among many reasons to stop police officers from using disproportionate force. There are many ways to do this: police could be required to wear body cameras that remain on at all times while officers are responding to calls. Officers could be required to go through more training, including sessions that emphasize anger management and anti-racism. Some police departments throughout the country have become notorious for hiding information regarding police misconduct from the public. Departments in general could clean up the reputation of the profession by being perfectly transparent about officer behavior and by dealing with offending officers immediately rather than waiting to act in response to public pressure.

Further, instead of focusing algorithms on locations for potential policing, our communities could focus the same resources on locations for potential crime prevention. The root causes of crime are not mysteries to us. Poverty and general economic uncertainty reliably predict crime. If we commit resources to providing social services to these communities, we can potentially stop crime before it ever happens. The United States incarcerates both more people per capita and more people overall than any other country in the world. Incarceration is bad for many reasons: it stunts the growth and development of incarcerated individuals, getting in the way of their flourishing and their achieving their full potential. It also costs taxpayers money. If we have a choice as taxpayers between spending money on crime prevention and spending money on incarceration after crimes have already taken place, many would argue that the choice is obvious.

Will the Real Anthony Bourdain Please Stand Up?


Released earlier this month, Roadrunner: A Film About Anthony Bourdain (hereafter referred to as Roadrunner) documents the life of the globetrotting gastronome and author. Rocketing to fame in the 2000s thanks to his memoir Kitchen Confidential: Adventures in the Culinary Underbelly and subsequent appearances on series such as Top Chef and No Reservations, Bourdain was (in)famous for his raw, personable, and darkly funny outlook. Through his remarkable show Anthony Bourdain: Parts Unknown, the chef did more than introduce viewers to fascinating, delicious, and occasionally stomach-churning meals from around the globe. He used his gastronomic knowledge to connect with others. He reminded viewers of our common humanity through genuine engagement, curiosity, and passion for the people he met and the cultures in which he fully immersed himself. Bourdain tragically died in 2018 while filming Parts Unknown's twelfth season. Nevertheless, he still garners admiration for his brutal honesty, inquisitiveness regarding the culinary arts, and eagerness to know people, cultures, and himself better.

To craft Roadrunner's narrative, director Morgan Neville draws from thousands of hours of video and audio footage of Bourdain. As a result, Bourdain's distinctive accent and stylistic lashings of profanity can be heard throughout the movie as both dialogue and voice-over. It is the latter of these, specifically three voice-over lines totaling roughly 45 seconds, that is of particular interest. This is because the audio for these three lines is not drawn from pre-existing footage. An AI-generated version of Bourdain's voice speaks them. In other words, Bourdain never uttered these lines. Instead, he is being mimicked via artificial means.

It’s unclear which three lines these are, although Neville has confirmed that one of them, regarding Bourdain’s contemplation of success, appears in the film’s trailer. What is clear, however, is that Neville’s use of deepfakes to give Bourdain’s written words life should give us pause for multiple reasons, three of which we’ll touch on here.

Firstly, one cannot escape the feeling of unease regarding the replication and animation of the likeness of individuals who have died, especially when that likeness is so realistic as to be passable. Whether that is using Audrey Hepburn’s image to sell chocolate, generating a hologram of Tupac Shakur to perform onstage, or indeed, having a Bourdain sound-alike read his emails, the idea that we have less control over our likeness, our speech, and our actions in death than we did in life feels ghoulish. It’s common to think that the dead should be left in peace, and it could be argued that this use of technology to replicate the deceased’s voice, face, body, or all of the above somehow disturbs that peace in an unseemly and unethical manner.

However, while such a stance may seem intuitive, we don’t often think in these sorts of terms for other artifacts. We typically have no qualms about giving voice to texts written by people who died hundreds or even thousands of years ago. After all, the vast majority of biographies and biographical movies feature dead people. There is very little concern about representing those persons on-screen or on the page simply because they are dead. We may have concerns about how they are being represented or whether that representation is faithful (more on these in a bit). But the mere fact that they are no longer with us is typically not a barrier to their likeness being imitated by others.

Thus, while we may feel uneasy about Bourdain’s voice being a synthetic replication, it is not clear why we should have such a feeling merely because he’s deceased. Does his passing really alter the ethics of AI-facilitated vocal recreation, or are we simply injecting our squeamishness about death into a discussion where it doesn’t belong?

Secondly, even if we find no issue with the representation of the dead through AI-assisted means, we may have concerns about the honesty of such work. Or, to put it another way, the potential for deepfake facilitated deception.

The problems posed by computer-generated images for social and political systems are well known. However, the use of deepfake techniques in Roadrunner represents something much more personal. The film does not attempt to destabilize governments or promote conspiracy theories. Rather, it tries to tell a story about a unique individual in his own voice. But how this is achieved feels underhanded.

Neville doesn’t make it clear in the film which parts of the audio are genuine or deepfaked. As a result, our faith in the trustworthiness of the entire project is potentially undermined – if the audio’s authenticity is uncertain, can we be safe in assuming the rest of the film is trustworthy?

Indeed, the fact that the use of this technique was concealed, or at least obfuscated, until Neville was challenged about it in an interview reinforces such skepticism. That’s not to say that the rest of the film must be called into doubt. However, the nature of the product, especially as it is a documentary, requires a contract between the viewer and the filmmaker built upon honesty. We expect, rightly or wrongly, documentaries to be faithful representations of the things they’re documenting, and there’s a question of whether an AI-generated version of Bourdain’s voice is faithful or not.

Thirdly, even if we accept that the recreation of the voices of the dead is acceptable, and even if we accept that a lack of clarity about when vocal recreations are being used isn’t an issue, we may still want to ask whether what’s being conveyed is an accurate representation of Bourdain’s views and personality. In essence, would Bourdain have said these things in this way?

You may think this isn’t a particular issue for Roadrunner, as the AI-generated voice-over isn’t speaking sentences written by Neville; it speaks text that Bourdain himself wrote. For example, the line regarding success featured in the film’s trailer was taken from emails written by Bourdain. On this view, Neville simply gives voice to Bourdain’s unspoken words.

However, to take such a stance overlooks how much information – how much meaning – is derivable not from the specific words we use but from how we say them. We may have the words Bourdain wrote on the page, but we have no idea how he would have delivered them. The AI algorithm in Roadrunner may be passable, and the technology will likely continue to develop to the point where distinguishing between ‘real’ voices and synthetic ones becomes all but impossible. But even such a faithful re-creation would do little to tell us how the lines would have been delivered.

Bourdain might have asked his friend the question about happiness in a tone that was playful, angry, melancholic, disgusted, or any of a myriad of other possibilities. We simply have no way of knowing, nor does Neville. By using the AI deepfake to voice Bourdain, Neville is imbuing the chef’s words with meaning – a meaning derived from Neville’s interpretation and the black box of AI-algorithmic functioning.

Roadrunner is a poignant example of an increasingly ubiquitous problem – how can we trust the world around us given technology’s increasingly convincing fabrications? If we cannot be sure that the words within a documentary, words that sound like they’re being said by one of the most famous chefs of the past twenty years, are genuine, then what else are we justified in doubting? If we can’t trust our own eyes and ears, what can we trust?

Can We Trust Anonymous Sources?

photograph of two silhouettes sitting down for an interview

Ben Smith’s recent article in The New York Times about Tucker Carlson’s cozy relationship with the media has caused quite a stir among media-watchers. It turns out that the man who calls the media the “Praetorian Guard for the ruling class” loves to anonymously dish to journalists about his right-wing contacts.

Missing from this discussion about Carlson’s role in the media ecosphere, however, is any exploration of the philosophically rich issue of anonymous sources. Is the practice of using such sources defensible, either from a moral or an epistemic point of view?

First, there is an issue of terminology. A truly anonymous source would be something like a phone tip, where the source remains unknown even to the journalist. In most cases, however, the identity of a source is known. These sources are not truly anonymous, but could be called “unnamed” or “confidential.” For reasons that will become apparent shortly, it is never appropriate for journalists to publish information from truly anonymous sources unless the information is capable of being independently verified, in which case there is no need to use the anonymous source in the first place. When I talk about “anonymous” sources in this column, I am referring to confidential or unnamed sources.

The basic epistemic problem with confidential sources can be summed up as follows: we really can’t assess the truth of a person’s testimony without knowing who the person is. If a shabbily-dressed stranger shuffles up to you and tells you that JFK was the victim of a conspiracy, you’re likely to discount the testimony quite a bit. On the other hand, if the head of the CIA came out and made the same claim, you’d be likely to update your beliefs about JFK’s assassination. In short, many details about a person’s identity are relevant to the reliability of their testimony. Thus, without access to these facts, it’s almost impossible to know whether the testimony is, indeed, true. But in the case of anonymous sources, the public lacks the necessary data to make these judgments. So, we are in a poor position to determine the veracity of the source’s claims. And if we can’t assess the reliability of the testimony, then we aren’t justified in relying upon it.

This epistemic trouble can often become a moral problem. Anonymous sourcing can encourage people to believe that a source’s claim is more reliable than it is, and in this way it may mislead. But surely, journalists have a moral obligation to take every precaution to guard against this. One example of the way anonymous sourcing can mislead is the anonymous essay published by The New York Times in September 2018 purporting to be written by a “senior official” within the Trump administration. This essay caused many people to believe that a cabinet-level official was helming a resistance to Trump from within the White House, but it turned out that the writer was Miles Taylor, former chief of staff to Department of Homeland Security Secretary Kirstjen Nielsen. There is a case to be made that the Times misled the public in that case, causing them to hope in vain that some sort of resistance to Trump was taking place in the upper echelons of the executive branch.

How should we go about solving this problem? How is the public to distinguish between the straight scoop and unsubstantiated rumor? How can we mitigate the harm that comes with directing public attention at a shaky story without losing the ability to speak truth to power?

Reporters’ primary answer to the problem of anonymous sourcing is to point to the reliability not of the source, but of the news publication. Call this the “vouching” solution. The reporter claims that people should believe an anonymous source because the reporter’s institution does; the source’s trustworthiness is a function of the trustworthiness of the publication. But this is like saying that you can justifiably rely on the shabbily-dressed stranger’s claim that JFK was the victim of a conspiracy because an honorable friend reports it to you, and you trust your friend to vet the stranger’s claim before presenting it. The trouble with this solution is that if we’re dealing with a truly anonymous source, our “honorable friend” – the news publication – lacks the necessary information to properly vet, and thus adequately vouch for, the stranger and their claims.

That our faith in news outlets justifies the use of unnamed or confidential sources is just one reason why it is so important for the news media to cultivate public trust. Unfortunately, however, people’s confidence in the mainstream media is at an all-time low. According to one recent poll, a majority of Americans do not have trust in traditional media. For these Americans, the vouching solution fails to even get off the ground. Moreover, for these Americans, it would arguably be irrational for them to rely on the media’s anonymous sources, given their skepticism. If one does not trust one’s friend, it would be foolish to rely on the sources for which one’s friend vouches. By the same token, if one does not trust the media, it would be irrational to rely on the anonymous sources for which the media vouches.

What does journalistic vetting of anonymous sources involve? One thing it does not entail is securing independent verification of an anonymous source’s information. If this were possible, then it would be unnecessary to grant a source confidentiality at all — journalists could just settle for the independent evidence. Thus, journalistic vetting usually involves scrutinizing the motives and behavior of the source. Is the source eager or reluctant to share information? Is she in a position of power or vulnerability? What is her agenda?

Which brings us back to Carlson, who seems like a signally poor candidate for confidentiality. Smith’s article makes clear that Carlson likes to portray himself in a flattering light to reporters, and that he is eager to share information. He is also, of course, in a position of great power and influence, and surely uses his effusions strategically to further his own agenda. For these reasons, using Carlson as a confidential source seems to be an epistemically and, because of the potential for misleading the public, ethically dubious practice.

Intervention and Self-Determination in Haiti

photograph of Hispaniola Island on topographic globe

Haiti is in crisis, though that fact is not new. Its president, Jovenel Moïse, has been assassinated, probably by foreign actors, after refusing to leave office following the end of his term — a term that began with a contested election. Nor is this the first time something like that has happened. Haiti has been beset by conflict nearly since its founding, with almost all the brief periods of “peace” accompanied by ruthless, authoritarian control either by native dictators or foreign powers. To use Martin Luther King Jr.’s terminology, there has never been the positive peace of liberation in Haiti, though for brief periods there has been the negative peace of iron-fisted oppression.

No one yet knows who assassinated Moïse or why they did it. However, there are some clues. The assassination was “well-orchestrated,” with numerous vehicles carrying upwards of 20 people storming the president’s home early in the morning while most of his guards were noticeably absent. And, Moïse had many enemies: he was unpopular, many powerful business-controlling families opposed him, and the leader of G9, the most prominent confederation of gangs, expressed opposition to his reign.

As a result of the assassination, the country has descended into a leadership crisis. At least three people have claimed legitimate authority over the Haitian government: Claude Joseph, the acting Prime Minister, who was fired by Moïse just a week before his death; Ariel Henry, the man Moïse appointed to replace Joseph; and Joseph Lambert, the President of the Haitian Senate, whom the Senate voted to succeed Moïse. Meanwhile, the legislature is mostly empty, since the terms of all the representatives in Haiti’s lower house and two-thirds of those in the upper house expired last year and elections to replace them were not held. Because of this situation, Moïse was ruling by decree and advocating a constitutional referendum to increase the power of the presidency. Thus, when he was killed, there was no obvious authority to replace him. (Claude Joseph has agreed to hand power over to Ariel Henry, but, as NPR reports, “some lawmakers . . . said the agreement lacks legal legitimacy.”)

Without clearly legitimate leadership, several ongoing crises in Haiti are likely to worsen: the spread of COVID-19 variants, the dysfunctional economy, and the growing power of violent gangs. The situation is unconscionable. Surely, Haiti is in need of aid and would benefit from the help of its rich, powerful neighbor, the United States, right? Unfortunately, it’s not that straightforward.

There are two main camps on this issue. Some people, mostly liberal American commentators, are pro-intervention for basically the reason expressed above: the situation is dire, requires an immediate fix, and the people of Haiti cannot do it alone. Others, including socialists, anti-imperialists, and activists in Haiti, oppose intervention, citing the long history of foreign intervention in Haiti that has only made things worse, furthering the interests of everybody except the Haitian people.

Before we turn to assessing the merits of these two positions, it’s important to appreciate some of the context since, without a sense of the history here, we seem doomed to repeat it. We’ll look at how Haiti has actually been governed, the history of intervention in Haiti by foreign powers, and why there is disagreement about who should lead the government.

Putting the Problem in Context

History of Foreign Influence in Haiti

In practice, Haiti has rarely lived up to the ideal of a constitutional republic. The Spanish and then the French colonized the island from 1492 to 1804, when Haitians declared independence. For most of its history thereafter, Haiti has been led by a local dictator (such as François Duvalier), a military junta, or a foreign occupying military (most often the United States).

The U.S. occupied Haiti from 1915 to 1934 and from 1994 to 1995, and participated in the 2004 coup d’état against Haiti’s first truly democratically elected president, Jean-Bertrand Aristide. In the first occupation, the United States military compelled Haiti to rewrite its constitution to allow foreign ownership of Haitian land. They killed fifteen thousand rebelling Haitians. And, they introduced Jim Crow laws, reintroducing racism to the island after its founders had declared that all its citizens would be considered Black. They did all this to reinforce American business interests on the island and to strengthen the United States’ imperialist interests in the region.

The UN then occupied Haiti from 2004 to 2017, ostensibly to keep the peace. Its forces brought cholera, killing thousands. And there were credible reports that UN soldiers frequently sexually assaulted the Haitians they were stationed there to protect.

And, already, it has come out that some of the Colombian mercenaries involved in the assassination were U.S.-trained, if not actually U.S.-led. The U.S. trained these Colombian mercenaries to fight against drug cartels in Central and South America, just one more example of U.S. foreign intervention with unforeseen consequences.

Given all this foreign influence and the changes those foreign powers have forced upon the Haitian Constitution, the constitution in Haiti is not treated with the same reverence as the United States Constitution is in the U.S. Nonetheless, for those who don’t claim to rule by sheer force (as opposed to the numerous gangs who do, and who, in practice, control large parts of the capital city of Port-au-Prince), the constitution is the sole source of authority.

Origin of the Leadership Controversy

The current Constitution, from 2012, says that the prime minister assumes the role of the president should the sitting president die. Thus, Claude Joseph and Ariel Henry, both of whom claim the Prime Ministership, claim the power to serve as acting president. But, as the Haitian Times reports, “the constitution also says that if there is a vacancy ‘from the fourth year of the presidential mandate,’ the National Assembly will meet to elect a provisional president.”

Unfortunately, the National Assembly has been almost entirely empty since last year when the terms of two-thirds of the Senators expired along with the terms of all the House Deputies. Thus, the remaining 10 Senators, who are the only elected representatives in office, claim the authority to elect the provisional president. Of those ten, eight agreed on Joseph Lambert, the President of the Senate, who is the third to claim the power of the presidency. With the president assassinated, the Chief Justice of the Supreme Court recently having passed away from COVID-19, and the legislature virtually empty, all three branches of government lack straightforwardly legitimate leadership.

Ethical Perspectives on Intervention

Pro-Intervention

So, what should be done about this mess, if anything? The pro-intervention camp varies in their prescriptions, but we can identify two main suggestions that repeatedly crop up: first, they support a U.S.-led investigation into the assassination. As Ryan Berg of CNN states, “The international community. . . should push for an investigation. . . lest [the perpetrators] benefit from the impunity that is all too common in Haiti.” If unelected interests can simply kill politicians they don’t like, the government isn’t much of a government at all.

Second, they recommend the U.S. or UN organize an immediate election to refill the legislature and office of the president. In Haiti, the government is responsible for running elections. But, as we’ve seen, there isn’t much of a government left. Thus, as the editorial board of The Washington Post argues,

“The hard truth, at this point, is that organizing them and ensuring security through a campaign and polling, with no one in charge, may be all but impossible.”

Anti-Intervention

The anti-interventionists staunchly disagree. Kim Ives, an investigative journalist at Haïti Liberté, explained in an interview with Jacobin that the assassination was likely a response to socialist Jimmy Cherizier. Cherizier brought together nine of the largest gangs in Port-au-Prince into a single organization called G9 and advocated against foreign ownership of Haitian businesses. He made a statement on social media, saying, “It is your money which is in banks, stores, supermarkets and dealerships, so go and get what is rightfully yours.” Ives supports the “G9 movement,” as he calls it, and so opposes intervention that would serve to crack down on the “crime” he sees as revolutionary. As he says, the interests of Haiti’s rich are “practically concomitant with US business interests,” and so U.S. intervention would “set the stage for the repression, for the destruction of the G9 movement.”

But you need not be a revolutionary socialist to oppose intervention in Haiti. A great many Haitians oppose U.S. intervention. They tend to give two reasons: first, foreign intervention has frequently hurt Haiti, intentionally or unintentionally, far more than it has helped; and second, as André Michel, a human rights lawyer and opposition leader, demands, “The solution to the crisis must be Haitian.” Racism and classism have led outside nations to think Haitians cannot solve their own problems. But those nations’ attempts to solve Haiti’s problems for it have always failed. As Professor Mamyrah Douge-Prosper urges, “Rather than speaking authoritatively while standing atop long-standing racist tropes, it is important more than ever to be humble, ask questions, and focus on the deeper context.”

In short, it is Haitians who know best how to fix Haiti. Its problems are largely a result of colonialism and imperialism from foreigners. France forced Haiti into debt to preserve its independence. The UN brought cholera and sexual violence. Foreign aid money has destroyed the local economy. Foreign entanglement has always been the problem, not the solution.

Resolving the Disagreement

What are we to make of this disagreement between those in favor of and those against foreign intervention? One solution is to appeal to democracy and simply do what the Haitian people want. The people of Haiti may have a right to self-determination that we must respect. The value of respecting national sovereignty as a rule might be more important than the benefits accrued from a particular successful violation of that sovereignty. Now, Claude Joseph has requested U.S. or UN military intervention. But, as we’ve seen, Haiti’s government is currently far from representing its people.

A strictly consequentialist view would be hard-pressed to justify intervention given the damage past interventions have done. But, perhaps we nonetheless have a duty to do something. Intuitively, it seems hard to say we can just do nothing. Just because the interventions of the past have failed does not mean that this one must too. Surely it’s possible that we might learn from our mistakes. And so, perhaps a limited intervention made with good intentions and careful consideration of past errors could do good.

If you’re a socialist, you might be inclined to oppose intervention in the hope that the G9 movement prompts a real revolution. But if you agree that the past predicts the future when it comes to the inefficacy of foreign intervention, you must also consider how past socialist revolutions have resulted in dictatorships just as bad as, if not worse than, the governments they were intended to replace. This can be seen least controversially in North Korea, Cambodia, and the Soviet Union.

Conclusion

Regardless of which way you swing on the issue, there are several uncontroversial conclusions we can draw about the situation in Haiti:

First, there is no simple solution. U.S. intervention will not immediately make things all better, nor will simply hoping that Haitians solve their crises on their own without addressing the systemic issues that have led to the present situation. There is a mess of interested players, from wealthy business families to the abundant political parties to socialist gang confederations. Additionally, there are many axes of conflict relevant to this situation: bourgeois vs. proletariat, mulattos vs. Blacks, and colonizers vs. colonized, among others.

Anti-interventionists suggest we respect the autonomy of Haitians by respecting their preferences. But, given all these divisions, there’s no real majority preference to be respected; respecting any one preference would be taking a side. And, more than that, say the pro-interventionists, why do their preferences matter if intervention would make them all better off? It’s a valid concern, but it is also the argument that has been given over and over again to justify intervention by foreign nations, to ill effect.

Thus, second, we must act in the context of history. Any intervention that is carried out must be done extremely cautiously in light of all the harm past interventions have done. For Haitians to succeed in resolving their problems, they must be treated as capable of resolving their own problems. An intervention that is not Haitian-led will reinforce the belief of many Haitians that they are not the ultimate agents of their own affairs. So long as that belief persists, Haiti will not retain any positive changes that are made.

Finally, as we began with, the status quo in Haiti is unacceptable. Something must be done. The situation in Haiti is the complex result of the involvement of numerous nations. These nations have a duty, if not to intervene, then at least to ensure that the sorts of harms they caused (and continue to cause) Haiti do not follow it into the future. For example, the French might owe Haiti repayment of the enormous debt they unfairly levied on their former colony. Likewise, the United States might be obligated to end the American property holdings in Haiti that were only made possible by the revisions the United States forced upon Haiti’s constitution. And finally, colonizing nations more broadly might have an obligation to invest more in colonized nations, to make up for the damage colonization has wrought on the Global South. Haiti’s crisis is just one more example of how the consequences of colonialism and imperialism can filter down across the centuries.

The Artemis Accords: A New Race to Dominate Space

image of American flag superimposed over the moon

We are on the verge of a new space age – the age of New Space. Unlike the space race of the Cold War era – starting with the 1957 launch, by the U.S.S.R., of Sputnik, the first human-made object in space, and culminating in the U.S. Apollo moon mission and the 1969 moon landing – in which the competitors were national space agencies, this new space age is being driven in large part by billionaires, private space corporations, and commercial business ventures. It will be characterized by the onset of space tourism; the mining of the moon, asteroids, and other planets; and, in all likelihood, the habitation of space and colonization of other planets and celestial bodies.

Alongside the well-rehearsed justifications for space exploration – of scientific discovery, furthering or fulfilling the destiny of humankind, perhaps extending the reach and viability of the human species in off-world or inter-planetary habitats – this new era will be sustained and driven by motivations of profit and resource extraction.

All these fast-approaching space activities throw up significant ethical challenges. How would space habitats be governed? Who should get to own and profit from space resources? What are the implications of the increasing use of satellite technology, and how can we prevent the militarization of space? What environmental issues do we need to be aware of – such as forward and backward contamination (causing changes to space environments by introducing terrestrial matter, or to Earth environments by introducing extraterrestrial matter)? How do we understand concepts of ownership, sovereignty, and heritage in relation to space?

The ethical implications of all kinds of activities in space are not just important for astronauts, space tourists, or future inhabitants of new colonies, but are important for the vast majority of humans who will never go into space; that is to say, for ‘all humankind’.

Legal scholars and space law experts recognize that the current regime of international space law needs updating. The privatization and commercialization of space is one of the most significant issues the international community will face in coming years, and there is an urgent need for regulation and policy to catch up.

The U.S. government is moving to shape the (United Nations sponsored) international space regime in a mold that is favorable to commercial activity – through domestic law as well as by spearheading a series of international agreements known as the Artemis Accords.

Artemis is the sister of Apollo – and the mission of NASA’s newest lunar program is to send humans back to the moon for exploration and then, as it becomes possible, to send astronauts to Mars. It seems clear that the program increases the likelihood of resource extraction and eventual habitation.

The Artemis Accords are a series of bilateral agreements the U.S. signed in 2020 with a select few nations it wishes to partner with in space (including the U.K., U.A.E., Japan, Italy, Canada and Australia), and are designed to advance NASA’s Artemis Program. Though they are under the aegis of NASA, these accords clearly have implications for commercial space ventures. The accords enshrine the core principle that “space resource extraction and utilization can and will be conducted under the auspices of the Outer Space Treaty.”

Essentially, the Accords are an attempt to secure an interpretation of the 1967 Outer Space Treaty (OST) that will allow activities of “space resource extraction” which are not universally acknowledged or agreed to. To understand why some are concerned, we must appreciate how the Artemis Accords stand in relation to other important space treaties – in particular, the OST and the Moon Agreement.

The first of the five core space treaties, the OST was signed in 1967 by the U.S.A. and the U.S.S.R. at the height of Cold War tensions, amid fervid competition over space technology, and was drawn up under the auspices of the UN as a way of preventing the militarization of space. (In this it has not entirely succeeded.) The OST is the foundation of the international space regime; it sets out the most fundamental principles of space exploration and use, and its basic tenets are reflected in the policies adopted by the international community to govern human activities in space.

Article One states:

“The exploration and use of outer space, including the moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all mankind.” (My italics.)

Article Two states:

“Outer space, including the moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.” (My italics.)

So the basis for all international space law, from its inception, was an agreement which essentially ruled out ownership of space environments by nations or individuals, including the moon, planets, or other celestial bodies.

The Moon Agreement – fifth of the five core space treaties, opened for signatures in 1979 – was drawn up largely by non-spacefaring states. It was an attempt to strengthen the terms of these principles of the OST, and to protect against the possibility that dominant space actors could claim (and benefit exclusively from) space resources, without any accountability to, or input from, smaller and less capable states.

Neither the U.S., nor any other major space-faring nation, has signed the Moon Agreement, giving it very little power to influence New Space activity. The reason: the Moon Agreement goes a step further by designating space as “the common heritage” as well as “the province of all mankind.” This principle holds in place two important provisions of the Moon Agreement which made it unpalatable to powerful spacefaring nations.

Article Four states:

“The exploration and use of the moon shall be the province of all mankind and shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development.” (My italics.)

And Article Eleven, which states that the moon’s resources cannot be the property of any state or non-state entity or person, goes on to stipulate that parties to the agreement “establish an international regime… to govern the exploitation of the natural resources of the moon…”

Essentially the Moon Agreement is a treaty in which less powerful countries sought to secure space resources as a kind of global commons that would benefit everyone. However, the implication that this joint claim might entail an equitable sharing of space resources and benefits more broadly was enough to deter major space players from endorsing the spirit and the letter of the agreement.

Instead, via the Artemis Accords, the U.S. is now seeking bilateral consensus to advance an interpretation of “the province of all mankind” that rejects that which the Moon Agreement tried to secure. Specifically, the accords push an interpretation in which resource extraction does not violate the prohibition on national sovereignty or individual ownership in space. (Section 10 of the Artemis Accords says: “…the extraction of space resources does not inherently constitute national appropriation under Article II of the Outer Space Treaty…”)

In contrast to the global commons approach, the U.S. and other spacefaring states are taking a dominance approach. Some groundwork for this has been laid in U.S. domestic law: in 2015, Congress passed the Space Launch Competitiveness Act, which states that any U.S. citizen or corporation shall be entitled to “possess, own, transport, use and sell any space resource obtained in accordance with applicable laws.”

And a U.S. presidential executive order signed by Donald Trump in 2020 more or less kills off the spirit of the Moon Agreement. The Executive Order on Encouraging International Support for the Recovery and Use of Space Resources makes clear that the U.S., including private or corporate interests operating under its flag, does not consider outer space to be a “global commons.”

All this clearly reflects the huge political and monetary inequalities that exist in the world. In international law, as in other areas of law, gestures to universality, objectivity, and neutrality are spurious because, despite pretensions to be otherwise, the law reflects the power imbalances within a society, and international law reflects them across the globe.

Space law is a relatively new field, and at present it functions as a species of international law, through treaties and international agreements. Space law specialist Cassandra Steer writes: “Treaties, though negotiated in a multilateral setting, are always a result of political give and take, and the states who can leverage their power the most take more than they give.” As Steer argues, “There is no equality between countries, despite the notion of formal equality as a value underpinning international law.” Similarly, there is no equal access to space, nor equal distribution of the benefits derived from space. Despite the promise of the OST, space is far from being “the province of all mankind.”

As we enter the New Space era an updated international legal regime is needed; a regime which is robust enough to bring domestic law into harmony with international obligations and treaties, and one which is geared towards managing the new, commercial uses of space as a frontier for private enterprise. This needs to happen before the horse bolts.

The size of this task should not be underestimated. It requires a collaborative effort to revisit and carefully reflect upon fundamental ethical concepts of value, equity, and justice.

This includes thinking about how to share resources by revisiting questions of claim, ownership, and sovereignty in space. It means thinking about what intrinsic value space environments might have, and in what ways we think it is important to protect them. It involves considering questions of heritage, and it involves thinking about rights to resources that are finite, such as the use of near-Earth orbit, which is already experiencing extremely high volumes of satellite traffic.

Given that the terrestrial environment is facing catastrophe from climate change and ecological destruction, which is itself an outcome of resource depletion from rapacious capitalist activity, I think we, as a global community, should work to prevent such impulses from dominating our ventures beyond this world as well.

And like many things that feel urgent in this time of rapid change, we must do it soon or it will be too late.

In Defense of Space Tourism for Billionaires

photograph of astronaut sitting on surface of foreign planet at dawn

It is a powerful reminder of wealth inequality. It serves no direct scientific purpose. Yet, the billionaire class’s space tourism venture is cause for celebration.

Jeff Bezos, founder of Amazon and the richest man in the world, is heading to space today. Elon Musk and Sir Richard Branson, also multi-billionaires, have reservations for future spaceflights. This news has largely been met with a mix of amusement and negative moral judgment. Admittedly, it seems immoral for billionaires to spend large sums on the frivolity of space tourism while, here on Earth, there is such great need for their financial resources. A “fun trip to space,” our own A.G. Holdier writes, could “fully pay two years of tuition for thirty-three students at community college.”

This kind of consequentialist argument seems fairly convincing. Between the two options, funding community college would surely produce the better outcome, so it looks like the moral choice. But a closer examination of this argument yields a more complicated picture.

Within consequentialism (of which utilitarianism is the best-known version), there are both “maximizing” and “non-maximizing” consequentialists. Each view suggests a different moral verdict on space tourism for billionaires.

Let’s start with non-maximizing consequentialism. According to this view, for our actions to be morally permissible, they must simply be good enough. Imagine all the good consequences of an action, and all the bad. The world is incredibly causally complex, and our actions have consequences that ripple out for days, months, and even years. Presumably, then, every action will have some good consequences and some bad ones. Non-maximizing consequentialists say that an action is permissible if it produces more good consequences than bad ones – or, more precisely, if it produces a good enough ratio of good consequences to bad ones. In other words, there is a threshold that divides moral actions from immoral ones, and the goodness of an action’s consequences determines which side of the threshold it lands on. On this view, the moral question is: does billionaire space tourism fall above or below this threshold?

Most of us seem to think that, with a few exceptions, ordinary tourism is generally above the threshold of moral permissibility. After all, every dollar spent is also a dollar earned. Tourism, besides being an enjoyable and enriching experience for the tourist, also creates jobs and income, and thereby reduces poverty and raises education and healthcare outcomes. Those all seem like good consequences that often compensate for the (e.g., environmental) costs.

In similar fashion, space tourism also generates jobs and income in the growing space industry. Like traditional tourism, it has certain environmental costs (a rocket launch releases about as much CO2 as flying a Boeing 777 across the Atlantic Ocean). The consequences of space tourism are largely comparable, in other words, to other forms of tourism.

Unlike other forms of tourism, however, space tourism has a morally significant added benefit: strengthening humanity’s capacity for space exploration. Given the choice between a billionaire funding the design, manufacture, and development of spacecraft and buying another luxury beachside holiday house, the former is surely preferable. Since space tourism produces a similar (or perhaps even superior) cost/benefit ratio to traditional tourism, that suggests it has a similar moral status. And most people seem to regard that status as permissible.

A maximizing consequentialist has a different theory about the moral permissibility of actions. According to this view, any action that fails to produce the best possible outcome is morally impermissible. A maximizing consequentialist may accept that space tourism has largely the same consequences, or perhaps even somewhat better consequences, as compared with traditional tourism. All this shows, according to the maximizing consequentialist, is that both are immoral; there are much better ways to spend those sums of money – sixty-six years of community college tuition, for example!

But if producing the best consequences is what morality demands, then why should we stop at community college? Sure, that seems like a better way of spending money than sending a rich guy to space (and back). But we could instead spend the $250,000 that a seat in the rocket capsule costs on the most effective international aid charities and save 50-83 lives. What’s more important? Reducing the student debt burden for thirty-three (disproportionately well-educated) people in the world, or saving 50-83 people’s lives? The argument against billionaires funding space tourism, it seems, works equally well against billionaires funding community college tuition.

The maximizing consequentialist position is now beginning to look extremely morally demanding. Indeed, even donating to moderately effective charities looks morally impermissible if we have the option of donating to the most effective ones. On this view, billionaire space tourism is indeed immoral because it fails to produce the best possible consequences. But that is a fairly uninteresting conclusion, given that this view also entails that just about everything we do is immoral. And this suggests there’s nothing particularly immoral about billionaire space tourism.

Of course, consequentialist moral arguments are not the only game in town. For example, A.G. Holdier provides a non-consequentialist argument against billionaire space tourism here. According to Holdier’s Aristotelian argument, we ought to focus more closely on the moral characters of those who would spend such large sums (of their enormous wealth) on something like space tourism instead of, for example, philanthropic causes. Those who would do this, his argument suggests, are “simply not good people.” Someone who exhibited the Aristotelian virtues of “liberality” and “magnificence” would know how to use their money in the right kinds of way and at the right kind of scale. They would not spend it on “a fleeting, personal experience” while keeping it from “others who might need it for more important matters.”

While Holdier makes a strong case that Aristotle would condemn the space billionaires’ characters, I am less confident that he would condemn their spaceflights. On Aristotle’s account, our upbringing and life experiences contribute greatly to our character development and our acquisition of the virtues. Not everyone gets the right circumstances and experiences to fully develop the virtues, but the lucky few do.

The “Overview Effect” is an oft-reported and now well-studied effect of viewing the Earth from space. It is best summarized as a profound and enduring cognitive shift. Edgar Mitchell, an Apollo 14 astronaut, described the effect of seeing Earth from space as follows:

“You develop an instant global consciousness, a people orientation, an intense dissatisfaction with the state of the world, and a compulsion to do something about it.”

Ronald Garan described a similar shift:

“I was really almost immediately struck with a sobering contradiction between the beauty of our planet on one hand and the unfortunate realities of life on our planet, for a significant portion of its inhabitants on the other hand.”

Yuri Gagarin, Scott Kelly, and Chris Hadfield are among numerous astronauts who reported the same profound and lasting shift in their worldview upon looking back on Earth from space. Central to the effect is the sense that the world and humanity are a valuable whole that must be cared for and protected. If we really want these incredibly powerful individuals to do more for our planet and for humanity, indeed if we want their characters to improve, for them to become more virtuous, we should be cheering them all the way to their capsules — for their sake as well as for ours.

The Aristotelian Vulgarity of Billionaires in Space

photograph of Blue Origin shuttle takeoff

On July 11th, billionaire Sir Richard Branson (net worth: ≈$5,400,000,000) made history by becoming the first person to fly to space aboard a spacecraft developed by his own company. An investor and entrepreneur who rose to fame after founding Virgin Records, Branson eventually expanded that enterprise into an airline, a passenger rail company, and — possibly in the relatively near future — a space tourism business. With a current price point of about $250,000 (and predictions that the price might nearly double), a ticket to space with Branson’s Virgin Galactic will cost roughly the same amount as the total annual grocery bill for 53 average U.S. families. A host of celebrities, including Tom Hanks (net worth: ≈$400,000,000), Lady Gaga (net worth: ≈$320,000,000), and billionaire Elon Musk (net worth: ≈$168,700,000,000) have already reserved their seats.

Recently, Carlo DaVia argued here that space exploration is, in general, morally impermissible (given the host of terrestrial problems that remain below the stratosphere). In March, Senator Bernie Sanders (net worth: ≈$1,800,000) criticized Musk (whose company is developing a space program of its own and whose personal wealth exceeds the GDP of 159 countries) for prioritizing interstellar tourism at the expense of ignoring needy families, telling the tech mogul that we should instead “focus on Earth.” (Musk’s reply was a textbook example of what DaVia calls the “Insurance” argument.) To make the kind of moral judgment Sanders is invoking, we could weigh the expected utility for “a fun trip to space” against the number of unhoused or uninsured people that the same amount of money could help. Or we could consider the duties we might have to our fellows and prioritize paying two years of tuition for thirty-three students at a community college instead of choosing to experience four minutes of weightlessness.

But Aristotle would say something different: billionaires who spend their money to take themselves to space are simply not good people.

While such a conclusion might sound similar to the other kinds of judgments mentioned above, Aristotle’s concern for human virtue (as opposed to, say, utility-maximization or respect for creaturely dignity) grounds this moral assessment in a fundamentally different, and also more basic, place. Rather than concentrating on the morality of a choice, Aristotle is persistently focused on the character of the person making that choice; insofar as your choices offer a window into your character, Aristotle believes them useful as potential evidence for a more comprehensive assessment, but it is always and only the latter that really matters when making ethical judgments.

Virtues, then, are the kinds of positive character traits that allow a human to live the best kind of life that humans qua humans can live; vices are, more or less, the opposite. Notably, Aristotle identifies that most, if not all, virtues are opposed by two vices: a deficiency and an excess. Just as the story of ‘Goldilocks and the Three Bears’ demonstrates, it is not only bad to have too little of a good thing, but it can be equally bad to have too much — real virtue, to Aristotle, is a matter of threading the needle to find the “Golden Mean” between each extreme. Consider a virtue like “courage” — when someone lacks courage, they demonstrate the vice of “cowardice,” but when they have too much courage, they may possess the vice of “rashness.” On Aristotle’s model, learning how to live an ethical life is a matter of cultivating your habits such that you aptly demonstrate the right amount of each virtuous character trait.

In Book Four of the Nicomachean Ethics, Aristotle identifies at least two virtuous character traits that are relevant for thinking about billionaires in space: what he calls “liberality” and “magnificence.” Both are related to how a good person spends their money, with the first relating “to the giving and taking of wealth, and especially in respect of giving.” As he explains in NE IV.1, a good/virtuous person is someone who “will give for the sake of the noble, and rightly; for he will give to the right people, the right amounts, and at the right time, with all the other qualifications that accompany right giving.” Importantly, a good person will not spend their money begrudgingly or reluctantly, but will do so “with pleasure or without pain.” To lack this virtue is to have what Aristotle calls the vice of “meanness” (or caring too much about one’s wealth such that you never spend it, even to pay for things on which it should be spent); to have this virtue in excess is to be what he calls a “prodigal” (or a person who persistently spends more money on things than they rightly deserve).

So, while it might seem like Branson, Musk and others could be exhibiting prodigality insofar as they are spending exorbitant amounts of money on a fleeting, personal experience (or, perhaps, displaying meanness by stubbornly refusing to give that money to others who might need it for more important matters), Aristotle would point out that this might not be the most relevant factor to consider. It is indeed possible for a billionaire to spend hundreds of thousands of dollars on an orbital trip while also donating large sums of money to charity (Branson, in particular, is well-known for his philanthropic work), thereby complicating a simple “yes/no” judgment about a person’s character on this single metric alone.

But this is precisely where the Aristotelian virtue of magnificence becomes important. While many of the virtues that Aristotle discusses (like courage, patience, and truthfulness) are familiar to contemporary thought on positive character traits, others (like wittiness or shame) might sound odd to present-day ears — Aristotelian magnificence is in this second category. According to Aristotle, the virtuous person will not only give their money away in the right manner (thereby demonstrating liberality), but will also specifically spend large sums of money in a way that is artistic and in good taste. This can happen in both public and private contexts (though Aristotle primarily gives examples pertaining to the financing of public festivals in NE IV.2) — what matters is that the virtuous person displays her genuine greatness (as a specimen of humanity) by appropriately displaying her wealth (neither falling prey to the deficiency of “cheapness” nor the excess of “vulgarity”). Wealthy people who lack magnificence will spend large sums of money to attract attention to themselves as wealthy people, putting on gaudy displays that are ultimately wasteful and pretentious; virtuous people will spend large sums of money wisely to appropriately benefit others and display the already-true reality of their own virtuousness.

So, when Aristotle describes the “vulgar” person as someone who “where he ought to spend much he spends little and where little, much,” he might well look to Virgin Galactic’s founder and soon-to-be customers as people lacking the kind of good taste relevant to virtuous magnificence. Such outlandish displays of extravagant wealth (such as the would-be tourist who paid a different company the non-refundable sum of $28,000,000 to ride to space, but then canceled their plans, citing “scheduling conflicts”) fail to meet Aristotle’s expectation that the magnificent person “will spend such sums for the sake of the noble” (NE IV.2).

Ultimately, this means that Aristotle can side-step debates over the relative usefulness of space travel versus philanthropy or deductive analyses of the moral obligations relevant for the ultra-wealthy to instead speak simply about how such choices reflect back upon the character of the person making them. For a contrasting example, consider MacKenzie Scott; since divorcing billionaire Jeff Bezos (net worth: ≈$212,400,000,000) in 2019, Scott has donated over $8,500,000,000 to a wide range of charities and non-profit organizations. Asking whether or not Scott was morally required or otherwise obligated to make such donations is, on Aristotle’s view, beside the point: her choice to spend her money in noble ways is instead indicative of a good character.

Meanwhile, Scott’s ex-husband is scheduled to make a space flight of his own tomorrow.

Space: The Immoral Frontier?

photograph of starry night in the woods

Space exploration has been all over the news this year, mostly because of billionaires racing to send their rockets and egos into orbit. This cold war between geek superpowers – Jeff Bezos, Elon Musk, and Richard Branson – is a bonfire of vanities. The obvious moral critiques have been made (here, here, here, et cetera, ad nauseam caelorum). Petitions have even been signed to deny them re-entry into our atmosphere.

Despite such criticisms, the public remains strongly supportive of our collective investment in space. According to a recent C-SPAN poll, 71% of Americans think that space exploration is “necessary.” A similar Pew poll found that 72% of Americans deemed it “essential” for the United States to continue to be a leader in space exploration. In our age of polarization, this is quite a consensus. But I suspect the view is wrong. I suspect that space is the immoral frontier.

I’m not suggesting that we should pull the plug on all extraterrestrial investment. Life as we presently know it would come to a standstill without satellites. I am, however, suggesting that it is no easy task to justify our spending another pretty penny in putting a human being on the moon or Mars or any other clump of space dirt. It seems to me that before we set out for other planets, we should first learn to live sustainably on the one we presently inhabit.

Most people would probably agree with me that humanity must learn to dwell on our present planet without destroying it. But they probably also think that we – or at least the Bezos crowd – should throw some money at space exploration. Four arguments have been frequently given in support of this view. Let’s consider each in turn:

The Capabilities Argument

When JFK pitched the Apollo program to the American people, he argued: “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills.” This is surely not the full reason for the Apollo program, but it was part of it. The mission summoned all of our capabilities as human beings. It gave us the chance to see what we as a people and species could achieve.

This argument reflects a “capability approach” to ethical theory. According to that approach, our actions are morally right to the extent to which they help us realize our human capabilities, and especially our most valuable ones. Making friends is one such valuable capability, throwing frisbees less so. JFK’s argument reflects this capability approach insofar as it holds that space exploration is worth doing because it helps us realize our most valuable capabilities as human beings. It demands that we bring out “the best of our energies and skills.”

Realizing our capabilities may very well be an important part of the good human life. But must we realize our capabilities by sending a few astronauts to space? Are there not countless other ways for us to be our best selves?

The Eco Argument

Some will say that space exploration promotes precisely the kind of environmental awareness that we need to cultivate. Sending people to space and having them share their experiences in word and image reaffirms our reverence for the planet and our responsibility to protect it. When Richard Branson held his post-flight press conference, he made this very point: “The views are breathtaking…We are so lucky to have this planet that we all live on…We’ve got to all be doing everything we can do to help this incredible planet we live on.”

The Eco Argument has a bit of history on its side. The photograph “Earthrise,” taken in 1968 by Apollo 8 astronaut William Anders, helped spark today’s environmental movement.

The photograph is undoubtedly beautiful, and its influence undoubtedly significant. But should we really keep shelling out billions for such pictures when a sunrise photo taken from Earth, at a fraction of the cost, might do comparably well? Moreover, a sense of reverence is not the only reaction that photographs like “Earthrise” provoke. As philosopher Hannah Arendt already observed in The Human Condition (1958), such photos can just as easily prompt a sense of relief that we have taken our first step “toward escape from men’s imprisonment to the earth.” And that invites laxity. If the scientists will save us, why worry? In this way space exploration produces marketing collateral that is double-edged: it can deepen our appreciation for the planet just as much as promise an escape hatch.

The Innovation Argument

A third argument is that we should invest in space exploration because it promotes technological innovation. Without NASA, we wouldn’t have LEDs, dust busters, computer mice, or baby formula. Even if a space mission fails, those invented byproducts are worth the investment.

This Innovation Argument is also nearly as old as space exploration itself. We heard it from Frank Sinatra and Willie Nelson, who got together to inform other “city dudes and country cousins” that space research has given us medical imaging technology and other life-saving devices. This is no doubt true, and we should be grateful that it is. But Frank and Willie do not give us any reason to think that space research is especially well-suited to producing technological innovation. Most of the great inventions of the past century have had absolutely zilch to do with outer space.

The argument becomes even weaker when we recognize that the technological innovations generated by space exploration are often quite difficult for poorer communities to access – and particularly so for communities of color. I can do no better than quote Gil Scott-Heron’s “Whitey on the Moon” (1970):

“I can’t pay no doctor bills.

But Whitey’s on the moon.

Ten years from now I’ll be paying still.

While Whitey’s on the moon.”

Medical imaging is life-saving, but not so much for those who can’t afford it. Might we be better off providing affordable (dare I say free?) healthcare before investing in more space gizmos?

The Insurance Argument

Back in October 2018, Elon Musk tweeted:

“About half my money is intended to help problems on Earth & half to help establish a self-sustaining city on Mars to ensure continuation of life (of all species) in case Earth gets hit by a meteor like the dinosaurs or WW3 happens & we destroy ourselves”

This, in a nutshell, is the Insurance Argument: let’s invest in space exploration so that we can be sure to have an escape hatch, just in case of a meteor strike or nuclear fallout.

This is an argument that seasoned philosophers have also offered. Brian Patrick Green, an expert in space ethics (with a forthcoming book so titled), has been making a version of this argument since at least 2015 (even on CNN). It is quite plausible. Every building has an emergency exit. Shouldn’t we have an emergency exit for the planet we live on? Just in case?

It’s a compelling line of thought – until we consider a few facts. Mars is hands-down the most hospitable planet that astronauts can reach within a lifetime of space travel. But Mars is freezing. At its balmy best, during the summer, at the equator, Mars can reach 70 degrees Fahrenheit during the day. But at night it drops to minus 100 degrees Fahrenheit. It’s little surprise that when Kara Swisher asked Diana Trujillo, a NASA flight director, if she wanted to live in outer space, Diana immediately answered “No!!!” We humans were made to live on planet Earth, and there’s no place like home.

If an asteroid slams into our planet, we will likely go the way of the majestic dinosaurs. But are we sad that velociraptors aren’t prowling the streets? I certainly am not. Should we really be sad at the prospect of our ceasing to exist? Maybe. But we probably should get used to it. The Roman poet and philosopher Lucretius was on to something:

“Life is given to no one for ownership, to all for temporary use. Look back at how the past ages of eternity before our birth are nothing to us. In this way nature holds up a mirror for us of the time that will come after our death. Does anything then seem frightening? Does it seem sad to anyone? Does it not appear more serene than all of sleep?”

We cannot escape death or extinction. So perhaps we should stop spending resources on moonshots for the few, at the expense of the poor. And perhaps we should instead invest in those who are in greatest need. They deserve a life befitting a human being — a life of dignity in a safe community with access to education, medicine, and a chance to marvel at the starry skies above.

The Virtuous Life and the Certainty of Death

painting of sailboat at sea with darkening clouds

In the winter of 2019-2020, people in the United States and around the world watched the events unfolding in Wuhan, and, later, across China more broadly, with disbelief. News coverage showed hauntingly empty streets occasionally populated by isolated figures wearing hazmat suits and facemasks. The mystery illness unfolding there was horrifying, but it seemed to many of us to be a distant threat, something that could affect others, but not us, not here.

The human tendency to think of illness and death as misfortunes that happen only to other people is a form of bad faith that is discussed at length in existential literature.

Tolstoy explores these themes in The Death of Ivan Ilyich. The titular character suffers a minor accident which leads to his unexpected and untimely demise. He discovers with horror that he must die alone — no one around him is having his set of experiences, so no one can empathize with what he is going through. His friends and family can’t relate because they live their lives in denial of the possibility of their own respective deaths. Tolstoy describes the reaction of one of Ilyich’s friends and colleagues on the occasion of his funeral,

“Three days of frightful suffering and then death! Why, that might suddenly, at any time, happen to me,” he thought, and for a moment he felt terrified. But—he did not himself know how—the customary reflection at once occurred to him that this had happened to Ivan Ilyich and not to him, and that it should not and could not happen to him, and that to think it could would be yielding to depression which he ought not to do…After which reflection Peter Ivanovich felt reassured, and began to ask with interest about the details of Ivan Ilych’s death, as though death was an accident natural to Ivan Ilyich but certainly not to himself.

There is something relatable about Ivanovich’s response to his friend’s death, but when Tolstoy presents it to us in the third person we can’t help but recognize the absurdity of it. Of course death will come for Ivanovich — as it will for us all. Denial doesn’t change anything. Yet, denial is a common response when death and devastation surround us.

The COVID-19 pandemic is a tragic example of this. At the time of this writing, conservative estimates put its death toll at over 4 million people. Despite this fact, many refuse to recognize the severity of the threat and resist safety measures and vaccination. Throughout the pandemic, news agencies have reported the stories of families who deeply regret that they did not take COVID-19 seriously, often because they contracted a debilitating case themselves, or they have lost a friend, family member, or significant other to the virus. Missouri resident David Long commented, after losing his wife to the virus, “If you love your loved ones, take care of them.”

In times like this (and in all times, really) the attitude that senseless death won’t or can’t affect you or those you love is the very thing that hastens it.

Unsurprisingly, many schools of philosophy have focused on adopting a healthy and virtuous perspective toward death. The ancient Stoics taught that we ought not to live in denial of it; living virtuously involves accepting the inevitable features of existence over which we have no control. This approach to death emphasizes coming to terms with that which we cannot change rather than denying our powerlessness.

Some schools of Buddhism offer similar guidance. Mindfulness of death (maranasati) is an important aspect of right living. Diligent Buddhists frequently engage in the practice of focusing on the image of a corpse during meditation. Doing so reminds the practitioner of the inevitability of death, but at the same time reduces anxiety related to it through a process of familiarization and acceptance.

The senseless ways in which pandemics maim and kill threaten our sense of control over our circumstances. They reveal to us the absurdity of the human condition. This is something that many people would rather not think about, and the result is that many refuse to do so. As Camus writes in The Myth of Sisyphus,

Eluding is the invariable game. The typical act of eluding, the fatal evasion … is hope. Hope of another life one must “deserve” or trickery of those who live, not for life itself, but for some great idea that will transcend it, refute it, give it a meaning, and betray it.

Escapism has the potential to do much harm, not just to the self, but to others. When we think of death and disease as experiences that happen not to us, but only to others, we create in-groups and out-groups, as humans predictably do. When we see another living being as a member of an out-group, we have less compassion and empathy for their suffering.

We all die alone, and COVID-19 reminds us of that more starkly than other ways of dying. Often, the victims are quite literally alone when they experience their last moments. When we recognize that suffering and death are universal experiences for sentient beings and that we are no different, the barriers that keep us from understanding one another fall away. We are all in a position to relate to the suffering and death of another regardless of the identity categories to which that other belongs.

As Camus puts it in The Plague,

One can have fellow-feelings toward people who are haunted by the idea that when they least expect it plague may lay its cold hand on their shoulders, and is, perhaps, about to do so at the very moment when one is congratulating oneself on being safe and sound.

And as his character Tarrou says of another character, Cottard, while in lockdown due to plague,

like all of us who have not yet died of plague he fully realizes that his freedom and his life may be snatched from him at any moment. But since he, personally, has learned what it is to live in a state of constant fear, he finds it normal that others should come to know this state. Or perhaps it should be put like this: fear seems to him more bearable under these conditions than it was when he had to bear its burden alone.

All of these considerations provide us with compelling reasons to think of the COVID-19 pandemic as relevant to each of us, regardless of whether we are young or old, or suffer from health ailments or are temporarily healthy. We should treat the frailty of human life as seriously in the case of others as we would want it treated in our own case.

Medical Challenge Trials: Time to Embrace the Challenge?

photograph of military personnel receiving shot

The development of the COVID-19 vaccines is worthy of celebration. Never has a vaccine for a novel virus been so quickly developed, tested, and rolled out. Despite this success, we could have done much better. In particular, a recent study estimates that by allowing “challenge trials” in the early months of the pandemic, we would have completed the vaccine licensing process between one and eight months faster than we did using streamlined conventional trials. The study also provides a conservative estimate of the years of life that an earlier vaccine rollout would have saved: between 720,000 and 5,760,000. However, whether we should have used challenge trials depends on a number of ethical considerations.

Here is an extraordinary fact: we first genetically sequenced the virus in January 2020. Moderna then developed their RNA vaccine in just two days. But the F.D.A. could only grant the vaccine emergency authorization in late December — almost a year later. Over this period the virus killed approximately 320,000 U.S. citizens. The vast majority of the delay between development and approval was due to the time needed to run the necessary medical trials. Enough data needed to be collected to show the vaccines were effective and, even more importantly, safe.

Here’s how those trials worked. Volunteers from a large pool (for example, 30,420 volunteers in Moderna’s phase three trial) were randomly provided either a vaccine or a placebo. They then went about their lives. Some caught the virus, others didn’t. Researchers, meanwhile, were forced to wait until enough volunteers caught the illness for the results to be statistically valid. The fact that the virus spread so quickly was a blessing in this one respect; it sped up their research considerably.

So-called “challenge trials” are an alternative way to run medical trials. The difference is that in a challenge trial healthy (and informed) volunteers are intentionally infected with the pathogen responsible for the illness researchers want to study. The advantage is that statistically significant results can be obtained with far fewer volunteers, far more quickly. If we vaccinate volunteers and then expose them to the virus, we’ll have a good idea of the vaccine’s effectiveness within days. This means faster licensing, faster deployment of the vaccine, and, therefore, thousands of saved lives.
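To make the arithmetic concrete, here is a minimal sketch of why deliberate exposure shrinks the required pool so dramatically. All of the numbers (background attack rate, infection rate after exposure, target case count) and both helper functions are hypothetical illustrations, not figures from the study cited above.

```python
# Hypothetical comparison of field-trial vs. challenge-trial sizes.
# Every number below is invented for illustration only.

def field_trial_size(target_cases, monthly_attack_rate, months):
    # Probability a participant catches the virus at some point
    # while simply going about their life during the trial.
    prob_infected = 1 - (1 - monthly_attack_rate) ** months
    return round(target_cases / prob_infected)

def challenge_trial_size(target_cases, exposure_infection_rate):
    # Every volunteer is deliberately exposed, so infections accrue
    # almost immediately and far fewer participants are needed.
    return round(target_cases / exposure_infection_rate)

# Suppose ~150 infection events are needed for statistically valid results.
field = field_trial_size(150, monthly_attack_rate=0.01, months=3)
challenge = challenge_trial_size(150, exposure_infection_rate=0.9)
print(field, challenge)  # 5050 167
```

On these toy numbers, a field trial needs roughly thirty times as many participants, who must then be followed for months, while a challenge trial yields its infection events within days of exposure.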

Challenge trials are generally blocked from proceeding on ethical grounds. Infecting healthy people with a pathogen they might never otherwise be exposed to — a pathogen which might cause them serious or permanent harm or even death — might seem difficult to justify. Some medical practitioners consider it a violation of the Hippocratic oath they have sworn to uphold — “First, do no harm.” Advocates of challenge trials point out that slow, traditional medical trials can cause even greater harm. Hundreds of thousands of lives could likely have been saved had COVID-19 challenge trials been permitted and the various vaccines’ emergency approval occurred months earlier.

Admittedly, challenge trials effectively shift some risk of harm from the public at large to a small group of medical volunteers. Can we really accept greater risk of harm and death in a small group in order to protect society as a whole? Or are there moral limits to what we can do for the ‘greater good’? Perhaps it is this unequal distribution of burdens and benefits that critics object to as unethical or unjust.

Advocates of challenge trials point out that volunteers consent to these risks. Hence, permitting challenge trials is, fundamentally, simply permitting fully consenting adults to put themselves at risk to save others. We don’t ban healthy adults from running into dangerous water to save drowning swimmers (even though these adults would be risking harm or death). So, the reasoning goes, nor should we ban healthy adults from volunteering in medical trials to save others’ lives.

Of course, if a volunteer is lied to or otherwise misinformed about the risks of a medical trial, their consent to the trial does not make participation ethically permissible. For consent to be ethically meaningful, it must be informed. Volunteers must understand the risks they face and judge them to be acceptable. But making sure that volunteers fully understand the risks involved (including the ‘unknown’ risks) can be difficult. For example, a well-replicated finding from psychology is that people are not very good at understanding the likelihood of very low- (or high-) probability events occurring. We tend to “round down” low probability events to “won’t happen” and “round up” high probability events to “will happen”. A 0.2% probability of death doesn’t seem very different from a 0.1% probability to most of us, even though it’s double the risk.
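The point about rounding can be made with toy arithmetic (the pool size and risk figures below are invented for illustration): a doubling that feels negligible to an individual doubles the expected harm across a large volunteer pool.

```python
# Invented numbers: how a "negligible-looking" doubling of risk scales up.
def expected_deaths(volunteers, risk_of_death):
    return volunteers * risk_of_death

pool = 10_000
print(expected_deaths(pool, 0.001))  # 0.1% risk: 10.0 expected deaths
print(expected_deaths(pool, 0.002))  # 0.2% risk: 20.0 expected deaths
```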

Informed consent also cannot be obtained from children or those who are mentally incapable of providing it, perhaps due to extreme old age, disability, or illness. So members of these groups cannot participate in challenge trials. This limitation, combined with the fact that younger, healthier people may be more likely to volunteer for challenge trials than their more vulnerable elders, means that the insights we gain from the trial data may not translate well to the broader population. This could weaken the cost-benefit ratio of conducting challenge trials, at least in certain cases.

A further ethical worry about challenge trials is that the poor and the disadvantaged, those with no other options, might be indirectly coerced to take part. If individuals are desperate enough to access financial resources, for example for the food or shelter they require, they might take on incredible personal risk to do so. This dynamic is called “desperate exchange,” and it must be avoided if challenge trials are to be ethically permissible.

One way to prevent desperate exchanges is to place limits on the financial compensation provided to volunteers, for example merely covering travel and inconvenience costs. But this solution might be thought to threaten to undermine the possibility of running challenge trials at all. Who is going to volunteer to put their life at risk for nothing?

There’s some evidence that people would be willing to volunteer even without serious financial compensation. In the case of blood donation, unpaid voluntary systems see high donation rates and higher donor quality than market-based, paid-donation systems such as the U.S.’s. As I write this, 38,659 volunteers from 166 countries have already signed up to be challenge trial volunteers with “1 Day Sooner,” a pro-challenge-trial organization focusing on COVID-19 trials. These volunteers expect no monetary compensation, and are primarily motivated by ethical considerations.

The advocates of challenge trials systematically failed to win the argument as COVID-19 spread across the globe in 2020. Medical regulators deemed the ethical concerns too great. But the tide may now be changing. This February, British regulators approved a COVID-19 challenge trial. When time-in-trial equates with lives lost, the promise of challenge trials may prove too strong to ignore.

What Morgellons Disease Teaches Us about Empathy

photograph of hand lined with ants

For better or for worse, COVID-19 has made conditions ripe for hypochondria. Recent studies show a growing aversion to contagion, even as critics like Derek Thompson decry what he calls “hygiene theater,” the soothing but performative (and mostly ineffectual) obsession with sanitizing every surface we touch. Most are, not unjustifiably, terrified of contracting real diseases, but for nearly two decades, a small fraction of Americans have battled an unreal condition with just as much fervor and anxiety as the contemporary hypochondriac. This affliction is known as Morgellons, and it provides a fascinating study in the limits of empathy, epistemology, and modern medical science. How do you treat an illness that does not exist, and is it even ethical to provide treatment, knowing it might entrench your patient further in their delusion?

Those who suffer from Morgellons report a nebulous cluster of symptoms, but the overarching theme is invasion. They describe (and document extensively, often obsessively) colorful fibers and flecks of crystal sprouting from their skin. Others report the sensation of insects or unidentifiable parasites crawling through their body, and some hunt for mysterious lesions only visible beneath a microscope. All of these symptoms are accompanied by extreme emotional distress, which is only exacerbated by the skepticism and even derision of medical professionals.

In 2001, stay-at-home mother Mary Leitao noticed strange growths on her toddler’s mouth. She initially turned to medical professionals for answers, but they couldn’t find anything wrong with the boy, and one eventually suggested that she might be suffering from Munchausen’s-by-proxy. She rejected this diagnosis, and began trawling through historical sources for anything that resembled her son’s condition. Leitao eventually stumbled across the 17th-century English doctor and polymath Sir Thomas Browne, who offhandedly describes in a letter to a friend “that Endemial Distemper of little Children in Languedock, called the Morgellons, wherein they critically break out with harsh hairs on their Backs, which takes off the unquiet Symptoms of the Disease, and delivers them from Coughs and Convulsions.” Leitao published a book on her experiences in 2002, and others who suffered from a similar condition were brought together for the first time. This burgeoning community found a home in online forums and chat rooms. In 2006, the Charles E. Holman Foundation, which describes itself as a “grassroots activist organization that supports research, education, diagnosis, and treatment of Morgellons disease,” began hosting in-person conferences for Morgies, as some who suffer from Morgellons affectionately call themselves. Joni Mitchell is perhaps the most famous of the afflicted, but it’s difficult to say exactly how many people have this condition.

No peer-reviewed study has been able to conclusively prove the disease is real. When fibers are analyzed, they’re found to be from sweaters and t-shirts. A brief 2015 essay on the treatment of delusional parasitosis published in the British Medical Journal notes that Morgellons usually appears at the nexus of mental illness, substance abuse, and other underlying neurological disorders. But that doesn’t necessarily mean the ailment isn’t “real.” When we call a disease real, we mean that it has an identifiable biological cause, usually a parasite or bacterium, something that will show up in blood tests and X-rays. Mental illness is far more difficult to prove than a parasitic infestation, but no less real for that.

In a 2010 book on culturally-specific mental illness, Ethan Watters interviewed medical anthropologist Janis Hunter Jenkins, who explained to him that “a culture provides its members with an available repertoire of affective and behavioural responses to the human condition, including illness.” For example, Victorian women suffering from “female hysteria” exhibited symptoms like fainting, increased sexual desire, and anxiety because those symptoms indicated distress in a way that made their pain legible to culturally-legitimated medical institutions. This does not mean mental illness is a conscious performance that we can stop at any time; it’s more of a cipherous language that the unconscious mind uses to outwardly manifest distress.

What suffering does Morgellons make manifest? We might say that the condition indicates a fear of losing bodily autonomy, or a perceived porous boundary between self and other. Those who experience substance abuse often feel like their body is not their own, which further solidifies the link between Morgellons and addiction. Of course, one can interpret these fibers and crystals to death, and this kind of analysis can only take us so far; it may not be helpful to those actually suffering. Regardless of what they mean, the emergence of strange foreign objects from the skin is often experienced as a relief. In her deeply empathetic essay on Morgellons, writer Leslie Jamison explains that, in Sir Thomas Browne’s account, outward signs of Morgellons were a boon to the afflicted. “Physical symptoms,” Jamison says, “can offer their own form of relief—they make suffering visible.” Morgellons provides physical proof that something is wrong without forcing the afflicted to view themselves as mentally ill, which is perhaps why some cling so tenaciously to the label.

Medical literature has attempted to grapple with this deeply-rooted sense of identification. The 2015 essay from the British Medical Journal recommends recruiting the patient’s friends and family to create a treatment plan. It also advises doctors not to validate or completely dispel their patient’s delusion, and provides brief scripts that accomplish that end. In short, they must “acknowledge that the patient has the right to have a different opinion to you, but also that he or she shall acknowledge that you have the same right.” This essay makes evident the difficulties doctors face when they encounter Morgellons, but its emphasis on empathy is important to highlight.

In many ways, the story of Morgellons runs parallel to the rise of the anti-vaccination movement. Both groups were spearheaded by mothers with a deep distrust of medical professionals, both have fostered a sense of community and shared identity amongst the afflicted, and both legitimate themselves through faux-scientific conferences. The issue of bodily autonomy is at the heart of each movement, as well as an epistemic challenge to medical science. And of course, both movements have attracted charlatans and snake-oil salesmen, looking to make a cheap buck off expensive magnetic bracelets and other high-tech panaceas. While the anti-vaxx movement is by far the more visible and dangerous of the two, both movements test the limits of our empathy. We can acknowledge that people (especially from minority communities, who have historically been mistreated by the medical establishment) have good reason to mistrust doctors, and try to acknowledge their pain while also embracing medical science. Ultimately, the story of Morgellons may provide a valuable roadmap for doctors attempting to combat vaccine misinformation.

As Jamison says, Morgellons disease forces us to ask “what kinds of reality are considered prerequisites for compassion. It’s about this strange sympathetic limbo: Is it wrong to speak of empathy when you trust the fact of suffering but not the source?” These are worthwhile questions for those within and without the medical profession, as we all inevitably bump up against other realities that differ from our own.

Do Politicians Have a Right to Privacy?

photograph of Matt Hancock delivering press briefing

On Friday, June 25th, 2021, British tabloid The Sun dropped a bombshell: leaked CCTV images of (then) UK Health Secretary, MP Matt Hancock, kissing a political aide in his office. Video footage of the pair intimately embracing rapidly circulated on social media. Notably, the ensuing outrage centered not on the fact that Hancock was cheating on his wife (lest we forget, Prime Minister Boris Johnson is himself a serial offender), but on the hypocrisy of Hancock breaching his own social distancing guidelines. By the next day, with his position looking increasingly untenable, Hancock resigned. Thus, the man who had headed up the UK’s response to the COVID-19 pandemic over the past 18 months was toppled by a single smooch.

In the wake of this political scandal, it is useful to take a step back and consider the ethical issues which this episode brings to light. Following the release of the video, Hancock pleaded for “privacy for my family on this personal matter.” What is privacy, and why is it valuable? Does a distinct right to privacy exist? Do politicians plausibly waive certain rights to privacy in running for public office? When (if ever) can an individual’s right to privacy be justifiably infringed, and was this the case in the Hancock affair?

It is widely accepted that human beings have a very strong interest in maintaining a hidden interior which they can choose not to share with others. The general contents of this interior will differ widely between cultures; after all, what facts count as ‘private’ is a contingent matter which will vary depending on the social context. Nevertheless, according to the philosophy professor Tom Sorell, this hidden interior can roughly be divided into three constituents (at least, in most Western contexts): the home, the body, and the mind.

There are many reasons why privacy is important to us. For instance, let us briefly consider why we might value a hidden psychological interior. Without the ability to shield one’s inner thoughts from others, individuals would not be able to engage in autonomous self-reflection, and consequently would be a different self altogether. Moreover, according to the philosopher James Rachels, the ability to keep certain aspects of ourselves hidden is essential to our capacity to form a diverse range of interpersonal social relationships. If we were always compelled to reveal our most intimate secrets, then this would not only devalue our most meaningful relationships, but would also make it impossible to form less-intimate relationships such as mere acquaintances (which I take to be valuable in their own right).

There is considerable debate over whether a distinct right to privacy exists. As the philosopher Judith Jarvis Thomson famously noted, “perhaps the most striking thing about the right to privacy is that nobody seems to have any very clear idea what it is.” According to Thomson, this can be explained by the fact that our seeming ‘right’ to privacy is in fact wholly derivative of a cluster of other rights which we hold, such as rights over our property or our body; put another way, our interest in privacy can be wholly attributed to our interest in other goods which are best served by recognizing a discrete, private realm, such that we have no separate interest in something called ‘privacy’.

Suppose that a right to privacy does in fact exist. Can this right to privacy be (i) waived, (ii) forfeited, or (iii) trumped? Let us go through each in turn. A right is waived if the rights-holder voluntarily forgoes that right. Many people believe that certain rights (for instance, the right not to be enslaved) cannot be voluntarily waived. However, intuitively it would seem that privacy is not such an inalienable right: there are plenty of goods which we may legitimately want to trade privacy off against, such as our ability to communicate with others online. It could be argued that, in choosing to run for public office, politicians waive certain rights to privacy which other members of the public retain, since they do so in the knowledge that a certain degree of media scrutiny is a necessary part of being a public servant. Perhaps, then, Hancock had waived his right to keeping his sexual life private, in virtue of having run for public office.

A right is arguably forfeited if the rights-holder commits certain acts of wrongdoing. For instance, according to the so-called rights forfeiture theory of punishment, “punishment is justified when and because the criminal has forfeited her right not to be subjected to this hard treatment.” For those who endorse this (albeit controversial) view, it could perhaps be thought that Hancock forfeited his right not to have this sexual life publicized, in virtue of having culpably committed the wrongdoing of breaching social distancing guidelines and/or hypocrisy.

Finally, can a right to privacy be trumped? Philosophers disagree about whether it is coherent to talk about rights ‘trumping’ one another. According to the philosopher Hillel Steiner, rights comprise a logically compossible set, meaning that they never conflict with one another. By contrast, philosophers such as Thomson maintain that rights can and do conflict with each other.

Suppose that we think that the latter is true. In an instance where an agent’s right to privacy conflicts with the right of another agent, we must determine whose interests are weightier and give them priority. In the case of the Hancock saga, it could be said that there was a strong public interest in knowing that the Health Secretary had breached his own social distancing guidelines. However, the mere existence of a public interest in knowing this information is not sufficient to generate a right on behalf of the public to find out this information; moreover, even if it did, this would not necessarily trump the right of the individual politician to privacy.

So, did the leaking of the CCTV footage breach Hancock’s right to privacy? And if so, were the newspaper reports nevertheless justified on balance? My own view is that Hancock had neither waived nor forfeited his right to privacy, and that his right to privacy was not trumped by other considerations – that is to say, I think that the leaking of the footage wronged Hancock in some way. Nevertheless, I have complete sympathy with the subsequent public reaction to the newspaper reports. Throughout the pandemic, many facts which had previously been regarded as paradigmatically ‘private’ (such as whether one was sexually active, and with whom) were suddenly subject to a very high degree of public intrusion. Set against this backdrop, the Hancock affair served as yet another instance of “one rule for the establishment, another for everyone else.”

Is It Time to Show the Lobster a Bit of Respect?

photograph of lobsters in water tank at market

The United Kingdom is currently in the process of revising what the law says about sentience and the ethical treatment of animals. This week news broke that the Conservative Animal Welfare Foundation has called for greater protections for non-vertebrates such as octopuses and crustaceans. As a consequence, debate is emerging about whether practices such as boiling lobsters alive should be banned. Much of this debate seems to be centered on scientific facts regarding the nervous systems of such animals and whether they are capable of feeling pain at all. But perhaps this is the wrong mindset to have when considering this issue. Perhaps it is more important to consider our own feelings about how we treat lobsters rather than how the lobsters feel about it.

The ethical debate about the treatment of lobsters has mostly focused on the practice of boiling them alive when being prepared for eating. Lobsters are known to struggle for up to two minutes after being placed in boiling water and emit a noise caused by escaping air that many interpret as screaming. In response to such concerns, Switzerland, Norway, Austria, and New Zealand have all banned the practice of boiling lobsters alive and require that they be transported in saltwater rather than being kept in/on ice. But the debate always seems to hinge on the question of sentience. Can a lobster feel pain when being boiled alive? To answer that, questions of sentience become questions of science.

There is no clear consensus among scientists about whether the lobster nervous system permits it to feel pain. But how do you measure pain? To many, the reaction to being in boiling water is taken as a sign that the lobster is in pain. Some studies have shown that lobsters will avoid shocks, a process called nociception, in which the nervous system responds to noxious stimuli by producing a reflex response. This explains why the lobster thrashes in the boiling water. However, other scientists have questioned whether the nervous system of the lobster is sophisticated enough to allow for any actual sense of suffering, arguing that a lobster’s brain is more similar to that of an insect. They suggest that the sensory response to stimuli is different from pain, which involves an experience of discomfort, despair, and other emotional states.

Indeed, as invertebrates, lobsters do not have a central brain, but rather groups of chain ganglia connected by nerves. This can make killing them challenging, as a simple blow to the head will not do; a lobster must have its central nervous system destroyed with a complicated cut on the underside. It is recommended that they be stunned electrically. Because of this very different brain structure, it is suggested that lobsters lack the capacity to suffer. As Robert Bayer of the Lobster Institute describes the issue, “Cooking a lobster is like cooking a big bug…Do you have the same concern when you kill a fly or mosquito?”

Nevertheless, critics charge that this thinking is only a form of discrimination against animals with neurological architecture different from our own. Indeed, beyond nervous system reflex responses, because pain is difficult to directly measure, other markers of pain are often driven by using arguments by analogy comparing animals to humans. But creatures who are fundamentally different from humans may make such analogies suspect. In other words, because we don’t know what it is like to be a lobster, it is difficult to say if lobsters feel pain at all or if pain and suffering may fundamentally mean something different for lobsters than they do for humans and other vertebrates. This makes addressing the ethics of how we treat lobster by looking to the science of lobster anatomy difficult. But perhaps there is another way to consider this issue that doesn’t require answering such complex questions.

After all, if we return to Bayer’s remarks comparing lobsters to bugs, there are some important considerations: Is it wrong to roast ants with a magnifying glass? Is it wrong to pull the wings off flies? Typically, people take issue with such practices not merely because we worry about how the ant or the fly feels, but because it reveals something problematic about the person doing it. Even if the ant or the fly doesn’t feel pain (they might), it seems unnecessarily brutal to effectively torture such animals by interfering in their lives in such seemingly thoughtless ways, particularly if not for food. But would it all suddenly be okay if we decide to eat them afterwards? Perhaps such antics reveal an ethical character flaw on our part.

In his work on environmental ethics, Ronald L. Sandler leans on other environmental ethicists such as Paul Taylor to articulate an account of what kind of character we should have in our relationships with the environment. Taylor advocates that actions be understood as morally right or wrong insofar as they embody a respect for nature. Having such a respect for nature entails a “biocentric outlook” where we regard all living things on Earth as possessing inherent moral worth. This is because each living thing has “a good of its own.” That is, such an outlook involves recognizing that all living organisms are teleological centers of life in the same way humans are, and that we have no non-question-begging justification for maintaining the superiority of humans over other species. In other words, all living things are internally organized towards their own ends or goods which secure their biological functioning and form of life, and respecting nature means respecting that biological functioning and the attainment of such ends.

Taylor’s outlook is problematic because it puts all life on the same ethical level. You are no more morally important than the potato you had for dinner (and how morally wrong it was for you to eat that poor potato!). However, Sandler believes that much of Taylor’s insight can be incorporated into a coherent account of multiple environmental virtues, with respect for nature being one of them. As he puts it, “The virtues of respect for nature are informed by their conduciveness to enabling other living things to flourish as well as their conduciveness to promoting the eudemonistic ends.” While multiple virtues may be relevant to how we should act — such that, for example, eating lobster may be ethical — how we treat those lobsters before that point may demonstrate a fundamental lack of respect for a living organism.

Consider the lobster tanks one finds at a local grocery store, where multiple lobsters may be stacked on top of each other in a barren tank with their claws stuck together. Many have complained about such tanks, and some stores have abandoned them as critics charge that they are stressful for the lobster. It is difficult to say that such “live” lobsters are really living any kind of life consistent with the kind of thing a lobster is. Does keeping lobsters in such conditions demonstrate a lack of respect for the lobster as a living organism with a good of its own? As one person who launched a petition over the matter puts it, “I’m in no way looking to eliminate the industry, or challenge the industry, I’m just looking to have the entire process reviewed so that we can ensure that if we do choose to eat lobsters, that we’re doing it in a respectful manner.”

So perhaps the ethical issue is not whether lobsters can feel pain as we understand it. Clearly lobsters have nervous systems that detect noxious stimuli, and perhaps that should be enough reason not to inflict such stimuli on them if we don’t have to. We know it doesn’t contribute to the lobster’s own good. So perhaps the ethical treatment of lobsters should focus less on what suffering is created and more on our own respect for the food that we eat.

Bill Cosby and Rape Culture

black and white photograph of lamp light in darkness

In 2018, comedian, television personality, and serial rapist Bill Cosby was convicted and sentenced by a jury of his peers to three to ten years in prison for drugging and sexually assaulting Temple University employee Andrea Constand in 2004. The Constand rape was the crime for which Cosby was convicted, but he was accused of very similar crimes by no fewer than 60 women, including two who were underage girls at the time of their alleged assaults. Cosby’s conviction was hailed as a major success for the #MeToo movement, which aims at long-lasting change when it comes to misogyny and rape culture in the United States. At last, it seemed, we might finally be starting to see the end of the ability of men, especially powerful men, to get away with sexual transgressions. Even “America’s Dad” was not too powerful to be held accountable for how he treated women — or so it appeared. On Wednesday, June 30th, 2021, Pennsylvania’s highest court overturned Cosby’s conviction and he walked away a free man.

The court did not vacate the conviction because new information came to light concerning Cosby’s guilt. They did not overturn it because Cosby was actually innocent of the crimes for which he was accused and convicted. Instead, as is usually the case in these kinds of proceedings, his appeal prevailed because of a technical legal issue — in a split decision, the court found that Cosby’s due process rights had been violated. Cosby agreed to testify in a civil case related to the same allegation because a prosecutor guaranteed him that the case would not be prosecuted in criminal court. A different prosecutor, who claimed that they had not made the promise and were not bound by the agreement, prosecuted Cosby in the criminal proceeding in 2018. Cosby’s testimony in the civil trial was used against him in the criminal proceeding. The Pennsylvania Supreme Court ruled that this violated Cosby’s rights against self-incrimination. In depositions related to these matters, Cosby has acknowledged giving quaaludes to women with whom he wanted to “have sex.”

It’s important that our justice system is procedurally fair. As a result, it’s equally important that we have an appeals process that corrects procedural unfairness. It’s extremely unfortunate that there was a technical mistake in Cosby’s conviction — based on the evidence presented at his trial, the finders-of-fact determined that he was guilty. People who have done extremely bad things are released for reasons of procedural unfairness all the time, and this is as it should be. We don’t want a criminal justice system in which prosecutors and other players in the system can bend the rules. If this were the way the system worked, anyone could be steamrolled for anything. What’s more, the victims of that kind of procedural injustice are frequently members of oppressed groups. Abandoning procedural fairness would only make these problems much worse. That said, there are many unfortunate consequences of the court’s ruling and they highlight the fact that we still have a long way to go to create an environment that is safe and peaceful for women and survivors of sexual violence.

First is the disingenuous response of Cosby himself. On Twitter, he posted a picture with his fist held high as if in victory with the caption, “I have never changed my stance nor my story. I have always maintained my innocence.” This is at best a non-sequitur and at worst an attempt to gaslight and deceive. The court didn’t find evidence of his innocence. In fact, if Cosby had not incriminated himself, that is, if he did not admit his crime in the civil proceeding, the court would not have been able to overturn his conviction in the first place.

The behavior of close friends of Cosby’s did not help matters. His long-time television wife, Phylicia Rashad, tweeted the following: “FINALLY!!!! A terrible wrong is being righted- a miscarriage of justice is corrected!” Rashad now serves as the Dean of the Fine Arts College at Howard University, and she quickly faced considerable backlash for her online remarks. In response, Rashad released an apology to Howard University students and parents saying, among other things, “My remarks were in no way directed towards survivors of sexual assault. I vehemently oppose sexual violence, find no excuse for such behavior, and I know that Howard University has a zero-tolerance policy toward interpersonal violence.” She committed “to engage in active listening and participate in trainings to not only reinforce University protocol and conduct, but also to learn how I can become a stronger ally to sexual assault survivors and everyone who has suffered at the hands of an abuser.” Notably absent from her apology was any discussion of the Cosby case specifically, or any acknowledgment that she had misrepresented the reasons for his release by suggesting that the substantive evidence supporting his conviction had somehow been undermined by the appellate court.

Overturning Cosby’s sentence led to a mountain of celebrity apologetics online — enough to make rape survivors feel very uncomfortable. When celebrities are involved, many people succumb to confirmation bias — in this case they have affection for the wild-sweater-wearing, Jell-O-pudding-slinging, television super dad of their youths, and they don’t want to believe that a person they liked so much could be capable of doing the things for which Cosby has been tried and convicted.

The fact is, survivors of sexual assault watch all of this happen and they see how eager people are to trust their heroes and how reluctant they are to trust accusers. This impacts the willingness of a victim to come forward because they see how they might be treated if and when they do, even in cases in which the evidence is overwhelming.

This case emphasizes the moral necessity of educating our children in more comprehensive ways when it comes to rape culture and the kinds of biases that come along with it. We need to teach children not just about the mechanics of sex, how to engage in family planning, and how to avoid STDs. We also need to have open and honest conversations with young people about the nature of consent.

Unfortunately, some state legislatures are quite unfriendly to the concept. For instance, this year, lawmakers in Utah rejected a bill that would have mandated teaching consent in schools. Their reasoning was that teaching consent suggests to children that it might be okay to say yes to sex before marriage. The majority of the state’s lawmakers favor an abstinence-only policy. But refraining from talking to students about what it means to grant consent results in people having ill-formed ideas about the conditions under which consent is not given. This leaves us with a citizenry that is willing to pontificate on social media about whether giving someone a quaalude in anticipation of “sex” is really setting the stage for rape. Our children should all know that it is.

Children should be taught further that even the most affable and charismatic people can be sexual offenders. In fact, having such traits often makes it easier for these people to commit crimes unsuspected and undetected. A real commitment to ending rape culture entails a commitment to speak openly and honestly about sex and sexual misconduct. In practice, abstinence-only policies are, among other things, a frustrating barrier to the full realization of women’s rights.

Sha’Carri Richardson and the Spirit of the Game

photograph of blurred sprinters leaving starting blocks

Sprinter and Olympic hopeful Sha’Carri Richardson made headlines recently when she was suspended from the US women’s team after testing positive for THC, a chemical found in marijuana. Using marijuana is in violation of the World Anti-Doping Agency’s (WADA) World Anti-Doping Code, which includes “all natural and synthetic cannabinoids” on its prohibited list. Richardson has accepted responsibility for violating the rules, and while she stated that she is not looking to be excused, she explained that learning about the death of her biological mother and the subsequent emotional suffering was the reason why she used marijuana, despite knowing that it is a prohibited substance.

Many online expressed their confusion as to why Richardson should be reprimanded so harshly as to potentially miss the upcoming Olympic games, as well as why THC would be on WADA’s list of banned substances in the first place. For instance, while WADA’s justification for including THC on its list of prohibited substances is that it “poses a health risk to athletes, has the potential to enhance performance and violates the spirit of sport,” many online have pointed out that it is debatable whether it poses health risks to athletes (especially when compared to other substances which are not on the prohibited list, such as alcohol and cigarettes), and that it is a stretch to say that it could enhance one’s athletic performance.

What about marijuana usage violating “the spirit of the sport”? WADA defines this notion in a few ways: as the “intrinsic value” of the sport, “the ethical pursuit of human excellence through the dedicated perfection of each Athlete’s natural talents,” and “the celebration of the human spirit, body and mind” which is expressed in terms of a number of values like “health,” “fun and joy,” “teamwork,” etc. Let’s focus on the second one: what might it mean to “ethically pursue” human excellence in the way that WADA describes, and did Richardson fail in this regard?

Unfortunately, the WADA Code does not go into details about what is meant by “ethical.” Perhaps the example of an unethical pursuit of human physical excellence that comes immediately to mind is the use of anabolic steroids: the use of such substances may be considered unethical as they represent a kind of shortcut, providing a distinctly unnatural way to enhance one’s talents. Other substances on WADA’s banned substance list potentially provide routes to excellence in sports in more subtle ways: for instance, beta-blockers – which reduce blood pressure and make one’s heart beat more slowly – provide a seemingly unnatural advantage when it comes to sports like archery and shooting. One way to violate the spirit of the sport, then, may involve taking shortcuts to physical improvement and the overcoming of physical obstacles.

As we have seen, however, marijuana does not provide any performance-enhancing effects to sprinters, and indeed is likely to be detrimental, if anything. It is clear that Richardson’s use of marijuana is thus not unethical with respect to taking shortcuts.

Presumably, though, there are more ways to fail to ethically pursue excellence in athletics than using performance-enhancing drugs. For example, if I were to consistently berate my teammates for failing to meet the standards of my supreme physical prowess in an attempt to get a higher spot on the roster, I would presumably be acting unethically in a way that violates the spirit of the sport, as I would be violating numerous values on WADA’s list (I would not, for example, be exemplifying the value of “fun and joy”). Another way to violate the spirit of the sport may thus involve an attempt to succeed by deliberately and intentionally thwarting others in a way besides simply being better at some given competition.

Again, it seems clear that using marijuana to help cope with the emotional pain of a personal tragedy also fails to fall into this category. What about the celebration of “the human spirit, body, and mind”? Maybe one could reason in this way: smoking pot is a trait possessed by the lazy nogoodniks of society, not Olympic athletes. Pot-smokers are not out there training every day, pushing themselves to their physical limits in the pursuit of excellence; instead, they are sitting on the couch, eating an entire pan of brownies, and giggling to themselves while watching Arrested Development for the tenth time. This is the kind of person who does not celebrate the human spirit, or body, or mind.

While this is a caricature, it is perhaps not far from WADA’s own reasoning. For instance, in a recent guidance note, WADA clarified that it identifies some “Substances of Abuse” on the basis of their being “frequently abused in society outside the context of sport.” In addition to marijuana, cocaine, meth, and ecstasy make the list of Substances of Abuse. None of these drugs offer any obvious performance enhancing effects, and it is unclear why they would be included on the list besides the stereotype that the kind of people who use them are, in some way, “bad.” It is unclear, however, why using a drug that can be abused outside of the context of sport should be considered in violation of the spirit of the sport if one is not themselves abusing it.

There are potentially many more angles from which one could approach Richardson’s suspension – for instance, in a tweet Alexandria Ocasio-Cortez highlighted how marijuana laws in the U.S. reflect “policies that have historically targeted Black and Brown communities,” that such laws are beginning to change across the U.S., and that marijuana is legal in the state in which Richardson used it. While there is no doubt that Richardson violated WADA’s rules, it also seems clear that there is good reason to revise them. Indeed, by WADA’s own standards, Richardson’s actions were not unethical, and did not in any way violate the spirit of the game.

“Cruel Optimism,” Minimum Wage, and the Good Life

photograph looking up at Statue of Liberty

In early May, executives from the fast casual restaurant Chipotle Mexican Grill announced that the company would be raising its average hourly wage to $15 by the end of June. A few weeks later, Chipotle also announced that its menu prices would be increasing by about four percent to help offset those higher wages (as well as the increasing costs of ingredients). This means that instead of paying, say, $8.00 for a burrito, hungry customers will now instead be expected to pay $8.32 for the same amount of food.

While you might think that such a negligible increase would hardly be worth arguing about, opponents of a minimum wage hike jumped on this story as an example of the supposed economic threat posed by changing federal labor policies. During recent debates in Congress, for example, those resistant to the American Rescue Plan’s original provision to raise the federal minimum wage frequently argued that doing so could disadvantage consumers by causing prices to rise. Furthermore, Chipotle’s news exacerbated additional complaints about the potential consequences of the Economic Impact Payments authorized in light of the coronavirus pandemic: allegedly, Chipotle must raise their wages so as to entice “lazy” workers away from $300/week unemployment checks.

Nevertheless, despite the cost of burritos rising by a quarter or two, the majority of folks in the United States (just over six out of ten) support raising the federal minimum wage to $15 per hour. As many as 80% think the wage is too low in general, with more than half of surveyed Republicans (the political party most frequently in opposition to raising the minimum wage) agreeing. Multiple states have already implemented higher local minimum wages.

Why, then, do politicians, pundits, and other people continue to spread the rhetoric that minimum wage increases are unpopular and financially risky for average burrito-eaters?

Here’s where I think a little philosophy might help. Often, we are attracted to things (like burritos) because we recognize that they can satisfy a desire for something we presently lack (such as sustenance); by attaining the object of our desire, we can likewise satisfy our needs. Lauren Berlant, the philosopher and cultural critic who recently died of cancer on June 28th, calls this kind of attraction “optimism” because it is typically what drives us to move through the world beyond our own personal spaces in the hopes that our desires will be fulfilled. But, importantly, optimistic experiences in this sense are not always positive or uplifting. Berlant’s work focuses on cases where the things we desire actively harm us, but which we nevertheless continue to pursue; calling such phenomena cases of “cruel optimism,” they explain how “optimism is cruel when the object/scene that ignites a sense of possibility actually makes it impossible to attain the expansive transformation for which a person or a people risks striving.” Furthermore, cruel optimism can come about when an attraction does give us one kind of pleasure at the expense of other, more holistic (and fundamental) forms of flourishing.

A key example Berlant gives of “cruel optimism” is the fallacy of the “good life” as something that can be achieved if only one works hard enough; as they explain, “people are trained to think that what they’re doing ought to matter, that they ought to matter, and that if they show up to life in a certain way, they’ll be appreciated for the ways they show up in life, that life will have loyalty to them.” Berlant argues that, as a simple matter of fact, this characterization of “the good life” fails to represent the real world; despite what the American Dream might offer, promises of “upward mobility” or hopes to “lift oneself up by one’s own bootstraps” through hard work and faithfulness have routinely failed to manifest (and are becoming ever more rare).

Nevertheless, emotional (or otherwise affective) appeals to stories about the “good life” can offer a kind of optimistic hope for individuals facing a bleak reality — because this hope is ultimately unattainable, it’s a cruel optimism.

Importantly, Berlant’s schema describes a paradigmatically natural process — there need not be any individual puppetmaster pulling the strings (secretly or blatantly) to motivate people’s commitment to a given case of cruel optimism. However, such a cultural foundation is apt for abuse by unvirtuous agents or movements interested in selfishly profiting off of the unrealistic hopes of others.

We might think of propaganda, then, as a sort of speech act designed to sustain a narrative of cruel optimism. According to Jason Stanley, a key kind of damaging propaganda is “a contribution to public discourse that is presented as an embodiment of certain ideals, yet is of a kind that tends to erode those very ideals.” When a social group’s ideals are eroded into hollowness — when stories about “the good life” perpetuate a functionally unattainable hope — then the propagandistic narratives facilitating this erosion (and, by extension, the vehicles of propaganda spreading these narratives) are morally responsible.

The case of Chipotle arises at the center of several overlapping objects of desire: for some, the neoliberal hope of economic self-sufficiency is threatened by governmental regulations on market prices of commodities like wage labor, as well as by federal mechanisms supporting the unemployed — with the minimum wage and pandemic relief measures both (at least seemingly) relating to this story, it is unsurprising that those optimistic about the promise of neoliberalism interpreted Chipotle as a bellwether for greater problems. Furthermore, consumer price increases, however slight, threaten to damage hopes of achieving one’s own prosperity and wealth. The fact that these hopes are ultimately rather unlikely means that they are cases of cruel optimism; the fact that politicians and news outlets are nevertheless perpetuating them (or at least framing the information in a manner that elides broader conversations about wealth inequity and fair pay) means that those stories could count as cases of propaganda.

And, notably, this is especially true when news outlets are simply repeating information from company press releases, rather than inquiring further about their broader context: for example, rather than raising consumer prices, Chipotle could have instead saved hundreds of millions of dollars in recent months by foregoing executive bonuses and stock buybacks. (It is also worth noting that the states that elected to prematurely freeze pandemic-related unemployment funding, ostensibly to provoke workers to re-enter the labor market, have not seen the hoped-for increase in workforce participation — that is to say, present data suggests that something other than $300/week unemployment checks has contributed to unemployment rates.)

So, in short, plenty of consumers are bound to cruel optimisms about “the good life,” so plenty of executives or other elites can leverage this hope for their own selfish ends. The recent outcry over a burrito restaurant is just one example of how these strings are pulled.

Conservatorships and the Problem of Possessing People

photograph of legal consultation with one party pausing over contract

For the second time in recent years, conservatorships are in the news. Like the many articles discussing Britney Spears’s conservatorship, these accounts often highlight the ways the conservatorship system can be abused. News outlets focus on abuse for good reason: there are over 1.3 million people in conservatorships or guardianships in the United States, and those in such a position are far too often taken advantage of.

But there are other ethical concerns with conservatorship beyond exploitation. Even when a conservator is totally scrupulous and motivated merely by the good of their conservatee, there is still something ethically troubling about any adult having the right to make decisions for another. As Robert Dinerstein puts it, even when conservatorship “is functioning as intended it evokes a kind of ‘civil death’ for the individual, who is no longer permitted to participate in society without mediation through the actions of another.”

So, what is the moral logic underlying the conservatorship relationship? What are the conditions under which, even in principle, one should be able to make decisions for another person; and how exactly should we understand that kind of relationship? These are the questions I want to address in this post.

So What Is a Conservatorship?

Tribb Grebe, in his excellent explainer piece, defines a conservatorship as “a court-approved arrangement in which a person or organization is appointed by a judge to take care of the finances and well-being of an adult whom a judge has deemed to be unable to manage his or her life.”

(You may sometimes hear conservatorships referred to as guardianships. Both ‘conservatorship’ and ‘guardianship’ are terms defined by legal statute, and while they usually mean slightly different things, what they mean depends on which state you are in. In Florida, a conservatorship is basically a guardianship where the person is ‘absent’ rather than merely incapacitated or a minor, while in other states a conservator and guardian might have slightly different legal powers, or one term might be used for adults and the other for minors. For most purposes, then, we can treat the two terms as synonymous.)

A conservatorship is, therefore, an unusual moral relationship. Normally, if I spend someone else’s money, then I am a thief. Normally, I need to consent before a surgeon can operate on me; no one else has the power to consent for me.

Or at least, conservatorship is an unusual relationship between two adults. It is actually the ordinary relationship between parents and children. If a surgeon wants to operate on a child, the surgeon needs the permission of the parents, not of the child. A parent has the legal right to spend their child’s money, as they see fit, for the child’s good. Conservatorship is, essentially, an extension of the logic of the parent-child relationship. To understand conservatorship, then, it will be useful to keep this moral relationship in mind.

Parents, Children, and Status

My favorite account of the moral relationship between parents and children is given by Immanuel Kant in his book The Metaphysics of Morals. Kant divides up the rights we have to things outside ourselves into three categories: property, contract, and status. Arthur Ripstein introduces these categories this way: “Property concerns rights to things; contract, rights against persons; and status contains rights to persons ‘akin to’ rights to things.”

Let’s try to break those down more clearly.

Property concerns rights to things. For example, I have a property right over my laptop. I don’t need to get anyone else’s permission to use my laptop, and anyone else who wanted to use it would have to first get my permission.

There are two essential parts to property: possession and use.

Possession means something like control. I can open up my laptop, turn it on, plug it in, etc. I can exercise some degree of control over what happens to my laptop. If I could not, if my laptop were instantly and irrevocably teleported to the other end of the universe, I could not have a property interest in the laptop any longer. I would no longer have possession, even in an extended sense.

Use, in contrast, means that I have the right to employ the laptop for my purposes. Not only do I have some control over the laptop, I can also exercise that control almost any way I want. I can surf the web, I can type up a Prindle Post, or I can even destroy my laptop with a hammer.

Use is why my laptop is mine, even if you are in current control of it. If I ask you to watch my laptop while I go to the bathroom, then you have control of the computer, but you don’t have use of it. You don’t have the right to use the computer for whatever purpose you want. If you destroy the laptop while I’m away, then, you have committed an injustice against me.

Contract involves rights to other people. If you agree to mow my lawn for twenty dollars, then I have a right that you mow my lawn. This does not mean that I have possession of you. You are a free person; you remain in control of your actions. So, in contract I have use of you, but not possession of you. I have a right that you do something for my end (mowing my lawn), but I am not in control of you even at that point. I cannot, for instance, take over your mind and guide your actions to force you to mow my lawn (even though I have a right that you mow my lawn).

This is one way in which contract is unlike slavery. A slaveowner does not just claim the use of their slave. They also claim control over the slave. In a slave relationship, the slave is no longer their own master, and so is not understood to have possession of their own life.

Of course, another difference between contract and slavery is that contract is consensual. But that is not the only difference. If the difference were simply that slavery was not consensual, then in principle slavery would be okay if someone agrees to become a slave. But Kant rejected that thought. Kant argued that a slavery contract was illegitimate, even if the slave had originally consented.

Status is the final relation of right, and it is status that Kant thinks characterizes parents and children. According to Kant, status is the inverse of contract. In contract, I have the use, but not the possession, of someone else. In status, I have the possession of another but not use.

What could that mean?

Remember that to have possession of someone is to have a certain control over them. Parents have control over the lives of their children. Parents can, for instance, spend their children’s money, and parents can force their children to behave in certain ways. Not only that, but parents can do this without the consent of their children. These relationships of status, then, are very different from relations of contract.

But then why isn’t a parent’s control over their child akin to slavery?

To distinguish relations of slavery from relations of status, we need to attend to the second half of a status relationship. Parents have possession of their children, but they do not have the use of their children.

Let’s look at the example of money first. Parents have possession and use of their own money. That means parents control their own money and have the right to spend it however they want. In contrast, parents have the possession, but not the use, of their children’s money. That means that while parents can control their children’s money, they cannot just spend it however they want. Instead, parents can only spend that money for the good of the child. While I can give my own money away for no reason, I cannot give my child’s money away for no reason.

Parents have a huge amount of control over their children’s lives. However, Kant thinks that parents can only rightly use that control on behalf of their children. This does not mean that parents cannot require their children to perform chores. But it does mean that the reason parents assign chores must be the moral development of the child. Kant was critical, for instance, of people who had children just so that they would have extra hands to help with work on a family farm. Because children cannot consent to the control that parents have, parents therefore wrong their children if they ever use that control for their own good as opposed to the good of the child.

The Fiduciary Requirement

Parents, then, act as a kind of trustee of their child’s life; they are a fiduciary. The word “fiduciary” is a legal word, which describes “a person who is required to act for the benefit of another person on all matters within the scope of their relationship.” As Arthur Ripstein notes, the fiduciary relationship is structurally parallel to the parental relationship.

“The legal relation between a fiduciary and a beneficiary is one such case. Where the beneficiary is not in a position to consent (or decline to consent), or the inherent inequality or vulnerability of the relationship makes consent necessarily problematic, the fiduciary must act exclusively for the benefit of the beneficiary. It is easier for the fiduciary to repudiate the entire relationship by resigning than for a parent to repudiate a relationship with a child. But from the point of view of external freedom the structure is exactly the same: one party may not enlist the other, or the other’s assets, in support of ends that the other does not share.”

This is a powerful explanatory idea, and recognizing these fiduciary relationships helps us explain various forms of injustice. For example, since in a fiduciary relationship one is only supposed to act for the good of the beneficiary, this can be used to explain what is unjust about insider trading. If I use my position in a company to privately enrich myself, then I am abusing my office in the company. The private knowledge I have as an employee is available to me for managing the affairs of the company. To use that knowledge for private gain is to unjustly use the property of someone else.

This relationship can also help us understand political corruption. The reason it is unjust for presidents to use their office to enrich themselves is that their presidential powers are given for public use for the sake of the nation. To manage the government for private purposes is to unjustly mismanage the resources entrusted to the president by the people.

Why Status

But even if we know the sort of relationship that obtains between parents and children — a type of fiduciary relationship — we still need to know why such a relationship is justified. After all, I can’t take control of your life, even if I use that control for your own good. I can’t do so even if I am wiser than you and would make better decisions than you would yourself. Because your life is your own, you have possession of your own life, no matter how much happier you would be if I took control.

The reason why parents have possession of their children is not that parents are wiser or smarter than their kids. Instead, it is because Kant thinks that children are not yet fully developed persons. Children, because of the imperfect and still-developing position in which they find themselves, are not able to be in full control of themselves (for a nice defense of this view of children see this article by Tamar Schapiro). Of course, the legal relationships here are crude. It is not as though the moment someone turns 18 they instantly pass the threshold of full moral personhood. Growing up is a messy process, and this is why parents should give children more and more control as they mature and develop.

Conservatorship

And just as the messiness of human development means that people should often have some control over their lives before they reach the age of 18, so too it means that people must sometimes lose some control over their lives even after they reach adulthood.

Just as children are not fully developed moral persons, so someone with Alzheimer’s is not a fully developed moral person. We appoint a conservator over someone with Alzheimer’s not because the conservator will make better choices, but because people with Alzheimer’s are often incapable of making fully developed decisions for themselves.

This, then, is the basic moral notion of conservatorship. A conservator has possession but not use of their charge. They can make decisions on their behalf, but those decisions have to be made for the charge’s good. And such a relationship is justified when someone is unable to be a fully autonomous decision-maker, because in some way their own moral personhood is imperfect or damaged.

An End to Pandemic Precautions?

photograph of masked man amongst blurred crowd

I feel like I have bad luck when it comes to getting sick. Every time there’s a cold going around, I seem to catch it, and before I started regularly getting the flu shot, I would invariably end up spending a couple of weeks a year in abject misery. During the pandemic, however, I have not had a single cold or flu. And I’m far from alone: not only is there plentiful anecdotal evidence, but there is solid scientific evidence that there really was no flu season to speak of this year in many parts of the world. It’s easy to see why: the measures that have been recommended for preventing the spread of the coronavirus – social distancing, wearing masks, sanitizing and washing your hands – turn out to be excellent ways of preventing the spread of cold and flu viruses, as well.

Now, parts of the world are gradually opening up again: in some countries social distancing measures and mask mandates are being relaxed, and people are beginning to congregate again in larger numbers. It is not difficult to imagine a near-future in which pumps of hand sanitizer are abandoned, squirt bottles disappear from stores, and the sight of someone wearing a mask becomes a rarity. A return to normal means resuming our routines of socializing and working (although these may end up looking very different going forward), but it also means a return to getting colds and flus.

Does it have to? While aggressive measures like lockdowns have been necessary to help stop the spread of the coronavirus, few, I think, would hold that such practices should be continued indefinitely in order to avoid getting sick a couple times a year. On the other hand, it also doesn’t seem overly demanding to ask that people take some new precautions, such as wearing a mask during flu season, or sanitizing and washing their hands on a more regular basis. There are good reasons to continue these practices, at least to some extent: while no one likes being sick with a cold or flu, for some the flu can be more than a minor inconvenience.

So, consider this claim: during the course of the COVID-19 pandemic, we have had a moral obligation to do our part in preventing its spread. This is not an uncontroversial claim: some have argued that personal liberties outweigh any duty one might have towards others when it comes to them getting sick (especially when it comes to wearing masks), and some have argued that the recommended mandates mentioned above are ineffective (despite the scientific evidence to the contrary). I don’t think either of these arguments are very good; that’s not, however, what I want to argue here. Instead, let’s consider a different question: if it is, in fact, the case that we have had (and continue to have) moral obligations to take measures to help prevent the spread of coronavirus, do such obligations extend to the diseases – like colds and flus – that will return after the end of the pandemic? I think the answer is: yes. Kind of.

Here’s what this claim is not: it is not the claim that social distancing must last forever, that you have to wear a mask everywhere forever, or that you can never eat indoors, or have a beer on a patio, or go into a shop with more than a few people at a time, etc. Implementing these restrictions in perpetuity in order to prevent people from getting colds and flus seems far too demanding.

Here’s what the claim is: there are much less-demanding actions that one ought to take in order to help stop the spread of common viruses in times when the chance of contracting such a virus is high (e.g., cold and flu season). For instance, you have no doubt acquired a good number of masks and a good quantity of hand sanitizer over the past year-and-change, and have likely become accustomed to using them. They are, I think, merely a mild inconvenience: I doubt that anyone actively enjoys wearing a mask when they take the subway, for example, or squirting their hands with sanitizer every time they go in and out of a store, but it’s a small price to pay in order to help prevent the spread of viruses.

In addition, while in the pre-corona times there was perhaps a social stigma against wearing medical masks in public in particular, it seems likely that we’ve all gotten used to seeing people wearing masks by now. Indeed, in many parts of the world it is already commonplace for people to wear masks during cold and flu season, or when they are sick or are worried that people they spend time with are sick. That such practices have been ubiquitous in some countries is reason to think that they are not a terrible burden.

There is, of course, debate about which practices are most effective at preventing the spread of other kinds of viruses. Some recent data suggest that while masks can be effective at helping reduce the spread of the flu, perhaps the most effective measures have been ones pertaining to basic hygiene, especially washing your hands. Given that we have become much more cognizant of such measures during the pandemic, it is reasonable to think that it would not be too demanding to expect that people continue to be as conscientious going forward.

Again, note that this is a moral claim, and not, say, a claim about what laws or policy should be. Instead, it is a claim that some of the low-cost, easily accomplishable actions that have helped prevent the spread of a very deadly disease should continue when it comes to preventing the spread of less-deadly ones. Ultimately, returning to normal does not mean having to give up on some of the good habits we’ve developed during the course of the pandemic.

On “Dog-Wagging” News: Why What “Lots of People” Say Isn’t Newsworthy

photograph of crowd of paparazzi cameras at event

On June 17th, Lee Sanderlin walked into a Waffle House in Jackson, Mississippi; fifteen hours later, he walked out an internet sensation. As a penalty for losing in his fantasy football league, Sanderlin’s friends expected him to spend a full day inside the 24-hour breakfast restaurant (with some available opportunities for reducing his sentence by eating waffles). When he decided to live-tweet his Waffle House experience, Sanderlin could never have expected that his thread would go viral, eventually garnering hundreds of thousands of Twitter interactions and news coverage by outlets like People, ESPN, and The New York Times.

For the last half-decade or so, the term ‘fake news’ has persistently gained traction (even being voted “word of the year” in 2017). While people disagree about the best possible definition of the term (should ‘fake news’ only refer to news stories intentionally designed to trick people or could it countenance any kind of false news story or maybe something else?), it seems clear that a story about what Sanderlin did in the restaurant is not fake: it genuinely happened, so reporting about it is not spreading misinformation.

But that does not mean that such reporting is spreading newsworthy information.

While a “puff piece” or “human interest story” about Sanderlin in the Waffle House might be entertaining (and, by extension, might convince internet users to click a link to read about it), its overall value as a news story seems suspect. (The phenomenon of clickbait, or news stories marketed with intentionally noticeable headlines that trade accuracy for spectacle, is a similar problem.) Put differently, the epistemic value of the information contained in this news story seems problematic: again, not because it is false, but rather because it is (something like) pointless or irrelevant to the vast majority of the people reading about it.

Let’s say that some piece of information is newsworthy if its content is either in the public interest or is otherwise sufficiently relevant for public distribution (and that it is part of the practice of good journalism to determine what qualifies as fitting this description). When the president of the United States issues a statement about national policy or when a deadly disease is threatening to infect millions, then this information will almost certainly be newsworthy; it is less clear that, say, the president’s snack order or an actor’s political preferences will qualify. In general, just as we expect their content to be accurate, we expect that stories deemed worthy to be disseminated through our formal “news” networks carry information that news audiences (or at least significant subsets thereof) should care about: in short, the difference between a news site and a gossip blog is a substantive one.

(To be clear: this is not to say that movie releases, scores of sports games, or other kinds of entertainment news are not newsworthy: they could easily fulfill either the “public interest” or the “relevance” conditions of the ‘newsworthy’ definition in the previous paragraph.)

So, why should we care about non-newsworthy stories spreading? That is to say, what’s so bad about “the paper of record” telling the world about Sanderlin’s night in a Mississippi Waffle House?

Two problems actually come to mind: firstly, such stories threaten to undermine the general credibility of the institution spreading that information. If I know that a certain website gives equal attention to stories about COVID-19 vaccination rates, announcements of Supreme Court decisions, Major League baseball game scores, and crackpots raging about how the Earth is flat, then I will (rightly, I think) have less confidence that the outlet is capable of reporting accurate information in general (given its decision to spread demonstrably false conspiracy theories). In a similar way, if an outlet gives attention to non-newsworthy stories, then it can water down the perceived import of the other genuinely newsworthy stories that it typically shares. (Note that this problem is compounded further when amusing non-newsworthy stories spread more quickly on the basis of their entertaining quirks, thereby altering the average public profile of the institution spreading them.)

But, secondly, non-newsworthy stories pose a different kind of threat to the epistemic environment than do fake news stories: whereas the latter can infect the community with false propositions, the former can infect the community with bullshit (in a technical sense of the term). According to philosopher Harry Frankfurt, ‘bullshit’ is a tricky kind of speech act: if Moe knows that a statement is false when he asserts it, then Moe is lying; if Moe doesn’t know or care whether a statement is true or false when he asserts it, then Moe is bullshitting. Paradigmatically, Frankfurt says that bullshitters are looking to provoke a particular emotional response from their audience, rather than to communicate any particular information (as when a politician uses rhetoric to affectively appeal to a crowd, rather than to, say, inform them of their own policy positions). Ultimately, Frankfurt argues that bullshit is a greater threat to truth than lies are because it changes what people expect to get out of a conversation: even if a particular piece of bullshit turns out to be true, that doesn’t mean that the person who said it wasn’t still bullshitting in the first place.

So, consider what happened when an attendee at a campaign rally for Donald Trump in 2015 made a series of false assertions about (among other things) Barack Obama’s supposedly-foreign citizenship and the alleged presence of camps operating inside the United States to train Muslims to kill people: then-candidate Trump responded by saying:

“We’re going to be looking at a lot of different things. You know, a lot of people are saying that, and a lot of people are saying that bad things are happening out there. We’re going to look at that and plenty of other things.”

Although Trump did not clearly affirm the conspiracy theorist’s racist and Islamophobic assertions, he nevertheless licensed them by saying that “a lot of people are saying” what the man said. Notice also that Trump’s assertion might or might not be true — it’s hard to tell how we would actually assess the accuracy of a statement like “a lot of people are saying that” — but, either way, it seems like the response was intended more to provoke a certain affective response in Trump’s audience. In short, it was an example of Frankfurtian bullshit.

Conspiracy theories about Muslim “training camps” or Obama’s unAmerican birthplace are not newsworthy because, among other things, they are false. But a story like ‘Donald Trump says that “a lot of people are saying” something about training camps’ is technically true (and is, therefore, not “fake news”) because, again, Trump actually said such a thing. Nevertheless, such a story is pointless or irrelevant — it is not newsworthy — and there is no good reason to spread it throughout the epistemic community. In the worst cases, non-newsworthy stories can launder falsehoods by wrapping them in the apparent neutrality of journalistic reporting.

For simplicity’s sake, we might call this kind of not-newsworthy story an example of “dog-wagging news” because, just as “the tail wagging the dog” evokes an image where the “small or unimportant entity (the tail) controls a bigger, more important one (the dog),” a dog-wagging news story is one where something about the story other than its newsworthiness leads to its propagation throughout the epistemic environment.

In harmless cases, dog-wagging stories are amusing tales about Waffle Houses and fantasy football losses; in more problematic cases, dog-wagging stories help to perpetuate conspiracy theories and worse.