
Is It Time to Nationalize YouTube and Facebook?

image collage of social media signs and symbols

Social media presents several moral challenges to contemporary society, on issues ranging from privacy to the manipulation of public opinion via adaptive recommendation algorithms. One major ethical concern is social media's addictive tendencies. Frances Haugen, the Facebook whistleblower, for example, has warned about the addictive possibilities of the metaverse. Social media companies design their products to be addictive because their business model is based on an attention economy. Governments have struggled with how to respond to the dangers social media creates, weighing measures such as independent oversight bodies and new privacy regulations to limit its power. But does the solution to this problem require changing the business model?

Social media companies like Facebook, Twitter, YouTube, and Instagram profit from an attention economy. This means that the primary product of social media companies is the attention of the people using their service, which these companies can leverage to make money from advertisers. As Vikram Bhargava and Manuel Velasquez explain, because advertisers are the real customers, corporations are free to be more indifferent to their users' interests. What many of us fail to realize is that,

“built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions…the companies do not care whether it is better or worse for the user because the user does not matter; the user’s interests do not figure into the social media company’s decision making.”

As a result of this business model, social media is often designed with persuasive technology mechanisms. Intermittent variable rewards, nudging, and the erosion of natural stopping cues help create a kind of slot-machine effect, and the use of adaptive algorithms that take in user data to customize the user experience only reinforces this. Experts have increasingly recognized social media addiction as a problem; a 2011 survey found that 59% of respondents felt they were addicted to social media. As Bhargava and Velasquez report, social media addiction mirrors many of the behaviors associated with substance addiction, and neuroimaging studies show activity in the same areas of the brain as in substance addiction. It is also well documented that teenage suicide has risen markedly since the introduction of social media, a trend some attribute in part to this addiction.

But is there a way to mitigate the harmful effects of social media addiction? Bhargava and Velasquez suggest that measures like addiction warnings, or prompts that make platforms easier to quit, could be important steps. Many have argued that breaking up social media companies like Facebook is necessary because they function like monopolies. However, breaking up such businesses to increase competition in a field centered on the same business model may not help. If anything, greater competition in the marketplace may only yield new and "innovative" ways to keep people hooked. If the root of the problem is the business model, perhaps it is the business model that should be changed.

For example, since users of social media are not the customers under an attention economy business model, one way to make social media companies less incentivized to addict their users is to make them customers. Should social media companies using adaptive algorithms be forced to switch to a subscription-based business model? If customers paid for Facebook directly, Facebook would still have an incentive to provide a good experience for users (now its customers), but it would have less incentive to focus its efforts on monopolizing users' attention. Bhargava and Velasquez, for example, note that on a subscription streaming platform like Netflix, it is immaterial to the company how much users watch; "making a platform addictive is not an essential feature of the subscription-based service business model."

But there are problems with this approach as well. As I have described previously, social media companies like Meta and Google have significant power to control knowledge production and knowledge communication. Even with a subscription model, their ability to manipulate public opinion would remain, and the change would not necessarily solve problems relating to echo chambers and filter bubbles. It may also mean that the poorest members of society could not afford social media, essentially excluding entire socioeconomic groups from the platform. Is there another way to change the business model and avoid these problems?

In the early 20th century, as a new mass media and its advertising rose to prominence, many believed that this new technology would be a threat to democracy. The solution was public broadcasting, such as PBS, the BBC, and the CBC. Should a 21st-century solution to the problem of social media be similar? Should there be a national YouTube or a national Facebook? Certainly, such platforms wouldn't need to be based on an attention economy; they would not be designed to capture as much of their users' attention as possible. Instead, they could be made available for all citizens to contribute to for free, without a subscription.

Such a platform would give the public greater control not only over how its algorithms operate but also over its privacy settings. The platform could also be designed to strengthen democracy. Instead of having a corporation like Google determine the results of your video or news search, for instance, the public itself would have a greater say about what news and information is most relevant. It could also bolster democracy by ensuring that recommendation algorithms do not create echo chambers; users could be exposed to a diversity of posts or videos that don't necessarily reflect their own political views.

Of course, such a proposal carries problems as well. The cost might be significant; however, a service that replicates the positive social benefits without the "innovative" and expensive process of creating addictive algorithms might partially offset that cost. Also, depending on the nation, such a service could be subject to abuse. Just as there is a difference between public broadcasting and state-run media (where the government has editorial control), the service would lose its purpose if all content on the platform were controlled directly by the government. Something more independent would be required.

However, another significant minefield for such a project would be agreeing on community standards for content. Obviously, the point would be not to allow the platform to become a breeding ground for misinformation, so clear standards would be necessary. At the same time, in the greater democratic interest of breaking free from our echo chambers, the public would have to accept that others may post, and they may see, content they consider offensive. We need to be exposed to views we don't like. In a post-pandemic world, this is a larger public conversation that needs to happen, regardless of how we choose to regulate social media.

Why Don’t People Cheat at Wordle?

photograph of Wordle being played on phone

By now, you've probably encountered Wordle, the colorful daily brainteaser that gives you six attempts to guess a five-letter word. Created in 2021 by Josh Wardle, the minimalist website has gone viral in recent weeks as players have peppered their social media feeds with the game's green-and-yellow boxes. To some, the Wordle craze is but the latest passing fad capturing people's attention mid-pandemic; to others, it's a window into a more thoughtful conversation about the often social nature of art and play.

Philosopher of games C. Thi Nguyen has argued that a hallmark feature of games is their ability to crystallize players’ decision-making processes, making their willful (and reflexive) choices plain to others; to Nguyen, this makes games a “unique art form because they work in the medium of agency.” I can appreciate the tactical cleverness of a game of chess or football, the skillful execution of a basketball jump shot or video game speedrun, or the imaginative deployment of unusual forms of rationality towards disposable ends (as when we praise players for successfully deceiving their opponents in a game of poker or Mafia/Werewolf, despite generally thinking that deception is unethical) precisely because the game’s structure allows me to see how the players are successfully (and artistically) navigating the game’s artificial constraints on their agency. In the case of Wordle, the line-by-line, color-coded record of each guess offers a neatly packaged, easily interpretable transcript of a player’s engagement with the daily puzzle: as Nguyen explains, “When you glance at another player’s grid you can grasp the emotional journey they took, from struggle to likely victory, in one tiny bit of their day.”

So, why don’t people cheat at Wordle?

Surely, the first response here is simply to reject the premise of the question: it is almost certainly the case that some people do cheat at Wordle in various ways, or lie about or manipulate their grids before sharing them on social media. How common such misrepresentations are is almost impossible to say.

But two facets of Wordle’s virality on social media suggest an important reason for thinking that many players have strong reasons to authentically engage with the vocabulary game; I have in mind here:

  1. the felt pressure against “spoiling” the daily puzzle’s solution, and
  2. the visceral disdain felt by non-players at the ubiquity of Wordle grids on their feeds.

In the first case, despite no formal warning presented by the game itself (and, presumably, no "official" statement from either Wordle's creator or players), there exists a generally unspoken agreement online to avoid giving away puzzle answers. Clever sorts of innuendo and insinuation are frequent among players who have discovered the day's word, as are meta-level commentaries on the mechanics or difficulty level of the latest puzzle, but a natural taboo has arisen against straightforwardly announcing Wordle words to one's followers (in a manner akin to the taboo against spoiling long-awaited movie or television show plots). In the second case, social media users not caught up in Wordle's grid have frequently expressed their annoyance at the many posts filled with green-and-yellow boxes flying across their feeds.

Both of these features seem to be grounded in the social nature of Wordle’s phenomenology: it is one thing to simply play the game, but it is another thing entirely to share that play with others. While I could enjoy solving Wordle puzzles privately without discussing the experience with my friends, Wordle has become an online phenomenon precisely because people have fun doing the opposite: publicly sharing their grids and making what Nguyen calls a “steady stream of small communions” with other players via the colorful record of our agential experiences. It might well be that the most fun part of Wordle is not simply the experience of cleverly solving the vocab puzzle, but of commiserating with fellow players about their experiences as well; that is to say, Wordle might be more akin to fishing than to solving a Rubik’s cube — it’s the story and its sharing that we ultimately really care about. Spoiling the day’s word doesn’t simply solve the puzzle for somebody, but ruins their chance to engage with the story (and the community of players that day); similarly, the grids might frustrate non-players for the same reason that inside jokes annoy those not privy to the punchline — they underline the person’s status as an outsider.

So, this suggests one key reason why people might not want to cheat at Wordle: it would entail not simply fudging the arbitrary rule set of an agency-structuring word game, but would also require the player to violate the very participation conditions of the community that the player is seeking to enjoy in the first place. That is to say, if the fun of Wordle is sharing one’s real experiences with others, then cheating at Wordle is ultimately self-undermining — it gives you the right answer without any real story to share.

Notice one last point: I haven't said anything here about whether or not it's unethical to cheat at Wordle. In general, you'll probably think that your obligations to tell the truth and avoid misrepresentation apply to your Wordle habits in roughly the same way that they apply elsewhere (even if you're not unfairly disadvantaging an opponent by cheating). But my broader point here is that cheating at Wordle doesn't really make sense: at best, cheating might dishonestly win you some undeserved recognition as a skilled Wordle player, but it's not really clear why you might care about that, particularly if the Wordle community revolves around communion more so than competition.

Instead, swapping Wordle grids can offer a tantalizing bit of fun, authentic connection (something we might particularly crave as we enter Pandemic Year Three). So, pick your favorite starting word (mine’s “RATES,” if you want a suggestion) and give today’s puzzle your best shot; maybe we’ll both guess this one in just three tries!

The Morality of “Sharenting”

black-and-white photograph of embarrassed child

The cover of Nirvana’s Nevermind — featuring a naked baby diving after a dollar bill in a pool of brilliant, blue water — is one of the most iconic of the grunge era, and perhaps of the ‘90s. But not everyone looks back on that album with fond nostalgia. Just last week, Spencer Elden — the man pictured as the baby on that cover — renewed his lawsuit against Nirvana, citing claims of child pornography.

Cases like this are nothing new. Concerns regarding the exploitation of children in the entertainment industry have existed for, well, as long as the entertainment industry. What is new, however, is the way in which similar concerns might be raised for non-celebrity children. The advent of social media means that the public sharing of images and videos of children is no longer limited to Hollywood. Every parent with an Instagram account is capable of doing this. The practice even has a name: sharenting. Indeed, those currently entering adulthood are unique in that they are the first generation to have had their entire childhoods shared online — and some of them aren’t very happy about it. So it’s worth asking the question: is it morally acceptable to share imagery of children online before they can give their informed consent?

One common answer to this question is to say that it's simply up to the parent or guardian. This might be summed up as the "my child, my choice" approach. Roughly, it relies on the idea that parents know what is in the best interests of their child, and therefore reserve the right to make all manner of decisions on their behalf. As long as parental consent is involved whenever an image or video of their child is shared, there's nothing to be concerned about. It's a tempting argument, but it doesn't stand up to scrutiny. Being a parent doesn't provide you with the prerogative to do whatever you want with your child. We wouldn't, for example, allow parental consent as a justification for child labor or sex trafficking. If every parent did know what was best for their child, there wouldn't be a need for institutions like Child Protective Services. Child abuse and neglect wouldn't exist. But they do. And that's because sometimes parents get things wrong. The "my child, my choice" argument, then, is not a good one. So we must look for an alternative.

We might instead take a “consequentialist” approach — that is, to weigh up the good consequences and bad consequences of sharenting to see if it results in a net good. To be fair, there are many good things that come from the practice. For one, social media provides an opportunity for parents to share details of a very important part — perhaps the most important part — of their lives. In doing so, they are able to strengthen their relationships with family, friends, and other parents, bonding with — and learning from — each other along the way. Such sharing also enables geographically distant loved ones to be more involved in a child’s life. This is something that’s become even more important in a world that has undergone unprecedented travel restrictions as a result of the COVID-19 pandemic.

But the mere existence of these benefits is not enough to justify sharenting. They must be weighed against the actual and potential harms of the practice. And there are many. Sharing anything online — especially imagery of young children — is an enormously risky endeavor. Even images that are shared under supposedly private conditions can easily enter the public forum — either through irresponsible resharing by well-intentioned loved ones, or by the notoriously irresponsible management of our data by social media companies.

Once this imagery is in the public domain, it can be used for all kinds of nefarious purposes. But we needn’t explore such dark avenues. Many of us have a lively sense of our own privacy, and don’t want our information shared with the general public regardless of how it ends up being used. It makes sense to imagine that our children — once capable of giving informed consent — will feel the same way. Much of the imagery shared of them online involves private, personal moments intended only for themselves and those they care about. Any invasion of that privacy is a bad thing.

Which brings us to yet another way of analyzing this subject. Instead of focusing purely on the consequences of sharenting, we might instead apply what's referred to as a "deontological" approach. One of the most famous proponents of deontology was Immanuel Kant. In its most straightforward formulation, Kant's ethical theory tells us always to treat others as ends in themselves, never merely as means to some other end. This approach prizes respect for the autonomy of others, and abhors using people for your own purposes. Thus, even if there are goods to be gained from sharenting, these should be ignored if the child — upon developing their autonomy — would wish that their private lives had never been made public.

What both the consequentialist approach and the deontological approach seem to boil down to, then, is a question of what the child will want once they are capable of giving informed consent. And this is something we can never know. They may develop into a gregarious braggart who shares every detail of their life online. But they may just as likely turn into a fiercely private individual who wants no record of their childhood — awkward and embarrassing as these always tend to be — in the digital ether. Given this uncertainty, what should parents do? It’s difficult to say, but perhaps the safest approach might be to apply some kind of “precautionary principle.” This principle states that where an unnecessary action brings a significant risk of harm, we should refrain from acting. So, given the potential harm associated with sharenting and the largely unnecessary nature of the practice (especially when similar goods can be achieved in other ways; for example, by mailing photographs to loved ones the old-fashioned way), we should respect our children’s right to privacy — at least until they can give their informed consent to having their private lives shared publicly.

On Anxiety and Activism

"The End Is Nigh" poster featuring a COVID spore and gasmask

The Plough Quarterly recently released a new essay collection called Breaking Ground: Charting Our Future in a Pandemic Year. In his contribution, "Be Not Afraid," Joseph Keegin details some of his memories of his father's final days, and the looming role that "outrage media" played in their interactions. He writes,

My dad had neither a firearm to his name, nor a college degree. What he did have, however, was a deep, foundation-rattling anxiety about the world ubiquitous among boomers that made him—and countless others like him—easily exploitable by media conglomerates whose business model relies on sowing hysteria and reaping the reward of advertising revenue.

Keegin's essay is aimed at a predominantly religious audience. He ends it by arguing that Christians bear a specifically religious obligation to fight off the fear and anxiety that make humans easy prey to outrage media and other forms of news-centered absorption. He argues this partly on Christian theological grounds — namely, that God's historical communications with humans are almost always preceded by the command to "be not afraid," as a lack of anxiety is necessary for recognizing and following truth.

But if Keegin is right about the effects of this "deep, foundation-rattling anxiety" on our epistemic agency, then it is not unreasonable to wonder whether everyone has, and should recognize, some kind of obligation to avoid such anxiety, and to avoid causing it in others. And it seems as though he is right. Numerous studies have shown a strong correlation between feeling dangerously out of control and the tendency to believe conspiracy theories, especially when it comes to COVID-19 conspiracies. The more frightening media we consume, the more anxious we become. The more anxious we become, the more media we consume. And as this cycle repeats, the media we are consuming tends to become more frightening, and less veridical.

Of course, nobody wants to be the proverbial "sucker," lining the pocketbooks of every website owner who knows how to write a sensational headline. We are all aware of the technological tactics used to manipulate our personal insecurities for the sake of selling products and, for the most part, I would imagine we strive to avoid this kind of vulnerability. But there is a tension here. While avoiding this kind of epistemically damaging anxiety sounds important in the abstract, the idea does not line up neatly with the ways we often talk about, and seek to advance, social change.

Each era has been beset by its own set of deep anxieties: the Great Depression, the Red Scare, the Satanic Panic, and election fears (on both sides of the aisle) are all examples of relatively recent social anxieties that led to identifiable epistemic vulnerabilities. Conspiracies about Russian spies, gripping terror over nuclear war, and unending grassroots ballot recount movements are just a few of the signs of the epistemic vulnerability that resulted from these anxieties. The solution may at first seem obvious: be clear-headed and resist getting caught up in baseless media-driven fear-mongering. But, importantly, not all of these anxieties are baseless or the result of purposeless fear-mongering.

People who grew up during the Depression often worked hard to instill an attitude of rationing in their own children, prompted by concern for their kids' well-being; if another economic downturn hit, they wanted their offspring to be prepared. Likewise, the very real threat of nuclear war loomed large from the 1950s through the 1980s, and many people understandably feared that the Cold War would soon turn hot. Even elementary schools held atom bomb drills, in the hope of offering students some protection in the event of an attack. One can be sure that journalists took advantage of this anxiety as a way to increase readership, but concerned citizens and social activists also tried to drum up worry, because worry motivates. If we think something merits concern, we often try to make others feel this same concern, both for their own sake and for the sake of those they may have influence over. But if such deep-seated cultural anxieties make it easier for others to take advantage of us through outrage media, conspiracy theories, and other forms of anxiety-confirming narratives, is such an approach to social activism worth the future consequences?

To take a more contemporary example, let’s look at the issue of climate change. According to a recent study, out of 10,000 “young people” (between the ages of 16 and 25) surveyed, almost 60% claimed to be “very” or “extremely” worried about climate change. 45% of respondents said their feelings about climate change affected their daily life and functioning in negative ways. If these findings are representative, surely this counts as the Generation Z version of the kind of “foundation-rattling anxiety” that Keegin observed in his late father.

There is little doubt where this anxiety comes from: news stories and articles routinely point out record-breaking temperatures, the number of species that go extinct each year, and the climate-based causes of extreme weather patterns. Pop culture has embraced the theme, with movies like "The Day After Tomorrow," "Snowpiercer," and "Reminiscence," among many others, painting a bleak picture of what human life might look like once we pass the point of no return. Unlike at any other time in U.S. history, politicians are proposing radical, lifestyle-altering policies in order to combat the growing climate disaster. If such anxieties leave people epistemically vulnerable to the kinds of outrage media and conspiracy theory rabbit holes that Keegin worries about, are these fear-inducing tactics to combat climate change worth it?

On the surface, it seems very plausible that the answer here is “yes!” After all, if the planet is not habitable for human life-forms, it makes very little difference whether or not the humans that would have inhabited the planet would have been more prone to being consumed by the mid-day news. If inducing public anxiety over the climate crisis (or any other high stakes social challenge or danger) is effective, then likely the good would outweigh the bad. And surely genuine fear does cause such behavioral effects. Right?

But again, the data is unclear. While people are more likely to change their behavior or engage in activism when they believe some issue is actually a concern, too much concern, anxiety, or dread seems to produce the opposite (sometimes tragic) effect. For example, while public belief in, and concern over, climate change is higher than ever, substantial climate change legislation has not been adopted in decades, and more and more elected officials deny or downplay the issue. Additionally, the latest surge of the Omicron variant of COVID-19 has renewed the social phenomenon of pandemic fatigue: the condition of giving up on health and safety measures out of exhaustion and hopelessness regarding their efficacy.

In an essay discussing the pandemic, climate change, and the threat of the end of humanity, the philosopher Agnes Callard analyzes this phenomenon as follows:

Just as the thought that other people might be about to stockpile food leads to food shortages, so too the prospect of a depressed, disaffected and de-energized distant future deprives that future of its capacity to give meaning to the less distant future, and so on, in a kind of reverse-snowball effect, until we arrive at a depressed, disaffected and de-energized present.

So, if cultural anxieties increase epistemic vulnerability, in addition to, very plausibly, leading to a kind of hopelessness-induced apathy toward the urgent issues, should we abandon the culture of panic? Should we learn how to rally interest for social change while simultaneously urging others to “be not afraid”? It seems so. But doing this well will involve a significant shift from our current strategies and an openness to adopting entirely new ones. What might these new strategies look like? I have no idea.

The Ethical and Epistemic Consequences of Hiding YouTube Dislikes

photograph of computer screen displaying YouTube icon

YouTube recently announced a major change on their platform: while the “like” and “dislike” buttons would remain, viewers would only be able to see how many likes a video had, with the total number of dislikes being viewable only by the creator. The motivation for the change is explained in a video released by YouTube:

Apparently, groups of viewers are targeting a video’s dislike button to drive up the count. Turning it into something like a game with a visible scoreboard. And it’s usually just because they don’t like the creator or what they stand for. That’s a big problem when half of YouTube’s mission is to give everyone a voice.

YouTube thus seems to be trying to protect its creators from certain kinds of harms: not only can it be demoralizing to see that a lot of people have disliked your video, but it can also be particularly distressing if those dislikes have resulted from targeted discrimination.

Some, however, have questioned YouTube’s motives. One potential motive, addressed in the video, is that YouTube is removing the public dislike count in response to some of their own videos being overwhelmingly disliked (namely, the “YouTube Rewind” videos and, ironically, the video announcing the change itself). Others have proposed that the move aims to increase viewership: after all, videos with many more dislikes than likes are probably going to be viewed less often, which means fewer clicks on the platform. Some creators have even posited that the move was made predominantly to protect large corporations, as opposed to small creators: many of the most disliked videos belong to corporations, and since YouTube has an interest in maintaining a good relationship with them, they would also have an interest in restricting people’s ability to see how disliked their content is.

Let's say, however, that YouTube's motivations are pure, and that they really are primarily intending to prevent harms by removing the public dislike count on videos. A second criticism concerns the loss of informational value: the number of dislikes on a video can potentially tell the viewer whether the information it contains is accurate. The dislike count is, of course, far from a perfect indicator of video quality, because one can dislike a video for reasons that have nothing to do with the information it contains: again, in instances in which there have been targeted efforts to dislike a video, dislikes won't tell you whether it's really a good video or not. On the other hand, there do seem to be many cases in which looking at the dislike count can let you know if you should stay away: videos that are clickbait, misleading, or generally poor quality can often quickly and easily be identified by an unfavorable ratio of likes to dislikes.

A worry, then, is that without this information, one may be more likely not only to waste one's time watching low-quality or inaccurate videos, but also to be exposed to misinformation. For instance, consider the class of clickbait videos prevalent on YouTube in which people make impressive-looking crafts or food through a series of improbable steps. Seeing that a video of this type has received a lot of dislikes helps the viewer contextualize it as something that's perhaps just for entertainment value, and should not be taken seriously.

Should YouTube continue to hide dislike counts? In addressing this question, we are perhaps facing a conflict in different kinds of values: on the one hand, you have the moral value of protecting small or marginalized creators from targeted dislike campaigns; on the other hand, you have the epistemic disvalue of removing potentially useful information that can help viewers avoid believing misleading information (as well as the practical value of saving people the time and effort of watching unhelpful videos). It can be difficult to try to balance different values: in the case of the removal of public dislike counts, the question becomes whether the moral benefit is strong enough to outweigh the epistemic detriment.

One might think that the epistemic detriments are not, in fact, too significant. In the video released by YouTube, this issue is addressed, if only very briefly: referring to an experiment conducted earlier this year in which public dislike counts were briefly removed from the platform, the spokesperson states that they had considered how dislikes give viewers “a sense of a video’s worth.” He then states that,

[W]hen the teams looked at the data across millions of viewers and videos in the experiment they didn’t see a noticeable difference in viewership regardless of whether they could see the dislike count or not. In other words, it didn’t really matter if a video had a lot of dislikes or not, they still watched.

At the end of the video, they also stated, “Honestly, I think you’re gonna get used to it pretty quickly and keep in mind other platforms don’t even have a Dislike button.”

These responses, however, are non-sequiturs: whether viewership increased or decreased does not say anything about whether people are able to judge a video’s worth without a public dislike count. Indeed, if anything it reinforces the concern that people will be more likely to consume content that is misleading or of low informational value. The comparison to other platforms is also irrelevant: that other social media platforms lack a dislike button may very well just mean that it is difficult to evaluate the quality of information present on those platforms. Furthermore, users on platforms such as Twitter have found other ways to express that a given piece of information is of low value, for example by ensuring that a tweet has a high ratio of responses to likes, something that seems much less likely to be effective on a platform like YouTube.

Even if YouTube does, in fact, have the primary motivation of protecting some of its creators from certain kinds of harms, one might wonder whether there are better ways of addressing the issue, given the potential epistemic detriments.

The Ethics of Protest Trolling

image of repeating error windows

There is a new Trump-helmed social media site being developed, and it’s been getting a lot of attention from the media. Called “Truth Social,” the site and associated app initially went up for only a few hours before it was taken offline due to trolling. It turns out the site’s security was not exactly top-of-the-line: users were able to claim handles that one would think would have been reserved – including “donaldjtrump” and “mikepence” – and then used their new accounts to post a variety of images that few people would want associated with their name.

This isn’t the first time a far-right social media site has been targeted by internet pranksters. Upon its release, GETTR, a Twitter clone founded by one of Trump’s former spokespersons, was flooded with hentai and other forms of cartoon pornography. While a defining feature of far-right social media thus far has been a fervor for “free speech” and a rejection of “cancel culture,” it is clear that such sites do not want this particular kind of content clogging up their feeds.

Those familiar with the internet will recognize posting irrelevant, gross, and generally not-suitable-for-work images on sites in this manner as acts of trolling. So, here’s a question: is it morally permissible to troll?

The question quickly becomes complicated when we realize that “trolling” is not a well-defined act, and encompasses potentially many different forms of behavior. There has been some philosophical work on the topic: for example, in the excellently titled “I Wrote this Paper for the Lulz: The Ethics of Internet Trolling,” philosopher Ralph DiFranco distinguishes five different forms of trolling.

There’s malicious trolling, which is intended to specifically harm a target, often through the use of offensive images or slurs. There’s also jocular trolling, actions that are not done out of any intention to harm, but rather to poke fun at someone in a typically lighthearted manner. While malicious trolling seems to be generally morally problematic, jocular trolling can certainly also risk crossing a moral line (e.g., when “it’s just a prank, bro!” videos go wrong).

There’s also state-sponsored trolling, which was a familiar point of discussion during the 2016 U.S. elections, wherein companies in Russia were accused of creating fake profiles and posts in order to support Trump’s campaign; concern trolling, wherein someone feigns sympathy in an attempt to elicit a genuine response, which they are then ridiculed for; and subcultural trolling, wherein someone again pretends to be authentically engaged, this time in a discussion or issue, in order to elicit genuine engagement by the target. Again, it’s easy to see how many of these kinds of acts can be morally troubling: intentional interference with elections and feigning sincerity to provoke someone else generally seem like the kinds of behaviors that one ought not perform.

What about the kinds of acts we’re seeing being performed on Truth Social, and that we’ve seen on other far-right social media apps like GETTR? They seem to be a form of trolling, but do they fall into any of the above categories? And what should we think about their moral status?

As we saw above, trolling captures a wide variety of phenomena, and not all of them have been fully articulated. I think that the kind of trolling I’m focusing on here – i.e., that which is involved in snatching up high-profile usernames and clogging up feeds with irrelevant images – doesn’t neatly fit into any of the above categories. Instead, let’s call it something else: protest trolling.

Protest trolling has a lot of the hallmarks of other forms of trolling – it often involves acts that are meant to distract a particular target or targets, and involves what the troll finds funny (e.g., inappropriate pictures of Sonic the Hedgehog). Unlike other forms of trolling, however, it is not necessarily done in “good fun,” nor is it necessarily meant to be malicious. Instead, it’s meant to express one’s principled disagreement with a target, be it an individual, group, or platform.

Compare, for example, a protest of a new university policy that involves a student sit-in. A group of students will coordinate their efforts to disrupt the activities of those in charge, an act that expresses their disagreement with the institution, governance, and/or authority figure. The act itself is intentionally disruptive, but is not itself motivated by malice: they are not acting in this way because they want others to be harmed, even though some harm may come about as a result.

While the analogy to the case of online trolling is imperfect, there are, I think, some important similarities between a student sit-in and the flooding of right-wing social media with irrelevant content. Both are primarily meant to disrupt, without specifically intending harm, and both are directed towards a perceived threat to one’s core values. For instance, we have seen how right-wing media has been implicated in violence, both in inciting violent acts and in targeting members of marginalized groups. One might thereby be concerned that a whole social network dedicated to the expression of such views could result in similar harms, and is thus worth protesting.

Of course, in the case of online trolling there may be other intentions at play: for example, the choice of material that’s been used to disrupt these services is clearly meant to shock, gross-out, and potentially even offend its core users. Furthermore, not every such action will have principled intentions: some will simply want to jump on the bandwagon because it seems fun, as opposed to actually expressing a principled disagreement.

There are, then, many tangled issues surrounding the intentions and execution of different forms of protest trolling. However, just as many cases of real-life protesting are disruptive without being unethical, so, too, may cases of protest trolling be potentially morally unproblematic.

Praise and Resentment: The Moral of ‘Bad Art Friend’

black-and-white photograph of glamorous woman looking in mirror

The story of the “Bad Art Friend” has taken social media by storm. For those who have yet to brave the nearly 10,000-word New York Times article, here is a summary of the tale: Dawn Dorland, a writer, decided to donate one of her kidneys after completing her M.F.A. She kept her social media friends abreast of her donation and surgery, and noticed, some time afterward, that one of her friends had failed to comment on the donation. Dorland wrote to the friend (Sonya Larson, herself a writer) asking why she hadn’t said anything about Dorland’s altruistic activities. They exchanged pleasantries, Sonya praised her for her sacrifice, and all seemed well. Several months later, however, Sonya published a short story inspired by Dorland’s kidney donation, which set off a bevy of legal and relational blows involving multiple lawsuits and, potentially, ruined careers.

There are a slew of ethical issues and questions embedded in the text and subtext of this story: questions about the differences between plagiarism and inspiration, questions about appropriate boundaries in friendships and acquaintanceships, and questions about the legality and propriety of lawsuits. But a consensus seems to have emerged about the protagonist of this story: almost universally, readers are not on the side of Dawn Dorland.

Elizabeth Bruenig, in an op-ed for The Atlantic, describes Dorland as the “patron-saint” of our “social-media age,” emphasizing that the description is not a compliment. She characterizes Dorland’s initial behavior towards Larson as follows:

“Dorland, in particular, went looking for [victimhood], soliciting Larson for a reason the latter hadn’t congratulated her for her latest good deed, suspecting—rightly—a chillier relationship than collegial email etiquette would suggest. She kept seeking little indignities to be wounded by—and she kept finding them. Her retaliations quickly outpaced Larson’s offenses, such as they were.”

Bruenig is right that Dorland considered herself to be wronged by Larson’s apparent apathy. And insofar as we find it implausible that Larson really did wrong her in this way, it is understandable why Bruenig might analyze the situation as one in which Dorland sought out a kind of victimhood status. This may explain part of why Dorland’s behavior immediately turns us off — looking for victimhood, or claiming it too quickly, seems like a kind of injustice to those who truly are victims of genuinely bad actions or circumstances. In diverting attention to extremely mild wrongs (if they were wrongs at all) done to herself, Dorland distracts people from truly awful situations that merit their consideration. Human attention is zero-sum: if I am paying attention to you, then that means I am not paying attention to something else. So, there is a consequentialist argument to be made that I should not seek out “victimhood” status and, thereby, attention, if the public’s attention would be better spent elsewhere.

Yet, Bruenig’s analysis does not consider the fact that our mild disgust at Dorland begins even before she voices her complaints to Larson. It begins even before she speaks to Larson at all. It begins when Dorland seeks out praise and attention for her (admittedly very brave) act of donating her kidney. But did Dorland actually do anything wrong in seeking out praise for her praise-worthy act? Does our disgust stem from genuine moral assessment, or a deeper kind of resentment of people who act more selflessly than we do?

The philosopher Immanuel Kant theorized that it was morally impermissible to treat others as a mere means to our own ends — we must always consider them to be intrinsically valuable creatures themselves, and our actions must reflect this. We may, therefore, think that Dorland’s seeking of praise for her donation indicates that she was using the kidney recipient as a mere means to gaining praise, popularity, or notoriety.

Still, it is not clear that Kant’s concepts would apply in this case. Dorland’s donation of her kidney indicates that, while she may have used the opportunity as a means to other social ends, she was not using the recipient merely as a means — in saving his life, she acted toward him in acknowledgement of his value as a person. There is nothing in Kant’s moral philosophy which prohibits us from using people to attain our ends, so long as we respect them as persons while doing so.

From a utilitarian perspective, seeking praise for your good works may even maximize happiness, meaning that it would be the morally correct thing to do. For example, by seeking praise for your honorable deeds, you may draw attention to what you did, encouraging others to display the same amount of selflessness and charity. Additionally, you yourself would derive happiness from the praise, and it doesn’t seem that anybody would lose happiness by praising you. Therefore, it seems that seeking such accolades may benefit everyone and harm no one.

A virtue ethical approach to the issue may seem to yield different results. After all, surely there is something unvirtuous about someone who seeks out praise for supposedly altruistic actions? Many consider humility to be a virtue, and Dorland’s constant social media updates and attention-seeking behavior seem to indicate a lack of humility in her character. Perhaps we are turned off by the desire for praise because it indicates a character vice: pompousness, perhaps, or neediness.

And yet, historically virtue ethicists have praised the (appropriate) seeking of praise. In book four of his Nicomachean Ethics, Aristotle describes a virtue concerned with “small honors,” which we might more simply understand as the virtue of seeking to do, and be rewarded for, honorable things. Of course, Aristotle still holds that I should not seek praise for things that are not praiseworthy, nor should I act in praiseworthy ways purely for the praise. Still, seeking honor (and the praise that arguably ought to go with it) in moderate amounts is a virtue. At least for Aristotle.

There is a case to be made that our distaste for those who seek praise has a distinctly Christian origin. In Christian scriptures — specifically, the Gospel of Matthew, chapter 6 — Jesus preaches against seeking recognition for acts of charity:

“Be careful not to practice your righteousness in front of others to be seen by them. If you do, you will have no reward from your Father in heaven. So when you give to the needy, do not announce it with trumpets, as the hypocrites do in the synagogues and on the streets, to be honored by others. Truly I tell you, they have received their reward in full. But when you give to the needy, do not let your left hand know what your right hand is doing, so that your giving may be in secret. Then your Father, who sees what is done in secret, will reward you.”

In the Christian tradition, the idea is that those who seek recognition from others in the here and now eliminate their opportunity to build character and, perhaps, gain other spiritual rewards. One may have earthly, social rewards, or longer-lasting spiritual rewards, but one may not have both.

Yet, I suspect there are many who would not claim Christianity who nevertheless are repelled by the idea of someone asking for praise for donating a kidney. Those familiar with Friedrich Nietzsche’s writings will recall his extensive critique of Christian moral thought which, he wrote, “has waged deadly war against this higher type of man; placed all the basic instincts of his type under ban” (The Anti-Christ, p. 5). Nietzsche argued that traditional Christian morality — which he referred to as “slave morality” — served only to make humans weak, powerless, and full of resentment at those who were powerful and flourishing. One can imagine a Nietzschean critique of our distaste for those announcing their good deeds in the public square: perhaps, rather than a kind of virtuous disgust, what we are truly experiencing is resentment toward someone acting with more courage than we have.

No matter your opinion on Bad Art Friend and all the drama that story contains, it is worth reflecting on how we respond when someone announces their good deeds to the public. Why do we prefer discretion? What is wrong with desiring praise and honor? These questions may be worth investigating more deeply, lest we act out of ordinary human resentment rather than careful moral consideration.

What Is Cancel Culture?

image of Socrates drinking hemlock

There has been much bemoaning of “cancel culture” in recent years. The fear seems to be that there is a growing trend coming from the left to “cancel” ideas and even people that fall out of favor with proponents of left-wing political ideology. Social media and online bullying contribute to this phenomenon; people leave comments shaming “bad actors” into either apologizing, leaving social media, or sometimes just digging in further.

It’s worth taking some time to think about the history of “cancellation.” For better or for worse, cancellation is a political tool that can be used either to entrench or to disrupt the dominant power hierarchy. Ideas and people have been “canceled” as long as there have been social creatures with reactive attitudes. Humans aren’t even the only species to engage in cancel behavior. In communities of animals in which cooperative behavior is important, groups will often shun members who behave selfishly. In other cases, groups of animals may ostracize members that do not seem to respect the authority of the alpha male. What we now call “cancel culture” is just one form of the general practice of using sentiments such as approval or disapproval or praise and blame to influence behavior and shape social interactions.

One of history’s most famous cancellations was the trial and execution of Socrates, who was “canceled” in the most extreme of ways because the influence that he had over the youth of Athens posed an existential threat to those with the power in that community. The challenge that he presented was that he might encourage the younger generation to reassess values and construct a new picture of what their communities might look like. At his trial, Socrates says,

“For I do nothing but go about persuading you all, old and young alike, not to take thought for your persons and your properties, but first and chiefly to care about the greatest improvement of the soul. I tell you that virtue is not given by money, but that from virtue come money and every other good of man, public as well as private. This is my teaching, and if this is the doctrine which corrupts the youth, my influence is ruinous indeed.”

For this, he was made to drink hemlock.

Galileo was canceled for the heresy of advancing the idea that the earth revolved around the sun rather than the other way around. This view of the universe was in conflict with the view endorsed by the Catholic Church, so Galileo’s book of dialogues was prohibited, and he lived out the rest of his life under house arrest.

In the more recent past, Martin Luther King Jr. was canceled — not only by his assassination, but prior to that, when many of his former compatriots in the struggle for civil rights broke ranks with him over his opposition to the Vietnam War and his battle to end poverty.

Through the years, people have been “canceled” for being Christian, Pagan, Catholic, Protestant, Atheist, Gay, Female, Transgender, Communist, and Socialist. They’ve been canceled for speaking up too much or too little, for being too authentic or not authentic enough. Books have been burned, ideas have been suppressed, people’s reputations have changed with the direction of the prevailing winds. Cancellation belongs to no single political party or ideology.

Nevertheless, “cancellation” in the 21st century is presented to us as a new and nebulous phenomenon — a liberal fog that has drifted in to vaporize the flesh of anyone who harbors conservative ideas. But what does it mean, exactly, to “cancel” a person? Perhaps the most common use of the word “cancel” in an ordinary context has to do with events. If I get a cold and I cancel my philosophy courses for the day, then those courses are no longer taking place. Similarly, in the most extreme cases, to “cancel” someone is to get rid of them forever — to kill them. Socrates, Hypatia, and even Jesus were “canceled” in this way.

There are other cases of cancellation which are pretty extreme, even if they don’t result in death. Instead, the person or group might be imprisoned or otherwise punished by the government. For example, during World War II, many Japanese Americans were “canceled” and put in internment camps just for being Japanese during a time when Americans were prone to xenophobia against that particular group. Then, of course, there was the McCarthy era, when people all across the country had to worry about their lives or livelihoods being destroyed if it were discovered or even suspected that they were sympathetic to communism. This cancel culture witch hunt affected the careers of stars like Charlie Chaplin, Langston Hughes, and Orson Welles. Positive proof of membership in the party wasn’t even necessary. Of one case Joseph McCarthy famously said, “I do not have much information on this except the general statement of the agency…that there is nothing in the files to disprove his Communist connections.”

Thankfully, when we use the word “cancel” these days, we are usually referring to something less extreme. We tend to mean that a certain segment of society will no longer support the “canceled” person in various ways — they will not consume their products, enjoy their art, listen to their thoughts, or otherwise support their general platform. The most common cases are those of politicians and artists of various types. Many people no longer watch Kevin Spacey movies after learning that he frequently engaged in sexual harassment of co-workers.

The linchpin — and the feature that makes it tricky — is that cancel culture is one of the consequences of the display of people’s reactive attitudes. It is these very reactive attitudes — guilt, shame, praise, blame — that are involved in moral judgments. Such judgments also involve assessment of harm. People often point out, when attempting to hold a bad actor responsible, that the bad actor’s behavior is resulting in a serious set of bad consequences for their community. These kinds of considerations are important — they make the world a better place. We don’t want to throw the baby out with the bathwater; we don’t want to give up holding people morally responsible for their actions because we are too afraid of “canceling” the wrong person. There are cases in which cancellation seems like precisely the correct course of action. We shouldn’t continue to hold in high regard rapists and serial harassers like Bill Cosby and Harvey Weinstein. We shouldn’t support the platforms of racists and child molesters.

For these reasons, cancel culture shouldn’t be depicted as the emerging new villain in the plot of the 2020s. This culture has always been around and always will be, though, granted, it is amplified by social media and the internet. Sometimes it does some real good. The reality is that this has all been so politicized that it is unlikely that there’ll be much ideological shift on these issues. If we allow Socrates’ ancient ideas to “corrupt” our minds, we’ll keep asking questions: “Is this a power play?” “Should this behavior be tolerated?” “Is this a case that calls for compassion and understanding?” Improvement of the soul calls for nuance.

Facebook Groups and Responsibility

image of Facebook's masthead displayed on computer screen

After the Capitol riot in January, many looked to the role that social media played in the organization of the event. A good amount of blame has been directed at Facebook groups: such groups have often been the target of those looking to spread misinformation as there is little oversight within them. Furthermore, if set to “private,” these groups run an especially high risk of becoming echo chambers, as there is much less opportunity for information to flow freely within them. Algorithms that Facebook uses to populate your feed were also part of the problem: more popular groups are more likely to be recommended to others, which led to some of the more pernicious groups getting a much broader range of influence than they would have otherwise. As noted recently in the Wall Street Journal, while it was not long ago that Facebook saw groups as the heart of the platform, abuses of the feature have forced the company to make some significant changes to how they are run.

The spread of misinformation in Facebook groups is a complex and serious problem. Some proposals have been made to try to ameliorate it: Facebook itself implemented a new policy in which groups that were the biggest troublemakers – civics groups and health groups – would not be promoted during the first three weeks of their existence. Others have called for more aggressive proposals. For instance, a recent article in Wired suggested that:

“To mitigate these problems, Facebook should radically increase transparency around the ownership, management, and membership of groups. Yes, privacy was the point, but users need the tools to understand the provenance of the information they consume.”

A worry with Facebook groups, as well as a lot of communication online generally, is that it can be difficult to tell what the source of information is, as one might post information anonymously or under the guise of a username. Perhaps with more information about who was in charge of a group, then, one would be able to make a better decision as to whether to accept the information that one finds within it.

Are you part of the problem? If you’re actively infiltrating groups with the intent of spreading misinformation, or building bot armies to game Facebook’s recommendation system, then the answer is clearly yes. I’m guessing that you, gentle reader, don’t fall into that category. But perhaps you are a member of a group in which you’ve seen misinformation swirling about, even though you yourself didn’t post it. What is the extent of your responsibility if you’re part of a group that spreads misinformation?

Here’s one answer: you are not responsible at all. After all, if you didn’t post it, then you’re not responsible for what it says, or if anyone else believes it. For example, let’s say you’re interested in local healthy food options, and join the Healthy Food News Facebook group (this is not a real group, as far as I know). You might then come across some helpful tips and recipes, but also may come across people sharing their views that new COVID-19 vaccines contain dangerous chemicals that mutate your DNA (they don’t). This might not be interesting to you, and you might think that it’s bunk, but you didn’t post it, so it’s not your problem.

This is a tempting answer, but I think it’s not quite right. The reason lies in how Facebook groups work, and in how people are inclined to find information plausible online. As noted above, sites like Facebook employ various algorithms to determine which information to recommend to its users. A big factor that goes into such suggestions is how popular a topic or group is: the more engagement a post gets, the more likely it’s going to show up in your news feed, and the more popular a group is, the more likely it will be recommended to others. What this means is that mere membership in such a group will contribute to that group’s popularity, and thus potentially to the spread of the misinformation it contains.

Small actions within such a group can also have potentially much bigger effects. For instance, in many cases we put little thought into “liking” or reacting positively to a post: perhaps we read it quickly and it coheres with our worldview, so we click a thumbs-up, and don’t really give it much thought afterwards. From our point of view, liking a post does not mean that we wholeheartedly believe it, and it seems that there is a big difference between liking something and posting it yourself. However, these kinds of engagements influence the extent to which that post will be seen by others, and so if you’re not liking in a conscientious way, you may end up contributing to the spread of bad information.

What does this say about your responsibilities as a member of a Facebook group? There are no doubt many such groups that are completely innocuous, where people do, in fact, only share helpful recipes or perhaps even discuss political issues in a calm and reasoned way. So it’s not as though you necessarily have an obligation to quit all of your Facebook groups, or to get off the platform altogether. However, otherwise innocent actions like clicking “like” on a post can have much worse effects in groups in which misinformation is shared, and mere membership in such a group can contribute to its popularity and thus to the extent to which it is suggested to others. So if you find yourself a member of such a group, you should leave it.

QAnon and Two Johns

photograph of 'Q Army" sign displayed at political rally

In recent years, threats posed to and by free speech on the internet have grown larger and more concerning. Such problems as authoritarian regimes smothering dissent and misinformation campaigns targeting elections and public health have enjoyed quite a share of the limelight. Social media platforms have sought (and struggled) to address such challenges. Recently, a new, insidious threat posed by free speech has emerged: far-right conspiracy theories. The insurrection of January 6th unveiled the danger of speech promoting such beliefs, namely those embraced by the QAnon theory. The insurrection demonstrated that speech promoting the anti-government extremist theory can not only engender violence but existentially threaten the United States. Such speech threatens harm by manipulating individuals into believing in the necessity of violence to combat the schemes of a secretive, satanic elite. In the days following the insurrection, social media platforms rushed to combat this threat. Twitter alone removed more than 70,000 QAnon-focused accounts from its platform.

This bold but wise move was met with resistance, however. Right-wing media commentators were quick to decry this and similar policies as totalitarian censorship. Legal experts retorted that, as private entities, social media companies can restrict speech on their platform as they please. This is because the First Amendment to the U.S. Constitution protects citizens from legal restrictions on free speech, not the rules of private organizations. Such legal experts may be perfectly correct, and unequivocally siding with them might seem to offer a temptingly quick way to dismiss fanatic right-wing commentators. Nevertheless, caring only about government restrictions on speech seems perilous: such a stance neglects the great importance of social restrictions on speech.

The weight of social restrictions on speech (and behavior, more generally) is very real. Jean-Jacques Rousseau referred to such social restrictions as moral laws. He even seemed to regard this class of laws as more fundamental than the constitutional, civil, and criminal classes. Moral laws are inscribed in the very “hearts of the citizens” and include “morals, customs, and especially opinion.” Violations of these laws are typically penalized with either criticism or ostracism (or both). The emergence of “cancel culture” provides conspicuous examples (for better or worse) of this structure in action, from Gina Carano to John Schnatter. First, an individual (typically, a public figure) violates a moral law (frequently, customary prohibitions on racist speech). Then, the individual receives a punishment (often, in the form of damage to reputation and career). The prohibitions on QAnon-focused Twitter accounts are a form of ostracism: those promoting QAnon beliefs have been expelled from the Twitter community for transgressing moral laws, namely peace (by promoting violence) and honesty (by promoting misinformation). As Twitter has become an integral forum for political discourse (politicians, like former President Trump, heavily rely on the platform to both court popular support and bash their rivals), this Twitter expulsion amounts to marginalization within, or partial expulsion from, general public discourse. Upon considering this, the real restrictiveness of such prohibitions on speech should now be evident.

Once the real strength of social restrictions on speech is acknowledged, a certain tension becomes apparent: that between our liberties concerning speech and our liberties concerning property. To elaborate, there appears to be a tension between the free speech of Twitter users and the property rights of Twitter shareholders (particularly, the right to set and enforce private restrictions on the speech shared over the platform they own). Efforts to balance the two can perhaps be aided by the wisdom of two great Johns: John Locke and Jean-Jacques Rousseau. Their writings offer some thought-provoking perspectives on the grounds and scope of each party's freedoms.

John Locke believed that rights are derived from nature. He thought they were contained in what he called the Law of Nature: “no one ought to harm another in [their] Life, Health, Liberty, or Possessions.” Certainly, this general rule implies the rights to free speech and property. Moreover, it follows that those particular rights extend only so far as they accord with that rule. Locke’s theory can thus affirm both natural rights and natural limits to them. Stated in Lockean terms, then, the now-removed QAnon accounts apparently promoted speech which transgressed natural limits on the right to free speech (by promoting violence).

Unlike Locke, Jean-Jacques Rousseau held that rights are derived from social agreement, not nature. He held that this social agreement takes the form of continuous negotiation by all members of the "body politic:" manifold "individual wills" are distilled into an all-binding "general will." On this view, the rights to free speech and property extend only so far as social agreement allows. Rousseau's theory can thus recognize the value of including diverse individuals in social discourse while also recognizing the validity of socially-established regulations on that discourse. Understood in these terms, Twitter expelled the QAnon accounts for violating regulations on social discourse (namely, by supporting violence and thus threatening the process of discourse itself).

Locke’s and Rousseau’s perspectives can provide a useful guide to assessing the issues related to free speech and the internet. Each perspective offers a framework which seems reasonable and yet is opposed to the other. Considering both, then, should allow for multi-sided and nuanced discussion. Employing these two frameworks (and other conceivable ones), as well as considering the opinions of more recent thinkers, can potentially enrich public discourse surrounding free speech and the internet.

In the Limelight: Ethics for Journalists as Public Figures

photograph of news camera recording press conference

Journalistic ethics are the evolving standards that dictate the responsibilities journalists have to the public. As members of the press, journalists play an important role in the accessibility of information, and unethical journalistic practices can have a detrimental impact on the knowledgeability of the population. Developing technology is a major factor in changes to journalism and the way journalists navigate ethical dilemmas. Both the field of journalism and its ethics have been revolutionized by the internet.

The increased access to social media and other public platforms of self-expression has expanded the role of journalists as public figures. The majority of journalistic ethical concerns focus on journalists' actions in the scope of their work. But as the idea of privacy changes and more people feel comfortable sharing their lives online, journalists' actions outside of their work come further under scrutiny. Increasingly, questions of ethics in journalism include journalists' non-professional lives. What responsibilities do journalists have as public-facing individuals?

As a student of journalism, I am all too aware that there is no consensus on the issue. At the publication I write for, staff members are restricted from participating in protests for the duration of their employment. In a seminar class, a professional journalist discussed workplace moratoriums they'd encountered on publicly stating political leanings, and one memorable debate about whether or not it was ethical for journalists to vote — especially in primaries, on the off-chance that their vote or party affiliation could become public. Each of these scenarios stems from a common fear: that a journalist will become untrustworthy to their readership due to their actions outside of their work. With less than half the American public professing trust in the media, according to Gallup polls, journalists are facing intense pressure to prove themselves worthy of it.

Journalists have a duty to be as unbiased as possible in their reporting — this is a well-established standard of journalism, promoted by groups like the Society of Professional Journalists (SPJ). How exactly they accomplish that is changing in the face of new technologies like social media. Should journalists avoid publicizing their personal actions and opinions? Should they opt out of personal social media entirely to avoid any risk of those opinions becoming public? Where do we draw the lines?

The underlying assumption here is that combating biased reporting comes down to the personal responsibility of journalists to either minimize their own biases or conceal them. At least part of this assumption is flawed. People are inherently biased; a person cannot be completely impartial. Anyone who attempts to pretend otherwise actually runs a greater risk of being swayed by their biases because they become blind to them. The ethics code of the SPJ advises journalists to "avoid conflicts of interest, real or perceived. Disclose unavoidable conflicts." Although this was initially written to apply to journalists' professional lives, I believe that short second sentence is a piece of the solution. "Disclose unavoidable conflicts." More effective than hiding biases is being clear about them. Journalists should be open about any connections or political leanings that intersect with their field. This provides the public with all the information and the opportunity to judge the issues for themselves.

I don’t mean to say that journalists should be required to make parts of their private lives public if they don’t intersect with their work. However, they should not be asked to hide them either. Although most arguments don’t explicitly suggest journalists hide their biases, they either suggest journalists avoid public action that could reveal a bias or avoid any connection that could result in a bias — an entirely unrealistic and harmful expectation. Expecting journalists to either pretend to be bias-free or to isolate themselves from the issues they cover as much as possible results in either dishonesty or “parachute journalism” — journalism in which reporters are thrust into situations they do not understand and don’t have the background to report on accurately. Fostering trust with readers and deserving that trust should not be accomplished by trying to turn people into something they simply cannot be, but by being honest about any potential biases and working to ensure the information is as accurate as possible regardless.

The divide between a so-called “public” or “professional” life and a “private” life is not always as clear as we might like, however. Whether they like it or not, journalists are at least semi-public figures, and many use social media to raise awareness for their work and the topics they cover, while also using social media in more traditional, personal ways. In these situations, it can become more difficult to draw a line between sharing personal thoughts and speaking as a professional.

In early 2020, New York Times columnist Ben Smith wrote a piece criticizing New Yorker writer Ronan Farrow for his journalism, including, in some cases, the accuracy or editorializing of tweets Farrow had posted. Despite my impression that Smith's column was itself inaccurate, poorly researched, and hypocritical, it raised important questions about the role of Twitter and other social media in reporting. A phrase I saw numerous times afterwards was "tweets are not journalism" — a criticism of the choice to place the same importance on, and apply the same journalistic standards to, Farrow's Twitter account as his published work.

Social media makes it incredibly easy to share information, opinions, and ideas. It is far faster than many traditional methods of publishing. It can be, and has been, a powerful tool for journalists to make corrections and updates in a timely manner and to make those corrections more likely to be seen by people who have already read a story and might not check it again. If a journalist intends them to be, tweets can, in fact, be journalism.

Which brings us back to the issue of separating public from private. Labeling advocacy, commentary, and advertisement (and keeping them separated) is an essential part of ethical journalism. But which parts of these standards should be extrapolated to social media, and how? Many individuals use separate accounts to make this distinction. Having a work account and a personal account, typically with stricter privacy settings, is not uncommon. It does, however, rule out many of the algorithmic tricks people use to make their work accessible, and accessibility is an important part of journalism. Separating personal and public accounts divides an individual's audience and prevents journalists from forming the more personal connections with their audience that help publicize their work. It also sacrifices the engagement benefits of the more frequent posting that comes with using a single account. By being asked to abstain from a large part of what is now ordinary communication with the public, journalists are being asked to hinder their own effectiveness.

Tagging systems within social media currently provide the best method for journalists to mark and categorize these differences, but there's no "standard practice" amongst journalists on social media to help readers navigate these issues, and so long as debates about journalistic ethics outside of work focus on trying to restrict journalists from developing biases at all, it won't become standard practice. Adapting to social media means shifting away from the idea that personal bias can be prevented by isolating individuals from controversial issues, and toward helping readers and journalists understand, acknowledge, and deconstruct biases in media for themselves by promoting transparency and conversation.

Trump and the Dangers of Social Media

photograph of President Trump's twitter bio displayed on tablet

In the era of Trump, social media has been both the medium through which political opinions are disseminated and a subject of political controversy itself. Every new incendiary tweet feeds into another circular discussion about the role sites like Twitter and Facebook should have in political discourse, and the recent attack on the U.S. Capitol by right-wing terrorists is no different. In what NPR described as “the most sweeping punishment any major social media company has ever taken against Trump,” Twitter has banned the president from using their platform. Not long before Twitter’s announcement, Facebook banned him as well, and now Parler, the conservative alternative to Twitter, has been removed from the app store by Apple.

While these companies are certainly justified in their desire to prevent further violence, is this all too little, too late? Much in the same way that members of the current administration have come under fire for resigning with only two weeks left in office, and not earlier, it seems that social media sites could have acted sooner to squash disinformation and radical coordination, potentially averting acts of domestic terror like this one.

At the same time, there isn’t a simple way to cleanse social media sites of white supremacist violence; white supremacy is insidious and often very difficult to detect through an algorithm. This places social media sites in an unwinnable situation: if you allow QAnon conspiracy theories to flourish unchecked, then you end up with a wide base of xenophobic militants with a deep hatred for the left. But if you force conspiracy theorists off your site, they either migrate to new, more accommodating platforms (like Parler), or resort to an ever-evolving lexicon of dog-whistles that are much harder to keep track of.

Furthermore, banning Trump supporters from social media sites only feeds into their imagined oppression; what they view as “censorship” (broad social condemnation for racist or simply untrue opinions) only serves as proof that their First Amendment rights are being trampled upon. This view, of course, ignores the fact that the First Amendment is something the government upholds, not private companies, which Trump-appointee Justice Kavanaugh affirmed in the Supreme Court in 2019. But much in the same way that the Confederacy’s romantic appeal relies on its defeat, right-wing pundits who are banned from tweeting might become martyrs for their base, adding more fuel to the fire of their cause. As David Graham points out, that process has already begun; insurrectionists are claiming the status of victims, and even Republican politicians who condemn the violence in one moment tacitly validate the rage of conspiracy theorists in another.

The ethical dilemma faced by social media sites at this watershed moment encompasses more than just politics. It also concerns the idea of truth itself. As Andrew Marantz explained in The New Yorker,

“For more than five years now, a complacent chorus of politicians and talking heads has advised us to ignore Trump’s tweets. They were just words, after all. Twitter is not real life. Sticks and stones may break our bones, but Trump’s lies and insults and white-supremacist propaganda and snarling provocations would never hurt us.” But, Marantz goes on, “The words of a President matter. Trump’s tweets have always been consequential, just as all of our online excrescences are consequential—not because they are always noble or wise or true but for the opposite reason. What we say, online and offline, affects what we believe and what we do—in other words, who we are.”

We have to rise above our irony and detachment and understand, as a nation, that language is not divorced from reality. Conspiracy theories, which depend in large part on language games and fantasy, must be addressed to prevent further violence, and only an openness to truth can help us move beyond them.

Bella Thorne and Celebrities Inhabiting Shared Spaces

photograph of bella thorne on red carpet with crowd behind her

The age of technology has brought many new things into modern life, but arguably one of the most influential and important is social media. A radically new world was created online where everyone around the globe can be connected within seconds, no matter their location. One of the groups to take advantage of this instant connection was celebrities, as social media and online platforms allow them to connect with their fans directly and give audiences glimpses into their private lives without ever having to meet in person. This has given rise to the phenomenon of celebrity culture, in which the public can know almost any aspect of a star's life. Some have used this trend to help build their fame and monetize their brands. While celebrities have every right to use these platforms just like any other member of the public, they enter these spaces with an unfair advantage: they already have a following and a brand, which tends to disrupt communities made up of ordinary people who may depend on these platforms to make a living. There's a fine line here for celebrities to watch, as their introduction to these spaces threatens to undermine these platforms, and perhaps eliminate, or at least adulterate, this communal space.

Recently, one platform in particular, OnlyFans, has taken over the pornography market by allowing individuals to have autonomy over what and when they create. This form of pornography can be highly personal, with subscribers getting to know the performers whose bodies and lives they are consuming. With OnlyFans, anyone who gains a following can make money through this form of sex work without having to find a studio or work in the public space. A new creator on the platform, actress Bella Thorne, who started her career with the popular Disney show "Shake It Up," broke records within 24 hours of her appearance on the site. She announced her introduction to OnlyFans with Paper Magazine, saying she wanted to discuss "the politics behind female body shaming & sex." Immediately, she made headlines with her addition, which inevitably began sparking conversations around sex work and female sexuality — the discussion that she hoped would be happening.

There are both advantages and disadvantages to a celebrity of Bella Thorne's caliber joining OnlyFans. Sex work has historically not been seen as a valid form of work and is criminalized in most countries around the world. As a consequence of this criminalization, sex workers face specific dangers in their line of employment, which are usually ignored by politicians, police officers, and society as a whole. If celebrities begin to partake in creating this type of content, however, a normalization may begin, which could work to validate and decriminalize sex work, and possibly address those issues that sex workers face daily. This appears to have been Bella Thorne's intention behind her move to OnlyFans. But she gravely miscalculated the responsibility she had to ensure that she didn't hurt the very community she was trying to help.

Sex workers who rely on their income from OnlyFans faced a crisis as the website suddenly changed its policies, limiting the freedom and ability of performers to make a living off the platform. The changes came directly after Thorne made her debut on the platform, though OnlyFans claims the two weren't connected. Thorne made $1 million within her first day on the site and $2 million after the first week. She also prompted massive refunds after subscribers paid for a photo advertised as nude that, in reality, was not, and many of them demanded their money back from OnlyFans. Shortly after, the platform set limits on how much creators could charge for their content and the amount that consumers could give in tips to performers. Additionally, it lengthened the time before performers receive their income to 30 days. A company that was once a safe space for sex workers to earn their living is now catering to the effects of celebrities. It profits from the audience that these big names bring to the site, all the while ignoring the concerns of everyday sex workers whose livelihoods depend on the platform. For Bella Thorne, joining the platform is a way to have fun with her sexuality and popularity without the censorship or judgment of platforms like Instagram. She does not depend on that money for rent or food. She experiences little to none of the stigma that sex workers face daily. Her actions did not help the sex worker community; they severely hurt a community that is already one of the most marginalized.

What responsibility does Thorne even have in starting these conversations over sexual politics and female sexuality? How should she use her celebrity status and the privilege of millions of followers listening and watching her? One cannot ignore the fact that much of the increasing legitimacy of sex work has centered around middle- and upper-class white women beginning to explore the realms of sex work, while women of color continue to experience its stigma and marginalization. While sex work may slowly begin to be seen as a proper line of employment, an otherness seems to be appearing within it, in which it is acceptable for certain women but deplorable for others to take part. This normalization is beginning to look more like a gentrification in which white women profit off the work that other women have been doing for decades, which would of course only continue to hurt a large portion of the sex worker community. So, perhaps it was not even Thorne's place to be the catalyst for the conversations she wants to have. Her attempt to start that conversation was centered on herself and her own experiences. Instead of reaching out to women already experienced in the industry, she decided to see its inner workings for herself. But it is impossible for a celebrity like her to experience sex work in a way that accurately represents the issues that sex workers deal with in reality.

Bella Thorne, however, is not the only celebrity to hop on this trend. The biggest name to recently join the platform is rapper Cardi B, though she won't be creating sexual content, but rather exclusive content about her life and music. Other celebrities, like rapper Tyga or YouTuber Tana Mongeau, are following in Thorne's direction and making sexual content for their consumers. All of these celebrities can bring waves of fans to the site looking to buy subscriptions for exclusive content. Whether they're selling sex or exclusive updates on music, however, they will be entering a platform that already has plenty of competition for subscribers. Sex workers and musicians depend on their subscriptions from OnlyFans to continue paying rent or buying groceries, especially in the midst of a global pandemic, concerns that none of these celebrities would ever have to trouble themselves with. While the platform may be useful for them to promote their albums, have fun with their sexuality, or connect with fans, all their profits are solely pocket money for them. They could accomplish all of those things through Instagram pages with their millions of followers, or through a multitude of opportunities that are not open to the public. Celebrities need to recognize the havoc they can wreak on the lives of everyday people when they decide to turn their livelihoods into fun experiments on social media.

Ethical Considerations of Deepfakes

computer image of two identical face scans

In a recent interview for MIT Technology Review, art activist Barnaby Francis, creator of deepfake Instagram account @bill_posters_uk, mused that deepfake is "the perfect art form for these kinds of absurdist, almost surrealist times that we're experiencing." Francis' use of deepfakes to mimic celebrities and political leaders on Instagram is aimed at raising awareness about the danger of deepfakes and the fact that "there's a lot of people getting onto the bandwagon who are not really ethically or morally bothered about who their clients are, where this may appear, and in what form." While deepfake technology has received alarmist media attention in the past few years, Francis is correct in his assertion that many researchers, businesses, and academics are pushing for the development of more realistic deepfakes.

Is deepfake technology ethical? If not, what makes it wrong? And who holds the responsibility to prevent the potential harms generated by deepfakes: developers or regulators?

Deepfakes are not new. The term was coined in 2017 by a Reddit user who began using the technology to create pornographic videos. The technology soon expanded to video games as a way to create images of people within a virtual universe. The deepfake trend then turned toward more global agendas, with fake images and videos of public figures and political leaders being distributed en masse. One altered video of Joe Biden was so convincing that even President Trump fell for it. Last year, there was a deepfake video of Mark Zuckerberg talking about how happy he was to have thousands of people's data. At the time, Facebook maintained that deepfake videos would stay up, as they did not violate its terms of agreement. Deepfakes have only increased since then. In fact, there exists an entire YouTube playlist of deepfake videos dedicated to President Trump.

By 2020, those contributing to deepfake technology were not only individuals in the far corners of the internet. Researchers at the University of Washington have also developed deepfakes using algorithms in order to combat their spread. Deepfake technology has been used to bring art to life, recreate the voices of historical figures, and lend celebrities' likenesses to powerful public health messages. While the dangers of deepfakes have been described by some as dystopian, the methods behind their creation have been relatively transparent and accessible.

One problem with deepfakes is that they mimic a person's likeness without their permission. The original deepfakes, which mixed photos or videos of a person with pornography, used a person's likeness for sexual gratification. Such use of a person's likeness might never personally affect them, but could still be considered wrong, since they are being used as a source of pleasure and entertainment without consent. These examples might seem far-fetched, but in 2019 a now-defunct app called DeepNude sought to do exactly that. Even worse than using someone's likeness without their knowledge is using their likeness in a way intended to reach them and others in order to humiliate them or damage their reputation. One could see the possibility of a type of deepfake revenge porn, where scorned partners attempt to humiliate their exes by creating deepfake pornography. This issue is incredibly pressing and might be more prevalent than the other potential harms of deepfakes. One study, for example, found that 96% of existing deepfakes take the form of pornography.

Despite this current reality, much of the moral concern over deepfakes is grounded in their potential to easily spread misinformation. Criticism of deepfakes in recent years has mainly centered on their potential for manipulating the public to achieve political ends. It is becoming increasingly easy to spread a fake video depicting a politician as clearly incompetent or spreading a questionable message, which might erode their base of support. On a more local level, deepfakes could be used to discredit individuals. One could imagine a world in which deepfakes are used to frame someone in order to damage their reputation, or even to suggest they have committed a crime. Video and photo evidence is commonly used in our civil and criminal justice systems, and the ability to manipulate videos or images of a person, undetected, arguably poses a grave danger to a justice system which relies on our sense of sight and observation to establish objective fact. Perhaps even worse than framing the innocent could be failing to convict the guilty. In fact, a recent study in the journal Crime Science found that deepfakes pose a serious crime threat when it comes to audio and video impersonation and blackmail. What if a deepfake is used to replace a bad actor with a person who does not exist? Or gives plausible deniability to someone who claims that a video or image of them has been altered?

Deepfakes are also inherently dishonest. Two of the most popular social media networks, Instagram and TikTok, rely upon visual media which could be subject to alteration by self-imposed deepfakes. Even if a person's likeness is being manipulated with their consent, and even if doing so could have positive consequences, it still might be considered wrong due to the dishonest nature of its content. Instagram in particular has been increasingly flooded with photoshopped images, as an entire app market exists solely for editing photos of oneself, usually to appear more attractive. The morality of editing one's photos has been hotly contested among users and feminists alike. Deepfakes only stand to increase the amount of media that is self-edited, and the moral debates that come along with putting altered media of oneself on the internet.

Proponents of deepfakes argue that their positive potential far outweighs the negative. Deepfake technology has been used to spark engagement with the arts and culture, and even to bring historical figures back to life, both for educational and entertainment purposes. Deepfakes also hold the potential to integrate AI into our lives in a more humanizing and personal manner. Others, who are aware of the possible negative consequences of deepfakes, argue that the development and research of this technology should not be impeded, as advancing the technology also contributes to the methods for spotting it. And there is some evidence backing up this argument: as the development of deepfake technology progresses, so do the methods for detecting it. On this view, it is not the moral responsibility of those researching deepfake technology to stop, but rather the role of policymakers to ensure the types of harmful consequences mentioned above do not wreak havoc on the public. At the same time, proponents such as David Greene, of the Electronic Frontier Foundation, argue that too-stringent limits on deepfake research and technology will "implicate the First Amendment."

Perhaps, then, it is neither the government nor deepfake creators who are responsible for their harmful consequences, but rather the platforms which make these consequences possible. Proponents might argue that the power of deepfakes comes not from their ability to deceive one individual, but from the media platforms on which they are allowed to spread. In an interview with Digital Trends, the creator of Ctrl Shift Face (a popular deepfake YouTube channel) contended that "If there ever will be a harmful deepfake, Facebook is the place where it will spread." While this shift in responsibility might be appealing, detractors might ask how practical it truly is. Even websites that have tried to regulate deepfakes are having trouble doing so. The popular pornography website Pornhub has banned deepfake videos but still cannot fully regulate them. In 2019, a deepfake video of Ariana Grande was watched 9 million times before it was taken down.

In December, the first federal regulation pertaining to deepfakes passed the House and Senate and was signed into law by President Trump. While increased government intervention to prevent the negative consequences of deepfakes will be celebrated by some, researchers and creators will undoubtedly push back on these efforts. Deepfakes are certainly not going anywhere for now, but it remains to be seen whether the potentially responsible actors will work to ensure their consequences remain net-positive.

What Reddit Can Teach Us About Moral Philosophy

eye looking through keyhole

Moral philosophy is enjoying a moment in popular culture. Shows like The Good Place have made ethics accessible to broad audiences, and publishing houses churn out books on the philosophical underpinnings of franchises like Star Wars and Game of Thrones. One example of this can be found on Reddit, a social media site that hosts a myriad of topic-based forums. In particular, the subreddit “Am I the Asshole?” exemplifies pop culture’s breezy and accessible approach to moral philosophy, while also shedding light on how and why we engage in ethical questions.

This extremely popular subreddit boasts over two million subscribers and claims to offer "A catharsis for the frustrated moral philosopher in all of us, and a place to finally find out if you were wrong in an argument that's been bothering you." Reddit users post stories about relationship problems, family squabbles, and workplace tension. Any conflict will do, so long as it doesn't involve physical violence and the original poster has some reason to believe they were in the wrong. Those who comment on these stories are required to pass judgment, expressed using the subreddit's shorthand language, followed by a brief explanation of their ruling. A person can be judged NTA (not the asshole), YTA (you're the asshole), ESH (everyone sucks here, for situations where all parties did something indefensible), or NAH (no assholes here, for situations where no one is in the wrong). A bot eventually sifts through these comments, and the post is labeled with the most popular judgment.

As the word “asshole” implies, this isn’t the place to rail against class oppression or the cruelties of fate. AITA focuses on everyday interpersonal drama, and it’s understood that being labeled YTA isn’t necessarily a judgment on the entirety of a person’s character. Every judgment is situational, though commentators may point out larger patterns of problematic thought or behavior if they emerge, and the subreddit operates under a shared understanding that many of us act immorally without malicious intent. Even in the somewhat rigid lexicon of judgment, there’s room for shades of gray, for the ambiguities of social life. As Tove Danovich writes in an article for The Ringer, “The scope of the problems on AITA, even when the judgment is a difficult one to make, is human, and therefore more manageable. They’re medium questions asked and answered by medium people who just want to be a little bit better.”

Even though the subreddit’s scope is somewhat limited, one aspect of AITA’s culture offers a window into the role narrative plays in shaping our sense of right and wrong. Scrolling through the front page, one very frequently encounters stories where the original poster was indisputably in the right: “Someone ran me over with their car, and as I went flying over their windshield, I accidentally dented their front hood. AITA?” So many users were annoyed by these posts that they started their own parody subreddit, “Am I the Angel,” where the saintly and oblivious tone of AITA posts is mocked. An ungenerous interpretation of these posts would be that some people just want a pat on the head. They aren’t actually looking for a moral judgment; they want to vent about a situation they already recognize as unfair. Alternatively, one could argue that we so often lack perspective on our own lives, and what seems obviously wrong to an impartial third party may be less transparent from the inside. But one AITA user suggests a different interpretation. On a post from a man who no longer wanted to let his disabled neighbor park in their driveway, Reddit user boppitywop commented, “I think the majority of these posts are because people feel guilty, and they are looking to assuage their guilt. [The original poster] is not the asshole but they’ve made someone’s life a lot more inconvenient and doesn’t feel good about it. [This subreddit] serves the purpose of socially normalizing something that a person feels bad about.”

This comment reveals both the limits and potential of AITA. Some situations are morally intractable, and require a lot more than interpersonal skills (or an understanding of one’s wrongdoing) to effectively address. But the commenter also correctly points out the social function of storytelling. AITA posts help users renegotiate the boundaries between right and wrong in a way that feels deeply communal. Norms are both established and questioned in this online space. The judgment system may feel very open-and-shut, but reading through the comment section of popular posts reveals an ongoing dialogue with the moral philosophy of everyday life.

But in any narrative, language often betrays the biases or intentions of the teller. One ubiquitous trope you’ll notice if you fall down the AITA rabbit hole is what I would call the “sudden turn”: in the middle of an encounter, the antagonist of the story will begin to bawl, shriek, or throw a tantrum without clear provocation. The other person is portrayed as irrational or inscrutable, and one often feels the gap here where their perspective on the situation could fit. Commenters are often very perceptive about the original poster’s word choice, but the way the story is told inevitably colors our judgment of the encounter. This sense of messiness and instability accurately reflects how we experience conflict, and reminds us that all moral arguments, whether large or small, contain some speck of subjectivity.

It’s a simple truth that judging people we don’t know is fun, sometimes even addictive. The voyeuristic element of AITA is certainly worthy of critique, but at the same time, anonymity is crucial to the communal storytelling experience. In an era where few define themselves by a single ethical belief system, AITA helps readers wade through the mire of modern life, and testifies to a universal desire to understand what we owe to one another.

What Would Kierkegaard Make of Twitter?

photograph of Twitter homepage on computer screen

In the weeks leading up to Election Day 2020, Twitter and other social media companies announced they would be voluntarily implementing new procedures to discourage the spread of misinformation across their platforms; on November 12th, Twitter indicated that it would maintain some of those procedures indefinitely, arguing that they were successful in slowing the spread of election misinformation. In general, the procedures in question are examples of “nudges” designed to subtly influence the user to think twice before spreading information further through the social network; dubbed “friction” by the social media industry, examples include labeling (and, in some cases, hiding) tweets containing misleading, disputed, or unverified claims, and double-prompting a user who attempts to share a link to an article that they have not opened. While the general effectiveness of social media friction remains unclear (although at least one study related to COVID-19 misinformation has shown promise), Twitter has argued that their recent policy changes have led to a 29% reduction in quote-tweeting (where a user simultaneously comments on and shares a tweet) and a 20% overall reduction in tweet-sharing, both of which have slowed the spread of misleading information.

We currently have no shortage of ethical questions arising from the murky waters of social networks like Twitter. From the viral spread of “fake news” and propaganda to the problems of epistemic bubbles and echo chambers to malicious agents spearheading disinformation campaigns to the fostering of violence-producing communities like QAnon and more, warnings about the risks posed by social media platforms abound (including here at The Prindle Post, such as Desdemona Lawrence’s article from August of 2018). Given the size of Twitter’s user base (it was the fourth-most-visited website by traffic in October 2020, with over 353 million users visiting the site over 6.1 billion times), even relatively uncommon problems could still manifest in significant numbers, and no clear solution has arisen for limiting the spread of falsehoods that would not also limit benign Twitter usage.

But is there such a thing as benign Twitter usage?

The early existentialist philosopher and theologian Søren Kierkegaard might think not. Writing from Denmark in the early 1800s, Kierkegaard was exceedingly skeptical of the social movements of his day; as he explains in The Present Age: On the Death of Rebellion, “A revolutionary age is an age of action; ours is the age of advertisement and publicity. Nothing ever happens but there is immediate publicity everywhere.” Instead of living full, meaningful lives, Kierkegaard criticized his contemporaries for simply desiring to talk about things in ways that, ultimately, amounted to little more than gossip. Moreover, Kierkegaard saw how this would underwrite a superficial love of showing off to “the Public” (the abstract collection of people made up of “individuals at the moments when they are nothing”); all this “talkativeness” would produce a constant “state of tension” that, in the end, “exhausts life itself.” Towards the end of his essay, Kierkegaard summarizes his criticism of his social environment by saying that “Everyone knows a great deal, we all know which way we ought to go and all the different ways we can go, but nobody is willing to move.”

This all probably sounds unsettlingly familiar to anyone with a Twitter account.

Instead of giving into the seductions and the talkativeness of the present age, Kierkegaard argues for the value of silence, saying that “only someone who knows how to remain essentially silent can really talk — and act essentially” (that is, act in a way that would give one’s life genuine meaning). Elsewhere, in the first Godly Discourse of The Lily of the Field and the Bird of the Air, Kierkegaard draws a lesson from birds and flowers about the value of quietly focusing on what genuinely matters. As a Christian theologian, Kierkegaard locates ultimate value in “the Kingdom of God” and argues that lilies and birds do not speak, but are simply present in the world in a way that mimics a humble, unassuming, simple presence before God. The earnestness or authenticity that comes from learning how to live in silence allows a person to avoid the distractions prevalent in the posturing of social games. “Out there with the lily and the bird,” Kierkegaard writes, “you perceive that you are before God, which most often is quite entirely forgotten in talking and conversing with other people.”

Indeed, the talkativeness and superficiality inherent to the operation of social media networks like Twitter would trouble Kierkegaard to no end, even before considering the myriad ways in which such networks can be abused. And, in a similar way, whatever we now consider to be of ultimate importance (be that Kierkegaard’s God or something else), the phenomenology of distraction away from its pursuit is no small thing. Twitter can (and should) continue to try and address its role in the spread of misinformation and the like, but no matter how much friction it creates for its users, it seemingly can’t promote contemplative silence: “talkativeness” is a necessary Twitter feature.

So, Kierkegaard would likely not be interested in the Twitter Bird much at all; instead, he would say, we should attend to the birds of the air and the lilies of the field so that we can learn how to silently begin experiencing life and other things that truly matter.

In Defense of Mill

collage of colorful speech bubbles

In recent years, commentators — particularly those who lean left — have become increasingly dubious about John Stuart Mill’s famous defense of an absolutist position on free speech. Last week, for instance, The New York Times published a long piece by Yale Law School professor Emily Bazelon in which she echoes a now-popular complaint about Mill: that his arguments are fundamentally over-optimistic about the likelihood that the better argument will win the day, or that “good ideas win.” In this column, I will argue that this complaint rests on a mistaken view of Mill.

Mill’s argument, briefly stated, is that no matter whether a given belief is true, false, or partly true, its assertion will be useful for discovering truth and maintaining knowledge of the truth, and therefore it should not be suppressed. Beliefs are usually suppressed because they are believed to be either false or harmful, but according to Mill, to suppress a belief on these grounds is to imply that one’s grasp of the truth or of what is harmful is infallible. Mill, an empiricist, believed that no human being has infallible access to the truth. Even if the belief is actually false, its assertion can generate debate, which will lead to greater understanding and ensure that truths do not lapse into “mere dogma.” Finally, if the belief is partially true, it should not be suppressed because it can be indispensable to discovering the “whole” truth.

Notice that Mill’s whole argument concerns the assertion of beliefs, or the communication of what the speaker genuinely takes to be true. The key assumption in Mill’s argument is thus not that the truth will win out in the rough and tumble of debate. This may well be true — at least, it may be true in the long run, when every participant is really engaging in debate, or the evaluation of truth claims. Rather, Mill is taking as given that a lot of the public discourse is aimed at communicating truth claims in good faith. The problem is that much of this discourse is not intended to inform others about what speakers actually believe. Much of the public discourse is propaganda — speech aimed at achieving some political outcome, rather than at communicating belief. As Bazelon points out, referring to the deluge of disinformation that currently swamps our national public conversation,

“The conspiracy theories, the lies, the distortions, the overwhelming amount of information, the anger encoded in it — these all serve to create chaos and confusion and make people, even nonpartisans, exhausted, skeptical and cynical about politics. The spewing of falsehoods isn’t meant to win any battle of ideas. Its goal is to prevent the actual battle from being fought, by causing us to simply give up.”

The purpose of disinformation propaganda is to overwhelm people with contradictory claims and ultimately to encourage their retreat into apolitical cynicism. Even where propagandists appear to be in the business of putting forward truth claims, this is always in bad faith: propagandists aren’t trying to communicate what they actually believe.

Where does this leave Mill? Mill may have been mistaken in overlooking the pervasiveness of propaganda. However, his defense of free speech need not extend to propaganda. If Mill is concerned only with defending communicative acts that are aimed at expressing belief, then we have no reason to think that Mill needs to defend propaganda. Thus, a Millian defense of speech can distinguish between speech that is intended primarily to express a truth claim and speech that is intended primarily to effect some political outcome. While the former must be protected from suppression, the latter need not be, precisely because the latter is not aimed at, nor likely to produce, greater understanding.

Of course, this distinction might be difficult to draw in practice. Nevertheless, new policies recently rolled out by social media platforms appear to be aimed precisely at suppressing the spread of harmful propaganda. Twitter banned political ads a year ago, and last month Facebook restricted its Messenger app by preventing mass forwarding of private messages. Facebook’s Project P (P for propaganda) was an internal effort after the 2016 election to take down pages that spread Russian disinformation. Bazelon recommends pressuring social media platforms into changing their algorithms or identifying disinformation “super spreaders” and slowing the virality of their posts. Free speech absolutists might decry such measures as contrary to John Stuart Mill’s vision, but I have suggested that this might be a mistake.

Anti-Maskers and the Dangers of Collective Endorsement

photograph of group of hands raised

Tensions surrounding the coronavirus pandemic continue to run high, especially in parts of America in which discussions over measures to control the spread of the virus have become something of a political issue. Recently, some of these tensions erupted in the form of protests by “anti-maskers”: in Florida, for example, a group of such individuals marched through a Target, telling people to take off their masks, and playing the song “We’re Not Going to Take It.” Presumably the “it” that they were no longer interested in taking pertained to what they perceived to be a violation of personal liberties, as they felt as though they were being forced to wear a mask against their wills. While evidence regarding the effectiveness of masks at keeping oneself and others safe continues to grow, there nevertheless remains a vocal minority that believes otherwise.

A lot of thought has been put into the problem of why it is that people continually ignore good scientific evidence, especially when the consequences of doing so are potentially dire. There is almost certainly no singular, easy answer to the problem. However, there is one potential reason that I think is worth focusing on, namely that anti-maskers, like many others who reject the best available scientific evidence on a number of issues, will tend to trust sources that they find on social media instead of through more reputable outlets. For instance, one investigation of why anti-maskers hold their beliefs pointed to the effects of Facebook groups in which such beliefs are discussed and shared. Indeed, despite Facebook’s efforts to contain the spread of such misinformation, anti-masker groups remain easy to find.

However, the question remains: why would anyone believe a group of random Facebook users over scientific experts? The answer to this is no doubt multifaceted as well. But one reason may come down to a matter of trust, and that the ways we determine who is trustworthy works differently online than it does in other contexts.

As frequent internet users will no doubt be familiar with already, it can often be difficult to identify trustworthy sources of information online. One reason is that the internet offers varying degrees of anonymity: the consequence is that one may not have much information about the person one is talking with, especially given the possibility that people can fabricate aspects of their identities in online environments. Furthermore, interacting with others through text boxes on a computer screen is a very different kind of interaction than one that occurs face-to-face. For instance, researchers have shown that there are different “communication cues” that we pick up on when interacting with each other, including verbal cues like tone of voice, volume of speech, and rate at which one is speaking, and visual cues like facial expressions and body language. These kinds of cues are important when we make judgments about whether we should believe what the other person is saying, and are largely absent in a lot of online communication.

With less information about each other to go on when interacting online, we will then tend to look to other sources of information when determining who to trust. One thing internet users tend to appeal to is endorsement. For instance, when reading things on social media or message board sites we tend to put more trust in those posts that have the most hearts, or likes, or upvotes, etc. This is perhaps most apparent when you’re trying to decide what product to buy: we tend to gravitate not only towards those with the highest ratings, but towards those with the greatest number of high ratings (something with one 5-star review doesn’t mean much, but a product with hundreds of high reviews means a lot more). The same can be the case when it comes to determining which information to believe: if your post has thousands of endorsements then I’m probably going to at least give it a look, whereas if it has very few, I’ll probably pass it by.

There is good reason to trust information that is highly endorsed. As noted above, it can be hard to determine who to trust online because it’s not clear whether someone is really who they say they are. It’s easy for me to join a Facebook group and tell everyone that I’m an epidemiologist, for example, and without having access to any more information about me you’ve got little other than my word to go on. Something that’s much harder to fake, though, is a whole bunch of likes, or hearts, or upvotes. So the thought is that if enough other people endorse something, that’s good reason to trust it. So here’s one reason why people getting their information off social media might trust that information more than that coming from the experts, namely because it is highly endorsed by many other members of their group.

At the same time, people might be more willing to believe those with whom they interact online in virtue of the fact that they are interacting with them. For instance, when a scientific body like the CDC tells you that you should be wearing a mask, information is traveling in only one direction. When interacting with groups online, though, it can be much easier to trust those that you are interacting with, and not merely deferring to. Again, this is one of the problems raised by online communication: while there is lots of good information available, it can be easier to trust those with whom one can engage, as opposed to simply taking orders from them.

Again, because the problem is complex and multifaceted, there will not be a one-size-fits-all solution. That said, it is worthwhile to think about how it might be possible for those with the good information to establish relationships of trust with those who need it, given the unique qualities of online environments.

Under Discussion: Platforms of Power and Privilege

image of megaphone amplifying certain rays from an array of color bands

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: The Harper’s Letter.

Many individuals in the public sphere have signed an open letter referred to as the Harper’s Letter. The gist of the letter is that the free discourse of ideas is currently being hampered by what has been called “cancel culture” — the sudden and wide-ranging criticism that individuals in the public eye are subject to when private citizens find their speech or behavior unacceptable. The undersigned of this letter represent all manner of points across the political spectrum and a variety of professions.

The letter itself tends to fixate on contributors who occupy a privileged position in public debate: editors, authors, journalists, professors. In focusing on the figures with high-impact voices in public dialog, the letter misses important features of open discourse. As participants in dialog, there are responsibilities we have to one another as speakers, as contributors to our public discourse. We do not voice opinions, or indeed act, in a vacuum. We speak and operate in contexts where a great deal of our meaning is determined by our intended audience and the conversations we are entering into. In short, what we say and do depend on the world around us for its meaning and impact.

The public figures that signed the Harper’s Letter have received public sanction for their speech and behavior, and the letter comes off as much as a complaint about the public consequences of their own behavior as it does a characterization of public discourse in general. When speaking as a public figure, or occupying a privileged position in public discourse in general, your voice has a context that is open to criticism by a broad audience, and, unfortunately for some, that audience finds their voices wanting.

No one deserves to occupy a particular space in the public sphere in our society, or to be above criticism. No one deserves to be in a privileged position, such as editors, authors, and other media figures who speak to the public. Such figures are open to criticism and bear the consequences for their behavior according to public judgment and standards just as private members of the community do, but at the scale of their privileged position. No one is obligated to listen to them, or to respond with “exposure, argument, or persuasion” (as the Harper’s Letter seems to demand even of immoral, toxic, or misinformed behavior and speech, which is particularly damaging to society when amplified by these privileged voices).

We have categories that limit harmful speech, such as “harassment,” “libel,” and “slander,” that handle those instances where criticism goes too far, but the Harper’s Letter equates public criticism of speech, or the de-platforming of figures, with their being “silenced.” If one’s livelihood depends on public opinion, then part of their professional expertise is managing their public image, and they have not performed it adequately when they are subject to the amount of public criticism that the undersigned describe.

However, public criticism may be a more or less appropriate standard by which the speech of individuals should be judged. For public figures, we can, for instance, consider what the standards are for a given position in the public eye. Depending on the grounds for the attention and status one has, public criticism may be more or less warranted, or perhaps should have more or less of a degree of impact on one’s life and career.

An example from the letter regards issues with faculty in universities. On occasion, professors’ scheduled talks at other universities have been met with student protests, making them unable to present their speech. These cases are complicated, as faculty aren’t exactly public figures, but when they are asked to speak, they are being given a voice above other possible speakers — it is not part of their explicit job, and the inviting institution had options for which voices to promote. In that sense, criticism by the audience can be appropriate regarding the speech. Audiences are constituent members of acts of speech, and speakers don’t inherently deserve one. On the wider professional level in academia, your research is judged by your peers, and you are not a privileged speaker or public figure. When your peers find your scholarship wanting, your speech is silenced according to some loose standard. Such an incident happened recently with Daniel Feller, the historian who gave the plenary address at the 2020 SHEAR conference. Professor Feller drew criticism when his speech diminished the atrocities committed by Andrew Jackson against Native Americans, with many of his peers submitting scholarly evidence countering the points in his speech.

There are further examples where individuals draw criticism for their speech and behavior that are in line with the undersigned’s personal grievances. With individual figures whose careers are primarily in the public sphere, the standards for criticisms can be more amorphous. Whether they should have a career, or whether their place in the public eye should be actively discouraged, could depend on a number of things.

For instance, one complicated example of publicity and criticism concerns businesses and the sudden, large outcry policies and statements can provoke. Large numbers of people regularly criticize business leaders and the direction of their profits, calling for their removal from their positions or boycotting their companies’ goods. Chik-fil-A, Soul Cycle, and LaCroix are just a few recent examples. These leaders have a very privileged position; economic realities influence the political reality in an extreme way in the United States. As of 2015, corporations spend more money lobbying Congress than taxpayers spend funding Congress.

It might be helpful to consider other comparisons. Civil servants, for instance, depend on their constituents for their legitimacy. If their voters disagree with anything about their lives, conduct, opinions, or personality, that is sufficient for them to lose their justification for holding that position. The grey area here is the connection between celebrity and political role. Often, in order to remove someone from their role in politics, public messaging plays a large part, and this involves open criticism that damages reputations and employs strategies that are frequently controversial. This is also the feature that makes public criticism and campaigning to remove individuals from the public roles they occupy difficult to parse.

News anchors and other media figures explicitly depend on their behaviors and speech being understood in particular ways and meeting societal standards, such that sufficient amounts of their audience approve of their speech and behavior. When their speech and behavior elicit sudden and large public outcry, this is a professional rather than a personal issue, more similar to civil servants than academics.

For artists, the connection between creating art and the celebrity it can bring is more complicated than for civil servants and media figures. If artists take on the mantle of public figure, they also take on the potential for public criticism and blame.

There are two identifiable threads that people find alarming when sudden and marked criticism targets public figures. First, it can seem undeserved, or an overreaction, in which case the outcry seems unjust, unfairly backing someone into a corner or painting them with too broad a brush. This leads to a defensive response from the object of criticism, and a vulnerable, defensive reaction from some of the audience of the events. The response this engenders denies that the wrongdoing was “all that bad.” It suggests that we should be more tolerant of the behavior that is being called out.

When the defensive reaction elicits a denial of the misstep or outright wrong of the public figure, this can obviously be very problematic. Everyone goes wrong multiple times in their life, be it through behavior, speech, or intention. Public figures *do* go wrong, just as private members of society do. We need to not only acknowledge that, but also take seriously the over-sized impact that public figures have in our society: they influence our communities more than private members do. The increased impact brings with it more responsibility for critical reflection.

It is *wrong* for J.K. Rowling to promote transphobic and hateful positions, just as it is *wrong* for Louis C.K. to commit sexual misconduct. It was similarly wrong for Chik-fil-A to promote damaging and hateful treatment of LGBT+ people. Al Franken lost his political position over allegations of sexual misconduct, and Matt Lauer was removed from the “Today” show due to allegations of rape and sexual misconduct. These are all entities in the public sphere that faced backlash and criticism based on their expressed views or behavior. Just as there is no justification for them to be in the public sphere in the first place, there is no justification for them to be exempt from losing their place after enough people take issue with their behavior.

Second, it can seem as though there is no possible way to behave so as to avoid the strong backlash that some public figures have received. This amplifies the vulnerable, threatened feelings not just among the public figures, but also among private members of society who might identify with those behaviors. It may seem that there is no getting away from some types of criticism, no way to avoid going wrong in some fashion. And this kind of condemnation cuts off further conversation about repair and progress.

Consider the months in 2019 when many public figures were exposed for having worn blackface in the past. Unfortunately, few who were revealed to have taken part in this obviously offensive and unacceptable behavior took responsibility for their actions. Few admitted to having done something wrong, expressed regret having since learned what made their actions unacceptable, or indicated that they were grateful to those who helped them grow and reflect on their former understanding, etc. The idea that there is no way to respond to criticism or wrongdoing does not help progress or understanding. Again, people will make mistakes. While most people will never make the particular mistakes listed here, earnest dialog, rather than defensiveness, could be the focus when communal moral standards are not met. When private members of society see public figures being castigated, it is an important step past the fear of “cancel culture” to realize that they themselves are not under threat, and that most likely they would not do what these figures did in the first place. It is also important to keep in mind that our moral missteps should be approached with an attitude geared toward growth and repair.

Adopting such an attitude can be an extremely difficult task. As the Harper’s Letter attests, the criticism that occurs on social media — and that criticism’s real-life consequences — encourage defensive reactions. The threat wielded by such sudden and effective criticism evokes feelings of vulnerability and insecurity. But it is important to remember that it is one’s place in the public eye, or one’s professional reputation, that is under threat, not one’s safety or even freedom of speech. Further, threatening their place in the public sphere is frequently warranted, especially when their profession confers public status, as with politicians, news anchors, celebrities, etc.

In the end, the discussion of freedom of speech is a red herring that distracts us from our principal target. We should instead be focusing on why individuals receive the attention that they do, and whether the appropriate form of moral engagement when they fail to meet moral standards is to criticize their place in the public sphere. This can result in mutual progress, as opposed to mere removal.

Retweets, Endorsements, and Indirect Speech Acts

image of retweet icon

Over the weekend, President Trump engaged in a rare retraction, deleting a retweet of a video of pro-Trump protesters at a Florida retirement village. Midway through this video, a man in a golf cart sporting ‘Trump 2020’ and ‘America First’ placards raises his fist and clearly shouts ‘white power’ at a group of anti-Trump protesters. The retweet stayed up for around three hours on Saturday morning before it was taken down amid uproar. In subsequent statements, the White House press secretary Kayleigh McEnany has tried to maintain both that the 45th president of the United States watched the video before retweeting it, and that he nonetheless didn’t hear the slogan shouted in the middle of the video. We might find this a little difficult to believe, given his record of sharing white supremacist slogans and iconography.

Setting to one side the question of whether the president actually watched the video before sharing it, this example opens up a more general question: when should one be held responsible for one’s retweets? Is it possible to hide behind the defense that a retweet involves someone else speaking (and in this case making a white supremacist hand gesture), or does retweeting involve repeating what someone else has said, meaning that a retweeter can be held just as responsible as the original poster?

One way to make sense of our responsibilities for sharing other peoples’ words is to deny that there is an important distinction between tweeting and retweeting. On this view, when we share other people’s words, we make them our own, meaning that we put our credibility behind them, express belief in them, and take responsibility for them.

This view faces a number of problems.

The Oxford philosopher G.E. Moore observed that it is absurd to make a claim while denying that one believes that claim. The sentence ‘I went to the park yesterday, but I don’t believe that I did’ is perfectly grammatical, but it is a very strange thing to say. Explanations of so-called Moorean sentences differ, but almost everyone agrees that uttering a Moorean sentence is a strange thing to do. By contrast, it is perfectly possible to retweet an article with the comment that you don’t believe its headline claim. Here’s an example:

(To be clear, I don’t have any strong views about the number of bikes sold, and Cycling Weekly is a reputable source: this is just an example.) Relatedly, there is a whole genre of tweets in which a fact checker retweets an article or picture, along with a claim that the article is false.

 

If retweeting were equivalent to tweeting, this genre of debunking tweet would involve making a claim and denying it. This wouldn’t just be absurd; it would be a flat-out contradiction.

Retweets that involve promises, requests, or questions similarly don’t behave like tweets. If you tweet a promise to your partner to clean your house every day in August, and I retweet it, I haven’t thereby promised to clean your house too!

These differences suggest that we ought to draw a pretty clear distinction between tweeting and retweeting.

A natural strategy in thinking about kinds of online communication is to look for features of offline communication that have similar features. There are two offline devices of communication that are good candidates for making sense of retweets: quotation and pointing.

In a recent paper, Neri Marsili explores the view that retweets function like quotation. This view takes the original format of retweets — a sentence prefaced by ‘RT’ — seriously and claims that retweeting is like putting quotation marks around a sentence and saying so-and-so said: […]. This view can deal with retweeting with a comment by treating it as a quotation embedded in a longer sentence. It is perfectly reasonable for you to say “Josh said that he went to the park yesterday, but I don’t believe that he did,” or “Josh said that he went to the park yesterday, but he didn’t.”

The problem with this view comes from the diversity of retweets. Besides retweets of sentences, we also find retweets of pictures, gifs, polls, and videos. Unlike sentences, gifs and the like aren’t the kinds of things that one can put in quotation marks, so this view can’t be correct.

An alternative view, suggested by Jessica Pepp, Eliot Michaelson, and Rachel Sterken (and ultimately endorsed by Marsili), treats retweeting as akin to pointing. Pointing is an extremely common and flexible referential device associated with words like ‘this’ and ‘that’. By itself, it can function as a device for directing attention. If we were on a walk together, I might stop and point to draw your attention to an interesting bird. We can also use it to make claims about the world (“that [points] is a very ugly chair”), to answer questions (“which student cheated on the test?” [points]), and even to make commands (“give me that [points]!”). One piece of evidence for this view is the fact that it is extremely natural to use ‘this’ and ‘that’ with retweets; in fact some tweets are simply labelled with an imperious ‘THIS’.

The proposal is that retweets function like pointing, with the comments functioning like the sentence that refers to the object pointed towards. On this view, disbelieving and debunking retweets work a bit like the sentences “I don’t believe this [points]” and “this [points] is false” which are clearly reasonable sentences.

So far, we’ve got a bit clearer on how to think about what kind of communicative action retweeting is, but we haven’t yet addressed the issue of responsibility for retweeting. On the view under consideration, a plain retweet is purely referential; it’s like pointing to a bird whilst on a walk to draw others’ attention to it. Retweets with comments may clarify whether the speaker means to endorse the retweeted comment, but merely retweeting doesn’t clarify whether one has endorsed the claim.

Here we can bring in another piece of philosophical technology: indirect speech acts. Indirect speech acts involve performing one direct communicative act as a means to performing another, indirect act. For example, directly asking the question “do we have any beer in the fridge?” might involve indirectly making a request for you to get me a beer. Indirect speech acts are highly conventionalized and context-sensitive. If I’m clearly drawing up a shopping list, asking “do we have any beer in the fridge?” will probably function as a straight question (unless I have a habit of drinking a beer while writing lists).

The suggestion is that retweeting can involve two distinct speech acts: a direct referential act and an indirect act of endorsement. We might think about retweeting an article in order to endorse it as being a little bit like opening a newspaper on an interesting article and leaving it in the spot where your partner goes to have their morning coffee.

Frustratingly, this means that there is no easy answer to the question of what responsibility we bear for retweets. As we’ve just seen, indirect speech acts are highly context-dependent. There may be some internet communities where the conventions around retweeting involve strong endorsement. If I share an article about a new treatment for COVID-19 into a Facebook group for medical professionals, I might be endorsing both the headline claim of the article, and the supplementary claims it makes. By contrast, if I share an article about the performance benefits of a new Nike running shoe into a running group that habitually shares different studies, and where it is common knowledge that these studies are based on shaky science, I might merely be drawing attention to a new piece of information.

What happens when a communicative situation lacks clear norms about the significance of retweeting? Well, things get messy. One person might retweet a controversial article meaning to call attention to its argument, and be interpreted as endorsing it wholesale. Another person might share a picture of a protest meaning to endorse the cause of the protesters, and be interpreted as mocking or belittling them. In this kind of situation, context collapse is rife, and it becomes difficult to rely on shared presuppositions and conventions about communication.

In this defective speech situation, it is extremely difficult to make sense of which indirect speech acts we are performing. When we hold one another responsible for indirect speech acts associated with retweets, we are not implementing established norms for indirect communication; we are trying to create conventions for indirect communication based on sharing content online.

What kinds of conventions do we want to have? Regina Rini suggests that we ought to have a convention whereby retweeting conveys endorsement of the central claims in a retweeted article, accompanied by robust practices of holding users accountable for what they share. An alternative convention would be that retweeting doesn’t convey endorsement of any of the claims in an article (perhaps it merely conveys that something is interesting), in which case we could hold one another to much lower standards. A third possibility is to have a bundle of different conventions for different situations. Maybe the context of political speech involves endorsement of all claims and robust accountability, and contexts of private speech are much more relaxed. This conclusion is unsatisfying, but it does help clarify what is at stake in debates about retweets: we aren’t trying to describe independent and general conventions, but to create linguistic communities that can meet our intellectual needs.

Censoring “Gratuitous” Violence

black-and-white photograph of protestor taking photo of "White Silence is Violence" sign with phone

The video of George Floyd dying by suffocation over nine long minutes at the hands of a Minneapolis police officer is gruesome and sickening, and it has prompted countless people to action. The officers responsible for his death have been arrested and charged. In response to protests, numerous state and local governments are instituting police reforms. Black people have been killed by police before. But given that this particular video of unambiguous violence perpetrated by police has been circulated so widely, is so appalling, and instigated such a fierce response, this example stands out.

From this fact, a rough argument may be sketched. Sharing videos of horrifying violence prompts positive social change, so let’s share more videos of horrifying violence. If such a video is helping to stop police violence, why not share other violent videos to help stop gang violence, war violence, and terrorist violence? In fact, why not share videos showing the effects of structural violence, videos of suicides due to social isolation and industrial accidents due to lack of regulation? Scrolling through Twitter or Facebook, one might see a video of a cute baby taking her first steps, then a video of a terrorist execution, then a video of a bunch of newborn puppies, and a video of a young man sticking a gun in his mouth and pulling the trigger. Even if you think it is good that the Floyd video was widely shared, you probably don’t support turning your morning scroll through social media into such a traumatic experience. To understand this apparent contradiction in instinct, let us consider how violent content is treated on social media today and the arguments for and against censoring it on these platforms and in general.

First, we need to consider what “violent content” is and how it is understood by social media companies. While there may be an intuitive sense that violent content only includes uses of force for the purpose of causing harm, social media companies take a more expansive view. Twitter, for example, includes under the category of “graphic violence,” accidents and any “serious physical harm.” But, these companies also tend to distinguish between what Twitter calls “graphic violence” and “gratuitous gore,” as though there is some amount of violence or gore that is not in some way “gratuitous” to our experience of the world.

While graphic violence may include “bodily fluids including blood, feces, semen,” and is only hidden behind a “sensitive media” label and blur, “gratuitous” gore, which includes dismemberment, mutilation, burned human remains, and exposed internal organs and bones, is banned completely. But what exactly is the meaningful difference between these two categories? For example, a decapitation would certainly count as gratuitous gore and would be extremely off-putting. But, the video of Floyd being killed is merely graphic violence, even though it can easily be just as off-putting, if not more so. In fact, while a decapitation may be quick and relatively painless, Floyd died slowly of suffocation. Why is one “gratuitous” and not the other? Why is one censored and not the other?

From the start, companies can have two kinds of motivations for doing anything: moral ones and amoral ones. Either they do something because it is right or thoughts of right and wrong simply don’t factor into their decision. Twitter presents a moral argument for their censorship. They say that “We prohibit gratuitous gore content because research has shown that repeated exposure to violent content online may negatively impact an individual’s wellbeing.” Twitter does not make clear what they mean by well-being but if they mean an immediate sensation of feeling good or ill, their argument is trivially true. Only a sadist really enjoys the suffering of others and has their immediate well-being improved by viewing it.

And there might be a legitimate basis for Twitter’s claim. There is some evidence that regular viewing of violence can be desensitizing, though “regular viewing” here means in excess of two hours every day and none of the science is settled. But, there is also an obvious profit motive for Twitter’s censorship—if you associate negative feelings with your use of Twitter, you are unlikely to use it as frequently, and fewer users means less ad revenue. Regardless of the morality of this censorship, Twitter is motivated to censor for the sake of profits. So then, what are the moral reasons that could support this sort of censorship?

To answer this question, let’s first consider the odd bunch of people who do seek out violent content, taboo gratuitous gore in particular, to watch. One particularly popular community of these people was the Reddit group r/watchpeopledie, which had over 400,000 members before it was banned. At that size, it is difficult to chalk the membership of that group up to just sadists, sociopaths, and other such extraordinarily deviant people. In fact, the moderators and power users of this subreddit were pretty much normal people, some married, many with friends. They didn’t fit the stereotype of obsessive death and gore watchers.

In fact, Rule #3 of the subreddit (as shown in this Wayback Machine archive of the subreddit’s homepage on September 20, 2018, shortly before its quarantine) included this expectation, bolded by the mods to highlight it: “Be respectful of the dead! This is important. Human beings have lost their lives. This subject matter is not to be taken lightly.” The subreddit also described itself as “a community for documenting and observing the disturbing reality of death” and as “not intended to be a shock or gore subreddit.” Finally, they referenced two famous philosophical ideas: “Memento mori,” the Latin Stoic maxim to always remember one’s inevitable death, and “Maranasati,” a similar idea in Buddhism. Gratuitous gore is often referred to online as “gore porn” as the basis for viewing it is thought to come from a similar place as the animalistic urge to view other kinds of pornography. However, in light of the seemingly principled basis for this community, it is tough to say that all viewing of gratuitous gore is pornographic.

Sue Tait, a lecturer in the field of mass communications at the University of Canterbury, elaborates on this idea in her article, “Pornographies of Violence? Internet Spectatorship on Body Horror.” She considers four different ways people in these sorts of communities interact with gratuitous gore. She refers to these as four kinds of gazes viewers have:

“I identify a range of spectatorship positions [viewers] take up, including: an amoral gaze, whereby the suffering subject becomes a source of stimulation and pleasure; a vulnerable gaze, where viewers experience harm from graphic imagery; an entitled gaze, where viewers frame their looking through anti-censorship discourses; and a responsive gaze, whereby looking is a precedent to action.”

To contextualize these gazes, let’s consider some examples from before. The amoral gaze would be the one taken up by the sadists. The vulnerable gaze is the one Twitter worries about its viewers having: they worry people will associate the “hurt” they feel at viewing gratuitous gore with the site itself and stop using it. The r/watchpeopledie community’s focus on “documenting and observing the disturbing reality of death” would be an example of the entitled gaze. And last but not least, the responsive gaze would be the one taken up by those who were prompted to action by the video of Floyd’s death, and by anyone who would be prompted to similar action by similar, but gorier, content, as many on r/watchpeopledie were.

With the idea of these different kinds of gazes in mind, we can now construct a variety of arguments for and against the censorship of violent content.

According to virtue ethics, we might support censorship of gratuitous gore if it seems that regular exposure to gratuitous gore encourages vices in viewers. For example, if conclusive research comes out showing that exposure to violent media causes people to be more aggressive, cruel, or unempathetic, that would be a reason to support censoring gratuitous gore, the most extreme form of violent media. (In particular, we might worry about how this media influences the character of children whose morals are viewed as being particularly malleable.) This would be particularly true if a community encouraged people to take up an amoral gaze.

On the other hand, we might oppose the censorship of gratuitous gore if it seems that same exposure promotes virtue, rather than vice, in the viewers. If viewers take up a responsive gaze, rather than an amoral one, people may be encouraged to be more compassionate. As Stalin is reported to have said, “If only one man dies of hunger, that is a tragedy. If millions die, that’s only statistics.” Seeing the “disturbing reality of death,” over and over again, be it by hunger or by violence, might prevent people from losing touch with the horror of various kinds of violence and actually work to take action as they did with police violence after seeing the video of Floyd’s murder.

Immanuel Kant, the father of deontology — morals based on duties — made a creative argument against the abuse of animals that could be used to justify the censorship of gratuitous gore. While Kant did not believe animals had rights, or even any kind of consciousness, he still opposed sadistic animal abuse saying, “If any acts of animals are analogous to human acts and spring from the same principles, we have duties towards the animals because thus we cultivate the corresponding duties towards human beings.” In short, we shouldn’t abuse animals pointlessly lest we become able to do the same to people. In the same way, if repeated exposure to gratuitous gore hampers the cultivation of our duties toward people (as would be the case upon taking up the amoral gaze), such as not to murder them, then censorship of gratuitous gore would be justified.

But, deontology can also be used to oppose the censorship of gratuitous gore. Those who take up an entitled gaze might argue that we have a duty to uphold free speech or that we even have a duty to “document” deaths, for various purposes. People might also have a duty to bear witness to the reality of death for some further end as according to the maxim “Memento mori.”

Finally, we can give consequentialist arguments for and against censorship. If, on the whole, the viewing of gratuitous gore leads to more people doing harm to each other, then it should be censored. If not, if, according to the responsive gaze, people’s viewing leads to great social change, then it absolutely should not be censored.

This argument is especially powerful in an affluent nation like the United States. If you are an American, and if you are just a little lucky, you will have to see only a few people die, you will attend only a handful of funerals, and those funerals you do attend will recognize the deaths of people who we think were more or less supposed to die, that is, the elderly. But Americans are an exception, and though we can hide from death for most of our lives, the world is not a happy place where only those who have lived long lives, or who get unlucky with serious diseases like cancer, have to die. All sorts of horrible causes of death, such as childbirth, infectious disease, war, and industrial accidents, are still very common in the Global South. You can find a particularly horrifying intersection of all of these in the Democratic Republic of the Congo, where resource conflict has led to widespread poverty, civil war, and unsafe mining operations. But some combination of these horrors can be found in most areas of the world.

We are terribly desensitized to all these horrors as these deaths are reduced to mere numbers. Few Americans have seen the effects of poverty, war, and sickness in these faraway places. And, as they say, “out of sight, out of mind.” If only a small portion of people take up the responsive gaze, stand up against these atrocities, and actually manage to remedy some of them, that would be an enormous consequentialist benefit that outweighs all the temporary harm done to the “well-being” of comfortable, relatively wealthy (on the world scale) American viewers.

Overall, a violent video is not moral or immoral in isolation. Rather, the viewing of violent videos may be moral or immoral depending on the context. The morality of censoring gratuitous gore and other violent content may also depend on human nature. If most people, most of the time take on an amoral gaze or vulnerable gaze when viewing violent media, then by most accounts, censorship is justified. But, if people are basically good, then they might mostly take on the responsive gaze and untold benefits would result from ending the censorship of violent content. While it very well may be that some or all violent content deserves censorship, we ought to examine our reasons for censoring it. We ought to consider whether that censorship has a true moral basis or whether viewing violence is just uncomfortable, forcing us to reflect on the horrors of the world in a way from which we are usually, blissfully, isolated.

Conspiracy Theories and Emotions in the Time of Coronavirus

image of screen with social media app icons displayed

While it is not unusual in this day and age to come across a conspiracy theory on social media, the coronavirus pandemic seems to be producing far more than its fair share. Here are some recent examples of such theories that have been circulating:

Each theory is backed by its own kind of specious reasoning. Some, like the conspiracy theory that flu shots will result in positive coronavirus test results, are based on posts made by a discredited doctor; others, like the 5G conspiracy, appear to be a holdover from previous conspiracies about the installation of 3G cell towers, just in updated form. For others, it is not clear whether the conspiracy theorists are motivated by political concerns (were the Dr. Fauci conspiracy true, for example, it would validate Trump’s recommendations) or something else.

Regardless, the spread of these and other conspiracy theories is not inert. For instance, those who believe in the 5G conspiracy theory have been setting 5G towers on fire in the UK. What’s more, the generation of such theories does not seem to be slowing down: in the time it took to write these two paragraphs I have been notified of the existence of three conspiracy theories regarding a potential coronavirus vaccine: that the first volunteer in a UK vaccine trial has died (no, she hasn’t); that a vaccine already exists for dogs but is not being used for humans (no and no); and that a vaccine does in fact already exist, and was in fact developed as part of that UK vaccine trial I just mentioned (again, no).

While some of these theories at least purport to be based on scientific information, many just seem so far outside the realm of plausibility that it is difficult to imagine how anyone could believe them, let alone take them seriously enough to light a cellphone tower on fire. So what’s going on? Why are there so many conspiracy theories, and how is it that they’re actually getting some traction?

There are of course many potential explanations for how it is that such theories start and are spread. However, part of the explanation for why coronavirus conspiracy theories in particular are so numerous could be that many people are feeling really stressed out. Consider some recent research suggesting that one’s willingness to accept fake news stories and conspiracy theories peddled online depends in part on how emotionally charged those stories are; in other words, stories that grab your attention with language that is likely to make you feel angry, sad, surprised, etc., are going to be the ones that, on average, are shared among one’s friends and followers. You have probably come across this trick by producers of less-than-reputable web content before: when a headline tells you that “you won’t believe ___!” they are counting on your piqued curiosity driving you to their site. While often this is merely annoying, when it comes to fake news and conspiracy theories it can be much more dangerous.

Why emotionally-charged stories are shared more is up for debate. One theory, however, posits that being in an emotionally-charged state makes it more likely that you won’t be reasoning to the best of your abilities. For instance, I might tell you a sob story to pull on your heartstrings in an attempt to get you to believe what I’m saying; or I might present you with a salacious headline designed to make you angry. Having put you in a certain emotional state might then result in your not critically engaging with the content of my message, which might in turn make it more likely that you’ll believe and/or share my story (after all, there’s no better way to express your anger than to talk about it on the internet).

Part of what might explain how the recent flurry of coronavirus-related conspiracy theories and fake news stories continue to be spread and believed, then, is that they manipulate people who are already feeling a lot of emotional strain. As many people are worried, stressed, and anxious for a multitude of pandemic-related reasons, it is perhaps not surprising that stories and theories that attempt to make one even more worried, or angry, or whatever else, should interfere with one’s ability to think as clearly as possible.

Given that these continue to be stressful times, we should probably not expect to see these conspiracy theories dwindling in numbers any time soon. At the same time, given the connection between emotional manipulation and the spread of fake news, we perhaps have another avenue by which to address the problem, namely via an increased attention on our own mental well-being. It has become a mainstay of pandemic advice that one should pay particular attention to one’s mental health, especially when one is feeling stressed and isolated. While this is good advice in general (and indeed, is good advice regardless of whether there is a pandemic or not) it may have the added benefit of reducing the spread of coronavirus-related fake news and conspiracy theories: managing one’s stress may help manage one’s ability to critically engage with information one receives online to the best of one’s ability.