
Children Deserve Less Screen Time in Schools


School closures because of COVID-19 should teach us a lot about the future of educational reform. Unfortunately, we aren’t learning the lessons we should.

For many years, educational innovators championed visions of personalized learning. What these visions have in common is the belief that one-size-fits-all approaches to education aren’t working, and that technology can do better.

Before we place too much hope in technological solutions to educational problems, we need to think seriously about COVID school closures. School districts did their best to mobilize technological tools to help students, but we’ve learned that students are now further behind than ever. School closures offered the perfect opportunity to test the promise of personalized learning. And though technology-mediated solutions are most certainly not to blame for learning loss, they didn’t rise to the occasion.

To be clear, personalized learning has its place. But when we think about where to invest our time and attention when it comes to the future of schooling, we must expand where we look.

I’ve felt this way for many years. The New York Times published an article back in 2011 reporting on the rise of Waldorf schooling in Silicon Valley. While technologists were selling the idea that technology would revolutionize learning, they were making sure their own children stayed far away from screens, especially in schools. They knew then what we are slowly finding out: technology, especially social media, has the power to harm student mental health. It also has the potential to undermine democracy. Unscrupulous agents are actively targeting teenagers, teaching them to hate themselves and others.

It is surprising that, in all the calls for parents to take back the schools, most of the attention is paid to what is and isn’t in libraries and what is and isn’t being assigned. Why do Toni Morrison’s novels provoke so much vitriol, while the fact that kids, starting in kindergarten, watch so many inane “educational videos” on their “smart boards” provokes none?

What is more, so many school districts provide children with laptops and some even provide mobile hotspots so children can always be online. We are getting so upset about what a student might read that we neglect all the hateful, violent, and pornographic images and texts immediately available to children and teenagers through school-issued devices.

Parents are asking what is assigned and what is in libraries when they should ask: How many hours a day do my children spend on a screen? And if they are spending a great deal of time on their screens: What habits are they developing?

If we focused on these questions, we’d see that our children are spending too much time on screens. We’d learn that our children are shying away from work that is challenging because they are used to thinking that learning must be fun and tailored to them.

We must get children outside their comfort zones through an encounter with content that provokes thinking. Playing games, mindlessly scrolling, and responding to personalized prompts don’t get us here. What we need is an education built on asking questions that provoke conversation and engagement.

Overreliance on technology has a narrowing function. We look for information that is easy to assimilate into our preferred ways of thinking. By contrast, a good question is genuinely disruptive, just as a good conversation leaves us less sure of what we thought we knew and more interested in learning about how and what other people think.

In our rush for technological solutions, we’ve neglected the art of asking questions and cultivating conversation. Expecting personalized learning to solve our problems, we’ve forgotten the effort it takes to engage each other in genuine conversation.

Rather than chase the next technological fix — fixes that failed us when we needed them most — we should invest in the art of conversation. Not only will this drive deeper learning, but it can help address the mental health crisis and rising polarization, because conversation teaches students that they are more than what any algorithm thinks they are.

Real conversation reminds us that we are bigger than we can imagine, and this is exactly what our children deserve and what our society needs. Classrooms need to move students away from an overreliance on technology and into a passionate engagement with their potential and the wonder of new and difficult ideas.

Our screens are driving us into smaller and smaller worlds. This makes us sad, angry, anxious, and intolerant. Many young people don’t have the willpower to put the screen down, so schools need to step up. Initially, it won’t be easy. Screens are convenient pacifiers. But our children shouldn’t be pacified; they deserve to be engaged. And this is where we need to devote energy and attention as we approach the upcoming academic year.

Indeterminate Fear: Moral Panics and Floating Signifiers


Fear is a powerful tool, and it can distort our moral landscape. When social anxiety over an issue reaches the level of a moral panic, fear does not remain solely a private emotion. Rather, it is a driver of political action, potentially serving as a basis for hatred and, at times, violence. Think of the recent attack at the University of Waterloo, in which a former student stabbed a professor and two others during a gender studies class. As police stated in their press release, “investigators believe this was a hate-motivated incident related to gender expression and gender identity.” It’s hard to imagine that the widespread anxiety over transgender issues didn’t play a role.

A moral panic is widespread fear of some apparent evil that threatens one’s deeply-held values or way of life. Potential examples in recent memory include panics over rock ‘n’ roll in the 1950s, new age spiritualism in the ‘90s, and postmodernism in the ‘00s. Current contenders include fear over the “social contagion” surrounding gender identity, as well as fears over Critical Race Theory and wokeism (even in M&Ms).

How do moral panics work? What can an understanding of them tell us about the issues our society treats as important? And might we have reasons to doubt our feelings of fear? To begin to address these questions, let’s consider one use of language that often arises in a moral panic: what’s known as a floating signifier.

A floating signifier is a term whose meaning is so broad and ill-defined as to possibly encompass a number of issues, claims, and events. As may be gathered from the metaphor, what a floating signifier means (signifies) can’t quite be tied down. Its specifics are left to be determined by each individual.

Moral panics often rely centrally on a word that serves as a floating signifier, and these terms are common in political language. “Woke” is a good example. The term “woke” broadly means something like socially progressive, but its specifics are few. As former president Donald Trump remarked, “woke” is “just a term they use. Half the people can’t even define it. They don’t know what it is.” But not knowing what it is doesn’t stop people from using the term to express their beliefs or garner political support.

In what follows, I’ll discuss some of the features and consequences of floating signifiers in a moral panic.

First, floating signifiers can stoke fear through underspecification. Think of a horror film, where suspense is built against an unknown and unlocatable threat. Why don’t horror films reveal the fearsome monster right away? One explanation is that, when we don’t know who or what is out there, our imaginations can fill in the blanks with something dreadful — or, if you’re like me in this respect, you may find the very blankness of the object itself to be a source of fear. When the monster is not yet terrible in one way or another, it is terrible full stop. By being unspecified, its fearsomeness is also unlimited.

In a subtler way, floating signifiers in a moral panic can also rely on underspecification to stoke fear, often together with more explicit fear-inducing language. Consider the term “woke mind virus,” which sounds like something out of science fiction, or political commentator Matt Walsh’s recent tweet saying “feminism has killed far more people than the atomic bomb. It is perhaps the most destructive force in human history.” The line doesn’t work as well rhetorically if you substitute the common definition of feminism as the pursuit of equal treatment for all people regardless of gender. Rather than discussing specific claims, the broad and amorphous term “feminism” is meant to bring up the specter of a threatening ideology.

Second, floating signifiers can reduce critical engagement with actual claims and ideas — especially in an age of social media. One who is under the impression that Critical Race Theory is a pernicious ideology that “says that white people are inherently bad or evil” is not likely to consider — or even encounter — what that theory actually claims. Floating signifiers allow for quick signaling and dissemination of ideas on social and traditional media, but they do so in such a way that what matters is not what those ideas are. What matters for their political purpose is what the fear of those ideas can do in terms of public approval, votes, and legislation — or in other words, in terms of power.

This lack of informativeness is not accidental. Floating signifiers aren’t meant to inform; rather, they serve as buzzwords that call in a group by signaling a common enemy: feminists, the woke mob, anyone who threatens “our” values. This function leads us to their third feature: floating signifiers give the appearance of full agreement and unity among participants, without actually requiring full agreement. When the enemy is broadly and loosely defined, “we” can all share a common enemy — each facing the enemy that arises in their own mind. Thus, floating signifiers galvanize and mobilize groups more than specifics often could, since they don’t require specifics to be agreed on, or even specified in the first place.

So what is there to do? On a structural level, the situation is somewhat grim. Some of the factors that contribute to moral panics are a part (or a consequence) of what philosopher C. Thi Nguyen calls a hostile epistemic environment: a set of external factors that exploit our cognitive limitations and vulnerabilities. Human beings are always subject to the limitations placed on us by our own time constraints and mental capacities. We can’t investigate every claim we encounter for ourselves. We must rely on experts. We are embedded in communities of trust that we rely on for good information. And our society — technology, media, and social institutions — can exploit these limitations in order to increase engagement and therefore ad revenue. Social media’s role in spreading news and ideology makes it a major driver of moral panics, and its influence is not easy to quell. Sensation sells. Outrage garners engagement. It’s hard to see a way out.

At the individual and community level, however, it is easier to imagine some actions that could help in the face of indeterminate fear. I’ll end with a few brief suggestions.

We can examine our own beliefs and preconceptions: Do I actually know what this word means? Do the claims I’m reading seem exaggerated or extreme? What do the people I disagree with actually believe? We can hold each other accountable, remaining willing to discuss uncomfortable issues. As philosopher Barrett Emerick encourages, we can love each other enough not to give up on our loved ones’ moral development, as we hope they won’t give up on our own.

Perhaps simplest (though not easiest) of all, we can take time away from the source of the moral panic. We can try to reduce its influence in our own lives and the lives of those around us. After all, a break from the fearsome engine of social media and in the company of loving friends or family — what the too-online refer to as “touching grass” — is often good for the spirit.

Kids and Social Media: Why the First Amendment Argument Fails


Utah’s recent push for legal restrictions on the social media consumption of minors represents the most aggressive legislation of its kind to date. Of course, many other countries have placed stringent restrictions on the social media usage of their citizens, but the United States has been reluctant to follow suit. The reasons why a liberal society might be hesitant to restrict citizens’ access to these platforms are obvious enough. The United States enjoys a Bill of Rights that legally ensures the freedom of speech, and because social media platforms serve as an important mechanism for exercising one’s freedom of speech in the modern world, restricting citizens’ access to these platforms might be deemed unconstitutional. Additionally, insofar as political liberalism calls for governments to make minimal value judgments, heavy-handed restrictions in the name of state paternalism are often undesirable. Thus, we’ve landed as a society in a position where the negative impacts of social media usage are well-known, but there is no consensus on an appropriate remedy.

Due to the concerns mentioned above, I think there are strong reasons to refrain from legal intervention with the social media usage of adults. However, the picture gets more complicated when considering minors. There is strong legal precedent for limiting children’s access to certain products before they reach a particular stage of cognitive maturity. For example, the United States limits alcohol and tobacco consumption to those twenty-one or older, as well as places age restrictions on purchasing weapons and driving cars. Virtually no Americans advocate for completely abolishing these restrictions, making us functionally committed to the notion that certain rights enjoyed by adults should not be granted to children.

There might very well be compelling arguments against the legal regulation of social media usage for children. However, one of the most commonly utilized arguments against such regulations — the argument from the First Amendment — stands on shaky ground. The First Amendment is composed of five distinct rights: the rights to the free exercise of religion, freedom of speech, freedom of the press, peaceful assembly, and government petition. Those who believe the First Amendment precludes placing restrictions on the social media accounts of children claim that minors’ freedom of expression is constitutionally protected, and thus that such restrictions are unconstitutional. The New Yorker recently published a piece arguing for this position, and similar arguments can also be found here and here. While such a stance is understandable, the argument ultimately rests on an implausible interpretation of the scope of the First Amendment.

Obviously, there are many nuances involved in theories of constitutional interpretation, but on any viable interpretive framework, special constraints apply to minors that do not apply to adults. With children, the exercise of a number of constitutionally protected rights is constrained in various ways, and the extent to which children are able to exercise any particular right is determined by a number of factors, including the risks associated with the expression of that right, as evidenced by the categorical exclusion of children from the right to bear arms. Of course, the right to bear arms is not the only right that children cannot fully exercise. We can also consider the nature of other First Amendment rights, such as the freedom to assemble and the freedom of religious practice. It is clear enough there is at least some sense in which the right to peaceful assembly applies to children. Minors can meet up in groups and can even attend political protests. However, a child’s right to peaceful assembly is clearly also constrained by parental consent. For example, law enforcement is permitted to limit an eight-year-old’s right to protest if the child’s parents have not consented to her being present.

Furthermore, while it is true that children bear a right against a government-imposed religion, children oftentimes have a religion imposed on them by others. A child’s ability to seek out the religion of her choice is, in practice, highly limited by her upbringing and family of origin. For instance, if a child grows up in a conservative Jewish family, the child is likely compelled to engage in the practices associated with the Jewish tradition. Families are legally permitted to exercise a certain degree of coercive power over their children, which shapes the degree and extent to which they practice a particular religion. Probably only a small minority of people would contend that this constitutes a rights violation against the child, while most people tend to agree that an adult being coerced (even if non-governmentally) to practice a particular religion does constitute a rights infringement of some kind.

The right to free speech seems to function quite like the rights of assembly and religion in that there is certainly some sense in which children have a right to free expression. The Supreme Court has ruled on a number of cases pertaining directly to the issue of free speech and minors. One of the most influential of these cases is Tinker v. Des Moines Independent Community School District, where the Court ruled that minors have a right to self-expression in schools insofar as it is not highly disruptive of the academic environment. While, in this particular case, the Court ruled in favor of the free speech rights of K-12 students, the Court has historically decided that college undergraduates (i.e., legal adults) enjoy greater free speech protections than do younger students. More specifically, there are various cases where the Court appeals to age-based considerations to defend substantive limitations on the speech of minors. One such case is Bethel School District No. 403 v. Fraser, where the Court ruled public schools can prohibit students from engaging in particularly crude or offensive speech.

If we look at the implications of the Bill of Rights, there are certain rights that simply do not apply in any meaningful sense to children due to the severity of the associated risks (e.g., the right to bear arms), as well as certain rights (e.g., the freedom of assembly and the freedom of religious practice) which apply in a limited way to children. My argument is that the right to free speech falls into this latter category. While there is clear legal precedent that children are allowed to freely express themselves to a certain degree, there is also strong precedent for reducing the scope of that right. For this reason, simple appeal to the First Amendment is insufficient as an argument against the type of legislation proposed by Utah.

This is, of course, not to imply that such legislation is entirely legally and morally straightforward. Perhaps a legitimate concern is that allowing legal restrictions of social media in the case of minors will have a slippery slope effect, eventually endangering the free speech rights of adults. Another potential route to striking down the Utah bill is to argue for the expansion of the free speech rights articulated in cases like Tinker v. Des Moines. Whether the types of restrictions proposed by Utah constitute a viable solution to the negative impacts of social media on young people’s lives is a debate that will need to be settled both in the courts of law and in the court of public opinion over the coming months and years.

Due Attention: Addictive Tech, the Stunted Self, and Our Shrinking World


In his recent article, Aaron Schultz asks whether we have a right to attentional freedom. The consequences of a life lived with our heads buried in our phones – consequences not only for individuals but for society at large – are only becoming more and more visible. At least partly to blame are tech’s (intentionally) addictive qualities, and Schultz documents the way AI attempts to maximize our engagement by taking an internal X-ray of our preferences while we surf different platforms. Schultz’s concern is that as better and better mousetraps get built, we see more and more of our agency erode each day. Someday, we’ll come to see the importance of attentional freedom – freedom from being reduced to prey for these technological wonders. Hopefully, that occurs before it’s too late.

Attention is a crucial concept to consider when thinking about ourselves as moral beings. Simone Weil, for instance, claims that attention is what distinguishes us from animals: when we pay attention to our body, we aim at bringing consciousness to our actions and behaviors; when we pay attention to our mind, we strive to shut out intrusive thoughts. Attention is what allows us, from a theoretical perspective, to avoid errors, and from a moral, practical perspective, to avoid wrongdoing.

Technological media capture our attention in an almost involuntary manner. What often starts as a simple distraction – TikTok, Instagram, video games – may quickly lead to addiction, triggering compulsive behaviors with severe implications.

That’s why China, in 2019, imposed limits on gaming and social media use. Then, in 2021, in an attempt to further curb the mental and physical health problems of its young population, stricter limits were placed on online gaming during school days, and children and teenagers’ use was limited to one hour a day on weekends and holidays.

In Portugal, meanwhile, there is a crisis among children who, from a very young age, are being diagnosed with addiction to online gaming and gambling – an addiction which compromises their living habits and routines, such as going to school, being with others, or taking a shower. In Brazil, a recent study showed that 28% of adolescents exhibit signs of hyperactivity and mental disorder from tech use, to the point that they forget to eat or sleep.

The situation is no different in the U.S., where a significant part of the population uses social media and young people spend most of their time in front of a screen, developing a series of mental conditions inhibiting social interaction. Between online gaming and social media use, we are witnessing a new kind of epidemic that attacks the very foundations of what it is to be human, to be able to relate to the world and to others.

The inevitable question is: should Western countries follow the Chinese example of controlling tech use? Should it be the government’s job to determine how many hours per day are acceptable for a child to remain in the online space?

For some, the prospect of Big Brother’s protection might look appealing. But let us remember Tocqueville’s warning of the despotism and tutelage inherent in this temptation – of making the State the steward of our interests. Not only is the strategy paternalistic, curbing one’s autonomy and the freedom to make one’s own choices, but it is also totalitarian in its predisposition, permitting the State control of one more sphere of our lives.

This may seem an exaggeration. Some may think that the situation’s urgency demands the strong hand of the State. However, while an unrestrained use of social media and online gaming may have severe implications for one’s constitution, we should recognize the problem for what it is. Our fears concerning technology and addiction are merely a symptom of another more profound problem: the difficulty one has in relating to others and finding one’s place in the world.

What authors like Hannah Arendt, Simone Weil, Tocqueville, and even Foucault teach us is that the construction of our moral personality requires laying roots in the world. Limiting online access will not, by itself, resolve the underlying problem. You may actually end up throwing children into an abyss of solitude and despair by exacerbating the difficulties they have in communicating. We must ask: how might we rescue the experience of association, of togetherness, of sharing physical spaces and projects?

Here is where we go back to the concept of attention. William James described attention as the

taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness is of its essence. It implies withdrawal from some things in order to deal effectively with others. 

That is something that social media, despite catching our (in)voluntary attention, cannot give us. So, our withdrawal into social media must be compensated for with a positive program of attentive activity: to (re)learn how to look, interpret, think, and reflect upon things, and most of all to (re)learn how to listen and be with others. More than 20 years ago, Robert Putnam documented the loss of social capital in Bowling Alone. Simone Weil detailed our sense of “uprootedness” fifty years prior to that. Unfortunately, today we’re still looking for a cure that will have us trading in our screens for something that we can actually do attentively together. Legislation alone is unlikely to fill that void.

Is It Time to Nationalize YouTube and Facebook?


Social media presents several moral challenges to contemporary society on issues ranging from privacy to the manipulation of public opinion via adaptive recommendation algorithms. One major ethical concern with social media is its addictive tendencies. For example, Frances Haugen, the whistleblower from Facebook, has warned about the addictive possibilities of the metaverse. Social media companies design their products to be addictive because their business model is based on an attention economy. Governments have struggled with how to respond to the dangers social media creates, weighing measures such as independent oversight bodies and new privacy regulations to limit its power. But does the solution to this problem require changing the business model?

Social media companies like Facebook, Twitter, YouTube, and Instagram profit from an attention economy. This means that the primary product of social media companies is the attention of the people using their service, which these companies can leverage to make money from advertisers. As Vikram Bhargava and Manuel Velasquez explain, because advertisers represent the real customers, corporations are free to be more indifferent to their users’ interests. What many of us fail to realize is that,

“built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions…the companies do not care whether it is better or worse for the user because the user does not matter; the user’s interests do not figure into the social media company’s decision making.”

As a result of this business model, social media is often designed with persuasive technology mechanisms. Intermittent variable rewards, nudging, and the erosion of natural stopping cues help create a kind of slot-machine effect, and the use of adaptive algorithms that take in user data in order to customize the user experience only reinforces this. Consequently, many experts have increasingly recognized social media addiction as a problem. A 2011 survey found that 59% of respondents felt they were addicted to social media. As Bhargava and Velasquez report, social media addiction mirrors many of the behaviors associated with substance addiction, and neuroimaging studies show that the same areas of the brain are active as in substance addiction. It is also well known that the introduction of social media has been followed by a marked increase in teenage suicide, a potential consequence of this addiction.

But is there a way to mitigate the harmful effects of social media addiction? Bhargava and Velasquez suggest that things like addiction warnings, or prompts that make the platforms easier to quit, could be important steps. Many have argued that breaking up social media companies like Facebook is necessary because they function like monopolies. However, it is worth considering that breaking up such businesses to increase competition in a field centered around the same business model may not help. If anything, greater competition in the marketplace may only yield new and “innovative” ways to keep people hooked. If the root of the problem is the business model, perhaps it is the business model which should be changed.

For example, since in an attention economy business model users of social media are not the customers, one way to make social media companies less incentivized to addict their users is to make them customers. Should social media companies using adaptive algorithms be forced to switch to a subscription-based business model? If customers paid for Facebook directly, Facebook would still have an incentive to provide a good experience for users (now being their customers), but it would have less incentive to focus its efforts on monopolizing users’ attention. Bhargava and Velasquez, for example, note that on a subscription streaming platform like Netflix, it is immaterial to the company how much users watch; “making a platform addictive is not an essential feature of the subscription-based service business model.”

But there are problems with this approach as well. As I have described previously, social media companies like Meta and Google have significant abilities to control knowledge production and knowledge communication. Even with a subscription model, social media companies’ ability to manipulate public opinion would remain. Nor would the change necessarily solve problems relating to echo-chambers and filter bubbles. It may also mean that the poorest members of society would be unable to afford social media, essentially excluding certain socioeconomic groups from the platform. Is there another way to change the business model and avoid these problems?

In the early 20th century, during the rise of a new mass media and its advertising, many believed that this new technology would be a threat to democracy. The solution was public broadcasting such as PBS, BBC, and CBC. Should a 21st-century solution to the problem of social media be similar? Should there be a national YouTube or a national Facebook? Certainly, such platforms wouldn’t need to be based on an attention economy; they would not be designed to capture as much of their users’ attention as possible. Instead, they could be made available for all citizens to use and contribute to for free, without a subscription.

Such a platform would not only give the public greater control over how its algorithms operate, but would also give users greater control over privacy settings. The platform could also be designed to strengthen democracy. Instead of having a corporation like Google determine the results of your video or news search, for instance, the public itself would have a greater say about what news and information is most relevant. It could also bolster democracy by ensuring that recommendation algorithms do not create echo-chambers; users could be exposed to a diversity of posts or videos that don’t necessarily reflect their own political views.

Of course, such a proposal carries problems as well. The cost might be significant; however, a service that replicates the positive social benefits without the “innovative” and expensive process of creating addictive algorithms may partially offset it. Also, depending on the nation, such a service could be subject to abuse. Just as there is a difference between public broadcasting and state-run media (where the government has editorial control), the service would lose its purpose if all content on the platform were controlled directly by the government. Something more independent would be required.

However, another significant minefield for such a project would be agreeing on community standards for content. Obviously, the point would be not to allow the platform to become a breeding ground for misinformation, and so clear standards would be necessary. At the same time, in the greater democratic interest of breaking free from our echo-chambers, the public would have to accept that others may post, and they may see, content they consider offensive. We need to be exposed to views we don’t like. In a post-pandemic world, this is a larger public conversation that needs to happen, regardless of how we choose to regulate social media.

Why Don’t People Cheat at Wordle?


By now, you’ve probably encountered Wordle, the colorful daily brainteaser that gives you six attempts to guess a five-letter word. Created in 2020 by Josh Wardle, the minimalistic website has gone viral in recent weeks as players have peppered their social media feeds with the game’s green-and-yellow boxes. To some, the Wordle craze is but the latest passing fad capturing people’s attention mid-pandemic; to others, it’s a window into a more thoughtful conversation about the often social nature of art and play.

Philosopher of games C. Thi Nguyen has argued that a hallmark feature of games is their ability to crystallize players’ decision-making processes, making their willful (and reflexive) choices plain to others; to Nguyen, this makes games a “unique art form because they work in the medium of agency.” I can appreciate the tactical cleverness of a game of chess or football, the skillful execution of a basketball jump shot or video game speedrun, or the imaginative deployment of unusual forms of rationality towards disposable ends (as when we praise players for successfully deceiving their opponents in a game of poker or Mafia/Werewolf, despite generally thinking that deception is unethical) precisely because the game’s structure allows me to see how the players are successfully (and artistically) navigating the game’s artificial constraints on their agency. In the case of Wordle, the line-by-line, color-coded record of each guess offers a neatly packaged, easily interpretable transcript of a player’s engagement with the daily puzzle: as Nguyen explains, “When you glance at another player’s grid you can grasp the emotional journey they took, from struggle to likely victory, in one tiny bit of their day.”

So, why don’t people cheat at Wordle?

Surely, the first response here is to simply reject the premise of the question: it is almost certainly the case that some people do cheat at Wordle in various ways, or even lie about or manipulate their grids before sharing them on social media. How common such misrepresentations are online is almost impossible to say.

But two facets of Wordle’s virality on social media suggest an important reason for thinking that many players have strong reasons to authentically engage with the vocabulary game; I have in mind here:

  1. the felt pressure against “spoiling” the daily puzzle’s solution, and
  2. the visceral disdain felt by non-players at the ubiquity of Wordle grids on their feeds.

In the first case, despite no formal warning presented by the game itself (and, presumably, no “official” statement from either Wordle’s creator or players), there exists a generally unspoken agreement online to avoid giving away puzzle answers. Clever sorts of innuendo and insinuation are frequent among players who have discovered the day’s word, as are meta-level commentaries on the mechanics or difficulty-level of the latest puzzle, but a natural taboo has arisen against straightforwardly announcing Wordle words to one’s followers (in a manner akin to the taboo against spoiling long-awaited movie or television show plots). In the second case, social media users not caught up in Wordle’s grid have frequently expressed their annoyance at the many posts filled with green-and-yellow boxes flying across their feeds.

Both of these features seem to be grounded in the social nature of Wordle’s phenomenology: it is one thing to simply play the game, but it is another thing entirely to share that play with others. While I could enjoy solving Wordle puzzles privately without discussing the experience with my friends, Wordle has become an online phenomenon precisely because people have fun doing the opposite: publicly sharing their grids and making what Nguyen calls a “steady stream of small communions” with other players via the colorful record of our agential experiences. It might well be that the most fun part of Wordle is not simply the experience of cleverly solving the vocab puzzle, but of commiserating with fellow players about their experiences as well; that is to say, Wordle might be more akin to fishing than to solving a Rubik’s cube — it’s the story and its sharing that we ultimately really care about. Spoiling the day’s word doesn’t simply solve the puzzle for somebody, but ruins their chance to engage with the story (and the community of players that day); similarly, the grids might frustrate non-players for the same reason that inside jokes annoy those not privy to the punchline — they underline the person’s status as an outsider.

So, this suggests one key reason why people might not want to cheat at Wordle: it would entail not simply fudging the arbitrary rule set of an agency-structuring word game, but would also require the player to violate the very participation conditions of the community that the player is seeking to enjoy in the first place. That is to say, if the fun of Wordle is sharing one’s real experiences with others, then cheating at Wordle is ultimately self-undermining — it gives you the right answer without any real story to share.

Notice one last point: I haven’t said anything here about whether or not it’s unethical to cheat at Wordle. In general, you’ll probably think that your obligations to tell the truth and avoid misrepresentation will apply to your Wordle habits in roughly the same way that they apply elsewhere (even if you’re not unfairly disadvantaging an opponent by cheating). But my broader point here is that cheating at Wordle doesn’t really make sense — at best, cheating might dishonestly win you some undeserved recognition as a skilled Wordle player, but it’s not really clear why you might care about that, particularly if the Wordle community revolves around communion more so than competition.

Instead, swapping Wordle grids can offer a tantalizing bit of fun, authentic connection (something we might particularly crave as we enter Pandemic Year Three). So, pick your favorite starting word (mine’s “RATES,” if you want a suggestion) and give today’s puzzle your best shot; maybe we’ll both guess this one in just three tries!

The Morality of “Sharenting”


The cover of Nirvana’s Nevermind — featuring a naked baby diving after a dollar bill in a pool of brilliant, blue water — is one of the most iconic of the grunge era, and perhaps of the ‘90s. But not everyone looks back on that album with fond nostalgia. Just last week, Spencer Elden — the man pictured as the baby on that cover — renewed his lawsuit against Nirvana, citing claims of child pornography.

Cases like this are nothing new. Concerns regarding the exploitation of children in the entertainment industry have existed for, well, as long as the entertainment industry. What is new, however, is the way in which similar concerns might be raised for non-celebrity children. The advent of social media means that the public sharing of images and videos of children is no longer limited to Hollywood. Every parent with an Instagram account is capable of doing this. The practice even has a name: sharenting. Indeed, those currently entering adulthood are unique in that they are the first generation to have had their entire childhoods shared online — and some of them aren’t very happy about it. So it’s worth asking the question: is it morally acceptable to share imagery of children online before they can give their informed consent?

One common answer to this question is to say that it’s simply up to the parent or guardian. This might be summed up as the “my child, my choice” approach. Roughly, it relies on the idea that parents know what is in the best interests of their child, and therefore reserve the right to make all manner of decisions on their behalf. As long as parental consent is involved whenever an image or video of their child is shared, there’s nothing to be concerned about. It’s a tempting argument, but it doesn’t stand up to scrutiny. Being a parent doesn’t provide you with the prerogative to do whatever you want with your child. We wouldn’t, for example, allow parental consent as a justification for child labor or sex trafficking. If every parent did know what was best for their child, there wouldn’t be a need for institutions like Child Protective Services. Child abuse and neglect wouldn’t exist. But they do. And that’s because sometimes parents get things wrong. The “my child, my choice” argument, then, is not a good one. So we must look for an alternative.

We might instead take a “consequentialist” approach — that is, to weigh up the good consequences and bad consequences of sharenting to see if it results in a net good. To be fair, there are many good things that come from the practice. For one, social media provides an opportunity for parents to share details of a very important part — perhaps the most important part — of their lives. In doing so, they are able to strengthen their relationships with family, friends, and other parents, bonding with — and learning from — each other along the way. Such sharing also enables geographically distant loved ones to be more involved in a child’s life. This is something that’s become even more important in a world that has undergone unprecedented travel restrictions as a result of the COVID-19 pandemic.

But the mere existence of these benefits is not enough to justify sharenting. They must be weighed against the actual and potential harms of the practice. And there are many. Sharing anything online — especially imagery of young children — is an enormously risky endeavor. Even images that are shared under supposedly private conditions can easily enter the public forum — either through irresponsible resharing by well-intentioned loved ones, or by the notoriously irresponsible management of our data by social media companies.

Once this imagery is in the public domain, it can be used for all kinds of nefarious purposes. But we needn’t explore such dark avenues. Many of us have a lively sense of our own privacy, and don’t want our information shared with the general public regardless of how it ends up being used. It makes sense to imagine that our children — once capable of giving informed consent — will feel the same way. Much of the imagery shared of them online involves private, personal moments intended only for themselves and those they care about. Any invasion of that privacy is a bad thing.

Which brings us to yet another way of analyzing this subject. Instead of focusing purely on the consequences of sharenting, we might instead apply what’s referred to as a “deontological” approach. One of the most famous proponents of deontology was Immanuel Kant. In its most straightforward formulation, Kant’s ethical theory tells us to always treat others as ends in themselves, never merely as means to some other end. This approach prizes respect for the autonomy of others and abhors using people for your own purposes. Thus, even if there are goods to be gained from sharenting, these should be ignored if the child — upon developing their autonomy — would wish that their private lives had never been made public.

What both the consequentialist approach and the deontological approach seem to boil down to, then, is a question of what the child will want once they are capable of giving informed consent. And this is something we can never know. They may develop into a gregarious braggart who shares every detail of their life online. But they may just as likely turn into a fiercely private individual who wants no record of their childhood — awkward and embarrassing as these always tend to be — in the digital ether. Given this uncertainty, what should parents do? It’s difficult to say, but perhaps the safest approach might be to apply some kind of “precautionary principle.” This principle states that where an unnecessary action brings a significant risk of harm, we should refrain from acting. So, given the potential harm associated with sharenting and the largely unnecessary nature of the practice (especially when similar goods can be achieved in other ways; for example, by mailing photographs to loved ones the old-fashioned way), we should respect our children’s right to privacy — at least until they can give their informed consent to having their private lives shared publicly.

On Anxiety and Activism

"The End Is Nigh" poster featuring a COVID spore and gasmask

The Plough Quarterly recently released a new essay collection called Breaking Ground: Charting Our Future in a Pandemic Year. In his contribution, “Be Not Afraid,” Joseph Keegin details some of his memories of his father’s final days and the looming role that “outrage media” played in their interactions. He writes,

My dad had neither a firearm to his name, nor a college degree. What he did have, however, was a deep, foundation-rattling anxiety about the world ubiquitous among boomers that made him—and countless others like him—easily exploitable by media conglomerates whose business model relies on sowing hysteria and reaping the reward of advertising revenue.

Keegin’s essay is aimed at a predominantly religious audience. He ends it by arguing that Christians bear a specifically religious obligation to fight off the fear and anxiety that makes humans easy prey to outrage media and other forms of news-centered absorption. He argues this partly on Christian theological grounds — namely, that God’s historical communications with humans are almost always preceded by the command to “be not afraid,” as a lack of anxiety is necessary for recognizing and following truth.

But if Keegin is right about the effects of this “deep, foundation-rattling anxiety” on our epistemic agency, then it is not unreasonable to wonder if everyone has, and should recognize, some kind of obligation to avoid such anxiety, and to avoid causing it in others. And it seems as though he is right. Numerous studies have shown a strong correlation between feeling dangerously out-of-control and the tendency to believe conspiracy theories, especially when it comes to COVID-19 conspiracies (here, here, here). The more frightening media we consume, the more anxious we become. The more anxious we become, the more media we consume. And as this cycle repeats, the media we are consuming tends to become more frightening, and less veridical.

Of course, nobody wants to be the proverbial “sucker,” lining the pocketbooks of every website owner who knows how to write a sensational headline. We are all aware of the technological tactics used to manipulate our personal insecurities for the sake of selling products and, for the most part, I would imagine we strive to avoid this kind of vulnerability. But there is a tension here. While avoiding this kind of epistemically-damaging anxiety sounds important in the abstract, the idea does not line up neatly with the ways we often talk about, and seek to advance, social change.

Each era has been beset by its own set of deep anxieties: the Great Depression, the Red Scare, the Satanic Panic, and election fears (on both sides of the aisle) are all examples of relatively recent social anxieties that led to identifiable epistemic vulnerabilities. Conspiracies about Russian spies, gripping terror over nuclear war, and unending grassroots ballot recount movements are just a few of the signs of the epistemic vulnerability that resulted from these anxieties. The solution may at first seem obvious: be clear-headed and resist getting caught up in baseless media-driven fear-mongering. But, importantly, not all of these anxieties are baseless or the result of purposeless fear-mongering.

People who grew up during the Depression often worked hard to instill an attitude of rationing in their own children, prompted by their concern for their kids’ well-being; if another economic downturn hit, they wanted their offspring to be prepared. Likewise, the very real threat of nuclear war loomed large from the 1950s through the 1980s, and many people understandably feared that the Cold War would soon turn hot. Even elementary schools held atom bomb drills, in hopes of offering students some protection in the case of an attack. One can be sure that journalists took advantage of this anxiety as a way to increase readership, but concerned citizens and social activists also tried to drum up worry, because worry motivates. If we think something merits concern, we often try to make others feel this same concern, both for their own sake and for the sake of those they may have influence over. But if such deep-seated cultural anxieties make it easier for others to take advantage of us through outrage media, conspiracy theories, and other forms of anxiety-confirming narratives, is such an approach to social activism worth the future consequences?

To take a more contemporary example, let’s look at the issue of climate change. According to a recent study, out of 10,000 “young people” (between the ages of 16 and 25) surveyed, almost 60% claimed to be “very” or “extremely” worried about climate change. 45% of respondents said their feelings about climate change affected their daily life and functioning in negative ways. If these findings are representative, surely this counts as the Generation Z version of the kind of “foundation-rattling anxiety” that Keegin observed in his late father.

There is little doubt where this anxiety comes from: news stories and articles routinely point out record-breaking temperatures, the number of species that go extinct year to year, and the climate-based causes of extreme weather patterns. Pop culture has embraced the theme, with movies like “The Day After Tomorrow,” “Snowpiercer,” and “Reminiscence,” among many others, painting a bleak picture of what human life might look like once we pass the point of no return. Unlike any other time in U.S. history, politicians are proposing extremely radical, lifestyle-altering policies in order to combat the growing climate disaster. If such anxieties leave people epistemically vulnerable to the kinds of outrage media and conspiracy theory rabbit holes that Keegin worries about, are these fear-inducing tactics to combat climate change worth it?

On the surface, it seems very plausible that the answer here is “yes!” After all, if the planet is not habitable for human life-forms, it makes very little difference whether or not the humans that would have inhabited it would have been more prone to being consumed by the mid-day news. If inducing public anxiety over the climate crisis (or any other high-stakes social challenge or danger) is effective, then the good would likely outweigh the bad. And surely genuine fear does cause such behavioral effects. Right?

But again, the data is unclear. While people are more likely to change their behavior or engage in activism when they believe some issue is actually a concern, too much concern, anxiety, or dread seems to produce the opposite (sometimes tragic) effect. For example, while public belief in, and concern over, climate change is higher than ever, actual climate change legislation has not been adopted in decades, and more and more elected officials deny or downplay the issue. Additionally, the latest surge of the Omicron variant of COVID-19 has renewed the social phenomenon of pandemic fatigue, the condition of giving up on health and safety measures due to exhaustion and hopelessness regarding their efficacy.

In an essay discussing the pandemic, climate change, and the threat of the end of humanity, the philosopher Agnes Callard analyzes this phenomenon as follows:

Just as the thought that other people might be about to stockpile food leads to food shortages, so too the prospect of a depressed, disaffected and de-energized distant future deprives that future of its capacity to give meaning to the less distant future, and so on, in a kind of reverse-snowball effect, until we arrive at a depressed, disaffected and de-energized present.

So, if cultural anxieties increase epistemic vulnerability, in addition to, very plausibly, leading to a kind of hopelessness-induced apathy toward the urgent issues, should we abandon the culture of panic? Should we learn how to rally interest for social change while simultaneously urging others to “be not afraid”? It seems so. But doing this well will involve a significant shift from our current strategies and an openness to adopting entirely new ones. What might these new strategies look like? I have no idea.

The Ethical and Epistemic Consequences of Hiding YouTube Dislikes


YouTube recently announced a major change on their platform: while the “like” and “dislike” buttons would remain, viewers would only be able to see how many likes a video had, with the total number of dislikes being viewable only by the creator. The motivation for the change is explained in a video released by YouTube:

Apparently, groups of viewers are targeting a video’s dislike button to drive up the count. Turning it into something like a game with a visible scoreboard. And it’s usually just because they don’t like the creator or what they stand for. That’s a big problem when half of YouTube’s mission is to give everyone a voice.

YouTube thus seems to be trying to protect its creators from certain kinds of harms: not only can it be demoralizing to see that a lot of people have disliked your video, but it can also be particularly distressing if those dislikes have resulted from targeted discrimination.

Some, however, have questioned YouTube’s motives. One potential motive, addressed in the video, is that YouTube is removing the public dislike count in response to some of their own videos being overwhelmingly disliked (namely, the “YouTube Rewind” videos and, ironically, the video announcing the change itself). Others have proposed that the move aims to increase viewership: after all, videos with many more dislikes than likes are probably going to be viewed less often, which means fewer clicks on the platform. Some creators have even posited that the move was made predominantly to protect large corporations, as opposed to small creators: many of the most disliked videos belong to corporations, and since YouTube has an interest in maintaining a good relationship with them, they would also have an interest in restricting people’s ability to see how disliked their content is.

Let’s say, however, that YouTube’s motivations are pure, and that the company really is primarily intending to prevent harms by removing the public dislike count on videos. A second criticism has come in the form of the loss of informational value: the number of dislikes on a video can give viewers a sense of whether its content is accurate. The dislike count is, of course, far from a perfect indicator of video quality, because one can dislike a video for reasons that have nothing to do with the information it contains: again, in instances in which there have been targeted efforts to dislike a video, dislikes won’t tell you whether it’s really a good video or not. On the other hand, there do seem to be many cases in which looking at the dislike count can let you know if you should stay away: videos that are clickbait, misleading, or generally poor quality can often quickly and easily be identified by an unfavorable ratio of likes to dislikes.

A worry, then, is that without this information, one may be more likely not only to waste one’s time watching low-quality or inaccurate videos, but also to be exposed to misinformation. For instance, consider a class of clickbait videos prevalent on YouTube: those in which people make impressive-looking crafts or food through a series of improbable steps. Seeing that a video of this type has received a lot of dislikes helps the viewer contextualize it as something that’s perhaps just for entertainment value, and should not be taken seriously.

Should YouTube continue to hide dislike counts? In addressing this question, we are perhaps facing a conflict in different kinds of values: on the one hand, you have the moral value of protecting small or marginalized creators from targeted dislike campaigns; on the other hand, you have the epistemic disvalue of removing potentially useful information that can help viewers avoid believing misleading information (as well as the practical value of saving people the time and effort of watching unhelpful videos). It can be difficult to try to balance different values: in the case of the removal of public dislike counts, the question becomes whether the moral benefit is strong enough to outweigh the epistemic detriment.

One might think that the epistemic detriments are not, in fact, too significant. In the video released by YouTube, this issue is addressed, if only very briefly: referring to an experiment conducted earlier this year in which public dislike counts were briefly removed from the platform, the spokesperson states that the teams had considered how dislikes give viewers “a sense of a video’s worth.” He then states that,

[W]hen the teams looked at the data across millions of viewers and videos in the experiment they didn’t see a noticeable difference in viewership regardless of whether they could see the dislike count or not. In other words, it didn’t really matter if a video had a lot of dislikes or not, they still watched.

At the end of the video, they also stated, “Honestly, I think you’re gonna get used to it pretty quickly and keep in mind other platforms don’t even have a Dislike button.”

These responses, however, are non-sequiturs: whether viewership increased or decreased does not say anything about whether people are able to judge a video’s worth without a public dislike count. Indeed, if anything, it reinforces the concern that people will be more likely to consume content that is misleading or of low informational value. That other platforms do not have dislike buttons is also irrelevant: it may very well just mean that it is difficult to evaluate the quality of information on those platforms. Furthermore, users on platforms such as Twitter have found other ways to express that a given piece of information is of low value, for example by ensuring that a tweet has a high ratio of responses to likes, something that seems much less likely to be effective on a platform like YouTube.

Even if YouTube does, in fact, have the primary motivation of protecting some of its creators from certain kinds of harms, one might wonder whether there are better ways of addressing the issue, given the potential epistemic detriments.

The Ethics of Protest Trolling

image of repeating error windows

There is a new Trump-helmed social media site being developed, and it’s been getting a lot of attention from the media. Called “Truth Social,” the site and associated app initially went up for only a few hours before being taken offline due to trolling. Turns out, the site’s security was not exactly top-of-the-line: users were able to claim handles that one would think would have been reserved for others – including “donaldjtrump” and “mikepence” – and then use their new accounts to post a variety of images that few people would want associated with their name.

This isn’t the first time a far-right social media site has been targeted by internet pranksters. Upon its release, GETTR, a Twitter clone founded by one of Trump’s former spokespersons, was flooded with hentai and other forms of cartoon pornography. While a defining feature of far-right social media thus far has been a fervor for “free speech” and a rejection of “cancel culture,” it is clear that such sites do not want this particular kind of content clogging up their feeds.

Those familiar with the internet will recognize posting irrelevant, gross, and generally not-suitable-for-work images on sites in this manner as acts of trolling. So, here’s a question: is it morally permissible to troll?

The question quickly becomes complicated when we realize that “trolling” is not a well-defined act, and encompasses potentially many different forms of behavior. There has been some philosophical work on the topic: for example, in the excellently titled “I Wrote this Paper for the Lulz: The Ethics of Internet Trolling,” philosopher Ralph DiFranco distinguishes five different forms of trolling.

There’s malicious trolling, which is intended to specifically harm a target, often through the use of offensive images or slurs. There’s also jocular trolling, actions that are not done out of any intention to harm, but rather to poke fun at someone in a typically lighthearted manner. While malicious trolling seems to be generally morally problematic, jocular trolling can certainly also risk crossing a moral line (e.g., when “it’s just a prank, bro!” videos go wrong).

There’s also state-sponsored trolling, which was a familiar point of discussion during the 2016 U.S. elections, wherein companies in Russia were accused of creating fake profiles and posts in order to support Trump’s campaign; concern trolling, wherein someone feigns sympathy in an attempt to elicit a genuine response, for which the target is then ridiculed; and subcultural trolling, wherein someone again pretends to be authentically engaged, this time in a discussion or issue, in order to elicit genuine engagement from the target. Again, it’s easy to see how many of these kinds of acts can be morally troubling: intentionally interfering with elections and feigning sincerity to provoke someone else generally seem like the kinds of behavior that one ought not perform.

What about the kinds of acts we’re seeing being performed on Truth Social, and that we’ve seen on other far-right social media apps like GETTR? They seem to be a form of trolling, but do they fall into any of the above categories? And what should we think about their moral status?

As we saw above, trolling captures a wide variety of phenomena, and not all of them have been fully articulated. I think that the kind of trolling I’m focusing on here – i.e., that which is involved in snatching up high-profile usernames and clogging up feeds with irrelevant images – doesn’t neatly fit into any of the above categories. Instead, let’s call it something else: protest trolling.

Protest trolling has a lot of the hallmarks of other forms of trolling – it often involves acts that are meant to distract a particular target or targets, and involves what the troll finds funny (e.g., inappropriate pictures of Sonic the Hedgehog). Unlike other forms of trolling, however, it is not necessarily done in “good fun,” nor is it necessarily meant to be malicious. Instead, it’s meant to express one’s principled disagreement with a target, be it an individual, group, or platform.

Compare, for example, a protest of a new university policy that involves a student sit-in. A group of students will coordinate their efforts to disrupt the activities of those in charge, an act that expresses their disagreement with the institution, governance, and/or authority figure. The act itself is intentionally disruptive, but is not itself motivated by malice: they are not acting in this way because they want others to be harmed, even though some harm may come about as a result.

While the analogy to the case of online trolling is imperfect, there are, I think, some important similarities between a student sit-in and the flooding of right-wing social media with irrelevant content. Both are primarily meant to disrupt, without specifically intending harm, and both are directed towards a perceived threat to one’s core values. For instance, we have seen how right-wing media has been implicated in violence, both in inciting violent acts and in targeting members of marginalized groups. One might thereby be concerned that a whole social network dedicated to the expression of such views could result in similar harms, and is thus worth protesting.

Of course, in the case of online trolling there may be other intentions at play: for example, the choice of material that’s been used to disrupt these services is clearly meant to shock, gross-out, and potentially even offend its core users. Furthermore, not every such action will have principled intentions: some will simply want to jump on the bandwagon because it seems fun, as opposed to actually expressing a principled disagreement.

There are, then, many tangled issues surrounding the intentions and execution of different forms of protest trolling. However, just as many cases of real-life protesting are disruptive without being unethical, so, too, may cases of protest trolling be potentially morally unproblematic.

Praise and Resentment: The Moral of ‘Bad Art Friend’

black-and-white photograph of glamorous woman looking in mirror

The story of the “Bad Art Friend” has taken social media by storm. For those who have yet to brave the nearly 10,000-word New York Times article, here is a summary of the tale: Dawn Dorland, a writer, decided to donate one of her kidneys after completing her M.F.A. She kept her social media friends abreast of her donation and surgery, and noticed, some time afterwards, that one of her friends had failed to comment on it. Dorland wrote to the friend (Sonya Larson, herself a writer) asking her why she hadn’t said anything about Dorland’s altruistic activities. They exchanged pleasantries, Sonya praised her for her sacrifice, and all seemed well. Several months later, however, Sonya published a short story inspired by Dorland’s kidney donation, which set off a bevy of legal and relational blows involving multiple lawsuits and, potentially, ruined careers.

There are a slew of ethical issues and questions embedded in the text and subtext of this story: questions about the differences between plagiarism and inspiration, questions about appropriate boundaries in friendships and acquaintanceships, and questions about the legality and propriety of lawsuits. But a consensus seems to have emerged about the protagonist of this story: almost universally, readers are not on the side of Dawn Dorland.

Elizabeth Bruenig, in an op-ed for The Atlantic, describes Dorland as the “patron-saint” of our “social-media age,” emphasizing that the description is not a compliment. She characterizes Dorland’s initial behavior towards Larson as follows:

“Dorland, in particular, went looking for [victimhood], soliciting Larson for a reason the latter hadn’t congratulated her for her latest good deed, suspecting—rightly—a chillier relationship than collegial email etiquette would suggest. She kept seeking little indignities to be wounded by—and she kept finding them. Her retaliations quickly outpaced Larson’s offenses, such as they were.”

Bruenig is right that Dorland considered herself to be wronged by Larson’s apparent apathy. And insofar as we find it implausible that Larson really did wrong her in this way, it is understandable why Bruenig might analyze the situation as one in which Dorland sought out a kind of victimhood status. This may explain part of why Dorland’s behavior immediately turns us off — looking for victimhood, or claiming it too quickly, seems like a kind of injustice to those who really are victims of genuinely bad actions or circumstances. In diverting attention to extremely mild wrongs (if they were wrongs at all) done to herself, Dorland distracts people from truly awful situations that merit their consideration. Human attention is zero-sum: if I am paying attention to you, then I am not paying attention to something else. So, there is a consequentialist argument to be made that I should not seek out “victimhood” status and, thereby, attention, if the public’s attention would be better spent elsewhere.

Yet Bruenig’s analysis does not consider the fact that our mild disgust at Dorland begins even before she voices her complaints to Larson. It begins even before she speaks to Larson at all. It begins where Dorland seeks out praise and attention for her (admittedly very brave) act of donating her kidney. But did Dorland actually do anything wrong in seeking out praise for her praiseworthy act? Does our disgust stem from genuine moral assessment, or from a deeper kind of resentment of people who act more selflessly than we do?

The philosopher Immanuel Kant theorized that it was morally impermissible to treat others as a mere means to our own ends — we must always consider them to be intrinsically valuable creatures themselves, and our actions must reflect this. We may, therefore, think that Dorland’s seeking of praise for her donation indicates that she was using the kidney recipient as a mere means to gaining praise, popularity, or notoriety.

Still, it is not clear that Kant’s concepts would apply in this case. Dorland’s donation of her kidney indicates that, while she may have used the opportunity as a means to other social ends, she was not using the recipient merely as a means — in saving his life, she acted toward him in acknowledgement of his value as a person. There is nothing in Kant’s moral philosophy which prohibits us from using people to attain our ends, so long as we respect them as persons while doing so.

From a utilitarian perspective, seeking praise for your good works may even maximize happiness, meaning that it would be the morally correct thing to do. For example, by seeking praise for your honorable deeds, you may draw attention to what you did, encouraging others to display the same amount of selflessness and charity. Additionally, you yourself would derive happiness from the praise, and it doesn’t seem that anybody would lose happiness by praising you. Therefore, it seems that seeking such accolades may benefit everyone and harm no one.

A virtue ethical approach to the issue may seem to yield different results. After all, surely there is something unvirtuous about someone who seeks out praise for supposedly altruistic actions? Many consider humility to be a virtue, and Dorland’s constant social media updates and attention-seeking behavior seem to indicate a lack of humility in her character. Perhaps we are turned off by the desire for praise because it indicates a character vice: pompousness, perhaps, or neediness.

And yet, historically, virtue ethicists have praised the (appropriate) seeking of praise. In book four of his Nicomachean Ethics, Aristotle describes a virtue concerned with “small honors,” which we might more simply understand as the virtue of seeking to do, and be rewarded for, honorable things. Of course, Aristotle still holds that I should not seek praise for things that are not praiseworthy, nor should I act in praiseworthy ways purely for the praise. Still, seeking honor (and the praise that arguably ought to go with it) in moderate amounts is a virtue. At least for Aristotle.

There is a case to be made that our distaste for those who seek praise has a distinctly Christian origin. In Christian scriptures — specifically, the Gospel of Matthew, chapter 6 — Jesus preaches against seeking recognition for acts of charity:

“Be careful not to practice your righteousness in front of others to be seen by them. If you do, you will have no reward from your Father in heaven. So when you give to the needy, do not announce it with trumpets, as the hypocrites do in the synagogues and on the streets, to be honored by others. Truly I tell you, they have received their reward in full. But when you give to the needy, do not let your left hand know what your right hand is doing, so that your giving may be in secret. Then your Father, who sees what is done in secret, will reward you.”

In the Christian tradition, the idea is that those who seek recognition from others in the here and now eliminate their opportunity to build character and, perhaps, gain other spiritual rewards. One may have earthly, social rewards, or longer-lasting spiritual rewards, but one may not have both.

Yet I suspect there are many who would not claim Christianity but who are nevertheless repelled by the idea of someone asking for praise for donating a kidney. Those familiar with Friedrich Nietzsche’s writings will recall his extensive critique of Christian moral thought which, he wrote, “has waged deadly war against this higher type of man; placed all the basic instincts of his type under ban” (The Anti-Christ, p. 5). Nietzsche argued that traditional Christian morality — which he referred to as “slave morality” — served only to make humans weak, powerless, and full of resentment at those who were powerful and flourishing. One can imagine a Nietzschean critique of our distaste for those announcing their good deeds in the public square: perhaps, rather than a kind of virtuous disgust, what we are truly experiencing is resentment toward someone acting with more courage than we have.

No matter your opinion on Bad Art Friend and all the drama that story contains, it is worth reflecting on how we respond when someone announces their good deeds to the public. Why do we prefer discretion? What is wrong with desiring praise and honor? These questions may be worth investigating more deeply, lest we act out of ordinary human resentment rather than careful moral consideration.

What Is Cancel Culture?

image of Socrates drinking hemlock

There has been much bemoaning of “cancel culture” in recent years. The fear seems to be that there is a growing trend coming from the left to “cancel” ideas and even people that fall out of favor with proponents of left-wing political ideology. Social media and online bullying contribute to this phenomenon; people leave comments shaming “bad actors” into either apologizing, leaving social media, or sometimes just digging in further.

It’s worth taking some time to think about the history of “cancellation.” For better or for worse, cancellation is a political tool that can be used either to entrench or to disrupt the dominant power hierarchy. Ideas and people have been “canceled” as long as there have been social creatures with reactive attitudes. Humans aren’t even the only species to engage in cancel behavior. In communities of animals in which cooperative behavior is important, groups will often shun members who behave selfishly. In other cases, groups of animals may ostracize members that do not seem to respect the authority of the alpha male. What we now call “cancel culture” is just one form of the general practice of using sentiments such as approval or disapproval or praise and blame to influence behavior and shape social interactions.

One of history’s most famous cancellations was the trial and execution of Socrates, who was “canceled” in the most extreme of ways because the influence that he had over the youth of Athens posed an existential threat to those with the power in that community. The challenge that he presented was that he might encourage the younger generation to reassess values and construct a new picture of what their communities might look like. At his trial, Socrates says,

“For I do nothing but go about persuading you all, old and young alike, not to take thought for your persons and your properties, but first and chiefly to care about the greatest improvement of the soul. I tell you that virtue is not given by money, but that from virtue come money and every other good of man, public as well as private. This is my teaching, and if this is the doctrine which corrupts the youth, my influence is ruinous indeed.”

For this, he was made to drink hemlock.

Galileo was canceled for the heresy of advancing the idea that the earth revolved around the sun rather than the other way around. This view of the universe was in conflict with the view endorsed by the Catholic Church, so Galileo’s book of dialogues was prohibited, and he lived out the rest of his life under house arrest.

In the more recent past, Martin Luther King Jr. was canceled — not only by his assassination, but before that, when many of his former compatriots in the struggle for civil rights broke ranks with him over his opposition to the Vietnam War and his battle to end poverty.

Through the years, people have been “canceled” for being Christian, Pagan, Catholic, Protestant, Atheist, Gay, Female, Transgender, Communist, and Socialist. They’ve been canceled for speaking up too much or too little, for being too authentic or not authentic enough. Books have been burned, ideas have been suppressed, people’s reputations have changed with the direction of the prevailing winds. Cancellation belongs to no single political party or ideology.

Nevertheless, “cancellation” in the 21st century is presented to us as a new and nebulous phenomenon — a liberal fog that has drifted in to vaporize the flesh of anyone who harbors conservative ideas. But what does it mean, exactly, to “cancel” a person? Perhaps the most common use of the word “cancel” in an ordinary context has to do with events. If I get a cold and I cancel my philosophy courses for the day, then those courses are no longer taking place. Similarly, in the most extreme cases, to “cancel” someone is to get rid of them forever — to kill them. Socrates, Hypatia, and even Jesus were “canceled” in this way.

There are other cases of cancellation which are pretty extreme, even if they don’t result in death. Instead, the person or group might be imprisoned or otherwise punished by the government. For example, during World War II, many Japanese Americans were “canceled” and put in internment camps just for being Japanese during a time when Americans were prone to xenophobia against that particular group. Then, of course, there was the McCarthy era, when people all across the country had to worry about their lives or livelihoods being destroyed if it were discovered or even suspected that they were sympathetic to communism. This cancel culture witch hunt affected the careers of stars like Charlie Chaplin, Langston Hughes, and Orson Welles. Positive proof of membership in the party wasn’t even necessary. Of one case Joseph McCarthy famously said, “I do not have much information on this except the general statement of the agency…that there is nothing in the files to disprove his Communist connections.”

Thankfully, when we use the word “cancel” these days, we are usually referring to something less extreme. We tend to mean that a certain segment of society will no longer support the “canceled” person in various ways — they will not consume their products, enjoy their art, listen to their thoughts, or otherwise support their general platform. The most common cases are those of politicians and artists of various types. Many people no longer watch Kevin Spacey movies after learning that he frequently engaged in sexual harassment of co-workers.

The linchpin — and the feature that makes it tricky — is that cancel culture is one of the consequences of the display of people’s reactive attitudes. It is these very reactive attitudes — guilt, shame, praise, blame — that are involved in moral judgments. Such judgments also involve assessment of harm. People often point out, when attempting to hold a bad actor responsible, that the bad actor’s behavior is resulting in a serious set of bad consequences for their community. These kinds of considerations are important — they make the world a better place. We don’t want to throw the baby out with the bathwater; we don’t want to give up holding people morally responsible for their actions because we are too afraid of “canceling” the wrong person. There are cases in which cancellation seems like precisely the correct course of action. We shouldn’t continue to hold in high regard rapists and serial harassers like Bill Cosby and Harvey Weinstein. We shouldn’t support the platforms of racists and child molesters.

For these reasons, cancel culture shouldn’t be depicted as the emerging new villain in the plot of the 2020s. This culture has always been around and always will be, though, granted, it is amplified by social media and the internet. Sometimes it does some real good. The reality is that this has all become so politicized that there is unlikely to be much ideological shift on these issues. If we allow Socrates’ ancient ideas to “corrupt” our minds, we’ll keep asking questions: “Is this a power play?” “Should this behavior be tolerated?” “Is this a case that calls for compassion and understanding?” Improvement of the soul calls for nuance.

Facebook Groups and Responsibility

image of Facebook's masthead displayed on computer screen

After the Capitol riot in January, many looked to the role that social media played in the organization of the event. A good amount of blame has been directed at Facebook groups: such groups have often been the target of those looking to spread misinformation, as there is little oversight within them. Furthermore, if set to “private,” these groups run an especially high risk of becoming echo chambers, as there is much less opportunity for information to flow freely within them. The algorithms Facebook uses to populate your feed were also part of the problem: more popular groups are more likely to be recommended to others, which gave some of the more pernicious groups a much broader range of influence than they would have had otherwise. As noted recently in the Wall Street Journal, while it was not long ago that Facebook saw groups as the heart of the platform, abuses of the feature have forced the company to make some significant changes to how groups are run.

The spread of misinformation in Facebook groups is a complex and serious problem. Some proposals have been made to try to ameliorate it: Facebook itself implemented a new policy in which groups that were the biggest troublemakers – civics groups and health groups – would not be promoted during the first three weeks of their existence. Others have called for more aggressive proposals. For instance, a recent article in Wired suggested that:

“To mitigate these problems, Facebook should radically increase transparency around the ownership, management, and membership of groups. Yes, privacy was the point, but users need the tools to understand the provenance of the information they consume.”

A worry with Facebook groups, as well as a lot of communication online generally, is that it can be difficult to tell what the source of information is, as one might post information anonymously or under the guise of a username. Perhaps with more information about who was in charge of a group, then, one would be able to make a better decision as to whether to accept the information that one finds within it.

Are you part of the problem? If you’re actively infiltrating groups with the intent of spreading misinformation, or building bot armies to game Facebook’s recommendation system, then the answer is clearly yes. I’m guessing that you, gentle reader, don’t fall into that category. But perhaps you are a member of a group in which you’ve seen misinformation swirling about, even though you yourself didn’t post it. What is the extent of your responsibility if you’re part of a group that spreads misinformation?

Here’s one answer: you are not responsible at all. After all, if you didn’t post it, then you’re not responsible for what it says, or if anyone else believes it. For example, let’s say you’re interested in local healthy food options, and join the Healthy Food News Facebook group (this is not a real group, as far as I know). You might then come across some helpful tips and recipes, but also may come across people sharing their views that new COVID-19 vaccines contain dangerous chemicals that mutate your DNA (they don’t). This might not be interesting to you, and you might think that it’s bunk, but you didn’t post it, so it’s not your problem.

This is a tempting answer, but I think it’s not quite right. The reason lies in how Facebook groups work, and in how people come to find information plausible online. As noted above, sites like Facebook employ various algorithms to determine which information to recommend to their users. A big factor in such suggestions is how popular a topic or group is: the more engagement a post gets, the more likely it is to show up in your news feed, and the more popular a group is, the more likely it will be recommended to others. What this means is that mere membership in such a group will contribute to that group’s popularity, and thus potentially to the spread of the misinformation it contains.
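To illustrate the dynamic, here is a minimal sketch in Python of a popularity-weighted recommendation score. The scoring rule and its weights are invented for this sketch and bear no relation to Facebook’s actual algorithms; the point is only that passive membership and casual engagement both feed the number that determines how widely a group is promoted.

# Illustrative only: a toy scoring rule showing how passive membership
# and casual likes can raise a group's visibility. The weights below are
# arbitrary assumptions for this sketch, not Facebook's real parameters.
def visibility_score(members: int, likes: int, posts: int) -> float:
    w_members, w_likes, w_posts = 1.0, 0.5, 2.0
    return w_members * members + w_likes * likes + w_posts * posts

before = visibility_score(members=500, likes=200, posts=40)
# One more member who "just lurks" and occasionally likes posts
# still nudges the group up the recommendation ranking.
after = visibility_score(members=501, likes=210, posts=40)
print(after > before)  # True

On any rule of this general shape, there is no neutral way to belong: joining and liking are themselves inputs to the system that decides who else sees the group.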

Small actions within such a group can also have potentially much bigger effects. For instance, in many cases we put little thought into “liking” or reacting positively to a post: perhaps we read it quickly and it coheres with our worldview, so we click a thumbs-up, and don’t really give it much thought afterwards. From our point of view, liking a post does not mean that we wholeheartedly believe it, and it seems that there is a big difference between liking something and posting it yourself. However, these kinds of engagements influence the extent to which that post will be seen by others, and so if you’re not liking in a conscientious way, you may end up contributing to the spread of bad information.

What does this say about your responsibilities as a member of a Facebook group? There are no doubt many such groups that are completely innocuous, where people do, in fact, only share helpful recipes or perhaps even discuss political issues in a calm and reasoned way. So it’s not as though you necessarily have an obligation to quit all of your Facebook groups, or to get off the platform altogether. However, otherwise innocent actions like clicking “like” on a post can have much worse effects in groups in which misinformation is shared, and mere membership in such a group contributes to its popularity and thus to the extent to which it is suggested to others. Given this, if you find yourself a member of such a group, you should leave it.

QAnon and Two Johns

photograph of 'Q Army" sign displayed at political rally

In recent years, threats posed to and by free speech on the internet have grown larger and more concerning. Such problems as authoritarian regimes smothering dissent and misinformation campaigns targeting elections and public health have enjoyed quite a share of the limelight. Social media platforms have sought (and struggled) to address such challenges. Recently, a new, insidious threat posed by free speech has emerged: far-right conspiracy theories. The insurrection of January 6th unveiled the danger of speech promoting such beliefs, namely those embraced by the QAnon theory. The insurrection demonstrated that speech promoting the anti-government extremist theory can not only engender violence but existentially threaten the United States. Such speech threatens harm by manipulating individuals into believing in the necessity of violence to combat the schemes of a secretive, satanic elite. In the days following the insurrection, social media platforms rushed to combat this threat. Twitter alone removed more than 70,000 QAnon-focused accounts from its platform.

This bold but wise move was met with resistance, however. Right-wing media commentators were quick to decry this and similar policies as totalitarian censorship. Legal experts retorted that, as private entities, social media companies can restrict speech on their platform as they please. This is because the First Amendment to the U.S. Constitution protects citizens from legal restrictions on free speech, not the rules of private organizations. Such legal experts may be perfectly correct, and unequivocally siding with them might seem to offer a temptingly quick way to dismiss fanatic right-wing commentators. Nevertheless, caring only about government restrictions on speech seems perilous: such a stance neglects the great importance of social restrictions on speech.

The weight of social restrictions on speech (and behavior, more generally) is very real. Jean-Jacques Rousseau referred to such social restrictions as moral laws. He even seemed to regard this class of laws as more fundamental than the constitutional, civil, and criminal classes. Moral laws are inscribed in the very “hearts of the citizens” and include “morals, customs, and especially opinion.” Violations of these laws are typically penalized with criticism, ostracism, or both. The emergence of “cancel culture” provides conspicuous examples (for better or worse) of this structure in action, from Gina Carano to John Schnatter. First, an individual (typically, a public figure) violates a moral law (frequently, customary prohibitions on racist speech). Then, the individual receives a punishment (often, in the form of damage to reputation and career). The prohibitions on QAnon-focused Twitter accounts are a form of ostracism: those promoting QAnon beliefs have been expelled from the Twitter community for transgressing moral laws, namely those of peace (by promoting violence) and honesty (by promoting misinformation). As Twitter has become an integral forum for political discourse (politicians, like former President Trump, heavily rely on the platform to both court popular support and bash their rivals), this Twitter expulsion amounts to marginalization within, or partial expulsion from, general public discourse. In light of this, the real restrictiveness of such prohibitions on speech should be evident.

Once the real strength of social restrictions on speech is acknowledged, a certain tension becomes apparent: that between our liberties concerning speech and our liberties concerning property. To elaborate, there appears to be a tension between the free-speech interests of Twitter users and the property rights of Twitter shareholders (particularly, the right to set and enforce private restrictions on the speech shared over the platform they own). Efforts to balance the two can perhaps be aided by the wisdom of two great Johns: John Locke and Jean-Jacques Rousseau. Their writings offer some thought-provoking perspectives on the grounds and scope of each party’s freedoms.

John Locke believed that rights are derived from nature. He thought they were contained in what he called the Law of Nature: “no one ought to harm another in [their] Life, Health, Liberty, or Possessions.” Certainly, this general rule implies the rights to free speech and property. Moreover, it follows that those particular rights extend only so far as they accord with that rule. Locke’s theory can thus affirm both natural rights and natural limits to them. Stated in Lockean terms, then, the now-removed QAnon accounts apparently promoted speech which transgressed natural limits on the right to free speech (by promoting violence).

Unlike Locke, Jean-Jacques Rousseau held that rights are derived from social agreement, not nature. He held that this social agreement takes the form of continuous negotiation by all members of the “body politic”: manifold “individual wills” are boiled down into an all-binding “general will.” On this view, the rights to free speech and property extend only so far as social agreement allows. Rousseau’s theory can thus recognize the value of including diverse individuals in social discourse while also recognizing the validity of socially-established regulations on that discourse. Understood from this perspective, Twitter expelled the QAnon accounts for violating regulations on social discourse (namely, by supporting violence and thus threatening the process of discourse itself).

Locke’s and Rousseau’s perspectives can provide a useful guide to assessing the issues related to free speech and the internet. Each perspective offers a framework which seems reasonable and yet is opposed to the other. Considering both, then, should allow for multi-sided and nuanced discussion. Employing these two frameworks (and other conceivable ones), as well as considering the opinions of more recent thinkers, can potentially enrich public discourse surrounding free speech and the internet.

In the Limelight: Ethics for Journalists as Public Figures

photograph of news camera recording press conference

Journalistic ethics are the evolving standards that dictate the responsibilities reporters have to the public. As members of the press, news writers play an important role in the accessibility of information, and unethical journalistic practices can have a detrimental impact on the knowledgeability of the population. Developing technology is a major factor in changes to journalism and the way journalists navigate ethical dilemmas. Both the field of journalism and its ethics have been revolutionized by the internet.

Increased access to social media and other public platforms of self-expression has expanded the role of journalists as public figures. The majority of journalistic ethical concerns focus on journalists’ actions within the scope of their work. But as the idea of privacy changes and more people feel comfortable sharing their lives online, journalists’ actions outside of their work come under greater scrutiny. Increasingly, questions of ethics in journalism include journalists’ non-professional lives. What responsibilities do journalists have as public-facing individuals?

As a student of journalism, I am all too aware that there is no common consensus on the issue. At the publication I write for, staff members are restricted from participating in protests for the duration of their employment. In a seminar class, a professional journalist discussed workplace moratoriums they’d encountered on publicly stating political leanings and one memorable debate about whether or not it was ethical for journalists to vote — especially in primaries, on the off-chance that their vote or party affiliation could become public. Each of these scenarios stems from a common fear that a journalist will become untrustworthy to their readership due to their actions outside of their work. With less than half the American public professing trust in the media, according to Gallup polls, journalists are facing intense pressure to prove themselves worthy of trust.

Journalists have a duty to be as unbiased as possible in their reporting — this is a well-established standard of journalism, promoted by groups like the Society of Professional Journalists (SPJ). How exactly they accomplish that is changing in the face of new technologies like social media. Should journalists avoid publicizing their personal actions and opinions and opt out of personal social media? Or should they restrict those actions and opinions entirely, to avoid any risk of their becoming public? Where do we draw the lines?

The underlying assumption here is that combating biased reporting comes down to the personal responsibility of journalists to either minimize their own biases or conceal them. At least a part of this assumption is flawed. People are inherently biased; a person cannot be completely impartial. Anyone who attempts to pretend otherwise actually runs a greater risk of being swayed by these biases, because they become blind to them. The ethics code of the SPJ advises journalists to “avoid conflicts of interest, real or perceived. Disclose unavoidable conflicts.” Although this was initially written to apply to journalists’ professional lives, I believe that that short second sentence is a piece of the solution. “Disclose unavoidable conflicts.” More effective than hiding biases is being clear about them. Journalists should be open about any connections or political leanings that intersect with their field. Doing so provides the public with all the information and the opportunity to judge the issues for themselves.

I don’t mean to say that journalists should be required to make parts of their private lives public if they don’t intersect with their work. However, they should not be asked to hide them either. Although most arguments don’t explicitly suggest journalists hide their biases, they either suggest journalists avoid public action that could reveal a bias or avoid any connection that could result in a bias — an entirely unrealistic and harmful expectation. Expecting journalists to either pretend to be bias-free or to isolate themselves from the issues they cover as much as possible results in either dishonesty or “parachute journalism” — journalism in which reporters are thrust into situations they do not understand and don’t have the background to report on accurately. Fostering trust with readers and deserving that trust should not be accomplished by trying to turn people into something they simply cannot be, but by being honest about any potential biases and working to ensure the information is as accurate as possible regardless.

The divide between a so-called “public” or “professional” life and a “private” life is not always as clear as we might like, however. Whether they like it or not, journalists are at least semi-public figures, and many use social media to raise awareness for their work and the topics they cover, while also using social media in more traditional, personal ways. In these situations, it can become more difficult to draw a line between sharing personal thoughts and speaking as a professional.

In early 2020, New York Times columnist Ben Smith wrote a piece criticizing New Yorker writer Ronan Farrow for his journalism, including, in some cases, the accuracy or editorializing of tweets Farrow had posted. Despite my impression that Smith’s column was in itself inaccurate, poorly researched, and hypocritical, it raised important questions about the role of Twitter and other social media in reporting. A phrase I saw numerous times afterwards was “tweets are not journalism” — a criticism of the choice to place the same importance on, and apply the same journalistic standards to, Farrow’s Twitter account as his published work.

Social media makes it incredibly easy to share information, opinions, and ideas. It is far faster than many other traditional methods of publishing. It can be, and has been, a powerful tool for journalists to make corrections and updates in a timely manner, and to make those corrections more likely to be viewed by people who already read a story and might not check it again. If a journalist intends them to be, tweets can, in fact, be journalism.

Which brings us back to the issue of separating public from private. Labeling advocacy, commentary, and advertisement (and keeping them separated) is an essential part of ethical journalism. But which parts of these standards should be extrapolated to social media, and how? Many individuals use separate accounts to make this distinction. Having a work account and a personal account, typically with stricter privacy settings, is not uncommon. It does, however, prevent many of the algorithmic tricks people may use to make their work accessible, and accessibility is an important part of journalism. Separating personal and public accounts effectively divides an individual’s audience and prevents journalists from forming more personal connections with their audience in order to publicize their work. It also forfeits the engagement benefits of more frequent posting that come from using a single account. By being asked to abstain from a large part of what is now ordinary communication with the public, journalists are being asked to hinder their own effectiveness.

Tagging systems within social media currently provide the best method for journalists to mark and categorize these differences, but there is no “standard practice” amongst journalists on social media to help readers navigate these issues, and so long as debates about journalistic ethics outside of work focus on trying to prevent journalists from developing biases at all, there won’t be one. Adapting to social media means shifting away from the idea that personal bias can be prevented by isolating individuals from controversial issues, and toward helping readers and journalists understand, acknowledge, and deconstruct biases in media for themselves by promoting transparency and conversation.

Trump and the Dangers of Social Media

photograph of President Trump's twitter bio displayed on tablet

In the era of Trump, social media has been both the medium through which political opinions are disseminated and a subject of political controversy itself. Every new incendiary tweet feeds into another circular discussion about the role sites like Twitter and Facebook should have in political discourse, and the recent attack on the U.S. Capitol by right-wing terrorists is no different. In what NPR described as “the most sweeping punishment any major social media company has ever taken against Trump,” Twitter has banned the president from using their platform. Not long before Twitter’s announcement, Facebook banned him as well, and now Parler, the conservative alternative to Twitter, has been removed from the app store by Apple.

While these companies are certainly justified in their desire to prevent further violence, is this all too little, too late? Much in the same way that members of the current administration have come under fire for resigning with only two weeks left in office, and not earlier, it seems that social media sites could have acted sooner to squash disinformation and radical coordination, potentially averting acts of domestic terror like this one.

At the same time, there isn’t a simple way to cleanse social media sites of white supremacist violence; white supremacy is insidious and often very difficult to detect through an algorithm. This places social media sites in an unwinnable situation: if you allow QAnon conspiracy theories to flourish unchecked, then you end up with a wide base of xenophobic militants with a deep hatred for the left. But if you force conspiracy theorists off your site, they either migrate to new, more accommodating platforms (like Parler), or resort to an ever-evolving lexicon of dog-whistles that are much harder to keep track of.

Furthermore, banning Trump supporters from social media sites only feeds into their imagined oppression; what they view as “censorship” (broad social condemnation for racist or simply untrue opinions) only serves as proof that their First Amendment rights are being trampled upon. This view, of course, ignores the fact that the First Amendment is something the government upholds, not private companies, which Trump-appointee Justice Kavanaugh affirmed in the Supreme Court in 2019. But much in the same way that the Confederacy’s romantic appeal relies on its defeat, right-wing pundits who are banned from tweeting might become martyrs for their base, adding more fuel to the fire of their cause. As David Graham points out, that process has already begun; insurrectionists are claiming the status of victims, and even Republican politicians who condemn the violence in one moment tacitly validate the rage of conspiracy theorists in another.

The ethical dilemma faced by social media sites at this watershed moment encompasses more than just politics. It also encompasses the idea of truth itself. As Andrew Marantz explained in The New Yorker,

“For more than five years now, a complacent chorus of politicians and talking heads has advised us to ignore Trump’s tweets. They were just words, after all. Twitter is not real life. Sticks and stones may break our bones, but Trump’s lies and insults and white-supremacist propaganda and snarling provocations would never hurt us.” But, Marantz goes on, “The words of a President matter. Trump’s tweets have always been consequential, just as all of our online excrescences are consequential—not because they are always noble or wise or true but for the opposite reason. What we say, online and offline, affects what we believe and what we do—in other words, who we are.”

We have to rise above our irony and detachment, and understand as a nation that language is not divorced from reality. Conspiracy theories, which depend in large part on language games and fantasy, must be addressed to prevent further violence, and only an openness to truth can help us move beyond them.

Bella Thorne and Celebrities Inhabiting Shared Spaces

photograph of bella thorne on red carpet with crowd behind her

The age of technology has brought many new things into modern life, but arguably one of the most influential and important is social media. A radically new world was created online, one in which everyone around the globe can be connected within seconds no matter their location. One of the groups to take advantage of this instant connection was celebrities, as social media and online platforms allow them to connect with their fans directly and give audiences glimpses into their private lives without ever having to meet in person. This has given rise to the phenomenon of celebrity culture, in which the public can know almost any aspect of a star’s life. Some have used this trend to help build their fame and monetize their brands. While celebrities have every right to use these platforms just like any other member of the public, they enter these spaces with an unfair advantage. They arrive with a following and a brand, which usually disrupts communities made up of ordinary members of the public, some of whom depend on these platforms to make a living. There’s a fine line here for celebrities to watch, as their introduction to these spaces threatens to undermine these platforms, and perhaps eliminate, or at least adulterate, this communal space.

Recently, one platform in particular, OnlyFans, has taken over the pornography market by allowing individuals to have autonomy over what and when they create. This form of pornography can be highly personal, with subscribers getting to know the performers whose bodies and lives they are consuming. With OnlyFans, anyone who gains a following can make money through this form of sex work without having to find a studio or work in the public space. A new creator on the platform, actress Bella Thorne, who started her career with the popular Disney show “Shake It Up,” broke records within 24 hours of her appearance on the site. She announced her introduction to OnlyFans in Paper Magazine, saying she wanted to discuss “the politics behind female body shaming & sex.” Immediately, she made headlines, and her arrival inevitably began sparking conversations around sex work and female sexuality — the discussion she hoped would happen.

There are both advantages and disadvantages to a celebrity of Bella Thorne’s caliber joining OnlyFans. Sex work has historically not been seen as a valid form of work and is criminalized in most countries around the world. As a consequence of this criminalization, sex workers face specific dangers in their line of employment, which are usually ignored by politicians, police officers, and society as a whole. If celebrities begin to partake in creating this type of content, however, a normalization may begin, which could work to validate and decriminalize sex work, and possibly address those issues that sex workers face daily. This appears to be Bella Thorne’s intention behind her move to OnlyFans. But she gravely miscalculated the responsibility she had to ensure that she didn’t hurt the very community she was trying to help.

Sex workers who rely on their income from OnlyFans faced a crisis as the website suddenly changed its policies, limiting the freedom and ability of performers to make a living off the platform. The changes came directly after Thorne made her debut on the platform, although OnlyFans claims the two weren’t connected. Thorne made $1 million within her first day on the site and $2 million after the first week. She also triggered massive refunds after subscribers paid for a photo advertised as nude which, in reality, was not, leading many of them to demand their money back from OnlyFans. Shortly after, the platform set limits on how much creators could charge for their content and on the amount that consumers could give in tips to performers. Additionally, it lengthened the time before performers receive their income to 30 days. A company that was once a safe space for sex workers to earn their living is now catering to the effects of celebrities. It profits from the audience that these big names bring to the site, all the while ignoring the concerns of everyday sex workers whose livelihoods depend on the platform. For Bella Thorne, joining the platform is a way to have fun with her sexuality and popularity without the censorship or judgment of platforms like Instagram. She does not depend on that money for rent or food. She experiences little to none of the stigma that sex workers face daily. Her actions did not help the sex worker community; they severely hurt a community that is already one of the most marginalized.

What responsibility does Thorne even have in starting these conversations over sexual politics and female sexuality? How should she use her celebrity status and the privilege of millions of followers listening and watching her? One cannot ignore the fact that much of this increasing legitimacy of sex work has centered around middle- and upper-class white women beginning to explore the realms of sex work, while women of color continue to experience the stigma and marginalization of sex work. While sex work may slowly begin to be seen as a proper line of employment, an otherness seems to be appearing within it, in which it is acceptable for certain women to take part but deplorable for others. This normalization is beginning to look more like a gentrification, in which white women profit off the work that other women have been doing for decades, which would of course only continue to hurt a large portion of the sex worker community. So perhaps it was not even Thorne’s place to be the catalyst for the conversations she wants to have. Her attempt to start that conversation was centered on herself and her own experiences. Instead of reaching out to women already experienced in the industry, she decided to see the inner workings of the industry for herself. But it is impossible for a celebrity like her to experience sex work in a way that accurately represents the issues sex workers deal with in reality.

Bella Thorne, however, is not the only celebrity to hop on this trend. The biggest name to recently join the platform is rapper Cardi B, though she won’t be creating sexual content, but rather exclusive content about her life and music. Some other celebrities, like rapper Tyga or YouTuber Tana Mongeau, are deciding to follow in Thorne’s direction and make sexual content for their consumers. All of these celebrities can bring waves of fans to the site looking to buy subscriptions for exclusive content. Whether they’re selling sex or exclusive updates on their music, however, they will be entering a platform that already has plenty of competition for subscribers. Sex workers and musicians depend on their subscriptions from OnlyFans to continue paying rent or buying groceries, especially in the midst of a global pandemic, which are concerns that none of these celebrities would ever have to trouble themselves with. While the platform may be useful for them to promote their albums, have fun with their sexuality, or connect with fans, all their profits are merely pocket money. They could accomplish all of those things through Instagram pages with their millions of followers, or through a multitude of opportunities that are not open to the public. Celebrities need to recognize the havoc they can wreak on the lives of everyday people when they decide to turn others’ livelihoods into fun experiments on social media.

Ethical Considerations of Deepfakes

computer image of two identical face scans

In a recent interview for MIT Technology Review, art activist Barnaby Francis, creator of deepfake Instagram account @bill_posters_uk, mused that deepfake is “the perfect art form for these kinds of absurdist, almost surrealist times that we’re experiencing.” Francis’ use of deepfakes to mimic celebrities and political leaders on Instagram is aimed at raising awareness about the danger of deepfakes and the fact that “there’s a lot of people getting onto the bandwagon who are not really ethically or morally bothered about who their clients are, where this may appear, and in what form.” While deepfake technology has received alarmist media attention in the past few years, Francis is correct in his assertion that there are many researchers, businesses, and academics who are pining for the development of more realistic deepfakes.

Is deepfake technology ethical? If not, what makes it wrong? And who holds the responsibility to prevent the potential harms generated by deepfakes: developers or regulators?

Deepfakes are not new. The term first appeared in 2017, when a Reddit user began using the technology to create pornographic videos. The technology soon expanded to video games as a way to create images of real people within virtual universes. But the deepfake trend then turned toward more global agendas, with fake images and videos of public figures and political leaders being distributed en masse. One altered video of Joe Biden was so convincing that even President Trump fell for it. Last year, a deepfake video circulated of Mark Zuckerberg talking about how happy he was to have thousands of people's data. At the time, Facebook maintained that deepfake videos would stay up, as they did not violate its terms of service. Deepfakes have only multiplied since then; there is even an entire YouTube playlist of deepfake videos dedicated to President Trump.

In 2020, those contributing to deepfake technology are no longer only individuals in the far corners of the internet. Researchers at the University of Washington have developed deepfakes in order to study and combat their spread. Deepfake technology has been used to bring art to life, to recreate the voices of historical figures, and to put celebrities' likenesses in the service of powerful public health messages. While the dangers of deepfakes have been described by some as dystopian, the methods behind their creation have been relatively transparent and accessible.

One problem with deepfakes is that they mimic a person's likeness without their permission. The original deepfakes, which mixed photos or videos of a person with pornography, used a person's likeness for sexual gratification. Such use might never personally affect the person depicted, but it could still be considered wrong, since their likeness is being used as a source of pleasure and entertainment without their consent. This might seem far-fetched, but in 2019 a now-defunct app called DeepNude sought to do exactly that. Even worse than using someone's likeness without their knowledge is using it in a way intended to reach them and others, in order to humiliate them or damage their reputation. One could imagine a kind of deepfake revenge porn, in which scorned partners attempt to humiliate their exes by creating deepfake pornography. This issue is incredibly pressing and may be more prevalent than the other potential harms of deepfakes: one study, for example, found that 96% of existing deepfakes take the form of pornography.

Despite this current reality, much of the moral concern over deepfakes is grounded in their potential to spread misinformation easily. Criticism of deepfakes in recent years has mainly concerned their potential for manipulating the public to achieve political ends. It is becoming increasingly easy to spread a fake video depicting a politician as incompetent or spreading a questionable message, which might erode their base. On a more local level, deepfakes could be used to discredit individuals. One could imagine a world in which deepfakes are used to frame someone in order to damage their reputation, or even to suggest they have committed a crime. Video and photo evidence is commonly used in our civil and criminal justice system, and the ability to manipulate videos or images of a person, undetected, arguably poses a grave danger to a justice system that relies on our senses of sight and observation to establish objective fact. Perhaps even worse than framing the innocent would be failing to convict the guilty. In fact, a recent study in the journal Crime Science found that deepfakes pose a serious crime threat when it comes to audio and video impersonation and blackmail. What if a deepfake is used to replace a bad actor with a person who does not exist? Or gives plausible deniability to someone who claims that a video or image of them has been altered?

Deepfakes are also inherently dishonest. Two of the most popular social media networks, Instagram and TikTok, rely on visual media that could be subject to alteration by deepfakes of oneself. Even if a person's likeness is being manipulated with their consent, and even if doing so could have positive consequences, it still might be considered wrong due to the dishonest nature of the content. Instagram in particular has been increasingly flooded with photoshopped images, and an entire app market exists solely for editing photos of oneself, usually to appear more attractive. The morality of editing one's own photos has been hotly contested among users and feminists alike. Deepfakes only stand to increase the amount of self-edited media online and the moral debates that come with putting altered media of oneself on the internet.

Proponents of deepfakes argue that their positive potential far outweighs the negative. Deepfake technology has been used to spark engagement with the arts and culture, and even to bring historical figures back to life for both educational and entertainment purposes. Deepfakes also hold the potential to integrate AI into our lives in a more humanizing and personal manner. Others, aware of the possible negative consequences, argue that the development and research of this technology should not be impeded, since advancing the technology also advances the methods for spotting it. There is some evidence backing up this argument: as deepfake development progresses, so do the methods for detecting it. On this view, it is not the moral responsibility of those researching deepfake technology to stop, but rather the role of policymakers to ensure that the kinds of harmful consequences mentioned above do not wreak havoc on the public. At the same time, proponents such as David Greene, of the Electronic Frontier Foundation, argue that overly stringent limits on deepfake research and technology would "implicate the First Amendment."

Perhaps, then, it is neither the government nor deepfake creators who are responsible for these harmful consequences, but rather the platforms that make them possible. Proponents of this view might argue that the power of deepfakes comes not from their ability to deceive any one individual, but from the media platforms on which they are allowed to spread. In an interview with Digital Trends, the creator of Ctrl Shift Face (a popular deepfake YouTube channel) contended that "If there ever will be a harmful deepfake, Facebook is the place where it will spread." While this shift in responsibility might be appealing, detractors might ask how practical it truly is. Even websites that have tried to regulate deepfakes are having trouble doing so. The popular pornography website Pornhub has banned deepfake videos but still cannot fully police them. In 2019, a deepfake video of Ariana Grande was watched 9 million times before it was taken down.

In December, the first federal regulation pertaining to deepfakes passed the House and the Senate and was signed into law by President Trump. While increased government intervention to prevent the negative consequences of deepfakes will be celebrated by some, researchers and creators will undoubtedly push back on these efforts. Deepfakes are certainly not going anywhere for now, but it remains to be seen whether the potentially responsible actors will work to ensure that their consequences remain net-positive.

What Reddit Can Teach Us About Moral Philosophy

eye looking through keyhole

Moral philosophy is enjoying a moment in popular culture. Shows like The Good Place have made ethics accessible to broad audiences, and publishing houses churn out books on the philosophical underpinnings of franchises like Star Wars and Game of Thrones. One example of this can be found on Reddit, a social media site that hosts a myriad of topic-based forums. In particular, the subreddit “Am I the Asshole?” exemplifies pop culture’s breezy and accessible approach to moral philosophy, while also shedding light on how and why we engage in ethical questions.

This extremely popular subreddit boasts over two million subscribers, and claims to offer “A catharsis for the frustrated moral philosopher in all of us, and a place to finally find out if you were wrong in an argument that’s been bothering you.” Reddit users post stories about relationship problems, family squabbles, and workplace tension. Any conflict will do, so long as it doesn’t involve physical violence and the original poster has some reason to believe they were in the wrong. Those who comment on these stories are required to pass judgment, expressed using the subreddit’s shorthand, followed by a brief explanation of their ruling. A judgment can be NTA (not the asshole), YTA (you are the asshole), ESH (everyone sucks here, for situations where all parties did something indefensible), or NAH (no assholes here, for situations where no one is in the wrong). A bot eventually sifts through these comments, and the post is labeled with the most popular judgment.
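Since only the bot’s behavior is described here, the following is a minimal sketch, in Python, of how such a most-popular-judgment tally might work; the function name, parsing rules, and tie-breaking behavior are illustrative assumptions, not the actual bot’s code.

```python
from collections import Counter

# Hypothetical sketch of a judgment bot: scan each comment for one of the
# four verdict codes and label the post with the most common verdict found.
VERDICTS = {"NTA", "YTA", "ESH", "NAH"}

def label_post(comments):
    """Return the most common verdict code among the comments, or None."""
    tally = Counter()
    for comment in comments:
        for token in comment.upper().split():
            token = token.strip(".,!:;")
            if token in VERDICTS:
                tally[token] += 1
                break  # count at most one verdict per comment
    return tally.most_common(1)[0][0] if tally else None

print(label_post(["NTA, they were out of line.", "YTA for sure.", "NTA."]))
# prints: NTA
```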

As the word “asshole” implies, this isn’t the place to rail against class oppression or the cruelties of fate. AITA focuses on everyday interpersonal drama, and it’s understood that being labeled YTA isn’t necessarily a judgment on the entirety of a person’s character. Every judgment is situational, though commenters may point out larger patterns of problematic thought or behavior if they emerge, and the subreddit operates under a shared understanding that many of us act immorally without malicious intent. Even in the somewhat rigid lexicon of judgment, there’s room for shades of gray, for the ambiguities of social life. As Tove Danovich writes in an article for The Ringer, “The scope of the problems on AITA, even when the judgment is a difficult one to make, is human, and therefore more manageable. They’re medium questions asked and answered by medium people who just want to be a little bit better.”

Even though the subreddit’s scope is somewhat limited, one aspect of AITA’s culture offers a window into the role narrative plays in shaping our sense of right and wrong. Scrolling through the front page, one very frequently encounters stories where the original poster was indisputably in the right: “Someone ran me over with their car, and as I went flying over their windshield, I accidentally dented their front hood. AITA?” So many users were annoyed by these posts that they started a parody subreddit, “Am I the Angel,” where the saintly and oblivious tone of AITA posts is mocked. An ungenerous interpretation of these posts would be that some people just want a pat on the head: they aren’t actually looking for a moral judgment, they want to vent about a situation they already recognize as unfair. Alternatively, one could argue that we often lack perspective on our own lives, and what seems obviously wrong to an impartial third party may be less transparent from the inside. But one AITA user suggests a different interpretation. On a post from a man who no longer wanted to let his disabled neighbor park in his driveway, Reddit user boppitywop commented, “I think the majority of these posts are because people feel guilty, and they are looking to assuage their guilt. [The original poster] is not the asshole but they’ve made someone’s life a lot more inconvenient and doesn’t feel good about it. [This subreddit] serves the purpose of socially normalizing something that a person feels bad about.”

This comment reveals both the limits and the potential of AITA. Some situations are morally intractable and require far more than interpersonal skills (or an understanding of one’s wrongdoing) to address effectively. But the commenter also correctly points out the social function of storytelling. AITA posts help users renegotiate the boundaries between right and wrong in a way that feels deeply communal. Norms are both established and questioned in this online space. The judgment system may feel very open-and-shut, but reading through the comment section of popular posts reveals an ongoing dialogue with the moral philosophy of everyday life.

But in any narrative, language often betrays the biases or intentions of the teller. One ubiquitous trope you’ll notice if you fall down the AITA rabbit hole is what I would call the “sudden turn”: in the middle of an encounter, the antagonist of the story will begin to bawl, shriek, or throw a tantrum without clear provocation. The other person is portrayed as irrational or inscrutable, and one often senses the gap where their perspective on the situation should fit. Commenters are often very perceptive about the original poster’s word choice, but the way the story is told inevitably colors our judgment of the encounter. This messiness and instability accurately reflects how we experience conflict, and reminds us that all moral arguments, whether large or small, contain some speck of subjectivity.

It’s a simple truth that judging people we don’t know is fun, sometimes even addicting. The voyeuristic element of AITA is certainly worthy of critique, but at the same time, anonymity is crucial to the communal storytelling experience. In an era where few define themselves by a single ethical belief system, AITA helps readers wade through the mire of modern life, and testifies to a universal desire to understand what we owe to one another.

What Would Kierkegaard Make of Twitter?

photograph of Twitter homepage on computer screen

In the weeks leading up to Election Day 2020, Twitter and other social media companies announced they would be voluntarily implementing new procedures to discourage the spread of misinformation across their platforms; on November 12th, Twitter indicated that it would maintain some of those procedures indefinitely, arguing that they had been successful in slowing the spread of election misinformation. In general, the procedures in question are examples of “nudges” designed to subtly influence the user to think twice before spreading information further through the social network; dubbed “friction” by the social media industry, examples include labeling (and, in some cases, hiding) tweets containing misleading, disputed, or unverified claims, and double-prompting a user who attempts to share a link to an article that they have not opened. While the general effectiveness of social media friction remains unclear (although at least one study related to COVID-19 misinformation has shown promise), Twitter has argued that its recent policy changes have led to a 29% reduction in quote-tweeting (where a user simultaneously comments on and shares a tweet) and a 20% overall reduction in tweet-sharing, both of which have slowed the spread of misleading information.

We currently have no shortage of ethical questions arising from the murky waters of social networks like Twitter. From the viral spread of “fake news” and propaganda, to the problems of epistemic bubbles and echo chambers, to malicious agents spearheading disinformation campaigns, to the fostering of violence-producing communities like QAnon and more, alerts about the risks posed by social media platforms abound (including here at The Prindle Post, such as Desdemona Lawrence’s article from August of 2018). Given the size of Twitter’s user base (it was the fourth-most-visited website by traffic in October 2020, with over 353 million users visiting the site over 6.1 billion times), even relatively uncommon problems can still manifest in significant numbers, and no clear solution has arisen for limiting the spread of falsehoods that would not also limit benign Twitter usage.

But is there such a thing as benign Twitter usage?

The early existentialist philosopher and theologian Søren Kierkegaard might think not. Writing from Denmark in the mid-1800s, Kierkegaard was exceedingly skeptical of the social movements of his day; as he explains in The Present Age: On the Death of Rebellion, “A revolutionary age is an age of action; ours is the age of advertisement and publicity. Nothing ever happens but there is immediate publicity everywhere.” Kierkegaard criticized his contemporaries for simply desiring to talk about things rather than live full, meaningful lives; such talk, he thought, ultimately amounted to little more than gossip. Moreover, Kierkegaard saw how this would feed a superficial love of showing off to “the Public” (the abstract collection of people made up of “individuals at the moments when they are nothing”); all this “talkativeness” would produce a constant “state of tension” that, in the end, “exhausts life itself.” Towards the end of his essay, Kierkegaard summarizes his criticism of his social environment by saying that “Everyone knows a great deal, we all know which way we ought to go and all the different ways we can go, but nobody is willing to move.”

This all probably sounds unsettlingly familiar to anyone with a Twitter account.

Instead of giving into the seductions and the talkativeness of the present age, Kierkegaard argues for the value of silence, saying that “only someone who knows how to remain essentially silent can really talk — and act essentially” (that is, act in a way that would give one’s life genuine meaning). Elsewhere, in the first Godly Discourse of The Lily of the Field and the Bird of the Air, Kierkegaard draws a lesson from birds and flowers about the value of quietly focusing on what genuinely matters. As a Christian theologian, Kierkegaard locates ultimate value in “the Kingdom of God” and argues that lilies and birds do not speak, but are simply present in the world in a way that mimics a humble, unassuming, simple presence before God. The earnestness or authenticity that comes from learning how to live in silence allows a person to avoid the distractions prevalent in the posturing of social games. “Out there with the lily and the bird,” Kierkegaard writes, “you perceive that you are before God, which most often is quite entirely forgotten in talking and conversing with other people.”

Indeed, the talkativeness and superficiality inherent to the operation of social media networks like Twitter would trouble Kierkegaard to no end, even before considering the myriad ways in which such networks can be abused. And, in a similar way, whatever we now consider to be of ultimate importance (be that Kierkegaard’s God or something else), the phenomenology of distraction away from its pursuit is no small thing. Twitter can (and should) continue to try to address its role in the spread of misinformation and the like, but no matter how much friction it creates for its users, it seemingly can’t promote contemplative silence: “talkativeness” is a necessary Twitter feature.

So, Kierkegaard would likely not be interested in the Twitter Bird much at all; instead, he would say, we should attend to the birds of the air and the lilies of the field so that we can learn how to silently begin experiencing life and other things that truly matter.

In Defense of Mill

collage of colorful speech bubbles

In recent years, commentators — particularly those who lean left — have become increasingly dubious about John Stuart Mill’s famous defense of an absolutist position on free speech. Last week, for instance, The New York Times published a long piece by Yale Law School professor Emily Bazelon in which she echoes a now-popular complaint about Mill: that his arguments are fundamentally over-optimistic about the likelihood that the better argument will win the day, or that “good ideas win.” In this column, I will argue that this complaint rests on a mistaken view of Mill.

Mill’s argument, briefly stated, is that no matter whether a given belief is true, false, or partly true, its assertion will be useful for discovering truth and maintaining knowledge of the truth, and therefore it should not be suppressed. True beliefs are usually suppressed because they are believed to be either false or harmful, but according to Mill, to suppress a belief on these grounds is to imply that one’s grasp of the truth or of what is harmful is infallible. Mill, an empiricist, believed that no human being has infallible access to the truth. Even if the belief is actually false, its assertion can generate debate, which will lead to greater understanding and ensure that truths do not lapse into “mere dogma.” Finally, if the belief is partially true, it should not be suppressed because it can be indispensable to discovering the “whole” truth.

Notice that Mill’s whole argument concerns the assertion of beliefs, or the communication of what the speaker genuinely takes to be true. The key assumption in Mill’s argument is thus not that the truth will win out in the rough and tumble of debate. This may well be true — at least, it may be true in the long run, when every participant is really engaging in debate, or the evaluation of truth claims. Rather, Mill is taking as given that a lot of the public discourse is aimed at communicating truth claims in good faith. The problem is that much of this discourse is not intended to inform others about what speakers actually believe. Much of the public discourse is propaganda — speech aimed at achieving some political outcome, rather than at communicating belief. As Bazelon points out, referring to the deluge of disinformation that currently swamps our national public conversation,

“The conspiracy theories, the lies, the distortions, the overwhelming amount of information, the anger encoded in it — these all serve to create chaos and confusion and make people, even nonpartisans, exhausted, skeptical and cynical about politics. The spewing of falsehoods isn’t meant to win any battle of ideas. Its goal is to prevent the actual battle from being fought, by causing us to simply give up.”

The purpose of disinformation propaganda is to overwhelm people with contradictory claims and ultimately to encourage their retreat into apolitical cynicism. Even where propagandists appear to be in the business of putting forward truth claims, this is always in bad faith: they are not actually trying to communicate what they believe.

Where does this leave Mill? Mill may have been mistaken in overlooking the pervasiveness of propaganda. However, his defense of free speech need not extend to propaganda. If Mill is concerned only with defending communicative acts that are aimed at expressing belief, then we have no reason to think that Mill needs to defend propaganda. Thus, a Millian defense of speech can distinguish between speech that is intended primarily to express a truth claim and speech that is intended primarily to effect some political outcome. While the former must be protected from suppression, the latter need not be, precisely because the latter is not aimed at, nor likely to produce, greater understanding.

Of course, this distinction might be difficult to draw in practice. Nevertheless, new policies recently rolled out by social media platforms appear to be aimed precisely at suppressing the spread of harmful propaganda. Twitter banned political ads a year ago, and last month Facebook restricted its Messenger app by preventing mass forwarding of private messages. Facebook’s Project P (P for propaganda) was an internal effort after the 2016 election to take down pages that spread Russian disinformation. Bazelon recommends pressuring social media platforms into changing their algorithms or identifying disinformation “super spreaders” and slowing the virality of their posts. Free speech absolutists might decry such measures as contrary to John Stuart Mill’s vision, but I have suggested that this might be a mistake.

Anti-Maskers and the Dangers of Collective Endorsement

photograph of group of hands raised

Tensions surrounding the coronavirus pandemic continue to run high, especially in parts of America where measures to control the spread of the virus have become something of a political issue. Recently, some of these tensions erupted in the form of protests by “anti-maskers”: in Florida, for example, a group of such individuals marched through a Target, telling people to take off their masks and playing the song “We’re Not Gonna Take It.” Presumably the “it” that they were no longer interested in taking was what they perceived to be a violation of personal liberties, as they felt they were being forced to wear masks against their will. While evidence regarding the effectiveness of masks at keeping oneself and others safe continues to grow, there nevertheless remains a vocal minority that believes otherwise.

A lot of thought has been put into the problem of why people continually ignore good scientific evidence, especially when the consequences of doing so are potentially dire. There is almost certainly no single, easy answer. However, one potential reason is worth focusing on: anti-maskers, like many others who reject the best available scientific evidence on a number of issues, tend to trust sources they find on social media instead of more reputable outlets. For instance, one investigation of why anti-maskers hold their beliefs pointed to the effects of Facebook groups in which such beliefs are discussed and shared. Indeed, despite Facebook’s efforts to contain the spread of such misinformation, anti-masker Facebook groups remain easy to find.

However, the question remains: why would anyone believe a group of random Facebook users over scientific experts? The answer is no doubt multifaceted as well. But one reason may come down to trust, and to the fact that the way we determine who is trustworthy works differently online than it does in other contexts.

As frequent internet users will already know, it can often be difficult to identify trustworthy sources of information online. One reason is that the internet offers varying degrees of anonymity: the consequence is that one may not have much information about the person one is talking with, especially given that people can fabricate aspects of their identities in online environments. Furthermore, interacting with others through text boxes on a computer screen is a very different kind of interaction from one that occurs face-to-face. For instance, researchers have shown that there are different “communication cues” we pick up on when interacting with each other, including verbal cues like tone of voice, volume of speech, and the rate at which one speaks, and visual cues like facial expressions and body language. These cues are important when we judge whether to believe what another person is saying, and they are largely absent from much online communication.

With less information about each other to go on when interacting online, we tend to look to other signals when deciding whom to trust. One signal internet users rely on is endorsement. When reading things on social media or message board sites, we tend to put more trust in posts that have the most hearts, or likes, or upvotes. This is perhaps most apparent when you’re trying to decide what product to buy: we gravitate toward products with not only the highest ratings but also the greatest number of high ratings (something with one 5-star review doesn’t mean much, but a product with hundreds of high reviews means a lot more). The same can be true when deciding which information to believe: if your post has thousands of endorsements, I’m probably going to at least give it a look, whereas if it has very few, I’ll probably pass it by.
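To make the arithmetic behind that parenthetical concrete, here is a minimal sketch assuming a simple smoothed average, one common way to model such scoring; the prior values and review counts are illustrative assumptions, not any platform’s actual formula.

```python
# A smoothed average pulls small samples toward a neutral prior, so a lone
# perfect review barely moves the score while many high ratings dominate it.
# The prior_mean and prior_weight values are assumptions for illustration.
def smoothed_score(ratings, prior_mean=3.0, prior_weight=10):
    """Average rating, shrunk toward prior_mean when data is scarce."""
    total = sum(ratings) + prior_mean * prior_weight
    return total / (len(ratings) + prior_weight)

one_perfect_review = [5]                  # a single 5-star rating
many_good_reviews = [5, 4, 5, 5, 4] * 40  # 200 mostly-positive ratings

print(round(smoothed_score(one_perfect_review), 2))  # 3.18: barely above neutral
print(round(smoothed_score(many_good_reviews), 2))   # 4.52: well-supported score
```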

There is good reason to trust information that is highly endorsed. As noted above, it can be hard to determine whom to trust online because it’s not clear whether someone really is who they say they are. It’s easy for me to join a Facebook group and tell everyone that I’m an epidemiologist, for example, and without access to any more information about me, you’ve got little other than my word to go on. Something that’s much harder to fake, though, is a whole bunch of likes, or hearts, or upvotes. The thought, then, is that if enough other people endorse something, that’s good reason to trust it. And here is one reason why people getting their information from social media might trust that information more than information coming from the experts: it is highly endorsed by many other members of their group.

At the same time, people might be more willing to believe those they interact with online precisely because they are interacting with them. When a scientific body like the CDC tells you that you should be wearing a mask, information travels in only one direction. When engaging with groups online, though, it can be much easier to trust people you are interacting with rather than merely deferring to. Again, this is one of the problems raised by online communication: while there is lots of good information available, it can be easier to trust those one can engage with than those one simply takes orders from.

Again, the fact that the problem is complex and multifaceted means there will not be a one-size-fits-all solution. That said, it is worth thinking about how those with good information might establish relationships of trust with those who need it, given the unique qualities of online environments.