
Is Speech Freer Without Fact-Checks?

Meta recently announced that it will end its practice of employing fact-checkers to moderate content on its platforms, including Facebook, Instagram, and Threads. Instead, Meta sites will adopt a system similar to the “community notes” feature on Twitter/X, where users can propose notes that provide more information about a post. In a short video posted to Facebook, Mark Zuckerberg explained that he made the change because fact-checkers had become too “politically biased,” accusing them of having “destroyed more trust than they created.” At the end of the day, he wanted his platforms to get back to their roots of upholding freedom of speech.

Many have expressed concerns about Zuckerberg’s decision, as well as his motivations for making it. The most prominent concern is that, without moderation from fact-checkers, significantly more false and hateful information will make its way around Meta’s platforms. Others worry that Zuckerberg’s decision is itself politically motivated. There is evidence that fact-checked posts are more likely to have been shared by conservatives drawing on low-quality news sites, which has led some to believe that conservative voices are being disproportionately targeted. Zuckerberg’s recent behaviors – including donations to Trump and his removal of protections against certain kinds of hate speech – have led some to conclude that he is really motivated by a dislike of seeing fact-checks on views he and his friends hold.

Even if Zuckerberg truly is motivated by promoting freedom of speech, what is it about having a system of third-party fact-checkers that inhibits free speech? And will shifting responsibility to the community make speech on Meta platforms more free?

Let’s start with the first question: why might third-party fact-checkers inhibit freedom of speech? One might think that the problem is one of quantity: any instance of fact-checking is a form of censorship, perhaps, and thus without dedicated fact-checkers whose job it is to flag content there will simply be less censorship and thus speech will be more free.

If this is our concern, would a community notes system be an improvement? Perhaps. Looking at Twitter/X, for example, The Poynter Institute “found that fewer than 10 percent of Community Notes drafted by users ended up being published on offending posts,” and that these numbers “are even lower for sensitive topics like immigration and abortion.” Shifting to a community notes model where only a fraction of notes ever see the light of day may then decrease the number of posts that are flagged.

At the same time, however, the community members of sites like Facebook and Instagram outnumber third-party fact-checkers by many orders of magnitude. So even if only a small percentage of community notes end up being published, the result may not be any fewer fact-checks.

Zuckerberg’s quarrel, however, is not so much with the concept of fact-checking itself as with third-party fact-checkers, who he claims are too politically biased and thus disproportionately censor certain views. There has been significant pushback against this claim; regardless, let’s assume for the sake of argument that it’s true. If fact-checkers are biased, will the community be any better?

It’s not clear that it will. After all, the community itself may very well be biased by having a plurality of users falling on one side of the political spectrum. It’s also unclear whether conservative views will receive any fewer flags under a community notes system than a fact-checking system. A recent study in Nature, for example, found that professional fact-checkers and “politically balanced groups of laypeople” largely agreed on which sources of information were low quality, the lion’s share being those that amplified conservative views. If Zuckerberg is concerned that flagging conservative views disproportionately constitutes a form of censorship, then shifting fact-checking responsibilities to the community may not make things any better.

One might think instead that third-party fact-checking just really isn’t necessary anymore. An article in Politico, for instance, recently argued that the “disinformation panic” that started during Trump’s first presidency is “over.” Part of the evidence for this claim is that while the contentious and surprising nature of Trump’s first election win demanded an explanation – which many blamed on misinformation campaigns designed to mislead voters – the second Trump win was definitive and, at least in terms of drama around the manipulation of results, mundane. Per the Politico article: “no one was fooled into voting for Trump.”

There have also been increasingly frequent criticisms that programs dedicated to ameliorating the problems of misinformation and disinformation have largely failed to bear fruit. People spreading false information, the argument goes, is not so much a problem to be solved as a feature of humanity to be tolerated, especially given the politically fraught nature of labeling information about social issues as either true or false. This is not to say that we should abandon the project of identifying false and potentially harmful information online. Rather, the thought is that employing third-party fact-checkers is an overcorrection to a non-problem, and thus unnecessarily restricts free speech.

Questions about how fruitful the study of misinformation and disinformation has been remain open, although there is good evidence that many interventions are, in fact, effective. As stated in the Politico article, there is also good reason to be worried about the quantity and egregiousness of false information being shared on social media during the second term of the Trump presidency, given his infamous disregard for the truth and his choice of appointees. Of course, it may very well turn out that Trump’s rhetoric is met with less opposition during his second term and that changing political winds result in more people agreeing on obvious falsehoods they see on social media. However, this would not indicate that disinformation is over; rather, it underlines how those in power have a vested interest in attempting to control narratives around the extent to which disinformation is a problem.

We have seen little reason to think that the existence of fact-checking represents a limitation on free speech, nor have we seen much reason to think that shifting to a community notes model will make things any better. But perhaps shifting responsibility for fact-checking to the community will better promote free speech not by being any less restrictive, but by granting new abilities to its users. By creating a system in which everyone has a say in helping to determine whether some content is fact-checked or flagged, the process becomes democratic in a way that is presumably lacking when outsourcing those duties to third parties, and thus free speech flourishes.

There is a sense in which this shifting of responsibilities gives more freedom to the users, as they now possess an ability they didn’t have before. But a system with only minimal guardrails also risks stifling many more voices. For example, Zuckerberg’s recent changes that allow users to say that gay and trans people have “mental illness” remove restrictions on a certain kind of speech from a certain kind of person, but will undoubtedly result in a lot less speech from members of communities that Meta’s policies refuse to respect. Moderation of speech – be it in the form of fact-checking or policies around what kinds of content are permitted on a platform – can thus promote free speech rather than inhibit it.

It remains to be seen whether Zuckerberg’s version of community notes will be successful in identifying false and misleading information, and it’s perhaps only known to him what his true intentions are in making the change. However, if he really was motivated by making speech freer on his platforms, there’s good reason to think his efforts are misguided.

Content Moderation and Emotional Trauma


In the wake of the Russian invasion of Ukraine, which has been raging violently since February 24th of 2022, Facebook (now known as “Meta”) recently announced its decision to change some of its content-moderation rules. In particular, Meta will now allow for some calls for violence against “Russian invaders,” though Meta emphasized that credible death threats against specific individuals would still be banned.

“As a result of the Russian invasion of Ukraine we have temporarily made allowances for forms of political expression that would normally violate our rules like violent speech such as ‘death to the Russian invaders.’ We still won’t allow credible calls for violence against Russian civilians,” spokesman Andy Stone said.

This recent announcement has reignited a discussion of the rationale — or lack thereof — of content moderation rules. The Washington Post reported on the high-level discussion around social media content moderation guidelines: how these guidelines are often reactionary, inconsistently-applied, and not principle-based.

Facebook frequently changes its content moderation rules and has been criticized by its own independent Oversight Board for having rules that are inconsistent. The company, for example, created an exception to its hate speech rules for world leaders but was never clear which leaders got the exception or why.

Still, politicians, academics, and lobbyists continue to call for stricter content moderation. Take, for example, the “Health Misinformation Act of 2021,” introduced by Senators Amy Klobuchar (D-Minnesota) and Ben Ray Luján (D-New Mexico) in July of 2021. This bill, a response to online misinformation during the COVID-19 pandemic, would revoke certain legal protections for any interactive computer service (e.g., social media websites) that “promotes…health misinformation through an algorithm.” The purpose of the bill is to incentivize internet companies to take greater measures against the spread of misinformation through content moderation.

What is often left out of these discussions, however, is the means by which content moderation happens. It is often assumed that such a monumental task must be left up to algorithms, which can scour through mind-numbing amounts of content at breakneck speed. However, much of the labor of content moderation is performed by humans. And in many cases, these human content moderators are poorly paid laborers working in developing nations. For example, employees at Sama, a Kenyan technology company that is the direct employer of Facebook’s Kenya-based content moderators, “remain some of Facebook’s lowest-paid workers anywhere in the world.” While U.S.-based moderators are typically paid a starting wage of $18/hour, Sama moderators make an average of $2.20/hour. And that figure reflects a pay increase from only a few weeks ago; prior to that, Sama moderators made $1.50/hour.

Such low wages, especially for labor outsourced to poor or developing nations, are nothing new. However, content moderation can be a particularly harrowing — in some cases, traumatizing — line of work. In their paper “Corporeal Moderation: Digital Labour as Affective Good,” Dr. Rae Jereza interviews one content moderator named Olivia about her daily work, which includes identifying “non-moving bod[ies],” visible within a frame, “following an act of violence or traumatic experience that could reasonably result in death.” The purpose of this is so videos containing dead bodies can be flagged as containing disturbing content. This content moderator confesses to watching violent or otherwise disturbing content prior to her shifts in an effort to desensitize herself to the material she would have to pick through as part of her job. The content she was asked to moderate ranged over many categories, including “hate speech, child exploitation imagery (CEI), adult nudity and more.”

Many kinds of jobs involve potentially traumatizing duties: military personnel, police, first responders, slaughterhouse and factory farm workers, and social workers all work jobs with high rates of trauma and other kinds of emotional and psychological distress. Some of these jobs are also compensated very poorly — for example, factory and industrial farms primarily hire immigrants (many undocumented) willing to work for pennies on the dollar in dangerous conditions. Poorly compensated high-risk jobs tend to be filled by people in the most desperate circumstances, and these workers often end up in dangerous employment situations that they are nevertheless unable or unwilling to leave. Such arrangements may constitute exploitation: someone exploits someone else when they take unfair advantage of the other’s vulnerable state. But not all instances of exploitation leave the exploited person worse off, all things considered. The philosopher Jason Brennan describes the following case of exploitation:

Drowning Man: Peter’s boat capsizes in the ocean. He will soon drown. Ed comes along in a boat. He says to Peter, “I’ll save you from drowning, but only if you provide me with 50% of your future earnings.” Peter angrily agrees.

In this example, the drowning man is made better-off even though his vulnerability was taken advantage of. Just like this case, certain unpleasant or dangerous lines of work may be exploitative, but may ultimately make the exploited employees better-off. After all, most people would prefer poor work conditions to life in extreme poverty. Still, there seems to be a clear moral difference between different instances of mutually-beneficial exploitation. Requiring interest on a loan given to a financially-desperate acquaintance may be exploitative to some extent, but is surely not as morally egregious as forcing someone to give up their child in exchange for saving their life. What we demand in exchange for the benefit morally matters. Can it even be permissible to demand emotional and mental vulnerability in exchange for a living wage (or possibly less)?

Additionally, there is something unique about content moderation in that the traumatic material moderators view on any given day is not a potential hazard of the job — it is the whole job. How should we think about the permissibility of hiring people to moderate content too disturbing for the eyes of the general public? How can we ask some people to weed out traumatizing, pornographic, racist, and threatening posts so that others don’t have to see them? Fixing the low compensation rates may help with some of the sticky ethical issues concerning this sort of work. Yet it is unclear whether any amount of compensation can truly make hiring people for this line of work permissible. How can you put a price on mental well-being, on humane sensitivity to violence and hate?

On the other hand, the alternatives are similarly bleak. There seem to be few good options when it comes to cleaning up the dregs of virtual hate, abuse, and shock-material.

Fighting Obscenity with Automation

When it comes to policing offensive content online, Facebook’s moderators have their work cut out for them. With billions of users, filtering out offensive content ranging from pornographic images to videos promoting graphic violence and extremism is a never-ending task. And, for the most part, this job falls on teams of staffers who spend most of their days sifting through offensive content manually. The decisions of these staffers – which posts get deleted, which posts stay up – would be controversial in any case. Yet the politically charged context of content moderation in the digital age has left some users feeling censored by Facebook’s policies, sparking a debate about automated alternatives.
