Free Speech and the Media Matters Lawsuit
In November 2023, Elon Musk filed a lawsuit against Media Matters (a left-leaning nonprofit dedicated to “monitoring, analyzing, and correcting conservative misinformation”) in response to their investigative report suggesting that X, formerly known as Twitter, ran corporate advertisements alongside Nazi content. As a result, corporations including IBM and Comcast pulled their ads, causing further damage to a company whose reputation and finances were already bruised. Media Matters is just the latest to raise concerns about an increase in Nazi and white nationalist content on Twitter enabled by updated content policies.
But Musk is a self-described “free speech absolutist,” and his laissez-faire attitude towards hate speech has informed those policy decisions. While Musk’s position might seem extreme, it is not without precedent. The ACLU, for example, has taken a similar stance. Although they have come out against Musk in response to a lawsuit similar to the Media Matters one, they also represented white nationalist and Unite the Right organizer Jason Kessler in Virginia courts. They defend this action in a public statement, writing that the government should not be the arbiter of “when and whether the voices opposing a person’s speech can be preferred,” even when that speech is “deeply offensive to others.”
The question of free speech, especially hate speech and misinformation, is nothing new, even to social media. Facebook has long been criticized for its tolerance of hate speech and misinformation, while Twitter has time and again fallen under public ire, with both Republicans and Democrats raising accusations of censorship. To explore this problem, it is first helpful to consider what limiting free speech is not. While the United States Constitution does guarantee the right to free speech under the First Amendment, this pertains to governmental interference. The First Amendment does not prohibit Musk’s permissive content policies, Media Matters’s attack on those policies, or corporations’ decision to spend their advertising dollars elsewhere.
Even so, social media’s power as a public forum complicates things. Social media corporations may be private, but they play an outsized role socially and politically, often with negative results. We saw this during the COVID pandemic, for example, where social media misinformation was responsible for increased mortality rates. Due to the magnitude of social media’s impact, there might be grounds for restricting speech in those private spheres, and some jurisdictions already do so. Germany, for example, has strict hate speech laws that have been expanded to include internet speech, though not without controversy. (One German judge recently ruled that the law was government overreach and an infringement on free speech.) Likewise, in the United States, there are some limits imposed even on constitutionally guaranteed freedoms: while the Second Amendment guarantees the right to bear arms, some guns like short-barreled shotguns and automatic weapons are nonetheless illegal.
While there may be good reasons for favoring permissive free speech policies, they can quickly lead to what the Austrian-British philosopher Karl Popper called the paradox of tolerance. To have a tolerant society, one that is permissive of a plurality of viewpoints and expressions, it must be tolerant of all opinions — except those that are intolerant. Intolerance undermines the very conditions for a tolerant society. In a free society, we should allow for any belief so long as it can be countered by reason. But an intolerant position, on Popper’s account, is one that refuses rational argument. And those who refuse to participate in rational, common discourse often express their views through coercion, threatening to destroy the tolerant and, with them, free society. One example might be a Holocaust denier who ignores the historical and testimonial evidence supporting those events as historical fact. This person’s discourse is either irrational or not in good faith; in either case, they are not engaging in rational argument. Popper would say it is no surprise, then, that we often see violent speech and actions coming from those who hold this view.
Using Popper’s criteria for identifying intolerant speech, however, may be especially difficult in our current socio-political climate. As Thomas Hobbes said, when it comes to the perception of our own rationality, “almost all men think they have [it] in a greater degree” than any other. With so much misinformation swirling around, the marketplace of ideas has lost shared conceptions of evidence and reasons. Since the grounds for discourse themselves are questioned, both parties in a dispute are open to accusations of irrationality from the other. Mob rule decides which opinions are disqualified, and this leads right into Popper’s worries about intolerance. So, while his paradox might be a helpful way of framing the problem, it does not offer much practical advice for escaping it.
We might gain more traction, however, by looking to the British philosopher John Stuart Mill. Mill is something like a free speech absolutist himself, arguing that personal liberties must be “absolute and unqualified.” Like Popper, Mill argues that society must allow for unpopular opinions, or it is not a free society at all. When we silence people that we disagree with, we assume that our beliefs are true beyond correction. Likewise, when our beliefs go unchallenged, we hold them superficially and without much conviction. When unpopular ideas are expressed, they provide an opportunity to refine our own opinions and more clearly understand them. For reasons like these, Mill says we should allow freedom of speech and expression with almost no limits, save one important exception: the harm principle. Our vast personal freedoms — including the freedom of speech — end when they harm others. There is already legal precedent for restricted speech in the United States on these grounds: fraud and incitement to violence are not protected speech, for example, due to their harmful consequences. A plurality of opinions, however unpopular, should then be welcomed, so long as expressions of those opinions do not cause these kinds of injury to others.
Where does that leave us?
For both Popper and Mill, unqualified free speech rests upon the benefits of free discourse outweighing the risks of abhorrent speech; as a free society, we must allow space for persuasion. But an additional factor not yet considered here is that social media platforms like Twitter, Facebook, and Instagram are different from other media that reach large audiences, such as radio, television, and other internet platforms like Substack. Timelines and newsfeeds are uniquely addictive. While social media has the illusion of offering the public a marketplace of ideas, newsfeeds and timelines are designed to engross the user indefinitely, limiting their exposure to opinions that are new or different from their own. Social media can be a tool for discourse, but it is often the antithesis of what Popper and Mill envisioned when advocating for unrestricted speech — an echo chamber that is susceptible to validating poor arguments and calcifying opinions without any opportunity for refutation. While this might be enough of a reason for a free speech absolutist to limit certain speech on social media, there remains the tremendous challenge of determining what such algorithms should filter and how.
This is no easy task. Musk’s lawsuit alleges economic harm that Media Matters’s speech caused to X. Yet the advertisers, motivated by the economic risks of being associated with Nazi content, could make similar arguments. Media Matters’s report is motivated by different harms, namely the social, psychological, and physical harms that they believe unrestricted white nationalist content causes. These different types of harm are not easily parsed, and one harm often indirectly causes another; someone physically harmed may not be able to return to work, for example. Yet, in the Media Matters case, direct harms like political polarization and stoked racism (social), increased hate crimes (physical), and doxing or threats (psychological) are arguably more destructive than the direct economic harm caused by lost corporate revenue. Of course, that is only if the types of hate speech they draw attention to in their investigative report are directly responsible for causing those harms.
Does this lead us back to the same challenges facing the paradox of tolerance? Perhaps not. Where the paradox of tolerance faces challenges due to the difficulty of assessing rational discourse, cases of harm might be more easily measured. One important first step could be listening to members of communities affected by hate speech, rather than assuming on their behalf that there is or is not harm. When navigating the difficult problems of internet free speech and its limits, we might find it helpful to begin not by defining free speech, but by asking what counts as harm.