
“Fake News” Is Not Dangerously Overblown

By A.G. Holdier
7 Jun 2021

In a recent article here at The Prindle Post, Jimmy Alfonso Licon argues that the problem of “fake news” might be less serious than the hype surrounding it suggests. Pointing to several recent studies, Licon highlights that concerns about social standing actually prevent a surprisingly large percentage of people from sharing fake news stories on social media; as he says, “people have strong incentives to avoid sharing fake news when their reputations are at stake.” Instead, it looks like many folks who share fake news do so because of pre-existing partisan biases (not necessarily because of their gullibility about or ignorance of the facts). If this is true, then calls to regulate speech online (or elsewhere) in an attempt to mitigate the spread of fake news might end up doing more harm than good (insofar as they unduly censor otherwise free speech).

To be clear: despite the “clickbaity” title of this present article, my goal here is not to argue with Licon’s main point; the empirical evidence does indeed consistently suggest that fake news spreads online not simply because individual users are fooled into believing a fake story’s content, but rather because the fake story serves the sharer’s other purposes, such as expressing partisan allegiance or entertaining an audience, regardless of whether the sharer actually believes it.

On some level, this is frustratingly difficult to test: given the prevalence of expressive responding and other artifacts that can contaminate survey data, it is unclear how to interpret a subject’s affirmation of, say, the (demonstrably false) claim about the “immense crowd size” at Donald Trump’s presidential inauguration: does the subject genuinely believe that the pictures show a massive crowd, or are they simply reporting this to the researcher as an expression of partisan allegiance? Moreover, a non-trivial amount of fake news (and, for that matter, real news) is spread by users who read only a story’s headline without clicking through to the story itself. All of this, combined with additional concerns about the propagandistic politicization of the term ‘fake news’ (as when politicians invoke the concept to avoid responding to accusations against them), has led some researchers to argue that the “sloppy, arbitrary” nature of the term’s definition renders it effectively useless for careful analyses.

However, whereas Licon is concerned about potentially unwarranted threats to free speech online, I am concerned about what the reality of “fake news” tells us about the nature of online speech as a whole.

Suppose that we are having lunch and, during the natural flow of our conversation, I tell you a story about how my cat drank out of my coffee cup this morning; although I could communicate the details to you in various ways (depending on my story-telling ability), one upshot of this speech act would be to assert the following proposition:

1. My cat drank my coffee.

To assert something is to (as Sanford Goldberg explains) “state, report, contend, or claim that such-and-such is the case. It is the act through which we tell others things, by which we inform an audience of this-or-that, or in which we vouch for something.” Were you to later learn that my cat did not drink my coffee, that I didn’t have any coffee to drink this morning, or that I don’t live with a cat, you would be well within your rights to think that something has gone wrong with my speech (most basically: I lied to you by asserting something that I knew to be false).

The kinds of conventions that govern our speech are sometimes described by philosophers of language as “norms” or “rules,” with a notable example being the knowledge norm of assertion. When I assert Proposition #1 (“My cat drank my coffee”), you can rightfully think that I’m representing myself as knowing the content of (1) — and since I can only know (as opposed to merely believe) something that is true, I furthermore am representing (1) as true when I assert it. This, then, is one of the problems with telling a lie: I’m violating how language is supposed to work when I tell you something false; I’m breaking the rules governing how assertion functions.

Now to add a wrinkle: what if, after hearing my story about my cat and coffee, you go and repeat the story to someone else? Assuming that you don’t pretend that the story happened to you personally, but instead explain how (1) describes your friend (me) and that you’re simply relaying the story as you heard it, then what you’re asserting might be something like:

2. My friend’s cat drank his coffee.

If this other person you’re speaking to later learns that I was lying about (1), that means that you’re wrong about (2), but it doesn’t clearly mean that you’re lying about (2) — you thought you knew that (2) was true (because you foolishly trusted me and my story-telling skills). Whereas I violated one or more norms of assertion by lying to you about (1), it’s not clear that you’ve violated those norms by asserting (2).

It’s also not clear how any of these norms might function when it comes to social media interaction and other online forms of communication.

Suppose that instead of speaking (1) in a conversation, I write about it in a tweet. And suppose that instead of asserting (2) to someone else, you simply retweet my initial post. While at first glance it might seem right to say that the basic norms of assertion still apply here as before, we’ve already seen that fake news spreads precisely because internet users seemingly aren’t as constrained in their digital speech acts. Maybe you retweet my story because you find it amusing (but don’t think it’s true), or because you believe that cat-related stories should be promoted online; we could imagine all sorts of reasons why you might retransmit the (false) information of (1) without believing that it’s true.

Some might point out that offline speech can also manifest these non-epistemic elements of communication, but C. Thi Nguyen points out how the mechanics of social media intentionally encourage this kind of behavior. Insofar as a platform like Twitter gamifies our communication by rewarding users with attention and acclaim (via tools such as “likes” and follower counts), it promotes the spread of information online for many reasons beyond the basic knowledge norm of assertion. Similarly, Lucy McDonald argues that this gamification model (although good for maintaining a website’s user base) demonstrably harms the quality of the information shared throughout that platform; when people care more about attracting “likes” than communicating truth, digital speech can become severely epistemically problematic.

Now, add the concerns mentioned above (and by Licon) about fake news and it might be easy to see how those kinds of stories (and all of their partisan enticements) are particularly well-suited to spread through social media platforms (designed as they are to promote engagement, regardless of accuracy).

So, while Licon is right to be concerned about the potential over-policing of online speech by governments or corporations interested in shutting down fake news, it’s also the case that conversational norms (for both online and offline speech) are important features of how we communicate — the trick will be to find a way to manifest them consistently and to encourage others to do the same. (One promising element of a remedy — that does not approximate censorship — involves platforms like Twitter explicitly reminding or asking people to read articles before they share them; a growing body of evidence suggests that these kinds of “nudges” can help promote more epistemically desirable online norms of discourse in line with those well-developed in offline contexts.)

Ultimately, then, “fake news” seems like less of a rarely-shared digital phenomenon and more of a curiously noticeable indicator of a more wide-ranging issue for communication in the 21st century. Rather than being “dangerously overblown,” the problem of fake news is a proverbial canary in the coal mine for the epistemic ambiguities of online speech acts.

A.G. Holdier is a doctoral student in philosophy and public policy at the University of Arkansas interested in cultural capital, social and political epistemology, and the intersection of ethics with philosophy of language. More info available at www.agholdier.com