
Should We Protect Disinformation?

Every week, the Associated Press releases a “roundup of some of the most popular but completely untrue stories and visuals” circulating on social media. These stories range from recent claims that a Milwaukee election official lost her job because of her involvement in manipulating the 2020 election, to claims that France (a NATO member state) had deployed troops in Ukraine, the latter of which, if true, would likely have resulted in a serious escalation of the Russia-Ukraine war.

Attempts to combat misinformation and disinformation, like the AP’s fact-checking efforts, have become vital in an information environment increasingly polluted by information content problems, many of which find a home under the label “fake news.” To minimize the threat that these false claims pose to sociopolitical tensions and our information environment, it is worth looking at one of the more harmful forms of fake news, namely, disinformation. Accordingly, we ought to reassess why we protect deceptive speech of this sort under the First Amendment.

The philosophical literature concerning free speech has three broad rationales for its defense: truth-seeking, democracy-preserving, and personal autonomy-based arguments. Truth-seeking defenses are generally consequentialist in that they value free speech for its ability to provide us with truth. One philosopher commonly associated with the truth-seeking rationale is John Stuart Mill. He believed that wrong opinions eventually yield to argument and fact, and consequently, that they remain indispensable to dialectic. According to Mill, we ought to enshrine protections for freedom of speech because it plays a fundamental role in our ability to discover truth. Moreover, he argued that to suppress speech is to assume infallibility. Put simply, because we can never be certain about the truth of our perspective, we are never justified in suppressing the speech of others.

It goes without saying that Mill is correct that wrong opinions inevitably lead to arguments. However, his notion that wrong opinions eventually bring about facts is overly optimistic. One cause for concern is that Mill’s truth-seeking argument implicitly assumes that every interlocutor acts in good faith. This assumption overlooks the abundance of cases where individuals are not seeking truth but rather something malicious, as with those who create disinformation. In contrast to misinformation, whereby inaccuracies arise inadvertently, disinformation refers to incorrect information intended to deceive, frequently resulting in harm. Instead of contributing to the pursuit of truth, those who create and share disinformation intentionally seek to subvert our efforts to access and act on true beliefs.

Additionally, truth-seeking arguments tend to overlook instances where individuals have been isolated into informational communities. In these information silos, disinformation is rampant because opposing viewpoints have either been omitted or their credibility has been actively undermined, as is the case in epistemic bubbles and echo chambers, respectively. One can forgive Mill for not anticipating algorithmic filtering in social media platforms and its influence on social epistemic structures. Nevertheless, truth-seeking defenses of free speech do not provide a strong argument for treating disinformation as protected speech.

Like their truth-seeking counterparts, democracy-preserving rationales are also typically consequentialist. Rationales of this sort consider freedom of speech invaluable to preserving democracy. Many advocates of democracy-preserving arguments, like the philosopher and free speech advocate Alexander Meiklejohn, structure their argument around the belief that a well-informed electorate is a fundamental component of a democracy. For an electorate to be well-informed, they argue it is necessary to protect freedom of speech.

Although the uninhibited flow of information can serve an important role in cultivating a well-informed electorate, defenders of democracy-preserving rationales often disregard how a laissez-faire approach to speech can undermine the same institutions that freedom of speech is intended to uphold. Take, for example, disinformation’s role in misinforming an electorate and the catastrophic consequences this can have on a democracy. Even a cursory glance at the disinformation campaigns orchestrated by those that sought to overturn the 2020 United States presidential election and their culmination in the 2021 United States Capitol attack demonstrates that speech that deliberately misleads others can undermine democratic institutions.

Perhaps the most compelling rationale for treating disinformation as protected speech comes from personal autonomy arguments. Although these arguments vary depending on one’s notion of autonomy, they generally claim that free speech is a natural right and thus fundamental and inalienable.

Autonomy defenses of free speech are commonly divided into speaker- and listener-centered theories. Procedural speaker-centered theories posit that restricting someone’s speech based on viewpoint undermines that individual’s judgment regarding what they choose to express to others. For example, a restriction against endorsing a particular political candidate would undermine the autonomy of their supporters. Procedural listener-centered theories contend that individuals have sovereignty over what they believe in relation to what they see, hear, and read. For instance, if the state were to intervene before individuals could reach a judgment on their own, this would violate their right and duty to independently decide how to act based on the information they receive. This latter view is often attributed to the philosopher Thomas Scanlon’s earlier work. For Scanlon, when we form beliefs and actions based on information others provide, we depend on our autonomous judgment.

Like the other two rationales, autonomy-based defenses routinely disregard how free speech absolutism can undermine the same institutions or ideals they aim to protect. Philosophers like Susan Brison have argued that misleading or false information, like fake news, can compromise autonomy by undermining one’s ability to make informed decisions. Brison believes this is partly because careful judgment is not always the determining factor that leads to our formation of beliefs. Put otherwise, we don’t always process information rationally. As a result, she argues, restricting fake news and other forms of disinformation does not deprive individuals of information that would be of any value to their ability to form accurate beliefs. Furthermore, it isn’t evident how this notion of autonomy precludes governmental intervention in restricting intentionally deceptive speech.

One might object that stripping legal protections for disinformation could result in individuals using the legal system to censor their political opponents. Critics might also worry that restrictions could result in legal punishment for unintentional instances of fake news. These concerns rightly note that “fake news” has increasingly been misappropriated as a label for any content one finds disagreeable or seeks to discredit. Furthermore, concern about political censorship is not inapt given that much disagreement about what constitutes fake news correlates with political partisanship.

However, my focus here is on disinformation rather than the broader category of fake news. By defining fake news of the sort that we might consider restricting as synonymous with disinformation, we exclude instances of negligence (i.e., misinformation) or satire, where the false information is either unintentional or obvious enough that the comedic intent of the work can be recognized.

Another potential objection comes from those who are sympathetic to governmental distrust rationales. Governmental distrust rationales range from worries about inefficiencies in enforcing policies (e.g., wasteful spending) to broader concerns about the government turning tyrannical. In the case of restricting disinformation, proponents claim that providing the government with the power to punish individuals for their speech would be overreaching, that is, too restrictive toward speech that deserves protections (e.g., political dissent, subversive art). However, adherents of governmental distrust rationales neglect to recognize that we already permit the government to restrict certain types of deceptive speech like false advertising. We provide the government with these powers because we recognize the importance of protecting individuals from situations where they are deceived into acting on false beliefs.

If none of the leading rationales for the value of free speech provide a strong basis for protecting disinformation, maybe we ought to heed the suggestion of philosophers who ask us to reflect upon what we value most in our commitment to free speech and consider whether the speech we currently protect aligns with those values. Our options inevitably present unique tradeoffs. However, it’s worth weighing these tradeoffs against the kind of society we want: one that places full responsibility on individuals to recognize and decide which information is reliable, or one that plays a more active role in protecting its citizens from deceptive content.

Accountability for the Boy Who Cried “Misinformation”

It is no secret that we’re obsessed with information right now, particularly the spread of misinformation. The internet age allowed for the transmission of data on a scale never before seen, including “fake news” or, as the preferred nomenclature would have it, misinformation and disinformation. Now, AI can generate and propagate false information at will.

But our zeal in seeing misinformation stamped out has clouded our recognition of the ways the concept can be abused. Misleading information can be problematic, but our response to it can be equally troubling. We’ve contorted the way we discuss contentious issues and complicated our understanding of accountability.

Many of the most egregious examples of false information being spread are done specifically for the purposes of misleading others. For example, since the beginning of the Israel-Hamas war there has been an increase of disinformation about what is actually happening and what each side is doing. Keeping the record straight has been difficult, as different groups look to vilify others. It’s easy to confuse the public and sow social instability when they’re not able to see what is happening first-hand. This has been the case, for example, with the disinformation being spread regarding the Russia-Ukraine war.

But apart from geopolitical conflicts, we’ve seen false information being spread about climate change in order to protect financial interests. Political disinformation is also undermining democratic dialogue, and combined with cases of misinformation – cases where deception is not intentional – we can see how the spread of rumor and lies helped fuel the protestors who stormed Congress on January 6th. Cases like this remind us that false or misleading information can be spread deliberately by people seeking to accomplish a particular goal or merely by those who are passing on the latest internet debris they take to be valid.

Both disinformation and misinformation can undermine the ability of society to recognize and respond to social problems. During the pandemic, for instance, massive amounts of false information made it difficult to manage public health, leading to needless deaths. As philosopher John Dewey warned, “We have the physical tools of communication as never before. The thoughts and aspirations congruous with them are not communicated, and hence not common. Without such communication the public will remain shadowy and formless.” Ultimately, misinformation and disinformation prevent the formation of the common understanding necessary to ensure that shared problems can be collectively addressed.

There is, however, potential to abuse the concept of “fake news” for one’s personal ends. You can always accuse others of intentionally peddling fictions – calling out certain kinds of misinformation while conveniently ignoring others when they suit your interests.

Consider this example: A recent study showed that only 3% of Earth’s ecosystems are intact. This finding was very different from those of previous studies, because the study redefined what “intactness” meant. Without a consistent scientific definition of terms, different studies will produce incommensurate results. This means it would be easy to accuse someone of engaging in misinformation if a) they are using a different study than I am and b) I don’t make discrepancies like this clear.

While the crusade to stamp out misinformation seems honorable, it can quickly lead to chaos. It’s important to recognize that scientific findings will conflict when employing different conceptual frameworks and methodologies, and that scientific studies can often be unreliable. It can be tempting to claim that because at least one expert claims something or because one study reaches a certain conclusion, you have the Truth and to contradict it represents misinformation. It can be more tempting still to simply accuse others of misinformation without explanation and write off entire points of view.

The way we liberally label misinformation makes it easy to engage in censorship. Today, there are concerns expressed about the media’s initial coverage of the “lab leak theory,” which may have stifled discussion by immediately branding it as misinformation. This is significant because if there is a widespread public perception that certain ideas are unfairly being dismissed as misinformation, it will undermine public trust. As Dewey also warned, “Whatever obstructs and restricts publicity, limits and distorts public opinion and checks and distorts thinking on social affairs.”

These are dangerous temptations, and it means that we must hold ourselves and others accountable, both for the information we pass on as well as the accusations we throw around.

Media Criticism and Healthy Skepticism

photograph of bagged newspaper abandoned on road

In a recent article in The Conversation, Professor Michael Socolow argues that distrust in the media is, in fact, valuable for a democracy. To make his argument, he presents historical cases of politicians criticizing media outlets, along with examples of journalists and their publishers damaging their own credibility by knowingly putting out materials that were manipulated, fabricated, or outright false. Socolow’s point seems to be two-fold: that political figures encourage citizens to distrust the media, and that journalists may invite this by engaging in unscrupulous behavior. He then notes that only in authoritarian regimes would we see citizens unwilling to express skepticism towards the media. As a result, Socolow concludes, “measured skepticism can be healthy and media criticism comprises an essential component of media literacy – and a vibrant democracy.”

Socolow is correct but in an uninteresting way. Frankly, I am unsure who he is arguing against. Few, if any, think we ought to trust every story in every outlet. But, simultaneously, we should not think there is a monolithic, perpetually untrustworthy “media.” Socolow gestures towards this middle-ground when he mentions “measured skepticism” in his conclusion. Yet he fails to give any account of what this looks like.

Further, I worry that Socolow’s discussion implicitly sends the message that any criticism is legitimate and healthy. The article opens by noting that being “anti-media” has become part of the Republican political identity, and mentions media criticism by politicians like Donald Trump. But surely some of the criticisms are irresponsible. Socolow also discusses Lyndon Johnson challenging accurate reporting on the Vietnam war. He follows these clearly truth-indifferent and politically-motivated media criticisms with cases of fraudulent behavior by media outlets, such as Dateline rigging GM trucks with explosives during a story on potential safety hazards.

However, there is no differentiation between the bad-faith criticisms and criticisms driven by legitimate misdeeds by members of the media. Socolow treats both as explaining why people might distrust the media, without any explanation of whether we ought to accept both sorts of critique as legitimate.

I think it is worthwhile to spend time considering what measured or healthy skepticism looks like. I cannot give a full account here; that’s a philosophical project on its own. Nonetheless, I hope that some preliminary reflection will help us determine what does and does not contribute to democratic society.

Aristotle famously argued that the virtues – admirable character and intellectual traits, the possession of which makes for an ideal person – are a middle ground or mean between an extreme of excess and an extreme of deficiency. For instance, most would say bravery is a virtue. Suppose that, after initially hearing of Russia’s invasion of Ukraine, I, with no military training or combat experience, bought an AR-15 and booked a flight to Europe to travel to the front lines. We would not call this behavior brave. I am showing an excess of what bravery requires, being too willing to risk my safety to fight against injustice, which crosses the line into being reckless. Conversely, one might fall short of bravery through cowardice. Standing by as an old woman’s purse is stolen because I was afraid of what might happen to me shows a deficiency in my willingness to face danger. We might apply the same analysis to skepticism. One may be too skeptical, or not skeptical enough. The virtue of healthy skepticism is in the middle of these extremes.

We might start our discussion of healthy skepticism by asking: what does it mean to be skeptical? To be skeptical of something is to doubt it. But what of being skeptical in general? A skeptical person tends to doubt and scrutinize something before accepting it as true.

With Aristotle’s view in hand, we can then say that a healthy skeptic submits claims to an appropriate level of doubt before accepting them. And to determine what an “appropriate” level of doubt is, we may need to first consider what an inappropriate amount looks like.

In Meditations on First Philosophy, Rene Descartes engaged in a kind of skepticism some now call methodological doubt. Descartes attempted to systematically question each of his beliefs, and rejected all those which he was capable of doubting. Indeed, Descartes goes so far as to (temporarily) reject the belief that he had hands or even a body. This is because he could doubt these things – perhaps he was a sleeping spirit who was only dreaming that he had a body. In Descartes’ view, the fact that he could doubt a belief undermined his justification for it.

Philosophers, at least until Gettier, viewed knowledge as a justified true belief. Justified means that the belief has good support – there’s strong evidence behind it, like data or a logical proof. Belief is accepting something as true. Further, something is true when it obtains in reality.

Of course, Descartes’ skepticism seems extreme. The mere fact that something could possibly be wrong does not mean that belief in it is unjustified. As a result, his skepticism appears exaggerated. This would be like refusing to trust any story in any media outlet, simply because members of the media have at some point lied. It is true that any given story could be fabricated; but that does not mean we should treat all of them as fabricated.

What is the appropriate level of scrutiny to apply to stories in the news, if Cartesian doubt goes too far?

Ultimately, we have to consider which factors could cause or motivate a media outlet to run a false or inaccurate story (or even refuse to cover a particular story), and weigh those against considerations that support the veracity of the reporting.

When criticizing media in the U.S., we have to keep in mind that, with a few exceptions, media outlets are privately owned. Their goal is to attract viewers, listeners, and/or readers willing to pay a subscription or view an ad in order to make money. This may sometimes affect their coverage. They may be less inclined to report on the misdeeds of their advertisers. Further, to attract a specific demographic, the news outlet may adapt their coverage and tone to cater to a particular kind of audience. They may also pursue a “scoop” – breaking a unique story first might increase viewership in the future. (Hence why Dateline would be willing to explode GM trucks, despite this angering a potential advertiser.) Each of these factors may shape what outlets are willing to report and the slant of their coverage.

Further still, reports are often created by individuals or a small team. These individuals have private interests – regularly writing reports which drive audience engagement will advance their career. They may have personal connections to the subject matter which bias their reporting in some way. A healthy skeptic understands that the news is, ultimately, produced by people, not published out of the ether. We must keep in mind what both individuals and organizations will gain from our acceptance of a particular story before we place our trust in their reports.

So, what reasons would weigh in favor of trusting a report in the media?

I cannot give a comprehensive list here; instead, I can offer a few criteria. First, a consensus in reporting on an event provides further justification for accepting a story. The more outlets covering the same story, and deriving similar conclusions about it, the more justified we are in accepting it. Second, the extent to which reporting is consistent with other facts and accounts affects the justification of our believing it. The more easily all the information fits together, the more likely it all is to be true. An aberrant report which claims other commonly reported stories are false is itself likely to be false. Third, reports which are falsifiable are more trustworthy than those which are not. If a media outlet claims that something which could be proven wrong is true, then they are putting their credibility on the line if their report is false. This risk indicates a certain confidence in the judgment. Further, claims which are not falsifiable are typically not worthy of acceptance; the fact that you cannot prove with certainty there isn’t a secret shadow government does not show that we should believe that such a government does indeed exist.

A healthy skepticism towards media outlets, overall, involves a complex set of attitudes and behaviors. To be a healthy skeptic, one should regularly ask who benefits and how. Who stands to gain from presenting the particular story in this particular way? Whose interests are served by remaining silent about a particular event?

Further, a healthy skeptic remembers that all private media outlets are for-profit organizations that rely on advertising, and that even public media companies are often funded by governments. These interests shape their coverage. Someone who adopts an attitude of skepticism – an attitude indeed vital to a well-functioning democracy – does not view “the media” as a monolithic entity, nor do they view the same few outlets as unerringly trustworthy. Instead, they consider each story for what it is: an act of discretion – a specific report published for an intentional reason from a particular point of view. And perhaps most importantly, a healthy skeptic will submit criticisms of the media by public officials and authority figures to the same demanding level of scrutiny.

Transparency and Trust in News Media

When I teach critical thinking, I often suggest that students pay a good deal of attention to the news. When news stories develop, what details do journalists choose to focus on? What details are they ignoring? Why choose to focus on certain details and not others? When new details are added or the story is updated, how does this change the narrative? As someone who regularly monitors the news for ethical analysis, this is a phenomenon I see all the time. A news item gets updated, and suddenly the focus of the piece dramatically changes. This is something one can’t do in print, but online outlets can revise a story and change its narrative after it has been published.

Given the rapidly declining public trust in media, is it time for journalists and news groups to be more transparent and accountable about the narratives they choose to focus on (some may even say create) when they present a new story?

One morning last week I began to read an opinion article which is part of a series of articles written by former national NDP leader (and Prime Ministerial candidate) Tom Mulcair for CTV News. The article is about the ongoing national Conservative leadership race, and mostly focuses on one candidate, Pierre Poilievre, and his attempts to appeal to voters in contrast with some of his rivals. I didn’t finish the article that morning, but when I returned to it later that afternoon, I noticed it had a new title.

What was entitled “Tom Mulcair: The Conservative leadership debates will be crucial” that morning was now titled “Tom Mulcair: The Trump side to Poilievre.” This change was surprising, but if one looks carefully, they will note that the article was “updated” an hour after being first published.

Luckily, I had the original article in my browser, and I was able to make comparisons between the updated version and the original. Does the update contain some new information that would prompt the change in title? No. The two articles are nearly identical, except for a minor typo correction. This means that with no meaningful difference, the article’s title was changed from a more neutral one to a far more politically charged title. It is no secret that Donald Trump is not popular in Canada, and so connecting one politician’s rhetoric to Trump’s is going to send a far different message and tone than “leadership debates will be crucial.” The important question, then, is why this change was made.

Is this a case of a news organization attempting to create and sell a political narrative for political purposes? To be fair, the original article always contained a final section entitled “The Trump Side to Poilievre,” but most of the article doesn’t focus on this topic. The more prominent section in the article focuses on issues of housing affordability, so why wasn’t the article retitled “Tom Mulcair: Conservatives address affordability as a theme”?

Is this a case of merely using clickbait-y headlines in the hopes of driving more attention? The point is that we don’t know, and most people would never even be aware of this change, let alone why it was made.

A recent survey of Canadians found that 49% of Canadians believe that journalists are purposely trying to mislead people by saying false or exaggerated claims, 52% believe that news organizations are more concerned with supporting an ideology than informing the public, and 52% believe that the media is not doing well at being objective and non-partisan. Similar sentiments can be found about American media as well. Amusingly, the very article that reports on this Canadian poll seeks to answer who is to blame for this. Apparently, it’s because of the end of the fairness doctrine in the U.S. (something that would have no effect on Canada), the growth of punditry (who gives them airtime?), polarization, and Donald Trump. Missing, of course, is the media pointing the blame at themselves: the sloppy collection of facts, the lazy analyses, the narrow focus on sensational topics. Surely, the loss of confidence in the media has nothing to do with their own lack of accountability and transparency.

News organizations always present a perspective when they report. We do not care about literally everything that happens, so the choice to cover a story and what parts of the story to cover are always going to be a reflection of values.

This is true in news, just as it is true in science. As philosopher of science Philip Kitcher notes, “The aim of science is not to discover any old truth but to discover significant truths.” Indeed, many philosophers of science argue that the notion of objectivity in science as a case of “value freedom” is nonsense. They argue that science will always be infused with values in some form or another in order to derive what it takes to be significant truths, so the intention should be to be as transparent about these matters as possible.

Recently, in response to concerns about bias in AI, there have been calls within the field of machine learning to use data sheets for data sets that would document the motivation, collection process, and recommended uses of a data set. Again, the aim is not necessarily to eliminate all bias and values, but to be more transparent about them to increase accountability. Should the news media consider something similar? Imagine if CTV communicated, not only that there had been an update to their story, but what was included in that update and why, not unlike Wikipedia. This would increase the transparency of the media and make them more accountable for how they choose to package and communicate news.
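To make the datasheet idea concrete, here is a minimal sketch of what such a record might look like if a newsroom adopted it for a story, with revisions logged alongside their reasons. The field names and example values below are illustrative assumptions for this article, not the actual schema proposed in the machine-learning literature or used by any outlet.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class StoryDatasheet:
    # Hypothetical fields, loosely mirroring "datasheets for datasets":
    # motivation, collection process, recommended uses, plus a revision log.
    headline: str
    motivation: str                 # why the editor thought this was newsworthy
    collection_process: str         # how the reporting was sourced
    known_limitations: list[str] = field(default_factory=list)
    revision_log: list[str] = field(default_factory=list)  # dated notes: what changed and why

sheet = StoryDatasheet(
    headline="The Conservative leadership debates will be crucial",
    motivation="Ongoing leadership race; voter interest in candidate contrasts.",
    collection_process="Opinion column drawing on public debate transcripts.",
    known_limitations=["Single columnist's perspective"],
)

# An update would append to the log rather than silently replacing the story.
sheet.revision_log.append(
    "2022-05-12: headline changed to 'The Trump side to Poilievre'; "
    "reason: editorial emphasis shifted to final section."
)

print(asdict(sheet)["revision_log"][0][:10])  # date stamp of the first logged change
```

The design point is simply that every edit carries a stated reason, making the editorial decision inspectable after the fact, much like a Wikipedia edit summary.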

A 2019 report by the Knight Foundation found that transparency is a key factor in trust in media. They note that this should not only include things like notifications of conflicts of interest, but also “additional reporting material made available to readers,” which could take the form of editorial disclosure, or a story-behind-the-story, that would explain why an editor thought a story was newsworthy. Organizational scholars Andrew Schnackenberg and Edward Tomlinson suggest that greater transparency can help with public trust in news by improving perceptions of competence, integrity, and benevolence.

This also suggests why the news media’s attempts to improve their image have had limited success. Much of the debate about news media, particularly when framed by the news media themselves, focuses on the obligation to “fact check.” The CBC, for example, brags that its efforts to “rebuild trust in journalism” have focused on confirming the authenticity of videos against deep fakes, maintaining a corrections and clarifications page (which contains very vague accounts of such corrections), and fighting disinformation. They say that pundits, but not reporters, may opine on the news.

But what they conveniently leave out is that the degradation in trust in news is not just about getting the facts right, it’s about how facts are being organized, packaged, and delivered.

Why include these pundits? Why cover this story? Why cover it in this way? If the media truly want to improve public trust, they will need to begin honestly taking responsibility for their own failure to be transparent about editorial decisions, take steps to be held accountable, and focus on how they can be more transparent in their coverage.

On Anxiety and Activism

"The End Is Nigh" poster featuring a COVID spore and gasmask

The Plough Quarterly recently released a new essay collection called Breaking Ground: Charting Our Future in a Pandemic Year. In his contribution, “Be Not Afraid,” Joseph Keegin details some of his memories of his father’s final days, and the looming role that “outrage media” played in their interactions. He writes,

My dad had neither a firearm to his name, nor a college degree. What he did have, however, was a deep, foundation-rattling anxiety about the world ubiquitous among boomers that made him—and countless others like him—easily exploitable by media conglomerates whose business model relies on sowing hysteria and reaping the reward of advertising revenue.

Keegin's essay is aimed at a predominantly religious audience. He ends it by arguing that Christians bear a specifically religious obligation to fight off the fear and anxiety that make humans easy prey to outrage media and other forms of news-centered absorption. He argues this partly on Christian theological grounds: namely, that God's historical communications with humans are almost always preceded by the command to "be not afraid," since freedom from anxiety is necessary for recognizing and following truth.

But if Keegin is right about the effects of this "deep, foundation-rattling anxiety" on our epistemic agency, then it is not unreasonable to wonder whether everyone has, and should recognize, some kind of obligation to avoid such anxiety, and to avoid causing it in others. And he does seem to be right: numerous studies have found a strong correlation between feeling dangerously out of control and the tendency to believe conspiracy theories, especially COVID-19 conspiracies. The more frightening media we consume, the more anxious we become; the more anxious we become, the more media we consume. And as this cycle repeats, the media we consume tends to become more frightening, and less veridical.

Of course, nobody wants to be the proverbial "sucker," lining the pocketbooks of every website owner who knows how to write a sensational headline. We are all aware of the technological tactics used to manipulate our personal insecurities for the sake of selling products, and, for the most part, I would imagine we strive to avoid this kind of vulnerability. But there is a tension here. While avoiding this kind of epistemically damaging anxiety sounds important in the abstract, the idea does not line up neatly with the ways we often talk about, and seek to advance, social change.

Each era has been beset by its own deep anxieties: the Great Depression, the Red Scare, the Satanic Panic, and election fears (on both sides of the aisle) are all relatively recent social anxieties that led to identifiable epistemic vulnerabilities. Conspiracies about Russian spies, gripping terror over nuclear war, and unending grassroots ballot-recount movements are just a few signs of the epistemic vulnerability that resulted from these anxieties. The solution may at first seem obvious: be clear-headed and resist getting caught up in baseless, media-driven fear-mongering. But, importantly, not all of these anxieties are baseless or the result of purposeless fear-mongering.

People who grew up during the Depression often worked hard to instill an attitude of rationing in their own children, prompted by concern for their kids' well-being; if another economic downturn hit, they wanted their offspring to be prepared. Likewise, the very real threat of nuclear war loomed large from the 1950s through the 1980s, and many people understandably feared that the Cold War would soon turn hot. Even elementary schools held atom bomb drills, for whatever protection they might offer students in the event of an attack. Journalists no doubt took advantage of this anxiety to increase readership, but concerned citizens and social activists also tried to drum up worry, because worry motivates. If we think something merits concern, we often try to make others feel that same concern, both for their own sake and for the sake of those they may influence. But if such deep-seated cultural anxieties make it easier for others to take advantage of us through outrage media, conspiracy theories, and other anxiety-confirming narratives, is such an approach to social activism worth the future consequences?

To take a more contemporary example, let’s look at the issue of climate change. According to a recent study, out of 10,000 “young people” (between the ages of 16 and 25) surveyed, almost 60% claimed to be “very” or “extremely” worried about climate change. 45% of respondents said their feelings about climate change affected their daily life and functioning in negative ways. If these findings are representative, surely this counts as the Generation Z version of the kind of “foundation-rattling anxiety” that Keegin observed in his late father.

There is little doubt where this anxiety comes from: news stories and articles routinely point out record-breaking temperatures, the number of species going extinct year to year, and the climate-based causes of extreme weather patterns. Pop culture has embraced the theme, with movies like "The Day After Tomorrow," "Snowpiercer," and "Reminiscence," among many others, painting a bleak picture of what human life might look like once we pass the point of no return. Unlike at any other time in U.S. history, politicians are proposing radical, lifestyle-altering policies in order to combat the growing climate disaster. If such anxieties leave people epistemically vulnerable to the kinds of outrage media and conspiracy-theory rabbit holes that Keegin worries about, are these fear-inducing tactics to combat climate change worth it?

On the surface, it seems very plausible that the answer here is “yes!” After all, if the planet is not habitable for human life-forms, it makes very little difference whether or not the humans that would have inhabited the planet would have been more prone to being consumed by the mid-day news. If inducing public anxiety over the climate crisis (or any other high stakes social challenge or danger) is effective, then likely the good would outweigh the bad. And surely genuine fear does cause such behavioral effects. Right?

But again, the data is unclear. While people are more likely to change their behavior or engage in activism when they believe an issue is a genuine concern, too much concern, anxiety, or dread seems to produce the opposite (sometimes tragic) effect. For example, while public belief in, and concern over, climate change is higher than ever, no major climate change legislation has been adopted in decades, and more and more elected officials deny or downplay the issue. Additionally, the latest surge of the Omicron variant of COVID-19 has renewed the social phenomenon of pandemic fatigue: the condition of giving up on health and safety measures out of exhaustion and hopelessness regarding their efficacy.

In an essay discussing the pandemic, climate change, and the threat of the end of humanity, the philosopher Agnes Callard analyzes this phenomenon as follows:

Just as the thought that other people might be about to stockpile food leads to food shortages, so too the prospect of a depressed, disaffected and de-energized distant future deprives that future of its capacity to give meaning to the less distant future, and so on, in a kind of reverse-snowball effect, until we arrive at a depressed, disaffected and de-energized present.

So, if cultural anxieties increase epistemic vulnerability, in addition to, very plausibly, leading to a kind of hopelessness-induced apathy toward the urgent issues, should we abandon the culture of panic? Should we learn how to rally interest for social change while simultaneously urging others to “be not afraid”? It seems so. But doing this well will involve a significant shift from our current strategies and an openness to adopting entirely new ones. What might these new strategies look like? I have no idea.

On Journalistic Malpractice

photograph of TV camera in news studio

In 2005, then-CNN anchor Lou Dobbs reported that the U.S. had suffered over 7,000 cases of leprosy in the previous three years and attributed this to an “invasion of illegal immigrants.” Actually, the U.S. had seen roughly that many leprosy cases over the previous three decades, but Dobbs stubbornly refused to issue a retraction, instead insisting that “If we reported it, it’s a fact.”

In 2020, then-Fox-News anchor Lou Dobbs reported that the results of the election were “eerily reminiscent of what happened with Smartmatic software electronically changing votes in the 2013 presidential election in Venezuela.” Dobbs repeatedly raised questions and amplified conspiracy theories about Donald Trump’s loss, granting guests like Rudy Giuliani considerable airtime to spread misinformation about electoral security.

It’s generally uncontroversial to think that “fake news” is epistemically problematic (insofar as it spreads misinformation) and that it can have serious political consequences (when it deceives citizens and provokes them to act irrationally). Preventing these issues is complicated: any direct governmental regulation of journalists or news agencies, for example, threatens to run afoul of the First Amendment (a fact which has prompted some pundits to suggest rethinking what “free speech” should look like in an “age of disinformation”). To some, technology offers a potential solution as cataloging systems powered by artificial intelligence aim to automate fact-checking practices; to others, such hopes are ill-founded dreams that substitute imaginary technology for individuals’ personal responsibility to develop skills in media literacy.

But would any of these approaches have been able to prevent Lou Dobbs from spreading misinformation in either of the cases mentioned above? Even if a computer program would have tagged the 2005 leprosy story as “inaccurate,” users skeptical of that program itself could easily ignore its recommendations and continue to share the story. Even if some subset of users choose to think critically about Lou Dobbs’ 2020 election claims, those who don’t will continue to spread his conjectures. Forcibly removing Dobbs from the air might seem temporarily effective at stemming the flow of misinformation, but such a move — in addition to being plainly unconstitutional — would likely cause a counter-productive scandal that would only end up granting him even more attention.

Instead, rather than looking externally for ways to stem the tide of fake news and its problems, we might consider solutions internal to the journalistic profession: that is, if we treat journalism as a practice akin to medicine or law, with professional norms dictating how its practitioners ought to behave (even apart from any regulation by the government or society at large), then we can criticize "bad journalists" simply for being bad journalists. Questions about the epistemic or political consequences of bad journalism are important, but they are secondary to the prior question of professional standards and practice.

This is hardly a controversial or innovative claim: although there is no single professional oath that journalists must swear (along the lines of those taken by physicians or lawyers), it is common for journalism schools and employers to promote codes of “journalistic ethics” describing standards for the profession. For example, the Code of Ethics for the Society of Professional Journalists is centered on the principles of accuracy, fairness, harm-minimization, independence, and accountability; the Journalism Code of Practice published by the Fourth Estate (a non-profit journalism watchdog group) is founded on the following three pillars:

  1. reporting the truth,
  2. ensuring transparency, and
  3. serving the community.

So, consider Dobbs' actions in light of those three points: insofar as his 2005 leprosy story was false, it violated pillar one; because his 2020 election story (repeatedly) sowed dissension among the American public, it failed to abide by pillar three (notably, because it was filled with misinformation, as pointedly demonstrated by the defamation lawsuit Dobbs is currently facing). Even before we consider the socio-epistemic or political consequences of Dobbs' reporting, these considerations allow us to criticize him simply as a reporter who failed to live up to the standards of his profession.

Philosophically, such an approach highlights the difference between accounts aimed at cultivating a virtuous disposition and those that take more calculative approaches to moral theorizing (like consequentialism or deontology). Whereas the latter are concerned with a person's actions (insofar as those actions produce consequences or align with the moral law), the former simply focuses on a person's overall character. Rather than quibbling over whether or not a particular choice is good or bad (and then, perhaps, wondering how to police its expression or mitigate its effects), a virtue theorist will look to how a choice reflects on the holistic picture of an agent's personality and identity to make ethical judgments about them as a person. As the famous virtue theorist Aristotle said, "one swallow does not make a summer, nor does one day; and so too one day, or a short time, does not make a man blessed and happy."

On this view, being "blessed and happy" as a journalist might seem difficult — that is to say, being a good journalist is not an easy thing to be. But Aristotle would likely point out that, whether we like the sound of it or not, this actually seems sensible: it is easy to try to accomplish many things, but actually living a life of virtue — actually being a good person — is a relatively rare feat (hence his voluminous writings on trying to make sense of what virtue is and how to cultivate it in our lives). Professionally speaking, this view underlines the gravity of the journalistic profession: just as being a doctor or a lawyer amounts to shouldering a significant responsibility (for preserving lives and justice, respectively), to become a reporter is to take on the burden of preserving the truth as it spreads throughout our communities. Failing in this responsibility is more significant than failing to perform some other jobs: it amounts to a form of malpractice with serious ethical ramifications, not only for those who depend on the practitioner, but for the practitioner themselves as well.

On “Dog-Wagging” News: Why What “Lots of People” Say Isn’t Newsworthy

photograph of crowd of paparazzi cameras at event

On June 17th, Lee Sanderlin walked into a Waffle House in Jackson, Mississippi; fifteen hours later, he walked out an internet sensation. As a penalty for losing in his fantasy football league, Sanderlin’s friends expected him to spend a full day inside the 24-hour breakfast restaurant (with some available opportunities for reducing his sentence by eating waffles). When he decided to live-tweet his Waffle House experience, Sanderlin could never have expected that his thread would go viral, eventually garnering hundreds of thousands of Twitter interactions and news coverage by outlets like People, ESPN, and The New York Times.

For the last half-decade or so, the term ‘fake news’ has persistently gained traction (even being voted “word of the year” in 2017). While people disagree about the best possible definition of the term (should ‘fake news’ only refer to news stories intentionally designed to trick people or could it countenance any kind of false news story or maybe something else?), it seems clear that a story about what Sanderlin did in the restaurant is not fake: it genuinely happened, so reporting about it is not spreading misinformation.

But that does not mean that such reporting is spreading newsworthy information.

While a “puff piece” or “human interest story” about Sanderlin in the Waffle House might be entertaining (and, by extension, might convince internet users to click a link to read about it), its overall value as a news story seems suspect. (The phenomenon of clickbait, or news stories marketed with intentionally noticeable headlines that trade accuracy for spectacle, is a similar problem.) Put differently, the epistemic value of the information contained in this news story seems problematic: again, not because it is false, but rather because it is (something like) pointless or irrelevant to the vast majority of the people reading about it.

Let’s say that some piece of information is newsworthy if its content is either in the public interest or is otherwise sufficiently relevant for public distribution (and that it is part of the practice of good journalism to determine what qualifies as fitting this description). When the president of the United States issues a statement about national policy or when a deadly disease is threatening to infect millions, then this information will almost certainly be newsworthy; it is less clear that, say, the president’s snack order or an actor’s political preferences will qualify. In general, just as we expect their content to be accurate, we expect that stories deemed worthy to be disseminated through our formal “news” networks carry information that news audiences (or at least significant subsets thereof) should care about: in short, the difference between a news site and a gossip blog is a substantive one.

(To be clear: this is not to say that movie releases, scores of sports games, or other kinds of entertainment news are not newsworthy: they could easily fulfill either the “public interest” or the “relevance” conditions of the ‘newsworthy’ definition in the previous paragraph.)

So, why should we care about non-newsworthy stories spreading? That is to say, what’s so bad about “the paper of record” telling the world about Sanderlin’s night in a Mississippi Waffle House?

Two problems actually come to mind: firstly, such stories threaten to undermine the general credibility of the institution spreading that information. If I know that a certain website gives equal attention to stories about COVID-19 vaccination rates, announcements of Supreme Court decisions, Major League baseball game scores, and crackpots raging about how the Earth is flat, then I will (rightly, I think) have less confidence that the outlet is capable of reporting accurate information in general (given its decision to spread demonstrably false conspiracy theories). In a similar way, if an outlet gives attention to non-newsworthy stories, then it can water down the perceived import of the other genuinely newsworthy stories that it typically shares. (Note that this problem is compounded further when amusing non-newsworthy stories spread more quickly on the basis of their entertaining quirks, thereby altering the average public profile of the institution spreading them.)

But, secondly, non-newsworthy stories pose a different kind of threat to the epistemic environment than do fake news stories: whereas the latter can infect the community with false propositions, the former can infect the community with bullshit (in a technical sense of the term). According to philosopher Harry Frankfurt, ‘bullshit’ is a tricky kind of speech act: if Moe knows that a statement is false when he asserts it, then Moe is lying; if Moe doesn’t know or care whether a statement is true or false when he asserts it, then Moe is bullshitting. Paradigmatically, Frankfurt says that bullshitters are looking to provoke a particular emotional response from their audience, rather than to communicate any particular information (as when a politician uses rhetoric to affectively appeal to a crowd, rather than to, say, inform them of their own policy positions). Ultimately, Frankfurt argues that bullshit is a greater threat to truth than lies are because it changes what people expect to get out of a conversation: even if a particular piece of bullshit turns out to be true, that doesn’t mean that the person who said it wasn’t still bullshitting in the first place.

So, consider what happened when an attendee at a campaign rally for Donald Trump in 2015 made a series of false assertions about (among other things) Barack Obama’s supposedly-foreign citizenship and the alleged presence of camps operating inside the United States to train Muslims to kill people: then-candidate Trump responded by saying:

“We’re going to be looking at a lot of different things. You know, a lot of people are saying that, and a lot of people are saying that bad things are happening out there. We’re going to look at that and plenty of other things.”

Although Trump did not clearly affirm the conspiracy theorist's racist and Islamophobic assertions, he nevertheless licensed them by saying that "a lot of people are saying" what the man said. Notice also that Trump's assertion might or might not be true (it's hard to tell how we would actually assess the accuracy of a statement like "a lot of people are saying that"), but, either way, the response seems intended more to provoke a certain affective response in Trump's audience than to communicate any particular information. In short, it was an example of Frankfurtian bullshit.

Conspiracy theories about Muslim "training camps" or Obama's un-American birthplace are not newsworthy because, among other things, they are false. But a story like "Donald Trump says that 'a lot of people are saying' something about training camps" is technically true (and is, therefore, not "fake news") because Trump actually did say such a thing. Nevertheless, such a story is pointless or irrelevant; it is not newsworthy, and there is no good reason to spread it throughout the epistemic community. In the worst cases, non-newsworthy stories can launder falsehoods by wrapping them in the apparent neutrality of journalistic reporting.

For simplicity’s sake, we might call this kind of not-newsworthy story an example of “dog-wagging news” because, just as “the tail wagging the dog” evokes an image where the “small or unimportant entity (the tail) controls a bigger, more important one (the dog),” a dog-wagging news story is one where something about the story other than its newsworthiness leads to its propagation throughout the epistemic environment.

In harmless cases, dog-wagging stories are amusing tales about Waffle Houses and fantasy football losses; in more problematic cases, dog-wagging stories help to perpetuate conspiracy theories and worse.

“Fake News” Is Not Dangerously Overblown

image of glitched "FAKE NEWS" title accompanied by bits of computer code

In a recent article here at The Prindle Post, Jimmy Alfonso Licon argues that the hype surrounding the problem of “fake news” might be less serious than people often suggest. By pointing to several recent studies, Licon highlights that concerns about social standing actually prevent a surprisingly large percentage of people from sharing fake news stories on social media; as he says, “people have strong incentives to avoid sharing fake news when their reputations are at stake.” Instead, it looks like many folks who share fake news do so because of pre-existing partisan biases (not necessarily because of their gullibility about or ignorance of the facts). If this is true, then calls to regulate speech online (or elsewhere) in an attempt to mitigate the spread of fake news might end up doing more harm than good (insofar as they unduly censor otherwise free speech).

To be clear: despite the "clickbaity" title of this present article, my goal here is not to argue with Licon's main point; the empirical evidence is indeed consistently suggesting that fake news spreads online not simply because individual users are always fooled into believing a fake story's content, but rather because the fake story:

  1. aligns with the sharer's pre-existing partisan commitments,
  2. entertains them or provokes a strong emotional reaction, or
  3. promotes a cause or message they want amplified, regardless of whether they actually believe its content.

On some level, this is frustratingly difficult to test: given the prevalence of expressive responding and other artifacts that can contaminate survey data, it is unclear how to interpret an affirmation of, say, the (demonstrably false) “immense crowd size” at Donald Trump’s presidential inauguration — does the subject genuinely believe that the pictures show a massive crowd or are they simply reporting this to the researcher as an expression of partisan allegiance? Moreover, a non-trivial amount of fake news (and, for that matter, real news) is spread by users who only read a story’s headline without clicking through to read the story itself. All of this, combined with additional concerns about the propagandistic politicization of the term ‘fake news,’ as when politicians invoke the concept to avoid responding to negative accusations against them, has led some researchers to argue that the “sloppy, arbitrary” nature of the term’s definition renders it effectively useless for careful analyses.

However, whereas Licon is concerned about potentially unwarranted threats to free speech online, I am concerned about what the reality of “fake news” tells us about the nature of online speech as a whole.

Suppose that we are having lunch and, during the natural flow of our conversation, I tell you a story about how my cat drank out of my coffee cup this morning; although I could communicate the details to you in various ways (depending on my story-telling ability), one upshot of this speech act would be to assert the following proposition:

1. My cat drank my coffee.

To assert something is to (as explained by Sanford Goldberg) "state, report, contend, or claim that such-and-such is the case. It is the act through which we tell others things, by which we inform an audience of this-or-that, or in which we vouch for something." Were you to later learn that my cat did not drink my coffee, that I didn't have any coffee to drink this morning, or that I don't live with a cat, you would be well within your rights to think that something has gone wrong with my speech (most basically: I lied to you by asserting something that I knew to be false).

The kinds of conventions that govern our speech are sometimes described by philosophers of language as “norms” or “rules,” with a notable example being the knowledge norm of assertion. When I assert Proposition #1 (“My cat drank my coffee”), you can rightfully think that I’m representing myself as knowing the content of (1) — and since I can only know (as opposed to merely believe) something that is true, I furthermore am representing (1) as true when I assert it. This, then, is one of the problems with telling a lie: I’m violating how language is supposed to work when I tell you something false; I’m breaking the rules governing how assertion functions.

Now to add a wrinkle: what if, after hearing my story about my cat and coffee, you go and repeat the story to someone else? Assuming that you don’t pretend like the story happened to you personally, but you instead explain how (1) describes your friend (me) and you’re simply relaying the story as you heard it, then what you’re asserting might be something like:

2. My friend’s cat drank his coffee.

If this other person you’re speaking to later learns that I was lying about (1), that means that you’re wrong about (2), but it doesn’t clearly mean that you’re lying about (2) — you thought you knew that (2) was true (because you foolishly trusted me and my story-telling skills). Whereas I violated one or more norms of assertion by lying to you about (1), it’s not clear that you’ve violated those norms by asserting (2).

It’s also not clear how any of these norms might function when it comes to social media interaction and other online forms of communication.

Suppose that instead of speaking (1) in a conversation, I write about it in a tweet. And suppose that instead of asserting (2) to someone else, you simply retweet my initial post. While at first glance it might seem right to say that the basic norms of assertion still apply as before here, we’ve already seen (with those bullet points in the second paragraph of this article) that fake news spreads precisely because internet users seemingly aren’t as constrained in their digital speech acts. Maybe you retweet my story because you find it amusing (but don’t think it’s true) or because you believe that cat-related stories should be promoted online — we could imagine all sorts of possible reasons why you might retransmit the (false) information of (1) without believing that it’s true.

Some might point out that offline communication can often manifest some of these non-epistemic elements of communication, but C. Thi Nguyen points out how the mechanics of social media intentionally encourage this kind of behavior. Insofar as a platform like Twitter gamifies our communication by rewarding users with attention and acclaim (via tools such as “likes” and “follower counts”), it promotes information spreading online for many reasons beyond the basic knowledge norm of assertion. Similarly, Lucy McDonald argues that this gamification model (although good for maintaining a website’s user base) demonstrably harms the quality of the information shared throughout that platform; when people care more about attracting “likes” than communicating truth, digital speech can become severely epistemically problematic.

Now, add the concerns mentioned above (and by Licon) about fake news and it might be easy to see how those kinds of stories (and all of their partisan enticements) are particularly well-suited to spread through social media platforms (designed as they are to promote engagement, regardless of accuracy).

So, while Licon is right to be concerned about the potential over-policing of online speech by governments or corporations interested in shutting down fake news, it’s also the case that conversational norms (for both online and offline speech) are important features of how we communicate — the trick will be to find a way to manifest them consistently and to encourage others to do the same. (One promising element of a remedy — that does not approximate censorship — involves platforms like Twitter explicitly reminding or asking people to read articles before they share them; a growing body of evidence suggests that these kinds of “nudges” can help promote more epistemically desirable online norms of discourse in line with those well-developed in offline contexts.)

Ultimately, then, “fake news” seems like less of a rarely-shared digital phenomenon and more of a curiously noticeable indicator of a more wide-ranging issue for communication in the 21st century. Rather than being “dangerously overblown,” the problem of fake news is a proverbial canary in the coal mine for the epistemic ambiguities of online speech acts.

Is Fake News Dangerously Overblown?

photograph of smartphone displaying 'Fake News' story

“Censorship laws are blunt instruments, not sharp scalpels. Once enacted, they are easily misapplied to merely unpopular or only marginally dangerous speech.”

—Alan Dershowitz, Finding, Framing, and Hanging Jefferson: A Lost Letter, a Remarkable Discovery, and Freedom of Speech in an Age of Terrorism

Fake news, false or misleading information presented as though it’s true, has been blamed for distorting national politics in the United States and undercutting the faith that citizens place in elites and institutions — so much so that Google has recently stepped in to provide a tool to help users avoid being hoodwinked. It looks plausible, at first glance, that fake news is a widespread problem; if people can be fooled into thinking misleading or false information is genuine news, their attitudes and beliefs about politics and policy can be influenced for the worse. In a functioning democracy, we need citizens, and especially voters, to be well-informed — we cannot have that if fake news is commonplace.

A recent study found political polarization, whether left, right, or center, to be the primary psychological motivation behind people sharing fake news. It seems we aren't driven by ignorance, but by vitriol for our political opponents. It isn't a matter of folks being fooled by political fictions because they lack knowledge of the salient subject matter, say, but rather that people are most inclined to share fake news when it targets political adversaries whom they hate. And this aligns with what we already know about the increasing polarization in American politics: it is becoming increasingly difficult for people in different political parties, notably Republicans and Democrats, to agree on issues that used to be matters of bipartisan consensus (e.g., a progressive tax structure).

In the face of the (alleged) increasing threat from fake news, some have argued we need stronger intervention on the part of tech companies that is just shy of censorship — that is, fake news is parasitic on free speech, and can perhaps only be controlled by a concerted legal effort, along with help from big technology companies like Facebook and Google.

But perhaps the claim that fake news is widespread is dangerously overblown. Why? Because the sharing of fake news is less common than we are often led to believe. A study from last year found that

“[although] fake news can be made to be cognitively appealing, and congruent with anyone’s political stance, it is only shared by a small minority of social media users, and by specialized media outlets. We suggest that so few sources share fake news because sharing fake news hurts one’s reputation … and that it does so in a way that cannot be easily mended by sharing real news: not only did trust in sources that had provided one fake news story against a background of real news dropped, but this drop was larger than the increase in trust yielded by sharing one real news story against a background of fake news stories.”

There are strong reputational incentives against sharing fake news — people don’t want to look bad to others. (Of course, the researchers also acknowledge that the same incentives don’t apply to anonymous individuals who share fake news.) Humans are a cooperative species that relies on help from others for survival — and so it matters how others view us. People won’t want to cooperate with someone who has a bad reputation, so most of us track how we are seen by others. We want to know that those we cooperate with have a good reputation; we want them to be sufficiently trustworthy and reliable, since we rely on each other for basic goods. As other researchers explain,

“[Humans] depend for their survival and welfare on frequent and varied cooperation with others. In the short run, it would often be advantageous to cheat, that is, to take the benefits of cooperation without paying the costs. Cheating however may seriously compromise one’s reputation and one’s chances of being able to benefit from future cooperation. In the long run, cooperators who can be relied upon to act in a mutually beneficial manner are likely to do better.”

Of course, people sometimes do things which aren’t in their best interests — taking a hit to one’s reputation is no different. The point, though, is that people have strong incentives to avoid sharing fake news when their reputations are at stake. So we have at least some evidence that fake news is overblown; people aren’t as likely to share fake news, for reputational reasons, as it may appear given the amount of attention the phenomenon has garnered in the public square. This doesn’t mean, of course, that there isn’t a lot of fake news in circulation on places like, say, social media — there could be substantial fake news shared, but only by a few actors. Moreover, the term ‘fake news’ is often used in a sloppy, arbitrary way — not everything called ‘fake news’ is fake news. (Former President Trump, for example, would often call a story ‘fake news’ if it made him look bad, even if the story was accurate.)

Overstating the problem fake news represents is also troubling as it encourages people to police others’ speech in problematic ways. Actively discouraging people from sharing ‘fake news’ (or worse, silencing them) can be a dangerous road to traverse. The worry is that just as former President Trump did to journalists and critics, folks will weaponize the label ‘fake news’ and use it against their political enemies. While targeting those who supposedly share fake news may prevent misinformation, often it will be used to suppress folks who have unorthodox or unpopular views. As the journalist Chris Hedges observed,

“In late April and early May the World Socialist Web Site, which identifies itself as a Trotskyite group that focuses on the crimes of capitalism, the plight of the working class and imperialism, began to see a steep decline in readership. The decline persisted into June. Search traffic to the World Socialist Web Site has been reduced by 75 percent overall. And the site is not alone. … The reductions coincided with the introduction of algorithms imposed by Google to fight ‘fake news.’ Google said the algorithms are designed to elevate ‘more authoritative content’ and marginalize ‘blatantly misleading, low quality, offensive or downright false information.’ It soon became apparent, however, that in the name of combating ‘fake news,’ Google, Facebook, YouTube and Twitter are censoring left-wing, progressive and anti-war sites.”

Perhaps the phenomenon of fake news really is as bad as some people say — though the evidence suggests that isn’t the case. In any event, we shouldn’t conclude from this that fake news isn’t a problem at all; we may need some form of policing that, while respecting freedom of expression, can empower voters and citizens with tools to allow them to avoid, or at least identify, fake news. But we can acknowledge both the need for fake news oversight and the need to significantly curtail that power.

What’s Wrong with State Media?

Graffiti image of three happy individuals under communist flag with Vietnam skyline behind

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


In a statement to the Washington Post earlier this month, Democratic National Committee Chairman Tom Perez announced that Fox News will not be allowed to host a debate for the 2020 Democratic Party primary election cycle. The DNC’s decision was based in part on Jane Mayer’s New Yorker article accusing Fox News of acting as a propaganda machine for President Trump’s administration. Mayer’s article points to frequent cross-hiring between network management and Trump’s campaign and White House administration, as well as the president’s consistent attention to shows like “Fox and Friends” to demonstrate the close relationship between the administration and the media outlet. The article even includes a quote from professor Nicole Hemmer, who calls Fox News “the closest we’ve come to having state TV.” Implicit in Hemmer’s statement and Mayer’s article is the premise that a state-run media network would be a bad thing for the United States. Leaving aside the debate over whether Fox News or any other news organization is disseminating propaganda, it is worth delving into why (or perhaps even whether) we should be worried about a state-run media in the first place.

A state-run news organization would seem to run counter to the values which inspired the First Amendment of the United States Constitution. The American Civil Liberties Union specifically highlights the role of the press as a critic and watchdog of the government in service of the people. Investigative journalism is a necessary component of democratic society. The research undertaken by reporters into not only the government, but also businesses and wider societal trends helps the general public understand the world and current events. It seems likely that an organization funded, overseen, or otherwise closely involved with the government would experience a conflict of interest precluding the total fulfillment of this watchdog duty. Certainly, a country with only state-run media would be missing the opposition viewpoint critical to the democratic process. Without the full breadth of information, the general public would be unable to make informed decisions about the government, therefore depriving the people of the agency of self-governance that defines democracy.

The United States can look to other countries for models of what state-run media might look like. Russia, for instance, is widely regarded as operating state-controlled media: two of the biggest television channels, Channel One Russia and Russia-1, are controlled by the federal government, and the English-language network RT is also funded by the government. These media outlets tend to support the policies of the government, and some have accused these organizations of acting as propaganda machines for the Kremlin. In particular, RT has garnered attention because it is directed to a more global audience; while critics say it is designed to generate international sympathy for misguided or dangerous policies of Vladimir Putin’s administration, the network claims it is simply providing an alternative viewpoint to the largely anti-Russia opinions of other international news networks.

Many regard Russia’s control of media and restriction of free press as problematic. What is it about the media situation in Russia that constitutes a breach of ethics? Is it the presence of state-run media, or is it the absence of prominent independent media outlets? Perhaps the more pressing concern is the active legal restrictions on journalists who attempt to look too closely at issues like corruption. Journalists have been banned from Russia, sentenced to time in prison, and even attacked and killed, often under suspicious circumstances. These are obviously more severe threats to press freedom than state-run media, and one could argue that in the absence of such dire conditions, a state-run news outlet would not be an ethical violation in itself.

Being government-sponsored does not guarantee that a news network will collaborate closely with the government. One of the most well-regarded news organizations in the world is the British Broadcasting Corporation. While the BBC was founded by a royal charter and remains under the auspices of the government of the United Kingdom, its charter explicitly calls for the corporation to be “independent in all matters” and a provider of “impartial” services. One could argue that true independence is impossible while the future of the organization is determined by the government, but the presence of other, non-state news outlets in the United Kingdom suggests a much wider latitude of press freedom than in Russia.

Our fear of state-run media seems to stem from a fear of an Orwellian dystopia in which objective truth is hard to come by and public narratives are constantly malleable. The tendency towards a “post-truth” world seems ripe for sinister developments like manufactured consent, wherein public opinion is gradually and subliminally bent to suit the aims of policy makers and other power players. These fears seem even more troubling in the era of “fake news.” President Trump’s use of the phrase to discredit news outlets like CNN, as well as his suggestion for a state-run cable TV network, could be construed as part of a drive towards more extensive state control of the media.

But is there an upside to state-controlled (or at least state-funded) media? For several years, observers have been bemoaning the rise of clickbait — stories and headlines designed to grab immediate attention, often at the expense of in-depth reporting and thoughtful investigation. The primary motivation for this trend is to ensure a profit in the digital era. Free from the need to turn a profit, a state-funded media outlet would theoretically be better equipped to cover substantial, potentially unpopular stories. This is the mission of America’s Corporation for Public Broadcasting, a government-financed organization that provides some of the funding for public radio stations and other services.

All of this does not absolve Fox News from its duty to provide impartial coverage of government policy. Fox News is not openly an arm of the state: any connection or cooperation between the network and the Trump administration is covert. When it is perceived as an impartial, private corporation, any criticism or praise delivered by the organization to the government is taken as objective assessment, rather than propaganda. But precisely because it is perceived as a free agent, the network also has a duty to fulfill this expectation and act impartially; anything else would be misrepresentation, unethical not only to the extent that lying is unethical, but more so because of the special duty of the press in maintaining the democratic system. At the same time, it is difficult to ascertain true impartiality. The determining factor is intent, rather than outcome. An impartial organization coincidentally supporting the administration on every issue and a partial organization actively colluding with the administration would look practically identical to an outside observer.

The Rise of Political Echo Chambers

Photograph of the White House

Anyone who has spent even a little bit of time on the internet is no doubt familiar with its power to spread false information, as well as the insular communities that are built around the sharing of such information. Examples of such groups can readily be found on your social media of choice: anti-vaccination and climate change denial groups abound on Facebook, while the subreddit “The Donald” boasts 693,000 subscribers (self-identified as “patriots”) who consistently propagate racist, hateful, and false claims made by Trump and members of the far-right. While the existence of these groups is nothing new, it is worth considering their impact and ethical ramifications as 2019 gets underway.

Theorists have referred to these types of groups as echo chambers: groups in which a certain set of viewpoints and beliefs is shared among their members, but in such a way that views from outside the group are either ignored or actively regarded as misleading. Social media groups are often presented as examples: an anti-vaxx Facebook group, for example, may consist of members who share their views with other members of the group, but either ignore or consider misleading the tremendous amount of evidence that their beliefs are mistaken. These views tend to propagate because the more one sees one’s beliefs shared and repeated (in other words, “echoed back”), the more confident one becomes that they’re actually correct.

The potential dangers of echo chambers have received a lot of attention recently, with some blaming such groups for contributing to the decrease in the rate of parents vaccinating their children, and to increased political partisanship. Philosopher C. Thi Nguyen compares echo chambers to “cults,” arguing that their existence can in part explain what appears to be an increasing disregard for the truth. Consider, for example, The Washington Post’s recent report that Trump had made 7,645 false or misleading claims since the beginning of his presidency. While some of these claims required more complex fact-checking than others, numerous claims (e.g. that the border wall is already being built, or those concerning the size of his inauguration crowd) are much more easily assessed. The fact that Trump supporters continue to believe and propagate his claims can be partly explained by the existence of echo chambers: if one is a member of a group in which similar views are shared and outside sources are ignored or considered untrustworthy, then it is easier to understand how such claims can continue to be believed, even when patently false.

The harms of echo chambers, then, are wide ranging and potentially significant. As a result it would seem that we have an obligation to attempt to break out of any echo chambers we happen to find ourselves in, and to convince others to get out of theirs. Nguyen urges us to attempt to “escape the echo chamber” but emphasizes that doing so might not be easy: members of echo chambers will continue to receive confirmation from those that they trust and share their beliefs, and, because they distrust outside sources of information, will not be persuaded by countervailing evidence.

As 2019 begins, the problem of echo chambers is perhaps getting worse. As a recent Pew Research Center study reports, polarization along partisan lines has been steadily increasing since the beginning of Trump’s presidency on a wide range of issues. Trump’s consistent labeling of numerous news sources and journalists as untrustworthy is clearly contributing to the problem: Trump supporters will be more likely to treat information provided by those sources deemed “fake news” as untrustworthy, and thus will fail to consider contradictory evidence.

So what do we do about the problem of echo chambers? David Robert Grimes at The Guardian suggests that while echo chambers can be comforting – it is nice, after all, to have our beliefs validated and not to have to challenge our convictions – such comfort hardly outweighs the potential harms. Instead, Grimes suggests that “we need to become more discerning at analysing our sources” and that “we must learn not to cling to something solely because it chimes with our beliefs, and be willing to jettison any notion when it is contradicted by evidence.”

Grimes’ advice is reminiscent of the American philosopher Charles Sanders Peirce, who considered the type of person who forms beliefs using what he calls a “method of tenacity,” namely someone who sticks to one’s beliefs no matter what. As Peirce notes, such a path is comforting – “When an ostrich buries its head in the sand as danger approaches,” Peirce says, “it very likely takes the happiest course. It hides the danger, and then calmly says there is no danger; and, if it feels perfectly sure there is none, why should it raise its head to see?” – but nevertheless untenable, as no one can remain an ostrich for very long, and will thus be forced to come into contact with ideas that will ultimately force them to address challenges to their beliefs. Peirce insists that we instead approach our beliefs scientifically, where “the scientific spirit requires a man to be at all times ready to dump his whole cart-load of beliefs, the moment experience is against them.”

Hopefully 2019 will see more people taking the advice of Grimes and Peirce seriously, so that the comfort of being surrounded by familiar beliefs and never having to perform any critical introspection no longer wins out over a concern for truth.

What’s the Story with Fake News?

Photograph of Donald Trump speaking into a microphone

Every day U.S. President Donald Trump calls “fake news” on particular stories or whole sections of the media that he doesn’t like. At the same time there has been a growing understanding, inside and outside the U.S., that “fake news”, that is to say fabricated news, has in recent years had an effect on democratic processes. There is of course a clear difference between these two uses of the term, but they come together in signifying a worrying development in the relations of public discourse to verifiable truth.

Taking the fabricated stories first – what might be called “real fake news” as opposed to Trump’s “fake fake news” (to which we shall return) – an inquiry recently concluded by the UK parliament sheds further light on the connections between lies and disinformation, social media, and the hindrance of transparent democratic processes, and makes for sobering reading.

On July 24 the British House of Commons Digital, Culture, Media and Sport (DCMS) Committee released its report on ‘disinformation and fake news’. What began as a modest inquiry into recent developments and trends in digital media “delved increasingly into the political use of social media” and grew in scope to become the most detailed look yet to be published by a government body at the use of disinformation and fake news.

The report states that

“…without the knowledge of most politicians and election regulators across the world, not to mention the wider public, a small group of individuals and businesses had been influencing elections across different jurisdictions in recent years.”

Big Technology companies, especially social media companies like Facebook, gather information on users to create psychographic profiles which can be passed on (sold) to third parties and used to target advertising or fabricated news stories tailored to appeal to that individual’s beliefs, ideologies and prejudices in order to influence their behavior. This is a form of psychological manipulation in which “fake news” has been used with the aim of swaying election results. Indeed, the DCMS committee thinks it helped sway the Brexit vote. Other research suggests it helped to elect Donald Trump in the 2016 U.S. presidential election.

The report finds that

“…urgent action needs to be taken by the Government and other regulatory agencies to build resilience against misinformation and disinformation into our democratic system. Our democracy is at risk, and now is the time to act, to protect our shared values and the integrity of our democratic institutions.”

It’s not easy to define what “fake news” is. The term is broad enough to include lies, misinformation, conspiracy theories, satire, rumour or stories that are simply wrong. All these categories of falsehood have been around a long time and may not necessarily be malicious. The epistemic assumption that the problem with fake or misleading news is that it is untrue is not always warranted.

Given that information can be mistaken yet believed and shared in good faith, an evaluation of the epistemic failings of false information should perhaps be judged on criteria that include the function or intention of the falsehood and also what is at stake for the intended recipient as well as the purveyor of misinformation. In other words, the definition of fake news should include an understanding of its being maliciously produced with the intention to mislead people for a particular end. That is substantively different from dissenting opinions or information that is wrong, if disseminated or published in good faith.

The DCMS report recommended dropping the term “fake news” altogether and adopting the terms ‘misinformation’ and/or ‘disinformation’. A reason for this recommendation is that “the term has taken on a variety of meanings, including a description of any statement that is not liked or agreed with by the reader.”

The ethical dimensions of fake news seem relatively uncomplicated. Though it is sometimes possible to make a moral case for lying – perhaps to protect someone from harm – for fake news there is no such case to be made, and there is little doubt that its propagators have no such reasoning in mind. We don’t in general want to be lied to because we value truth as a good in itself; we generally feel it is better for us to know the truth, even if it is painful, than not to know it.

The thorny ethical problems arise around the question of what, if anything, fake news has to do with freedom of speech and freedom of press when calls for regulation are on the table. One of the greatest justifications for free speech was put forward by the liberal philosopher John Stuart Mill. Mill thought that suppression of error (by a government) could never rule out accidental (or even deliberate) suppression of truth because we are not epistemically infallible. The history of knowledge is, after all, a history of having very often to correct grave and, sometimes, ludicrous error. Mill convincingly argued that unrestricted discussion allowed truth to flourish. He thought that a “clearer perception and livelier impression of truth [is] produced by its collision with error.”

However, on closer consideration, free speech may not really be what is at stake. Mill’s defense of free press (free opinion) ends where ‘in good faith’ ends, and fake news, as wielded by partisan groups on platforms like Facebook, is certainly not in good faith. Mill’s defense of free and open discussion does not include fake news and deliberate disinformation, which is detrimental to the kind of open discussion Mill had in mind, because rather than promote constructive conversation it is designed to shut conversation down.

Freedoms are always limited by harms: my freedom to swing my fist around ends where your nose begins. And the DCMS report is one of numerous recent findings that show the harms of fake news. Even if we grant that free speech doesn’t quite mean freedom to lie through one’s teeth (and press / media doesn’t quite mean Facebook), it still is not easy to come up with a regulatory solution. For one thing, regulations can themselves be open to abuse by governments – which is precisely the kind of thing Mill was at pains to prevent. The term “fake news” has already become a tool for political oppression in Egypt, where “spreading false news” has been criminalized in a law under which dissidents and critics of the regime can be, and have already been, prosecuted.

Also, as we grapple with the harms caused by deliberate, targeted misinformation, the freedom of expression question dogs the discussion because social media is, by design, not a tightly controlled conversational space. It can be one of the internet’s great benefits that it has a higher degree of freedom than traditional media — even if that means a higher degree of error. Yet it is clear from the DCMS report that social media “platforms” such as Facebook are culpable, if not legally (since Facebook is at present responsible for the moderation of its own content), then ethically. The company failed to prevent use of its platform for targeted and malicious campaigns of misinformation, and failed to act once it was exposed.

Damian Collins, the Conservative MP for Folkestone and chair of the DCMS committee, spoke of “Facebook’s complete lack of moral responsibility”; the “disingenuous” responses from its executives, and its determination to “time and again… avoid answering… questions to the point of obfuscation”. Given that attention-extraction companies like Facebook are resistant to change because it is against their business model, democratic governments and regulators will have to consider what measures can be taken to mitigate the threats posed by social media in its role in targeted dissemination of misinformation and fake news.

At stake in the problem of fake news is the kind of conversational space necessary for a healthy functioning society. Yet the “fake fake news” of President Donald Trump is arguably more insidious, and perhaps even harder to inoculate against. In what can only be described as an Orwellian twist in the story of fake news, Donald Trump throws the term at the mainstream media even as it reports something much more answerable to epistemic standards of truth and fact than the fabricated stories propagated through social media or the transparent lies Trump himself so effortlessly dispenses.

Politicians have long had a reputation for demagoguery and spin, but Trump’s capacity to lie in the face of manifest reality (inauguration crowd size just for one obvious example) and to somehow ‘get away with it’ (at least to his supporters) is extraordinary, and signals a deep fissure in the relation between truth, trust, and civic discourse.

To paraphrase Australian philosopher Raimond Gaita: to deride the serious press as peddling fake news, to deride expertise that proves what justifiably can count as knowledge, is to undermine the conceptual and epistemic space that makes conversations between citizens possible.

J. S. Mill’s vision for a society in which, despite and sometimes through error, truth can be discovered, and where it has an epistemic priority in establishing trust as a foundation for a liberal, democratic life is lost in the contempt for knowledge and truth that is captured in the idiom of this “post-truth” era.

Both senses in which “fake news” is now pervading our civic conversational space threaten public discourse by endangering the very possibility of truth and fact being able to guide, ground and check public discourse. Big Technology and social media have no small part to play in these ills.

An epistemic erosion is underway in public discourse which undermines the conversational space – that space that Mill thought was so important for the functioning of a free society – which allows citizens to grapple with self-understanding and to progress towards more just and better forms of civic life.

Opinion: The Pope, Fake News, and the Gospels

A photo of Pope Francis

After an unpopular visit to South America, Pope Francis now has released a statement condemning “fake news.” It has long been suspected that this Pope has leftist ideological leanings, and it seems that Francis’ remarks about “fake news” are directed against Donald Trump and his populist tactics, although the U.S. president remained unmentioned.


Determining Moral Responsibility in the Pizzagate Shooting

On December 4th, North Carolina resident Edgar Welch walked into Comet Ping Pong, a Washington, D.C., pizza restaurant, with an assault rifle strapped to his chest. Inside, he reportedly fired several shots and pointed his rifle at a Comet Ping Pong employee as the restaurant’s patrons scattered. No bystanders were injured, and once Welch failed to find what he came for, he surrendered to police.

This week, Welch will return to court in relation to the incident at Comet Ping Pong, a dramatic turn in what has become known as the “Pizzagate” conspiracy. For weeks prior to the attack, online conspiracy theorists had besieged the restaurant with baseless accusations that it had conspired with politicians like Hillary Clinton to traffic and abuse young children. Welch reportedly latched onto these conspiracies, ultimately deciding to take matters into his own hands through a vigilante “investigation.” While Welch’s legal guilt may seem straightforward, the ethical questions his case raises underscore the complexities of moral responsibility in the time of fake news.


Fake News and the Future of Journalism

Oscar Martinez is an acclaimed Salvadoran journalist for El Faro, an online newspaper dedicated to investigative journalism in Central America, with a focus on issues like drug trafficking, corruption, immigration, and inequality. In a recent interview for El Pais, Martinez explains that the only reason he is a journalist is because “sé que sirve para mejorar la vida de algunas personas y para joder la vida de otras: poderosos, corruptos” (“I know it serves both to improve the lives of some people and to ruin the lives of others: the powerful, the corrupt”). Reflecting further in the interview, Martinez distills journalism’s purpose into a “mechanism” for bringing about change in society; however, he does raise a red flag: “El periodismo cambia las cosas a un ritmo completamente inmoral, completamente indecente. Pero no he descubierto otro mecanismo para incidir en la sociedad de la que soy parte que escribiendo” (“Journalism changes things at a completely immoral, completely indecent pace. But I haven’t found any mechanism other than writing for influencing the society I am part of”). Martinez’s work sheds light on and lends a voice to the plight of millions of individuals, and it is important to acknowledge and admire the invaluable work that he and his colleagues at El Faro do.
