On Journalistic Malpractice

In 2005, then-CNN anchor Lou Dobbs reported that the U.S. had suffered over 7,000 cases of leprosy in the previous three years and attributed this to an “invasion of illegal immigrants.” Actually, the U.S. had seen roughly that many leprosy cases over the previous three decades, but Dobbs stubbornly refused to issue a retraction, instead insisting that “If we reported it, it’s a fact.”

In 2020, then-Fox-News anchor Lou Dobbs reported that the results of the election were “eerily reminiscent of what happened with Smartmatic software electronically changing votes in the 2013 presidential election in Venezuela.” Dobbs repeatedly raised questions and amplified conspiracy theories about Donald Trump’s loss, granting guests like Rudy Giuliani considerable airtime to spread misinformation about electoral security.

It’s generally uncontroversial to think that “fake news” is epistemically problematic (insofar as it spreads misinformation) and that it can have serious political consequences (when it deceives citizens and provokes them to act irrationally). Preventing these issues is complicated: any direct governmental regulation of journalists or news agencies, for example, threatens to run afoul of the First Amendment (a fact which has prompted some pundits to suggest rethinking what “free speech” should look like in an “age of disinformation”). To some, technology offers a potential solution as cataloging systems powered by artificial intelligence aim to automate fact-checking practices; to others, such hopes are ill-founded dreams that substitute imaginary technology for individuals’ personal responsibility to develop skills in media literacy.

But would any of these approaches have been able to prevent Lou Dobbs from spreading misinformation in either of the cases mentioned above? Even if a computer program would have tagged the 2005 leprosy story as “inaccurate,” users skeptical of that program itself could easily ignore its recommendations and continue to share the story. Even if some subset of users choose to think critically about Lou Dobbs’ 2020 election claims, those who don’t will continue to spread his conjectures. Forcibly removing Dobbs from the air might seem temporarily effective at stemming the flow of misinformation, but such a move — in addition to being plainly unconstitutional — would likely cause a counter-productive scandal that would only end up granting him even more attention.

Rather than looking externally for ways to stem the tide of fake news and its problems, then, we might consider solutions internal to the journalistic profession: that is, if we consider journalism a practice akin to medicine or law, with professional norms dictating how its practitioners ought to behave (even apart from any regulation by the government or society at large), then we can criticize “bad journalists” simply for being bad journalists. Questions about the epistemic or political consequences of bad journalism are important, but they come after a prior question about professional standards and practice.

This is hardly a controversial or innovative claim: although there is no single professional oath that journalists must swear (along the lines of those taken by physicians or lawyers), it is common for journalism schools and employers to promote codes of “journalistic ethics” describing standards for the profession. For example, the Code of Ethics for the Society of Professional Journalists is centered on the principles of accuracy, fairness, harm-minimization, independence, and accountability; the Journalism Code of Practice published by the Fourth Estate (a non-profit journalism watchdog group) is founded on the following three pillars:

  1. reporting the truth,
  2. ensuring transparency, and
  3. serving the community.

So, consider Dobbs’ actions in light of those three points: insofar as his 2005 leprosy story was false, it violates pillar one; and because his 2020 election story (repeatedly) sowed dissension among the American public, it fails to abide by pillar three (notably because it, too, was filled with misinformation, as pointedly demonstrated by the defamation lawsuit Dobbs is currently facing). Even before we consider the socio-epistemic or political consequences of Dobbs’ reporting, these considerations allow us to criticize him simply as a reporter who failed to live up to the standards of his profession.

Philosophically, such an approach highlights the difference between accounts aimed at cultivating a virtuous disposition and those that take more calculative approaches to moral theorizing (like consequentialism or deontology). Whereas the latter are concerned with a person’s actions (insofar as those actions produce consequences or align with the moral law), the former focuses on a person’s overall character. Rather than quibbling over whether a particular choice is good or bad (and then, perhaps, wondering how to police its expression or mitigate its effects), a virtue theorist will look to how a choice reflects on the holistic picture of an agent’s personality and identity in order to make ethical judgments about them as a person. As the famous virtue theorist Aristotle said, “one swallow does not make a summer, nor does one day; and so too one day, or a short time, does not make a man blessed and happy.”

On this view, being “blessed and happy” as a journalist might seem difficult — that is to say, being a good journalist is not an easy thing to be. But Aristotle would likely point out that, whether we like the sound of it or not, this actually seems sensible: it is easy to try to accomplish many things, but actually living a life of virtue — actually being a good person — is a relatively rare feat (hence his voluminous writings trying to make sense of what virtue is and how to cultivate it in our lives). Professionally speaking, this view underlines the gravity of the journalistic profession: just as being a doctor or a lawyer amounts to shouldering a significant responsibility (for preserving lives and justice, respectively), to become a reporter is to take on the burden of preserving the truth as it spreads throughout our communities. Failing in this responsibility is more significant than failing at some other jobs: it amounts to a form of malpractice with serious ethical ramifications, not only for those who depend on the practitioner, but for the practitioner themselves as well.

Come into My Parler

Efforts to curtail and limit the effect of disinformation reached a fever pitch in the run-up to the 2020 election for President of the United States. The prominent social media platforms Facebook and Twitter, after long resisting significant top-down control of user-posted content, began actively combating misinformation. Depending on whom you ask, this change of course amounts either to seeing reason or to abandoning it. In the latter camp are those ditching Facebook and Twitter for the relative newcomer Parler.

Parler bills itself as a free speech platform, exerting top-down control only in response to criminal activity and spam. This nightwatchman approach to moderation makes clear the political orientation of Parler’s founders and those people who have dumped mainstream platforms and moved over to Parler. Libertarian political philosophy concerning the proper role of state power was famously described by American philosopher Robert Nozick as relegating the state to the role of nightwatchman: leaving citizens to do as they please and only intervening to sanction those who break the minimal rules that underpin fair and open dealing.

Those making the switch characterize Facebook and Twitter, on the other hand, as increasingly tyrannical. Any attempt to curate and fact-check introduces bias, claims Parler co-founder John Matze; Parler, by contrast, aims to be a “neutral platform,” according to co-founder Rebekah Mercer. This kind of political and ideological neutrality is a hallmark aspiration of libertarianism and classical liberalism.

Parler’s pretension became hypocrisy, however, when it banned leftist parody accounts and pornography. This is neither surprising nor, on its own, bad. As some have pointed out, every social media site faces the same set of issues with content and largely responds to them in the same way. But Parler’s aspiration to libertarian neutrality about speech content makes its terms of service, which allow it to remove user content “at any time and for any reason or no reason,” and its policy of kicking users off the platform “even where the [terms of service] have been followed,” particularly obnoxious.

But suppose that Parler stuck to its professed principles. What would it mean to be politically or ideologically neutral, and why would fact-checking compromise it? A simple way of thinking about the matter is embodied by Parler’s espoused position toward speech content: no speech will be treated differently by those in power simply on the basis of its message, regardless of whether that message is Democratic or Republican, liberal or conservative, capitalist or socialist. Stepping from the merely political to the ideological, to remain neutral would be to think that no speech content was false simply on its face. Here is where the “problem” of fact-checking arises.

We live, so we keep being told, in a “post-truth” society. Whatever exactly this means, its practical import is that distinct groups in society disagree fundamentally both over their political goals and over how to achieve them. The idea of fact-checking as a neutral arbiter between disagreeing parties breaks down in these situations, because supposed facts will appear neutral only to parties who already agree about how to see the world at a basic level. That is, the appearance of a fact-value distinction evaporates. (The distinction between facts, i.e., how the world allegedly is without regard to any agent’s perceptions, and values, i.e., how the world ought to be according to a given agent’s goals and preferences, is argued by many to be untenable.)

In this atmosphere, fact-checking takes on the hue of a litmus test, examining statements for their ideological bona fides. When a person’s claim is fact-checked and found wanting, it will not appear to them that a disinterested judge cast a stoic gaze out onto the world to see whether it is as the person says; instead, the person will feel that the judge looked into their heart and rejected the claim as undesirable. When people feel this way, they will not stick around and continue to engage. Instead, they will pack up and go where they think their claims will get “fair” treatment. None of this is to say that fact-checking is necessarily a futile or oppressive exercise. But it is a reason not to treat it as a panacea for all disagreement.

Regulating Companies to Free People’s Speech

US President Donald Trump has signed an executive order instructing the Federal Communications Commission (FCC) to review legislation that shields social media platforms, like Twitter and Facebook, from liability for content posted by their users. The move appears to be a retaliatory gesture against Twitter for attaching fact-checking links to President Trump’s tweets alleging that mail-in ballots for upcoming elections are vulnerable to fraud. This is the second time President Trump has drafted an executive order to review this kind of legislation; the first was in August 2019. But this isn’t simply (another) Trump temper tantrum. Rather, it is the latest push in a concerted and bipartisan effort to bring so-called “Big Tech” companies to heel. These efforts in general face a long road of legal and philosophical challenges, and Trump’s effort in particular is likely doomed to failure.

The relevant legislation is the Telecommunications Act of 1996, and more specifically the “Good Samaritan” clause of Section 230 therein. This clause states that no “provider or user” of an “interactive computer service” can be sued for civil harm because of “good faith efforts” to restrict access to “objectionable” material posted by other users of their service. Other portions of Section 230 give providers and users of interactive computer services immunity against being sued for any civil harm caused by content posted by other users. Essentially, companies like Twitter, Facebook, and Google are given broad discretion to handle the content posted on their sites as they see fit.

Conservatives and Republicans complain that Big Tech companies harbor anti-conservative political bias, which they enforce through their platforms’ outsized influence on the dissemination of news and opinion. Texas’ Senator Ted Cruz has argued that Facebook has censored and suppressed conservative expression on its platform. President Trump’s frequent screeds against CNN, The Washington Post, and Twitter echo the same sentiment. In 2018, Google CEO Sundar Pichai was grilled by Republican lawmakers about alleged anti-conservative bias in his company’s handling of search results. In 2019, Missouri’s Senator Josh Hawley introduced a bill to amend Section 230 to remove its broad protections from liability. Hawley’s bill was specifically geared toward addressing alleged anti-conservative bias and offered reinstatement of Section 230’s protection only to companies that submitted to an audit showing that they pursued “politically neutral” practices.

Liberal and Democratic concerns focus largely on the spread of harmful misinformation and disinformation by foreign actors aimed at influencing US elections. But there are two points of bipartisan agreement. The first concerns the scope and magnitude of Big Tech’s influence on the public exchange of information. Agreement here manifests itself in the criteria lawmakers have put forward as triggering expanded liability, namely size. Senator Josh Hawley’s 2019 bill targeted companies with “30 million monthly active users in the US, more than 300 million active monthly users worldwide, or more than $500 million in global annual revenue.” The second point of agreement concerns posted content related to human trafficking for sex work. Legislation amending the Telecommunications Act of 1996 to curtail human trafficking was passed with bipartisan support in 2017.

All of this bears on the right to freedom of speech, the interpretation of which is a perpetually contentious issue. Conservatives complaining about censorship and suppression allege that their freedom of speech is being infringed by the actions of Big Tech. However, a recent judicial decision made short work of one such complaint. The US Court of Appeals dismissed a suit claiming that Twitter, Facebook, Apple, and Google had conspired to suppress conservative speech. In their ruling, the judges noted that the First Amendment only protects free speech from interference by government action. This illustrates an important point about the nature of rights that is often missed.

Rights can be thought of as comprising three elements: a right-holder, an obligation, and an obligated party. With the right to freedom of speech, the right-holder is any legal person (which includes corporations), the obligation is to refrain from suppression and censorship, and the obligated party is the US government. Constitutional rights tend to follow this pattern. Other rights oblige parties beyond just the government. A family can sue someone for killing their mother, or the state may sue on the murder victim’s behalf, because a right to life is both understood to exist at common law and enshrined in statutes against homicide. Here the right-holder is any individual person, the obligation is to refrain from killing the right-holder, and the obligated party is every other individual person. (Incidentally, both of these are examples of negative rights: rights which entitle their bearers to protection from specific harmful treatment. There are also positive rights, which entitle their bearers to the provision of specific goods, services, or treatment.)

As a matter of principle, there is no general legal basis for complaints against Big Tech for suppressing or censoring expression. These companies are not government actors and so are not obviously bound by the right to free speech as expressed in the First Amendment; the US Court of Appeals decision mentioned above says as much. Further, these companies are themselves legal persons with respect to political speech under US law. This was one of the bases of the US Supreme Court’s (in)famous Citizens United decision. Because corporations are people too, their political speech is protected. Twitter flagging President Trump’s posts with fact-checking tags is just Twitter exercising its speech in competition with President Trump’s speech. This is the much-vaunted “marketplace of ideas” of which conservatives are usually enamored.

As a matter of law, Trump’s draft executive order is largely toothless because the text of Section 230’s Good Samaritan clause allows Big Tech companies to take “good faith” actions to “restrict access to … material” even when “such material is constitutionally protected.” Despite the opinion of some legislators, there is not even a whiff of a political neutrality requirement. While such a requirement (the FCC’s fairness doctrine) once existed for broadcasters, it ceased being enforced in 1987 and was fully eliminated in 2011. The move away from enforcement was championed by US President Ronald Reagan’s FCC chairman, Mark Fowler, because the requirement was seen as violating First Amendment protections.

Infringement by the government on freedom of speech is subject to strict scrutiny in the courts. Part of the strict scrutiny standard is that the infringement promote a “compelling government interest.” If the government exercises its authority over private individuals or groups under the auspices of protecting freedom of speech, what standards will the government require be met? The entire point of rights like freedom of speech is to permit persons acting in a private capacity to determine things for themselves. As many critics and advocacy groups have pointed out, allowing the government to set these standards harms free speech rather than protects it. Legislators appear to remember this only when it suits their political needs.

Who fact-checks the fact-checkers?

If you’re reading something about Facebook in the news these days, chances are you’re reading about how bad it is at preventing people from posting false or misleading information (either that, or it’s about concerns that Facebook is not good at keeping your personal information private). The platform has become notorious as a place where conspiracy theories are allowed to run amok, and where pseudo- or anti-scientific views can receive strong endorsement from its user base. In an attempt to curb the spread of misinformation, Facebook has recently employed a number of fact-checking services. While Facebook has made use of fact-checkers for a while now, the number of people responsible for reviewing the entirety of user output has in the past been tiny, a problem to which Facebook has recently responded by quadrupling the number of its American fact-checking partners. A number of websites offer fact-checking services and can assign posts various ratings indicating whether a claim is true or false, or whether it presents information in a misleading way. The hope is that such fact-checking will help stop the spread of false information on Facebook overall, especially information that can be actively damaging, such as false claims that vaccines are unsafe.

While making use of fact-checkers seems like a good move on Facebook’s part, some have recently expressed concerns that one of the fact-checking websites Facebook employs in the US (different fact-checking services are employed for different countries, a full list of which can be found here) is politically biased: the site Check Your Fact, which is a subsidiary of the website Daily Caller. The Daily Caller is an unambiguously right-wing, pro-Trump website that often publishes articles denying climate change, and whose founder has expressed white supremacist views. There are concerns, then, that false or misleading claims made on Facebook in support of a right-wing political agenda may not receive the same kind of scrutiny as other claims because of the political affiliation of one of the fact-checkers.

Vox recently noted one incident of this type, in which a conservative fact-checking website that Facebook formerly used – the now-defunct Weekly Standard – was over-aggressive in designating as false a headline critical of then-Supreme Court nominee Brett Kavanaugh. Instead of controlling for false information, the fact-checking website in a sense created it, improperly flagging a headline that was, at worst, slightly misleading as outright false.

There are concerns, then, not only about the truth or falsity of individual claims being made on Facebook, but also about whether claims that fact-checkers are making about those claims are themselves true or false. What, then, are we supposed to do when faced with a claim on Facebook that has been fact-checked? Can we fact-check the fact-checkers?

There are, in fact, organizations that attempt to do just that. For instance, Facebook only uses fact-checkers that are certified by Poynter’s International Fact-Checking Network, an organization that evaluates fact-checkers on the basis of a code of principles, including “nonpartisanship and fairness,” “open and honest corrections,” and transparency of sources, funding, organization, and methodology. While all of these principles sound like good ones, we might still wonder whether such an organization can really pick out the reliable fact-checkers from the unreliable ones. For instance, Check Your Fact does, in fact, pass the standards of the International Fact-Checking Network.

What, then, of concerns about the partisanship of Facebook’s fact-checking partners? Are they overblown? Or should we go one step further, and fact-check those who fact-check the fact-checkers?

While this is perhaps not a bad idea, most people are probably not going to take the time to research the organization that determines the standards for fact-checkers when scrolling through Facebook. There is, however, perhaps a more pressing matter: in addition to how reliable these fact-checkers are – that is to say, how good they are at determining which claims are true, false, or misleading – there are also concerns about how effective they are – that is to say, how good they are at actually making it known that a false or misleading claim is, in fact, false or misleading. As reported at Poynter, there is reason to think that even if a claim is properly fact-checked as false, more people read the original false claim than the report showing that it is false. A worry, then, is that since information moves so quickly on Facebook it is often incredibly difficult for fact-checkers to keep up.

We might worry about the efficacy of Facebook fact-checking for another reason: people who have their posts fact-checked as false will probably not be deterred from posting similar claims in the future. After all, if you believe that the information you are sharing is true, the fact that a website tells you it is false may lead you not to reconsider your views, but instead simply to conclude that the fact-checking websites are wrong or biased.

So what are we to make of this complicated situation? Despite concerns about reliability and efficacy, making use of fact-checkers still seems to be a step in the right direction for Facebook: anything that can make any progress, even a little, towards stemming the tide of misinformation online is a good thing. What we perhaps should take away from all this is that fact-checking can be used as one tool among many for determining which Facebook posts you should pay attention to and which you should ignore.