
On “Dog-Wagging” News: Why What “Lots of People” Say Isn’t Newsworthy

photograph of crowd of paparazzi cameras at event

On June 17th, Lee Sanderlin walked into a Waffle House in Jackson, Mississippi; fifteen hours later, he walked out an internet sensation. As a penalty for losing in his fantasy football league, Sanderlin’s friends expected him to spend a full day inside the 24-hour breakfast restaurant (with some available opportunities for reducing his sentence by eating waffles). When he decided to live-tweet his Waffle House experience, Sanderlin could never have expected that his thread would go viral, eventually garnering hundreds of thousands of Twitter interactions and news coverage by outlets like People, ESPN, and The New York Times.

For the last half-decade or so, the term ‘fake news’ has persistently gained traction (even being voted “word of the year” in 2017). While people disagree about the best definition of the term (should ‘fake news’ refer only to news stories intentionally designed to deceive, or should it cover any kind of false news story, or something else entirely?), it seems clear that a story about what Sanderlin did in the restaurant is not fake: it genuinely happened, so reporting on it is not spreading misinformation.

But that does not mean that such reporting is spreading newsworthy information.

While a “puff piece” or “human interest story” about Sanderlin in the Waffle House might be entertaining (and, by extension, might convince internet users to click a link to read about it), its overall value as a news story seems suspect. (The related phenomenon of clickbait, news stories marketed with attention-grabbing headlines that trade accuracy for spectacle, poses a similar problem.) Put differently, the epistemic value of the information contained in this news story seems problematic: again, not because it is false, but rather because it is (something like) pointless or irrelevant to the vast majority of the people reading about it.

Let’s say that some piece of information is newsworthy if its content is either in the public interest or is otherwise sufficiently relevant for public distribution (and that it is part of the practice of good journalism to determine what qualifies as fitting this description). When the president of the United States issues a statement about national policy or when a deadly disease is threatening to infect millions, then this information will almost certainly be newsworthy; it is less clear that, say, the president’s snack order or an actor’s political preferences will qualify. In general, just as we expect their content to be accurate, we expect that stories deemed worthy of dissemination through our formal “news” networks carry information that news audiences (or at least significant subsets thereof) should care about: in short, the difference between a news site and a gossip blog is a substantive one.

(To be clear: this is not to say that movie releases, scores of sports games, or other kinds of entertainment news are not newsworthy: they could easily fulfill either the “public interest” or the “relevance” conditions of the ‘newsworthy’ definition in the previous paragraph.)

So, why should we care about non-newsworthy stories spreading? That is to say, what’s so bad about “the paper of record” telling the world about Sanderlin’s night in a Mississippi Waffle House?

Two problems come to mind. First, such stories threaten to undermine the general credibility of the institution spreading the information. If I know that a certain website gives equal attention to stories about COVID-19 vaccination rates, announcements of Supreme Court decisions, Major League Baseball game scores, and crackpots raging about how the Earth is flat, then I will (rightly, I think) have less confidence that the outlet is capable of reporting accurate information in general (given its decision to spread demonstrably false conspiracy theories). In a similar way, if an outlet gives attention to non-newsworthy stories, then it can water down the perceived import of the genuinely newsworthy stories that it typically shares. (Note that this problem is compounded further when amusing non-newsworthy stories spread more quickly on the basis of their entertaining quirks, thereby altering the average public profile of the institution spreading them.)

But, secondly, non-newsworthy stories pose a different kind of threat to the epistemic environment than do fake news stories: whereas the latter can infect the community with false propositions, the former can infect the community with bullshit (in a technical sense of the term). According to philosopher Harry Frankfurt, ‘bullshit’ is a tricky kind of speech act: if Moe knows that a statement is false when he asserts it, then Moe is lying; if Moe doesn’t know or care whether a statement is true or false when he asserts it, then Moe is bullshitting. Paradigmatically, Frankfurt says that bullshitters are looking to provoke a particular emotional response from their audience, rather than to communicate any particular information (as when a politician uses rhetoric to affectively appeal to a crowd, rather than to, say, inform them of their own policy positions). Ultimately, Frankfurt argues that bullshit is a greater threat to truth than lies are because it changes what people expect to get out of a conversation: even if a particular piece of bullshit turns out to be true, that doesn’t mean that the person who said it wasn’t still bullshitting in the first place.

So, consider what happened when an attendee at a campaign rally for Donald Trump in 2015 made a series of false assertions about (among other things) Barack Obama’s supposedly-foreign citizenship and the alleged presence of camps operating inside the United States to train Muslims to kill people: then-candidate Trump responded by saying:

“We’re going to be looking at a lot of different things. You know, a lot of people are saying that, and a lot of people are saying that bad things are happening out there. We’re going to look at that and plenty of other things.”

Although Trump did not clearly affirm the conspiracy theorist’s racist and Islamophobic assertions, he nevertheless licensed them by saying that “a lot of people are saying” what the man said. Notice also that Trump’s assertion might or might not be true — it’s hard to tell how we would actually assess the accuracy of a statement like “a lot of people are saying that” — but, either way, the response seems intended more to provoke a certain affective response in Trump’s audience than to communicate anything. In short, it was an example of Frankfurtian bullshit.

Conspiracy theories about Muslim “training camps” or Obama’s un-American birthplace are not newsworthy because, among other things, they are false. But a story like “Donald Trump says that ‘a lot of people are saying’ something about training camps” is technically true (and is, therefore, not “fake news”) because, again, Trump actually said such a thing. Nevertheless, such a story is pointless or irrelevant; it is not newsworthy, and there is no good reason to spread it throughout the epistemic community. In the worst cases, non-newsworthy stories can launder falsehoods by wrapping them in the apparent neutrality of journalistic reporting.

For simplicity’s sake, we might call this kind of not-newsworthy story an example of “dog-wagging news” because, just as “the tail wagging the dog” evokes an image where the “small or unimportant entity (the tail) controls a bigger, more important one (the dog),” a dog-wagging news story is one where something about the story other than its newsworthiness leads to its propagation throughout the epistemic environment.

In harmless cases, dog-wagging stories are amusing tales about Waffle Houses and fantasy football losses; in more problematic cases, dog-wagging stories help to perpetuate conspiracy theories and worse.

AstraZeneca, Blood Clots, and Media Reporting

photograph of patients waiting in gym to be vaccinated

In some ways, it seems that most respectable news media have begun to take science more seriously and to take greater care to ensure that claims about COVID are fact-checked and that misinformation is debunked. But there is more to science communication than getting the facts right. Often it is the selection, arrangement, and emphasis of facts that matters most and holds the greatest sway over the average person’s comprehension of scientific matters. This can have very serious consequences, as the coverage of the AstraZeneca vaccine and its potential to fuel vaccine hesitancy shows. Does the media have a responsibility to be more careful in how it covers scientific issues?

Not long after the AstraZeneca vaccine was approved in many nations, reports in March indicated that some who received the vaccine developed blood clots. Since then, over thirteen nations have either halted the rollout of the vaccine or limited its use. While such clots can be lethal, they are treatable. The more important considerations, however, are the lack of evidence that the vaccine causes clots and the small number of cases. There is no direct evidence of a connection between the vaccine and the development of a blood clot. Despite this, the European Medicines Agency, in its review of over 80 cases, has concluded that unusual blood clots should be listed as a rare side effect. But it is the rarity of the symptoms that matters even more: fewer than one hundred of the 20 million people who have received the vaccine have developed blood clots.
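To put those figures in perspective, here is a rough back-of-the-envelope calculation, using the approximate numbers cited above (treated as an upper bound, not official data):

```python
# Back-of-the-envelope check of the reported clot rate:
# fewer than 100 cases among roughly 20 million vaccine recipients.
clot_cases = 100          # upper bound on reported cases
vaccinated = 20_000_000   # approximate number of recipients

rate = clot_cases / vaccinated
per_million = rate * 1_000_000
print(f"at most {per_million:.0f} cases per million recipients "
      f"({rate * 100:.4f}% of those vaccinated)")
# → at most 5 cases per million recipients (0.0005% of those vaccinated)
```

That is, even on the most pessimistic reading of the reported numbers, the rate is on the order of five in a million, which frames the risk-management argument that follows.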

This is actually lower than the background rate among unvaccinated people, and in the meantime COVID itself can lead to clots showing up in “almost every organ.” All of this leaves regulators with an inductive risk scenario: if they say that the vaccine is safe, and it isn’t, many people could develop clots and potentially die; if they say that the vaccine isn’t safe, and it is, the rollout will slow down and many more people could die. In fact, the experts have been fairly clear that, in terms of risk management, the benefits of the AstraZeneca vaccine still outweigh the risks. In other words, even if the vaccine does cause blood clots, the rates are so low that the risk of people dying is far higher if you don’t use the vaccine than if you do. This is why experts have criticized the suspensions as a “stupid, harmful decision” that will likely lead to more avoidable deaths and will make people more hesitant to get vaccinated. As Dr. Paul Offit of the Vaccine Education Center has said, “While it’s easy to scare people, it’s very hard to unscare them.”

Yet, despite the risk being small and possibly treatable, and despite experts having determined that it is still better to use the vaccine anyway, the news media hasn’t been helpful in covering this issue. For example, the Canadian media has chosen to cover (apparently) every case of a blood clot, despite the messaging ultimately being the same. One story notes, “‘While this case is unfortunate, it does not change the risk assessment that I have previously communicated to Albertans,’ Dr. Deena Hinshaw said during a teleconference,” while another reports, “‘We have been very transparent that there could be one case per 100,000,’ he said. ‘We knew this could happen.’” In other words, this is a situation where the formation of a blood clot is statistically expected in limited numbers but is considered acceptable because the risk is still so small compared to the much larger benefits. So, it is simply unhelpful to report each confirmed case of something that is expected anyway. After all, we are told that the contraceptive pill carries a greater risk of blood clots, so why cherry-pick cases?

As statistician David Spiegelhalter has suggested, the scare over blood clots has demonstrated our “basic and often creative urge to find patterns even where none exist.” Unsurprisingly, a majority of unvaccinated Canadians now report being uncomfortable with potentially receiving the AstraZeneca vaccine. All of this bears on the moral responsibilities of the media in covering scientific topics, where it isn’t merely a matter of reporting facts but of reporting them in context. While the media has been “on a crusade against COVID vaccine skepticism” and promoting science-based medicine, its selective skepticism has led some to charge hypocrisy: since “the press has made a habit of giving finger-wagging lectures about ‘following the science,’” then “they need to consistently practice what they preach.” After all, the media doesn’t choose to report every case of someone who gets a blood clot from a contraceptive.

In fairness, while no one is suggesting that the risk of clots should be ignored, there may be good reason to raise alarm. As The Atlantic reports,

“The risk of a dangerous vaccine reaction could be very real, if also very rare—and major European vaccine authorities have not, in fact, been overcautious, political, or innumerate in responding to this possibility…regulators must address the possibility (still unproved) that perhaps one in every 1 million vaccinated people could have a potentially fatal drug reaction—as more than 1 million vaccine doses are being injected each day in Europe alone.”

In other words, there is a real risk (even if a small one), and morally speaking it is important to have a public conversation about risks and how to manage them. The public should be aware of the risks and how those risks are appraised. However, the issue has become confused owing to a lack of scientific literacy as well as the media’s choice to focus on individual, personal cases. A more constructive focus would have been the larger moral issue of managing risk in the face of uncertainty, such as when and how to apply the precautionary principle.

This isn’t the only recent case where cherry-picked media coverage has proven problematic. One study found that media coverage of COVID-19 in the U.S. has been excessively negative compared to international coverage. A separate study found that a significant number of Americans (mostly those who lean Democratic) were likely to overestimate the risks of COVID. Further, it is becoming increasingly evident that developing scientific literacy is more difficult than once thought, and presenting novel scientific findings in the news is problematic in its own right. So, if those in the news media wish to present a scientifically informed picture of public affairs, it is morally imperative that greater attention be paid to the context in which scientific findings are reported.

Bad Science, Bad Science Reporting

3d image of human face with several points of interest circled

Typically, only the juiciest developments in the sciences become newsworthy: while important scientific advances are made on a daily basis, the general public hears about only a small fraction of them, and the ones we do hear about do not necessarily reflect the best science. Case in point: a recent study that made headlines for having developed an algorithm that could detect perceived trustworthiness in faces. The algorithm took as inputs a series of portraits from the 16th to the 19th centuries, along with participants’ judgments of how trustworthy they found the depicted faces. The authors then claimed that there was a significant increase in trustworthiness over the period they investigated, which they attributed to lower levels of societal violence and greater economic development. With the algorithm thus developed, they then applied it to some modern-day faces, comparing Donald Trump to Joe Biden, and Meghan Markle to Queen Elizabeth II, among others.

It is perhaps not surprising, then, that once the media got wind of the study, articles with headlines like “Meghan Markle looks more trustworthy than the Queen” and “Trust us, it’s the changing face of Britain” began popping up online. Many of these articles read the same: they describe the experiment, show some science-y looking pictures of faces with dots and lines on them, and then marvel at how the paper has been published in Nature Communications, a top journal in the sciences.

However, many have expressed serious worries about the study. For instance, some have noted that the paper’s treatment of its subject matter – in this case, portraits from hundreds of years ago – is uninformed by any kind of art history, and that the belief that there was a marked decrease in violence over that time is uninformed by any history at all. Others note that the inputs to the algorithm are exclusively portraits of white faces, leading some to charge that the authors had produced a racist algorithm. Finally, many have noted the very striking similarity between what the authors are doing and the long-debunked pseudosciences of phrenology and physiognomy, which purported to show that the shape of one’s skull and the nature of one’s facial features, respectively, were indicative of one’s personality traits.

This study raises many ethical concerns. As some have noted already, an algorithm developed in this manner could be used as a basis for making racist policy decisions, and would seem to lend credence to a form of “scientific racism.” While these problems are all worth discussing, here I want to focus on a different issue: how a study lambasted by so many, with so many glaring flaws, made its way to the public eye (of course, there is also the question of how the paper got accepted in such a reputable journal in the first place, but that’s a whole other issue).

Part of the problem comes down to how the results of scientific studies are communicated, with the potential for miscommunications and misinterpretations along the way. Consider again how those numerous websites clamoring for clicks with tales of the trustworthiness of political figures got their information in the first place, which was likely from a newswire service. Here is how ScienceDaily summarized the study:

“Scientists revealed an increase in facial displays of trustworthiness in European painting between the fourteenth and twenty-first centuries. The findings were obtained by applying face-processing software to two groups of portraits, suggesting an increase in trustworthiness in society that closely follows rising living standards over the course of this period.”

Even this brief summary is misleading. First, to say that scientists “revealed” something implies a level of certainty and definitiveness in their results. Of course, all results of scientific studies are qualified: no experiment claims 100% certainty in its results, or, when measuring different variables, a definitive cause-and-effect relationship between them. The summary does qualify this a little in saying that the study “suggests” an increase in trustworthiness. But it is misleading for another reason, namely that the study does not purport to measure actual trustworthiness, but perceptions of trustworthiness.

Of course, a study about an algorithm measuring what people think trustworthiness looks like is not nearly as exciting as a trustworthiness-detection machine. And perhaps because the difference can be easily overlooked, or because the latter is likely to garner much more attention than the former, the mistake shows up in several of the outlets reporting on it. For example:

Meghan was one and a half times more trustworthy than the Queen, according to researchers.

Consultants from PSL Analysis College created an algorithm that scans faces in painted portraits and pictures to find out the trustworthiness of the individual.

Meghan Markle has a more “trustworthy” face than the Queen, a new study claims.

From Boris Johnson to Meghan Markle – the algorithm that rates trustworthiness.

Again, the problem here is that the study never claimed that certain individuals were, in fact, more trustworthy than others. But that news outlets and other sites report it as such compounds worries that one might employ the results of the study to reach unfounded conclusions about who is trustworthy and who isn’t.

So there are problems here at three different levels: first, with the nature and design of the study itself; second, with the way newswire services summarized the results, making them seem more certain than they really were; and third, with the way sites that used those summaries presented the results to make them look more interesting and legitimate than they really were, without raising any of the many concerns expressed by other scientists. All of these problems compound to produce the worry that the results of the study could be misinterpreted and misused.

While there are well-founded ethical concerns about how the study itself was conducted, it is important not to ignore what happens after a study is finished and its results are disseminated to the public. The moral onus is not only on the scientists themselves, but also on those reporting on the results of scientific studies.

In Steven Pinker’s Enlightenment Now, the Ethics of Reporting Human Progress

A painting of Enlightenment scholars talking in a park.

In the mid-1990s, Alan Sokal famously (or perhaps infamously) wrote a manuscript full of rubbish sentences giving the impression that scientific theories are no more than social constructions. His article was written in the typically pompous (and largely nonsensical) language of postmodern philosophy. He sent the manuscript to the academic journal Social Text, and it was published. Sokal then informed the wider public that he had written the manuscript deliberately as a hoax, in order to expose how far Postmodernism had gone in Western academia. Sokal wanted to prove that, as long as authors wrote in incomprehensible language, gave the appearance of criticizing the scientific establishment, and took a stand against the powers that be (capitalism, patriarchy, Western civilization, etc.), postmodern academics would welcome such writings, regardless of how absurd their claims were.


Thomas S. Monson and the Politics of Obituaries

A portrait of Thomas S. Monson

Thomas S. Monson, President of The Church of Jesus Christ of Latter-day Saints, died on January 2 of this year. Monson led the LDS Church for almost a decade. On January 3, The New York Times published an obituary for Monson that was not well received by many members of the church. They felt that it was politically biased and did not paint the life and work of their much-loved leader in a positive light.
