Psychology, Science

Bad Science, Bad Science Reporting

By Kenneth Boyd
9 Oct 2020
[Image: 3D rendering of a human face with several points of interest circled]

It tends to be that only the juiciest developments in the sciences become newsworthy: while important scientific advances are made on a daily basis, the general public hears about only a small fraction of them, and the ones we do hear about do not necessarily reflect the best science. Case in point: a recent study that made headlines for having developed an algorithm that could detect perceived trustworthiness in faces. The algorithm took as inputs a series of portraits from the 16th to the 19th centuries, along with participants’ judgments of how trustworthy they found the depicted faces. The authors then claimed that there was a significant increase in perceived trustworthiness over the period they investigated, which they attributed to lower levels of societal violence and greater economic development. With the algorithm thus developed, they then applied it to some modern-day faces, comparing Donald Trump to Joe Biden, and Meghan Markle to Queen Elizabeth II, among others.

It is perhaps not surprising, then, that once the media got wind of the study, articles with headlines like “Meghan Markle looks more trustworthy than the Queen” and “Trust us, it’s the changing face of Britain” began popping up online. Many of these articles read the same: they describe the experiment, show some science-y looking pictures of faces with dots and lines on them, and then marvel that the paper was published in Nature Communications, a top journal in the sciences.

However, many have expressed serious worries about the study. For instance, some have noted that the paper’s treatment of its subject matter – in this case, portraits from hundreds of years ago – is uninformed by any kind of art history, and that its assumption of a marked decrease in violence over that time is uninformed by any history at all. Others note that the inputs to the algorithm are exclusively portraits of white faces, leading some to charge that the authors have produced a racist algorithm. Finally, many have noted the very striking similarity between what the authors are doing and the long-debunked pseudosciences of phrenology and physiognomy, which purported to show that the shape of one’s skull and the nature of one’s facial features, respectively, were indicative of one’s personality traits.

This study raises many ethical concerns. As some have noted already, an algorithm developed in this manner could be used as a basis for making racist policy decisions, and would seem to lend credence to a form of “scientific racism.” While these problems are all worth discussing, here I want to focus on a different issue, namely how a study lambasted by so many, with so many glaring flaws, made its way to the public eye (of course, there is also the question of how the paper got accepted in such a reputable journal in the first place, but that’s a whole other issue).

Part of the problem comes down to how the results of scientific studies are communicated, with the potential for miscommunication and misinterpretation along the way. Consider again how those numerous websites clamoring for clicks with tales of the trustworthiness of political figures got their information in the first place: most likely from a newswire service. Here is how ScienceDaily summarized the study:

“Scientists revealed an increase in facial displays of trustworthiness in European painting between the fourteenth and twenty-first centuries. The findings were obtained by applying face-processing software to two groups of portraits, suggesting an increase in trustworthiness in society that closely follows rising living standards over the course of this period.”

Even this brief summary is misleading. First, to say that scientists “revealed” something implies a level of certainty and definitiveness in their results. Of course, all results of scientific studies are qualified: no experiment will ever claim that its results are 100% certain, or that, in measuring different variables, it has established a definitive cause-and-effect relationship between them. The summary does qualify this a little bit – in saying that the study “suggests” an increase in trustworthiness. But this is misleading for another reason, namely that the study does not purport to measure actual trustworthiness, but perceptions of trustworthiness.

Of course, a study about an algorithm measuring what people think trustworthiness looks like is not nearly as exciting as a trustworthiness detection machine. And perhaps because the difference can be easily overlooked, or because the latter is likely to garner much more attention than the former, the mistake shows up in several of the outlets reporting it. For example:

“Meghan was one and a half times more trustworthy than the Queen, according to researchers.”

“Consultants from PSL Analysis College created an algorithm that scans faces in painted portraits and pictures to find out the trustworthiness of the individual.”

“Meghan Markle has a more ‘trustworthy’ face than the Queen, a new study claims.”

“From Boris Johnson to Meghan Markle – the algorithm that rates trustworthiness.”

Again, the problem here is that the study never claimed that certain individuals were, in fact, more trustworthy than others. But that news outlets and other sites report it as such compounds the worry that someone might employ the results of the study to reach unfounded conclusions about who is trustworthy and who isn’t.

So there are problems here at three different levels: first, with the nature and design of the study itself; second, with the way that newswire services summarized the results, making them seem more certain than they really were; and third, with the way that sites using those summaries presented the results so as to make the study look more interesting and legitimate than it really was, without raising any of the many concerns expressed by other scientists. All of these problems compound the worry that the results of the study could be misinterpreted and misused.

While there are well-founded ethical concerns about how the study itself was conducted, it is important not to ignore what happens after the studies are finished and their results disseminated to the public. The moral onus is not only on the scientists themselves, but also on those reporting on the results of scientific studies.

Ken Boyd holds a PhD in philosophy from the University of Toronto. His philosophical work concerns the ways that we can best make sure that we learn from one another, and what goes wrong when we don’t. You can read more about his work at kennethboyd.wordpress.com.