
Bad Science, Bad Science Reporting

[Image: 3D rendering of a human face with several points of interest circled]

It tends to be that only the juiciest of developments in the sciences become newsworthy: while important scientific advances are made on a daily basis, the general public hears about only a small fraction of them, and the ones we do hear about do not necessarily reflect the best science. Case in point: a recent study that made headlines for having developed an algorithm that could detect perceived trustworthiness in faces. The algorithm used as inputs a series of portraits from the 16th to the 19th centuries, along with participants’ judgments of how trustworthy they found the depicted faces. The authors then claimed that there was a significant increase in perceived trustworthiness over the period of time they investigated, which they attributed to lower levels of societal violence and greater economic development. With an algorithm thus developed, they then applied it to some modern-day faces, comparing Donald Trump to Joe Biden, and Meghan Markle to Queen Elizabeth II, among others.

It is perhaps not surprising, then, that once the media got wind of the study, articles with headlines like “Meghan Markle looks more trustworthy than the Queen” and “Trust us, it’s the changing face of Britain” began popping up online. Many of these articles read the same: they describe the experiment, show some science-y looking pictures of faces with dots and lines on them, and then marvel at how the paper was published in Nature Communications, a top journal in the sciences.

However, many have expressed serious worries about the study. For instance, some have noted that the paper’s treatment of its subject matter – in this case, portraits from hundreds of years ago – is uninformed by any kind of art history, and that the belief that there was a marked decrease in violence over that time is uninformed by any history at all. Others note that the inputs into the algorithm are exclusively portraits of white faces, leading some to charge that the authors had produced a racist algorithm. Finally, many have noted the very striking similarity between what the authors are doing and the long-debunked pseudosciences of phrenology and physiognomy, which purported to show that the shape of one’s skull and the nature of one’s facial features, respectively, were indicative of one’s personality traits.

This study raises many ethical concerns. As some have noted already, an algorithm developed in this manner could be used as a basis for making racist policy decisions, and would seem to lend credence to a form of “scientific racism.” While these problems are all worth discussing, here I want to focus on a different issue, namely how a study lambasted by so many, with so many glaring flaws, made its way to the public eye (of course, there is also the question of how the paper got accepted in such a reputable journal in the first place, but that’s a whole other issue).

Part of the problem comes down to how the results of scientific studies are communicated, with the potential for miscommunications and misinterpretations along the way. Consider again how those numerous websites clamoring for clicks with tales of the trustworthiness of political figures got their information in the first place, which was likely from a newswire service. Here is how ScienceDaily summarized the study:

“Scientists revealed an increase in facial displays of trustworthiness in European painting between the fourteenth and twenty-first centuries. The findings were obtained by applying face-processing software to two groups of portraits, suggesting an increase in trustworthiness in society that closely follows rising living standards over the course of this period.”

Even this brief summary is misleading. First, to say that scientists “revealed” something implies a level of certainty and definitiveness in their results. Of course, all results of scientific studies are qualified: no experiment will say that it is 100% certain of its results, or that, when measuring different variables, there is a definitive cause-and-effect relationship between them. The summary does qualify this a little bit – in saying that the study “suggests” an increase in trustworthiness. But this is misleading for another reason, namely that the study does not purport to measure actual trustworthiness, but perceptions of trustworthiness.

Of course, a study about an algorithm measuring what people think trustworthiness looks like is not nearly as exciting as a trustworthiness detection machine. And perhaps because the difference can be easily overlooked, or because the latter is likely to garner much more attention than the former, the mistake shows up in several of the outlets reporting it. For example:

Meghan was one and a half times more trustworthy than the Queen, according to researchers.

Consultants from PSL Analysis College created an algorithm that scans faces in painted portraits and pictures to find out the trustworthiness of the individual.

Meghan Markle has a more “trustworthy” face than the Queen, a new study claims.

From Boris Johnson to Meghan Markle – the algorithm that rates trustworthiness.

Again, the problem here is that the study never claimed that certain individuals were, in fact, more trustworthy than others. But the fact that news outlets and other sites report it as such compounds worries that one might employ the results of the study to reach unfounded conclusions about who is trustworthy and who isn’t.

So there are problems here at three different levels: first, with the nature and design of the study itself; second, with the way that newswire services summarized the results, making them seem more certain than they really were; and third, with the way that sites that used those summaries presented the results in order to make them look more interesting and legitimate than they really were, without raising any of the many concerns expressed by other scientists. All of these problems compound to produce the worry that the results of the study could be misinterpreted and misused.

While there are well-founded ethical concerns about how the study itself was conducted, it is important not to ignore what happens after the studies are finished and their results disseminated to the public. The moral onus is not only on the scientists themselves, but also on those reporting on the results of scientific studies.

CRISPR and the Ethics of Science Hype

[Image: a pencil drawing a DNA strand]

CRISPR is in the news again! And, again, I don’t really know what’s going on.

Okay, so here’s what I think I know: CRISPR is a new-ish technology that allows scientists to edit DNA. I remember seeing in articles pictures of little scissors that are supposed to “cut out” the bad parts of strings of DNA, and perhaps even replace those bad parts with good parts. I don’t know how this is supposed to work. It was discovered sort of serendipitously when studying bacteria and how they fight off viruses, I think, and it all started with people in the yogurt industry. CRISPR is an acronym, but I don’t remember what it stands for. What I do know is that a lot of people are talking about it, and that people say it’s revolutionary.

I also know that while ethical worries abound – not only because of the general worries about the unknown side-effects of altering DNA, but because of concerns about people wanting to make things like designer babies – from what my news feed is telling me, there is reason to get really excited. As many, many, many news outlets have been reporting, a new study published in Nature claims that a new advance in CRISPR science means that we could correct or cure or generally get rid of 89% of genetic diseases. I’ve heard of Nature: that’s the journal that publishes only the best stuff.

I’ve also heard that people are so excited that Netflix is even making a miniseries about the discovery of CRISPR and the scientists working on it. The show, titled “Unnatural Selection” [sic], pops up on my Netflix page with the following description:

“From eradicating disease to selecting a child’s traits, gene editing gives humans the chance to hack biology. Meet the real people behind the science.”

In an interview about the miniseries, co-director Joe Egender described his motivation for making the show as follows:

“I come from the fiction side, and I was actually in the thick of developing a sci-fi story and was reading a lot of older sci-fi books and was doing some research and trying to update some of the science. And — I won’t ever forget — I was sitting on the subway reading an article when I first read that CRISPR existed and that we actually can edit the essence of life.”

89% of genetic diseases cured. Articles published in Nature and a new Netflix miniseries. Editing the essence of life. Are you excited yet???

So the point of this little vignette is not to draw attention to the potential ethical concerns surrounding gene-editing technology (if you’d like to read about that, you can do so here), but instead to highlight the kind of ignorance that journalists and I are dealing with when it comes to reporting on new scientific discoveries. When I told you at the outset that I didn’t really know what was going on with CRISPR, I wasn’t exaggerating by much: I don’t have the kind of robust scientific background required to make sense of the content of the actual research. Here, for example, is the second sentence of the abstract of that new CRISPR paper everyone is talking about:

“Here we describe prime editing, a versatile and precise genome editing method that directly writes new genetic information into a specified DNA site using a catalytically impaired Cas9 fused to an engineered reverse transcriptase, programmed with a prime editing guide RNA (pegRNA) that both specifies the target site and encodes the desired edit.”

Huh? Maybe I could come to understand what that sentence is saying, given enough time and effort. But I don’t have that kind of time. And besides, not all of us need to be scientists: leave the science to them, and I’ll worry about other things.

But this means that if I’m going to learn about the newest scientific discoveries then I need to rely on others to tell me about them. And this is where things can get tricky: the kind of hype surrounding new technologies like CRISPR means that you’ll get a lot of sensational headlines, ones that might border on the irresponsible.

Consider again the statement from the co-director of the new Netflix documentary, that he became interested in CRISPR after reading about how it can be used to “edit the essence of life.” It is unlikely that any scientist has ever made so bald a claim, and for good reason: it is not clear what it means for life to have an “essence,” nor whether such a thing, if it exists, could be edited. The claim that this new scientific development could potentially cure up to 89% of genetic diseases also makes for an incredibly flashy headline, but again it is much more tempered when it comes from the mouths of the actual scientists involved. The authors of the paper, for instance, state that the 89% figure represents the proportion of genetic diseases that could, in principle, be cured if the techniques described in the paper were perfected. But that’s of course not saying much: many wonderful things could happen under perfect conditions; the question is how likely such conditions are to obtain. And, of course, the 89% claim also does not take into account any potential adverse effects of current gene editing techniques (a worry that has been raised in past studies).

This is not to say that the new technology won’t pan out, or that it will definitely have adverse side effects, or anything like that. But it does suggest some worries we might have with this kind of hyped-up reaction to new scientific developments.

For instance, as someone who doesn’t know much about science, I necessarily rely on people who do in order to tell me what’s going on. But those who tend to be the ones telling me what’s going on – journalists, mostly – don’t seem to be much better off in terms of their ability to critically analyze the information they’re reporting on. We might wonder what kinds of responsibilities these journalists have to make sure that people like me are, in fact, getting an accurate portrayal of the state of the relevant developments.

Things like the Netflix documentary are even further removed from reality. Even though the documentary makers do not claim to understand the science involved, they clearly have an exaggerated view of what CRISPR technology is capable of. A documentary following the lives of people who are supposedly capable of editing the “essence of life” will certainly give viewers a distorted view.

None of this is to say that you can’t be excited. But with great hype comes great responsibility to present information critically. When it comes to new developments in science, though, it often seems that this responsibility is not taken terribly seriously.

SpaceX and the Ethics of Space Travel

[Image: faraway galaxies photographed by the Hubble space telescope]

On Tuesday, February 6th, SpaceX will launch a rocket that could be the future of space tourism. If successful, it could be the rocket that takes private tourists around the moon within the year and lays the groundwork for taking humans on missions to Mars. With human expansion within sight at this level, three sets of ethical concerns arise: bioethical concerns, political concerns among the nations of Earth, and political concerns between Earth and those who venture off-planet.