
Forbidden Knowledge in Scientific Research

By Matthew S.W. Silk
13 Nov 2019

It is no secret that science can have a profound effect on society. This is often why scientific results can be so ethically controversial. For instance, researchers have recently warned of the ethical problems associated with scientists growing lumps of human brain in the laboratory. The blobs of brain tissue, grown from stem cells, developed spontaneous brain waves like those found in premature babies. The hope is that the research offers the potential to better understand neurological disorders like Alzheimer's, but it also raises a host of ethical worries about the possibility that this brain tissue could become sentient. In other news, this week a publication in the journal JAMA Pediatrics ignited controversy by reporting a supposed link between fluoride exposure and IQ scores in young children. In addition to several experts questioning the results of the study itself, there is also concern about the potential effect it could have on the debate over the use of fluoride in the water supply; anti-fluoride activists have already seized on the study to defend their cause. Scientific findings have an enormous potential to dramatically affect our lives. This raises an ethical question: should certain topics, owing to such ethical concerns, be off-limits for scientific study?

This question is studied in both science and philosophy, and is sometimes referred to as the problem of forbidden knowledge. The problem can include issues of experimental method and whether proper ethical protocols are followed (certain knowledge may be forbidden if obtaining it requires unethical human experimentation), but it can also include the impact that the discovery or dissemination of certain kinds of knowledge could have on society. For example, a recent study found that girls and boys are equally good at mathematics and that children's brains function similarly regardless of gender. However, several studies going back decades have tried to explain differences in mathematical ability between boys and girls in terms of biological differences. Such studies risk reinforcing gender roles and potentially justifying them as biologically determined, and this can spill over into social life. For instance, Helen Longino notes that such findings could lead to a lower priority being placed on encouraging women to enter math and science.

So, such studies have the potential to impact society, which is an ethical concern, but is this reason enough to make them forbidden? Not necessarily. The bigger problem involves how adequate these findings are, the possibility that they are simply incorrect, and what society is to do in the meantime until corrected findings are published. For example, in the case of math testing, it is not difficult to find significant correlations between variables, but the limits of such correlations, and a study's limited ability to identify causal factors, are often lost on the public. There are also methodological problems: some standardized tests rely on male-centric questions that can skew results, and different kinds of tests and different strategies for preparing for them can also distort findings. So even where a study's assumptions are not seriously flawed, the correlations it finds may not be very generalizable. Meanwhile, such findings, even if they are corrected over time, can create stereotypes in the public mind that are hard to dislodge.

Because of these concerns, some philosophers argue either that certain kinds of questions be banned from study, or that studies should avoid trying to explain differences in abilities and outcomes in terms of race or sex. For instance, Janet Kourany argues that scientists have moral responsibilities to the public and should therefore conduct themselves according to egalitarian standards. A scientist who wants to investigate differences between racial or gender groups should seek to explain them in ways that do not assume the differences are biologically determined.

In one of her examples, she discusses studying differences in the incidence of domestic violence between white and black communities. On her view, a scientist should highlight the similarities in domestic violence across white and black communities and seek to explain dissimilarities in terms of social factors like racism or poverty. On a stance like this, research that appeals to racial differences to explain differences in rates of domestic violence would constitute forbidden knowledge. Only if these alternative egalitarian explanations empirically fail may a scientist then explore race as a possible explanation of differences between communities. This approach avoids perpetuating a possibly empirically flawed account suggesting that blacks might be more violent than other ethnic groups.

She points out that the alternative risks keeping stereotypes alive even while scientists slowly prove them wrong. Just as in the case of studying mathematical differences, the slow settling of opinion within the scientific community leaves society free to entertain stereotypes as "scientifically plausible" and to adopt potentially harmful policies in the meantime. In his research on the matter, Philip Kitcher notes that we are susceptible to cognitive asymmetries: it takes far less empirical evidence to maintain stereotypical beliefs than it takes to get rid of them. This is why studying the truth of such stereotypes can be so problematic.

These types of cases seem to offer significant support for labeling particular lines of scientific inquiry forbidden. But the issue is more complicated. First, telling scientists what they should and should not study raises concerns over freedom of speech and freedom of research. We already acknowledge limits on research on the basis of ethical concerns, but this represents a different kind of restriction. One might claim that so long as science is publicly funded, there are reasonable, democratically justified limits on research, but the precise boundaries of such a restriction will prove difficult to identify.

Secondly, and perhaps more importantly, such a policy has the potential to exacerbate the problem. According to Kitcher,

“In a world where (for example) research into race differences in I.Q. is banned, the residues of belief in the inferiority of the members of certain races are reinforced by the idea that official ideology has stepped in to conceal an uncomfortable truth. Prejudice can be buttressed as those who opposed the ban proclaim themselves to be the gallant heirs of Galileo.”

In other words, one reaction to such bans on forbidden knowledge, so long as our own cognitive asymmetries remain unknown to us, will be to object that the ban is an undue limitation on free speech for the sake of politics. Meanwhile, those who push for such research can become martyrs, and censoring them may only serve to draw more attention to their cause.

This presents us with an ethical dilemma. Given that there are scientific research projects that could have a potentially harmful effect on society, whether the science involved is adequate or not, is it wise to ban such projects as forbidden knowledge? There are reasons to say yes, but implementing such bans may cause more harm or drive more public attention to such issues. Banning research on the development of brain tissue from stem cells, for example, may be wise, but it may also push such research to another country with more relaxed ethical standards, in which case the potential harms could be much worse. These issues surrounding how science and society relate are likely only going to be resolved through greater public education and open discussion about what ethical responsibilities we think scientists should have.

Matt has a PhD in philosophy from the University of Waterloo. His research specializes in philosophy of science and the nature of values. He has also published on the history of pragmatism and the work of John Dewey.