
Biden, Trump, and the Dangers of Value-Free Science

President Joe Biden observes dosage preparations during a tour of a vaccination center

I don’t think it’s controversial to say that the Trump administration was often at odds with its scientific advisors. Because of concerns that Trump politicized science in ways that put lives at risk and undermined public trust, the Biden administration is launching a 46-person federal scientific integrity task force to investigate areas where partisanship interfered with scientific decision-making and to come up with ways to keep politics out of science in the future. While the risk to scientific integrity is an important concern, the thinking behind this task force risks covering up a problem rather than resolving it.

Critics seeking “evidence-based policy-making” have accused the Trump administration of letting politics interfere with issues including, but not limited to, coronavirus, climate change, and whether Hurricane Dorian threatened Alabama. They also argue that this interference made the response to COVID-19 worse and led to a higher death toll. Jane Lubchenco, deputy director for climate and environment at the White House Office of Science and Technology Policy, noted, “What we have seen in the last administration is that the suppression of science, the reassignment of scientists, the distortion of scientific information around climate change was not only destructive but counterproductive and really problematic.”

But it isn’t clear that scientific integrity can be defined in a way that is free from political interference, or that it should be. Consider the memo from Biden on the subject, which states that “scientific findings should never be distorted and influenced by political considerations.” While this might mean making sure that findings and data are not suppressed or distorted in ad hoc and arbitrary ways, it also sounds like an attempt to enforce a value-free ideal of science, which, according to many philosophers of science and scientists themselves, is neither possible nor desirable.

For starters, it isn’t clear that we can completely separate politics from science even if we wanted to. According to philosopher Helen Longino, what we take as evidence for something requires assumptions that are informed by our values. These assumptions often cannot be (and are not) empirically measured, and so “there are no formal rules, guidelines, or processes that can guarantee that social values will not permeate evidential relations.” Such assumptions can dramatically affect the methods scientists adopt, including which protocols to follow, what sorts of things to measure, and for how long.

For example, in his book A Tapestry of Values, Kevin Elliott describes the case of Woburn, Massachusetts in the 1970s, when several people became ill and the local water took on a strange color and taste. Eventually, barrels of industrial chemicals were found buried near the city’s wells. Proving a direct link between these chemicals and the many cancers and illnesses in the city proved difficult; a department of public health report on the connection was inconclusive. Later, citizens managed to get a separate study commissioned, one with significantly more input from the community, which found a significant correlation between consumption of water from the contaminated wells and the health problems people experienced. As Elliott notes,

“assumptions about the appropriate boundaries of the geographical area to be studied can be very important to scrutinize; if a study incorporates some heavily polluted areas and other areas that are not very polluted, it can make pollution threats appear less serious than they would otherwise be. Similarly, analyzing health effects together for two neighboring towns might yield statistically significant evidence for health problems, whereas analyzing health effects in the two towns separately might not yield statistically significant results.”

In other words, there are many cases where values are needed to inform the choice of research methods.

Consider an example from the headlines this week. On Monday it was reported that less than 3% of all land on Earth is fully ecologically intact. Philosophers Kristen Intemann and Inmaculada de Melo-Martín have argued that measuring climate impacts requires values because “impact” depends on judgments about what is worth protecting. As the paper that inspired this week’s headline makes clear, “there is no clear definition of what is meant by intactness and the term is used loosely in the scientific literature.” Some scientists assess the intactness of an ecosystem by measuring anthropogenic influence, whereas the authors of the paper assess it in terms of habitat intactness, faunal intactness, and functional intactness. Depending on which measure is used, the amount of land that counts as intact varies from 3% to 25%. The decision regarding which of these measures to use is quite significant and will inevitably depend on our values. Whatever we decide, the findings will have an enormous impact on our policies.

Philip Kitcher has argued that science is not just about finding truth, but about finding truths we deem significant, which makes democratically informed values highly desirable. Whether agricultural science should focus on efficiency and maximizing crop yields or on sustainability and maintaining future output is the kind of decision we might want to be politically informed. Another area where values are desirable involves cases of inductive risk. As I’ve previously explained, inductive risk concerns weighing the real-world consequences of being wrong against the degree of uncertainty in one’s current conclusions.

Public health advice on COVID-19 offers a clear example of this thinking at play. From social distancing, to mask-wearing, to vaccine use, the guidance has always been a matter of weighing what is known against the risks of being wrong, and this weighing has often been quite explicit. Experts need to weigh the risks of, for example, using the AstraZeneca vaccine despite knowing little about its connection to blood clots, because the alternatives are worse. In a case like this, regardless of how you may feel about the scientific findings, when scientists say the benefits outweigh the risks, this is a value judgment. It is therefore a fair question whether political or ethical values other than those of scientists should be relevant to science in a way that doesn’t damage the integrity of the research.

For these reasons, many philosophers have argued that sweeping values under the rug and pursuing a goal like value-free science isn’t helpful. If, in the attempt to banish political interference, values are only made more subtle and difficult to notice, the problem gets worse, not better. It’s possible that efforts to secure scientific integrity may stop short of the value-free ideal; the aim may not be to weed out all values, but only “improper political influence.” But then the word “improper” takes on huge significance and requires a lot of clarification. Thus, there is a larger moral question about how much influence democratic values should have over science, and whether it is possible to provide an account of scientific integrity that is politically informed without being just as politically controversial in the end.

The Ethics of Scientific Advice: Lessons from “Chernobyl”

photograph of the Fireman's Monument at Chernobyl

The recently released HBO miniseries Chernobyl highlights several important moral issues that are worth discussing. For example, what should we think about nuclear power in the age of climate change? What can disasters tell us about government accountability and the dangers of keeping unwelcome news from the public? This article will focus on the ethical issues concerning scientists’ potential to influence government policy. How should scientists advise governments, and who holds them accountable for their advice?

In the second episode, the Soviet Union begins dumping thousands of tons of sand and boron onto the burning nuclear plant at the suggestion of physicist Valery Legasov. After consulting fellow scientist Ulana Khomyuk (a fictional character who represents the many other scientists involved), Legasov tells Soviet leader Gorbachev that in order to prevent a potential disaster, drainage pools will need to be emptied from within the plant in an almost certain suicide mission. “We’re asking for your permission to kill three men,” Legasov reports to the Soviet government. It’s hard to imagine a more direct example of a scientist advising a decision with moral implications.

Policy makers often lack the expertise to make informed decisions, and this provides an opportunity for scientists to influence policy. But should scientists consider ethical or policy considerations when offering advice? 

On one side of this debate are those who argue that scientists’ primary responsibility is to ensure the integrity of science. This means that scientists should maintain objectivity and should not allow their personal moral or religious convictions to influence their conclusions. It also means that the public should see science as an objective and non-political affair. In essence, science must be value-free.

This value-free side of the debate is reflected in the mini-series’ first episode. It ends with physicist Legasov getting a phone call from Soviet minister Boris Shcherbina telling him that he will be on the commission investigating the accident. When Legasov begins to suggest an evacuation, Shcherbina tells him, “You’re on this committee to answer direct questions about the function of an RBMK reactor…nothing else. Certainly not policy.”

Those who argue for value-free science often argue that scientists have no business trying to influence policy. In democratic nations this is seen as particularly important since policy makers are accountable to voters while scientists are not. If scientists are using ethical judgments to suggest courses of action, then what mechanism will ensure that those value judgments reflect the public’s values?

In order to maintain the value-free status of science, philosophers such as Ronald N. Giere argue that there is an important distinction between judging the truth of scientific hypotheses and judging the practical uses of science. A scientist can evaluate the evidence for a theory or hypothesis, but they shouldn’t evaluate whether one should rely on that theory or hypothesis to make a policy decision. For example, a scientist might tell the government how much radiation is being released and how far it will spread, but they should not advise something like an evacuation. Once the government is informed of the relevant details, the decision of how to respond should be left entirely to elected officials.

Opponents of this view, however, argue that scientists do have a moral responsibility when offering advice to policy makers, and that it is desirable for scientists to shoulder this responsibility. Philosopher Heather Douglas argues that, given that scientists can be wrong and that acting on incorrect information can lead to morally important consequences, scientists have a moral duty concerning the advice they offer to policy makers. Scientists are the only ones who can fully appreciate the potential implications of their work.

In the mini-series we see several examples where only the scientists fully appreciate the risks and dangers of radiation, and they are the strongest advocates of evacuation. In reality, Legasov and a number of other scientists offered advice on how to proceed with cleaning up the disaster. According to Adam Higginbotham’s Midnight in Chernobyl: The Untold Story of the World’s Greatest Nuclear Disaster, the politicians were ignorant of nuclear physics, and the scientists and technicians were too paralyzed by indecision to commit to a solution.

In the real-life disaster, the scientists involved were frequently unsure about what was actually happening. They had to estimate how fast various parts of the core might burn and whether different radioactive elements would be released into the air. Reactor specialist Konstantin Fedulenko was worried that the boron drops were having limited effect and that each drop was hurling radioactive particles into the atmosphere. Legasov disagreed and told him that it was too late to change course. Fedulenko believed it was best to let the graphite fire burn itself out, but Legasov retorted, “People won’t understand if we do nothing…We have to be seen to be doing something.” This suggests that the scientists were not simply offering technical advice but were making judgments based on additional value and policy considerations. 

Again, according to Douglas, given the possibility for error and the potential moral consequences at play, scientists should consider these consequences to determine how much evidence is enough to say that a hypothesis is true or to advise a particular course of action. 

In the mini-series, the government initially relies on monitors showing a low level of radiation to conclude that the situation is not bad enough to warrant an evacuation. However, it is pointed out that the radiation monitors being used likely had a limited maximum range, and so the radiation could be much higher than the monitors indicated. Given that they may be wrong about the actual amount of radiation and the threat to public health, a morally responsible scientist might conclude that evacuation should be recommended to policy makers.

While some claim that scientists shouldn’t include these considerations, others argue that they should. Certainly, the issue isn’t limited to nuclear disasters either. Cases ranging from climate change to food safety, chemical and drug trials, economic policies, and even the development of weapons, all present a wide array of potential moral consequences that might be considered when offering scientific advice. 

It’s difficult to say a scientist shouldn’t make morally relevant consequences plain to policy makers. Doing so often appears beneficial, and it sometimes seems unavoidable. But this liberty requires scientists to exercise judgment in determining what is and is not a morally relevant consequence. Further, if scientists rely on value judgments when advising government policy, how are they to be held accountable by the public? Given these benefits and concerns, whether we want scientists to make such judgments, and to what extent their advice should reflect those judgments, presents an important ethical dilemma for the public at large. Resolving this dilemma will at least require that we become more aware of how experts provide policy advice.