
Private Reasons in the Public Square

photograph of crowd at town hall

The recent Dobbs decision set off a tidal wave of emotion and heated discourse from both sides of the political aisle and from all corners of the American cultural landscape. Some rejoiced, seeing it as a significant move toward a society that upholds the sanctity of human life, while others mourned the loss of a basic liberty. The Dobbs ruling overturned the historic Roe v. Wade verdict, with the practical consequence of returning decisions about the legality of abortion to individual states. Abortion access is no longer a constitutionally protected right, and thus where and when abortion is legal will be determined by the democratic process.

The legal battle at the state level over abortion rights will continue over the coming months and years, giving voters the chance to share their views. Many of these citizens take their most deeply held moral, religious, and philosophical commitments to have obvious implications for how they vote.

But should all of these types of reasons affect how one votes? If other citizens reject your religion or moral framework, should you still choose political policies based on it?

Political philosophers offer a range of responses to these questions. For simplicity’s sake, we can boil down the responses to two major camps. The first camp answers “no,” arguing that only reasons which are shared or shareable amongst all reasonable citizens can serve as the basis for one’s vote. This seems to rule out religious reasons, experience-based reasons, and reasons that are based on controversial moral and philosophical principles, as reasonable people can reject these. So what kinds of reasons are shareable amongst all reasonable citizens? Candidates for inclusion are general liberal ideals, such as a commitment to human equality, individual liberty, and freedom of conscience. Of course, what these general ideals imply for any specific policy measure (as well as how these reasons should be weighed against each other when they conflict) is unclear. Citizens can disagree about how to employ these shared reasons, but at least they are appealing to reasons that are accepted by their fellow reasonable citizens instead of forcing their privately held convictions on others.

The other camp of political philosophers answers “yes,” arguing that so long as one’s reasons are intelligible or understandable to others, they can be used in the public square. This approach lets in many more reasons than the shareable reasons standard. Even if one strongly opposes Catholicism, for example, it is nevertheless understandable why one’s Catholic neighbor would be motivated to vote according to church teaching against abortion rights. Given the neighbor’s faith commitments, it is intelligible why they vote pro-life. Similarly, even if one accepts the controversial claim that personhood begins at conception, it is easy enough to understand why other reasonable people reject this belief, given that there is no consensus in the scientific or philosophical communities. This intelligibility standard will also allow many citizens to appeal to personal experiences, as it is clear how such experiences might reasonably shape one’s political preferences, even if these experiences are not shared by all reasonable citizens.

Of course, one might notice a potential pitfall with the intelligibility standard. What if a citizen wishes to support a certain policy on the basis of deeply immoral grounds, such as racist or sexist reasons? Can the intelligibility standard keep out such reasons from public discourse?

Defenders of the intelligibility standard might respond that it is not intelligible how a reasonable person could hold such beliefs, blocking these reasons from the public square. Of course, there may also be disagreement over where exactly to draw this line of reasonableness. Advocates of the intelligibility standard hope that there is enough consensus to distinguish between reasonable belief systems (e.g., those of the major world religions and cultures) and unreasonable ones (e.g., those of racist sects and oppressive cults). Naturally, proponents of the shareable reasons standard tend to be dubious that such an intuitive line in the sand exists, doubling down on placing tight restrictions on the types of reasons that are acceptable in the public square.

What is the relevance of this shared vs. intelligible reasons distinction when it comes to the average citizen? Regardless of where one falls in the debate, it is clearly beneficial to reflect on our political beliefs. Appreciating the reasons of other thoughtful citizens can prompt us to take the following beneficial steps:

1. Recognize that your privately held belief system is not shared by every reasonable, well-intentioned citizen. Our political culture is constituted by a wide array of differing opinions about abortion and many other issues, and people often have good reasons for holding the viewpoints they do. Recognition of this empirical fact is a crucial starting point for improving our political climate and having constructive democratic debate.

2. Reflect on why your friends, neighbors, and co-workers might disagree with you on political issues. Morality and politics are complicated matters, and this is reflected by surveys which indicate the depth of disagreement amongst professional experts in these fields. Given this complexity, individuals should be open to potentially revising their previously held beliefs in light of new evidence.

3. Engage with those who do not share your belief system. Inter-group contact has been shown to decrease harmful political polarization. In the wake of the Dobbs decision, this looks like a willingness to engage with those on both the pro-choice and pro-life sides of the aisle.

Regardless of where they fall in the shared reasons versus intelligible reasons debate, citizens have a responsibility to recognize that their political opponents can be reasonable as well. Embracing this idea will lead to more productive democratic discourse surrounding difficult political issues like those bound up in the Dobbs ruling.

The Democratic Limits of Public Trust in Science

photograph of Freedom Convoy trucks

It isn’t every day that Canada makes international headlines for civil unrest and disruptive protests. But the “Freedom Convoy” protests that began last month in Ottawa have inspired similar protests around the world and led the Canadian government to declare a national emergency and seek special powers to handle the crisis. But what exactly is the crisis that the nation faces? Is it a far-right, conspiratorial, anti-vaccination movement threatening to overthrow the government? Or is it the government’s infringement on rights in the name of “trusting the experts”?

It is easy to take the view that these protests are wrong. First, we must acknowledge that the position the truckers are taking in protesting the mandate is fairly silly. For starters, even if they succeeded in getting the Canadian federal government to change its position, the United States also requires that truckers be vaccinated to cross the border, so the point is largely moot. I also won’t defend the tactics used in the protests, including the noise, the blocking of bridges, and so on. However, several people in Canada have pinned part of the blame for the protests on the government, and on Justin Trudeau in particular, for politicizing the issue of vaccines and creating a divisive political atmosphere.

First, it is worth noting that Canada has lately relied more heavily on restrictive lockdown measures than other countries, and much of this is driven by the need to keep hospitals from being overrun. That vulnerability, in turn, owes to long-term systemic fragility in the healthcare sector, particularly a lack of ICU beds, prompting many – including one of Trudeau’s own MPs – to call for reform of healthcare funding to expand capacity instead of relying so heavily on lockdown measures. One would think that this would be a topic of national conversation, with the public wondering why the government hasn’t done anything about the situation since the beginning of the pandemic. Instead, the Trudeau government has chosen to focus only on a policy of increasing vaccination rates, claiming that it is following “the best science” and “the best public health advice.”

Is there, however, a possibility that the government is hoping that, with enough people vaccinated and enough lockdown measures, it can avoid having the healthcare system collapse, wait for the pandemic to blow over, and escape without having to address such long-term problems? Maybe, maybe not. But it certainly casts any advice offered or decisions made by the government in a very different light. Indeed, one of the problems with expert advice (as I’ve previously discussed here, here, and here) is that it is subject to inductive risk concerns, and so the use of expert advice must be democratically informed.

For example, if we look at a model used by Canada’s federal government, we will note how often its projections depend on different assumptions about what could happen. The model itself may be driven by a number of unstated assumptions which may or may not be reasonable. It is up to politicians to weigh the risks of getting it wrong, not simply to treat experts as if they were infallible. This is important because the value judgments inherent in risk assessment – about the reasonableness of our assumptions as well as the consequences of getting it wrong and potentially overrunning the healthcare system – are what will ultimately determine which restriction measures the government enacts. But this requires democratic debate and discussion. This is where a failure of democratic leadership breeds long-term mistrust in expert advice.
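
To make the point about assumptions concrete, here is a minimal sketch, in Python, of how a single unstated assumption can flip a projection’s conclusion. This is not the federal government’s model; the capacity figure, starting occupancy, and growth rates are purely illustrative assumptions.

```python
# Hypothetical illustration (not the federal model): the same simple
# projection reaches opposite conclusions depending on the assumed growth rate.

ICU_CAPACITY = 1000      # assumed number of staffed ICU beds
CURRENT_PATIENTS = 200   # assumed current ICU occupancy

def projected_occupancy(daily_growth_rate: float, days: int = 30) -> float:
    """Project ICU occupancy assuming unchecked exponential growth."""
    return CURRENT_PATIENTS * (1 + daily_growth_rate) ** days

for rate in (0.02, 0.05, 0.08):  # three assumed growth scenarios
    peak = projected_occupancy(rate)
    verdict = "exceeds" if peak > ICU_CAPACITY else "stays within"
    print(f"assumed growth {rate:.0%}/day -> ~{peak:,.0f} patients, {verdict} capacity")
```

Which growth scenario counts as reasonable, and how bad it would be to plan for the wrong one, are exactly the value judgments that, on the inductive risk view, call for democratic debate rather than deference to the model alone.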

It is reasonable to ask questions about what clear metrics a government might use before ending a lockdown, or to ask if there is strong evidence for the effectiveness of a vaccine mandate. But for the public, not all of whom enjoy the benefit of an education in science, it is not so clear what is and is not a reasonable question. The natural place for such a discussion would be the elected Parliament where representatives might press the government for answers. Unfortunately, defense of the protest in any form in Parliament is vilified, with the opposition being told they stand with “people who wave swastikas.” Prime Minister Trudeau has denounced the entire group as a “small fringe minority,” “Nazis,” with “unacceptable views.” However, some MPs have voiced concern about the tone and rhetoric involved in lumping everyone who has a doubt about the mandate or vaccine together.

This divisive attitude has been called out by one of Trudeau’s own MPs who said that people who question existing policies should not be demonized by their Prime Minister, noting “It’s becoming harder and harder to know when public health stops and where politics begins,” adding, “It’s time to stop dividing Canadians and pitting one part of the population against another.” He also called on the Federal government to establish clear and measurable targets.

Unfortunately, if you ask the federal government a direct question like “Is there a federal plan being discussed to ease out mandates?” you will be told that:

there have been moments throughout the pandemic where we have eased restrictions and those decisions have always been made guided by the best available advice that we’re getting from public health experts. And of course, going forward we will continue to listen to the advice that we get from our public health officials.

This is not democratic accountability (and it is not scientific accountability either). “We’re following the science” or “We’re following the experts” is not good enough. Anyone who actually understands the science will know that this is more a slogan than a meaningful claim.

There is also a bit of history at play. In 1970, Trudeau’s father Pierre invoked the War Measures Act during a crisis that involved the kidnapping and murder of a cabinet minister. It resulted in the roundup and arrest of hundreds of people without warrant or charge. This week the Prime Minister has invoked the successor to that legislation for the first time in Canadian history because…trucks. The police were having trouble moving the trucks because they couldn’t get tow trucks to help clear blocked border crossings. Now, while we can grant that the convoy has been a nuisance and has illegally blocked bridges, we have also seen the convoy comply with court-ordered injunctions on honking and the convoy organizers oppose violence, with no major acts of violence taking place. While there was a rather odd proposal that the convoys could form a “coalition” with the parliamentary opposition to form a new government, I suspect that this owes more to a failure to understand how Canada’s system of government works than to a serious attempt to, as some Canadian politicians would claim, “overthrow the government.”

The point is that this is an issue that started with a government not being transparent and accountable, abusing the democratic process in the name of science, and taking advantage of the situation to demonize and delegitimize the opposition. It is in the face of this, in the face of uncertainty about the intentions of the convoy, and after weeks of failing to act to ameliorate the situation, that the government claims a situation has arisen that, according to the Emergencies Act, is a “threat to the security of Canada…that is so serious as to be a national emergency.” Not only is there room for serious doubt as to whether the convoy situation has reached such a level, but this is taking place in a context of high tension where the government and the media have demonstrated a willingness to overgeneralize and demonize a minority by lobbing as many poisoning-the-well fallacies as possible and misrepresenting the nature of science. The fact that in this political moment the government seeks greater power is a recipe for abuse of power.

In a democracy, where not everyone enjoys the chance to understand what a model is, how models are made, or how reliable (and unreliable) they can be, citizens have a right to know more about how their government is using expert advice to limit individual freedom. The politicization of the issue using the rhetoric of “following the science,” as well as the government’s slow response and opaque reasoning, have only served to make it more difficult for the public to understand the nature of the problem we face. Our public discourse has been stunted by transforming our policy conversations into a narrow debate about vaccination and the risk posed by the “alt right.” But there is a much bigger, much more real problem here: the call to “trust the experts” can serve just as easily as a rallying cry for rationality as it can a political tool for demonizing entire groups of people in order to justify taking away their rights.

A Chicago Suburb Tries Reparations

aerial photograph of Chicago lakefront skyline

Last week, the Chicago suburb of Evanston, home of Northwestern University, introduced the nation’s first government reparations program for African Americans. It was a momentous event regardless of one’s political views, and advocates hope that it will have a “snowball effect” on proposed federal legislation that would create a national commission to study potential reparations. Nevertheless, Evanston’s program, and the broader subject of reparations, remain extremely controversial.

Evanston’s $400,000 program, approved to acknowledge the harm caused by discriminatory housing policies, practices, and inaction going back more than a century, will issue grants up to $25,000 directly to financial institutions or vendors to help with mortgage costs, down payments, and home improvements for qualified applicants. The program will be paid out of Evanston’s $10 million Local Reparations Fund, which will disburse funds collected through annual cannabis taxes over the next decade. Qualifications for the payments include sufficient proof of “origins in any of the Black racial and ethnic groups of Africa,” proof of residency in Evanston between 1919 and 1969 or direct descendance of someone who meets that criterion, or proof of having experienced housing discrimination due to the city’s housing policies or practices after 1969. Beyond repairing past wrongs, the program is also designed to address the declining Black share of the population of Evanston, which fell from 22.5% in 2000 to 16.9% in 2017 according to U.S. census data.

Critics of the program say that it’s little more than an insubstantial gesture and that it benefits the very financial institutions that engaged in discriminatory practices in the past. Perhaps the most damning criticism is that, by denying Black families direct cash payments and the opportunity to decide how to manage their own money, the program is, in the words of Evanston alderwoman Cicely Fleming, a “prime example of white paternalism.” Although a supporter of reparations, she was the lone dissenting vote against the program on Evanston’s City Council. “We have prioritized so-called progressives’ interests in looking virtuous rather than reversing the harm done to Black people for generations,” she wrote in the Chicago Tribune. “I voted ‘no’ as an obligation to my ancestors, my Black family across the nation and my own family in Evanston.” She also pointed out that the program may be under-inclusive in not covering those who may be due reparations but either don’t own a home or don’t plan to purchase one.

There are also potential legal challenges. In a 1995 case called Adarand v. Peña, the Supreme Court held that strict scrutiny applies to all racial classifications imposed by federal, state, or local governments. “Strict scrutiny” means that the program must be narrowly tailored to serve a compelling government interest. In a March 18 letter to the Mayor and members of the City Council, a Washington, D.C. attorney representing the Project on Fair Representation, a conservative not-for-profit legal defense foundation, argued that Evanston’s program fails on both counts: it neither serves a compelling interest nor is narrowly tailored. Only time will tell whether Evanston’s program will actually face a serious legal challenge in the years ahead.

Whatever the particular shortcomings of Evanston’s program, there are more general philosophical objections to reparations that are worth addressing. First, there is what I will call the “anti-classification” argument, ably articulated by Justice Clarence Thomas, who wrote that “there is a moral and constitutional equivalence between laws designed to subjugate a race and those that distribute benefits on the basis of race in order to foster some current notion of equality.” Why is there this equivalence? Because, say the proponents of anti-classification, both kinds of law classify people by race for some purpose. But there is an obvious reply to this objection: while both kinds of law classify people by race, they do not do so for the same purpose, and this difference in purpose is morally relevant. Even if reparations programs are ultimately futile or wrong for some further reason, the notion that there is no intrinsic moral difference between laws that aim to oppress people based on their race and laws that aim to uplift them seems morally obtuse at best.

The second argument is based on the very plausible premise that individuals living today do not bear moral responsibility for the misdeeds of their ancestors. If this is true, and if reparations programs were premised on the idea that they do, then reparations programs would be morally indefensible. But it is important to note that the Evanston program does not rest on the premise that any single individual is responsible for the unequal treatment of Blacks in the past, but rather that the city as a corporate entity bears this responsibility. And this seems much more plausible: a corporate entity can persistently bear moral obligations even if the individuals that make up that entity change over time. For example, if corporation A pollutes a river, then — putting aside the statute of limitations — it may be legally responsible for cleaning up the river even if, by the time it is held to account, no member of its board was alive when the river was polluted.

The final, and perhaps strongest, argument against reparations is based on the fact that nationally, the idea of reparations continues to be extremely unpopular. This is a particularly difficult problem for advocates who would like the federal government to open its vast coffers to reparations programs. Given their unpopularity, reparations programs have the potential to stoke white resentment which, while not grounded in any good argument, has the potential to set back racial progress more than reparations programs would advance it. Yet the possibility of racial backlash was also cited as a reason for activists to moderate their demands and tone down their tactics during the civil rights movement of the 1950s and 1960s, a fact that should make us wary of invoking this concern again. In any case, the Evanston program will be a good trial balloon to see if white residents of that college town are truly as progressive as they claim to be.

In sum, Evanston’s program is a small step forward for the cause of reparations in America. Nevertheless, the program itself, and reparations proposals more generally, face serious challenges from critics on both sides of the aisle.

Climate Services, Public Policy, and the Colorado

photograph of Colorado River landscape

What does the Colorado River Compact of 1922 have to do with ethical issues in the philosophy of science? Democracy, that’s what! This week The Colorado Sun reported that the Center for Colorado River Studies issued a white paper urging reform to river management in light of climate change to make the Colorado River basin more sustainable. They argue that the Upper Colorado River Commission’s projections for water use are inflated, and that this makes planning the management of the basin more difficult given the impact of climate change.

Under a 1922 agreement among seven U.S. states, the rights to use water from the Colorado River basin are divided into an upper division — Colorado, New Mexico, Utah, and Wyoming — and a lower division — Nevada, Arizona, and California. Each division was apportioned a set amount, with the expectation that the upper division would take longer to develop than the lower division. The white paper charges that the UCRC is relying on inflated water usage projections for the upper division despite demand for development in the upper basin having been flat for three decades. In reality, however, the supply of water is far lower than projected in 1922, and climate change has exacerbated the issue. In fact, the supply has shrunk so much that upper basin states have taken steps to reduce water consumption so that they do not violate the agreement with lower basin states. As the Sun reported, “If it appears contradictory that the upper basin is looking at how to reduce water use while at the same time clinging to a plan for more future water use, that’s because it is.”

To see how this illustrates an ethical problem in the philosophy of science, we first need to examine inductive risk. While it is a common enough view that science has nothing to do with values, a consensus has formed among many philosophers of science over the past decade that not only does science use values, but that this is a good thing. Science never deals with certainty but with inductive generalizations based on statistical modelling. Because one can never be certain, one can always be wrong. Inductive risk involves considering the ethical harms one should be aware of should one’s conclusions turn out to be wrong. For example, if there is a 90% chance that it will not rain, you may be inclined to wear your expensive new shoes. On the other hand, if that 10% chance comes to pass, your expensive new shoes will get ruined in the rain. In a case like this, you need to evaluate two factors at the same time: how important are the consequences of being wrong, and, in light of this judgment, how confident do you need to be in your conclusion? If your shoes cost $1000 and are easily ruined, you may want a level of confidence close to 95% or 99% before leaving home. On the other hand, if your shoes are cheap and easy to replace, you may be happy to go outside with a 50% chance of rain.
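
To make the trade-off explicit, here is a minimal sketch, in Python, of the expected-value reasoning behind the shoe example. The dollar figure attached to the benefit of wearing the shoes is an illustrative assumption, not something specified above.

```python
# Inductive-risk rule of thumb for the shoe example: wear the shoes only when
# p(no rain) * benefit exceeds p(rain) * cost, i.e., when
# p(no rain) >= cost / (cost + benefit).

def required_confidence(cost_if_wrong: float, benefit_if_right: float) -> float:
    """Minimum probability of 'no rain' at which wearing the shoes becomes
    the better bet under a simple expected-value rule."""
    return cost_if_wrong / (cost_if_wrong + benefit_if_right)

# $1000 shoes that ruin easily, with an assumed $50 benefit from wearing them:
print(required_confidence(cost_if_wrong=1000, benefit_if_right=50))  # ~0.95

# Cheap, easily replaced shoes (assumed $20 at stake either way):
print(required_confidence(cost_if_wrong=20, benefit_if_right=20))    # 0.50
```

The same structure carries over to policy-relevant science: the costlier the consequences of being wrong, the more confidence the inductive-risk argument demands before acting on a conclusion.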

When dealing with what philosophers call socially-relevant or policy-relevant science, the same inductive risk concerns arise. In an inductive risk situation, we need to make value judgments about how important the consequences of being wrong are, and thus how accurate we ought to be. But what values should be used? According to many philosophers of science, when dealing with socially-relevant science, only democratically-endorsed values are legitimate. The reason is straightforward: if values are going to shape public policy-making, then the people should select those values, rather than scientists or other private interests, since otherwise those parties would gain undue power and influence over policy-making.

This brings us back to the Colorado River. A new area of climate science known as “climate services” aims to make climate data more usable for social decision-making by ensuring that the needs of users are central to the collection and analysis of data. Typically, such climate data is not organized to suit the needs of stakeholders and decision-makers. For example, Colorado River Basin managers employed climate services from state and national agencies to create model-based projections of Lake Mead’s ability to supply water. In a recent paper, Wendy Parker and Greg Lusk have explored how inductive risk concerns allow for the use of values in the “co-production” of climate services jointly between providers and users. This means that insofar as inductive risk is a concern, the values of the user can affect everything from model creation and the selection of data to the ultimate conclusions reached. Thus, if a group wished to develop land in the Colorado basin and sought out climate services, then the values of that group could affect the information and data that are used and what policies take effect.

According to Greg Lusk, however, this is potentially a problem: if any user who pays for climate services can use their own values to shape scientifically-informed policy-making, then the requirement that those values be democratically endorsed is violated. He notes:

“Users could refer to anyone, including government agencies, public interest groups, private industry, or political parties …. The aims or values of these groups are not typically established through democratic mechanisms that secure representative participation and are unlikely to be indicative of the general public’s desires. Yet, the information that climate service providers supplies to users is typically designed to be useful for social and political decision making.”

It is worth noting, for example, that the white paper issued by the Center for Colorado River Studies was funded by the Walton Family Foundation, the USGS Southwest Climate Adaptation Science Center, the Utah Water Research Laboratory, and various other private donors and grants. This report could affect policy makers’ decisions. None of this suggests that the research is biased or bad, but to whatever extent values can influence such reports, and such reports can affect policy-making, we should be asking whose values are playing what roles in information-based policy-making.

In other words, there is an ethical dilemma. On the one hand, climate services can offer major advantages to help users of all kinds prepare for, mitigate, adapt to, or plan development in light of climate change. On the other hand, scientific material designed to be relevant for policy-making, yet heavily influenced by non-democratically endorsable values, can be hugely influential and can affect what we consider to be good data-driven policy. As Lusk notes,

“According to the democratic view, then, the employment of users’ values in climate services would often be illegitimate, and scientists should ignore those values in favor of democratically endorsed ones, to ensure users do not have undue influence over social decision making.”

Of course, even if we accept the democratic view, the problem of defining what “democratically endorsable” means remains. As the events of the past year remind us, democracy is about more than just voting for a representative. In an age of polarization, where the values endorsed are liable to swing radically every four years, or where there is disagreement among various elected governments, deciding which values are endorsable becomes extremely difficult, and ensuring that they are the ones actually used becomes more intractable still. Thus, deciding what place democracy has in science remains an important moral question for philosophers of science, but even more so for the public.

Is Now the Time for an Economics Code of Conduct?

photograph of various banknotes from around the world

One complication of the coronavirus crisis is that it requires policy decisions to weigh public health issues against economic concerns. Economic advisors should be conscious of their own uncertainty as well as the significant and long-term consequences for those acting on their advice. A recent problematic example is economic advisor Peter Navarro attempting to influence decision-making over the use of hydroxychloroquine as a “cure” by claiming that his background in statistics made him qualified to address public health matters. While I suspect few would agree with this kind of policy advising, economic advisors still have a vital role to play in conversations regarding the reopening of the economy. Now that the projected infection rates and fatalities of COVID-19 have been revised downward in many regions, concern has shifted to how and when the economy should be restarted. Economic advisors will give (and have already given) advice that could have significant public health consequences. This raises the following question: Given that other professions that work for the public good must adhere to codes of professional ethics, is it time for economists to do the same?

First, we need to consider in general terms why this issue is so pertinent now. With mounting job losses and a prolonged period without production, some of the economic forecasts are grim. The risks are so great that the economic downturn could mirror the Great Depression. The hope is that once restrictions are rescinded, we will face a “V-shaped” recession in which a sudden downturn is followed by a sudden upswing. But the longer the restrictions are in place, the greater the risk that the economy will take longer to recover. Alternatively, there is the risk that if restrictions are lifted too soon, there will be a second wave of infections without a vaccine. This appears to pit economic concerns against public health concerns; however, the problem is complicated by the fact that a recurring public health crisis would be even more costly to the economy than the current downturn. According to economist Andrew Atkeson, if the epidemic continues to grow, the economy will grind to a halt anyway. Even if reopening the economy is warranted, doing so haphazardly will be problematic for both the economy and public health. Economic advising always involves ethical issues, but it is this current question that highlights the ethical significance that policy advice can have.

One might expect that economists, given their potential to bring about significant ethically salient consequences, would have an ethical code to turn to. Such codes are common in other professions relevant to the public good. For example, engineering students in Canada and in the United States graduate with a ceremony in which they recognize their ethical obligations to the discipline and to the public good, and they wear a ring as a symbol of their commitment to those obligations. Practitioners in other fields (accounting, law, journalism, and more) are bound by professional codes of conduct. In Western medicine, it is common for students to affirm the Hippocratic Oath. Many of these professional codes stress the importance of nonmaleficence, professional integrity, transparency, and accountability. Economists have no such oath that they are expected to affirm or swear by.

Of course, one may ask why any kind of professional code of ethics, particularly when it comes to policy advice, is necessary. According to the value-free ideal of science, the conduct of research and the application of research are two different things. In order to keep the study of economics as non-political and value-free as possible, economists must only consider the accuracy of their findings and report those findings accurately to policy makers; after that, the political and ethical concerns belong to policy makers alone. For example, in his 1956 paper “Valuation and Acceptance of Scientific Hypotheses,” Richard C. Jeffrey argues that scientists are only supposed to assign probabilities to hypotheses and then leave the acceptance of those hypotheses to the public. So, on this view, economists should be insulated from policy making and from concerns about the public good, as their function is merely to analyze the data.

This argument became prominent in many different forms in the 20th century. Robert Nelson, an economist who formerly worked in the Office of Policy Analysis in the Office of the Secretary of the Interior for almost 20 years, notes the force that this thinking had in his own working experience. Identifying the desire to cleanly separate science from politics as a matter of progressive-era thinking, he notes that while this was the expectation, it was never the practice. He explains:

“Economic policy analysts in government, as I was discovering, were not simply told to study the technical means of implementing a given policy and to report the scientific results back to their superiors. Rather, economic policy analysts often functioned themselves as strong advocates for particular policy positions.”

Part of the problem, as Nelson explains it, is that there is a gap between democratic institutions and the degree of expertise required to make complex choices. An expert-policy advisor cannot simply analyze the data and relay their findings because neither the public nor many of these decision makers have the expertise to know what to do with that information. This creates a practical obstacle to the value-free ideal.

In addition, the mere use of certain data or certain statistical indicators can have political salience. As Susan Offutt notes, measurements like unemployment can have political consequences. But so does a lack of agreement on how to measure poverty or a “green” GDP. Deciding what is measured and how is a matter for economists to determine. The analyses found in policy advising are already politically influential even if it is the policy makers who ultimately decide what to do with that information.

Economic advisors right now need to balance a number of concerns. Should the focus be on securing public health? Should the focus be on economic growth? Should personal liberty be a factor? Some of the arguments for establishing an ethical code for economists draw analogies with fields like medicine and environmental policymaking. For example, as in the field of medicine, there is a distinction between experts and those who are the targets of that expertise. This creates asymmetries in power, status, and knowledge. In a doctor-patient relationship, this asymmetry creates ethical responsibilities for the physician to do no harm to the patient. This means that they acknowledge the degree of uncertainty before advising and recommending treatments, and do not arbitrarily violate the patient’s expressed wishes.

In contrast, economist George DeMartino has argued that economists working for institutions like the IMF, the World Bank, and others have pursued policies on the basis of optimal anticipated outcome rather than risk of failure. He describes how for decades inhabitants of developing countries have been subject to policies based on this thinking and have suffered for it. He explains:

“The 1980s inaugurated an extraordinary, sustained period of avoidable human suffering in the South, a chief cause of which was the failed neoliberal experiment. I use the word ‘experiment’ purposefully, since it seemed clear then and certainly does now that this was an instance in which economists took advantage of an extraordinary, historically unprecedented opportunity to design and test-drive a shiny new economic model over the objections of what were essentially unwilling subjects across the South.”

Would it be ethical for a doctor to advise risky treatments and then to have them carried out against a patient’s wishes? No. So, why should economists be treated differently if they are capable of causing harm on a large scale? Even if medical codes of ethics are not suited to economics, the relevant differences between medicine and economics do not lead to the conclusion that ethics should be of no concern to the economist.

Returning to our current crisis, stop and think about the potential for death, poverty, unemployment, misery, and suffering that is riding on the decisions which are being influenced by policy advisors right now. Should these people be held accountable to an ethical code of conduct?

In his 2005 paper, DeMartino notes that despite the power and responsibilities that economic advisors can wield, there is no professional ethics body within the field of economics. Even today, prestigious economics programs at MIT and Princeton do not require training in economic ethics. At the end of his paper, DeMartino’s prospective “Economist’s Oath” makes reference to using one’s power for the community good; it specifies that communities are not mere means to ends, and it declares that economics is an imperfect science that carries risks and dangers. Much of what this means in practice would need to be clarified over time, but as a resource to turn to, it could be a promising start. Given that many of these dangers and risks are now present in the COVID-19 crisis, the time may have come when the public should not only expect economic advisors to follow an ethical code, but demand it.

The Ethics of Scientific Advice: Lessons from “Chernobyl”

photograph of Fireman's Monument at Chernobyl

The recently released HBO miniseries Chernobyl highlights several important moral issues that are worth discussing. For example, what should we think about nuclear power in the age of climate change? What can disasters tell us about government accountability and the dangers of keeping unwelcome news from the public? This article will focus on the ethical issues concerning scientists’ potential to influence government policy. How should scientists advise governments, and who holds them accountable for their advice?

In the second episode, the Soviet Union begins dumping thousands of tons of sand and boron onto the burning nuclear plant at the suggestion of physicist Valery Legasov. After consulting fellow scientist Ulana Khomyuk (a fictional character who represents the many other scientists involved), Legasov tells Soviet leader Gorbachev that in order to prevent a potential disaster, drainage pools will need to be emptied from within the plant in an almost certain suicide mission. “We’re asking for your permission to kill three men,” Legasov reports to the Soviet government. It’s hard to imagine a more direct example of a scientist advising a decision with moral implications.

Policy makers often lack the expertise to make informed decisions, and this provides an opportunity for scientists to influence policy. But should scientists consider ethical or policy considerations when offering advice? 

On one side of this debate are those who argue that scientists’ primary responsibility is to ensure the integrity of science. This means that scientists should maintain objectivity and should not allow their personal moral or religious convictions to influence their conclusions. It also means that the public should see science as an objective and non-political affair. In essence, science must be value-free.

This value-free side of the debate is reflected in the mini-series’ first episode. It ends with physicist Legasov getting a phone call from Soviet minister Boris Shcherbina telling him that he will be on the commission investigating the accident. When Legasov begins to suggest an evacuation, Shcherbina tells him, “You’re on this committee to answer direct questions about the function of an RBMK reactor…nothing else. Certainly not policy.”

Those who argue for value-free science often argue that scientists have no business trying to influence policy. In democratic nations this is seen as particularly important since policy makers are accountable to voters while scientists are not. If scientists are using ethical judgments to suggest courses of action, then what mechanism will ensure that those value judgments reflect the public’s values?

In order to maintain the value-free status of science, philosophers such as Ronald N. Giere argue that there is an important distinction between judging the truth of scientific hypotheses and judging the practical uses of science. A scientist can evaluate the evidence for a theory or hypothesis, but they shouldn’t evaluate whether one should rely on that theory or hypothesis to make a policy decision. For example, a scientist might tell the government how much radiation is being released and how far it will spread, but they should not advise something like an evacuation. Once the government is informed of the relevant details, the decision of how to respond should be left entirely to elected officials.

Opponents of this view, however, argue that scientists do have a moral responsibility when offering advice to policy makers and believe that scientists shouldering this responsibility is desirable. Philosopher Heather Douglas argues that given that scientists can be wrong, and given that acting on incorrect information can lead to morally important consequences, scientists do have a moral duty concerning the advice they offer to policy makers. Scientists are the only ones who can fully appreciate the potential implications of their work. 

In the mini-series we see several examples where only the scientists fully appreciate the risks and dangers from radiation, and are the strongest advocates of evacuation. In reality, Legasov and a number of other scientists offered advice on how to proceed with cleaning up the disaster. According to Adam Higginbotham’s Midnight in Chernobyl: The Untold Story of the World’s Greatest Nuclear Disaster, the politicians were ignorant of nuclear physics, and the scientists and technicians were too paralyzed by indecision to commit to a solution.

In the real-life disaster, the scientists involved were frequently unsure about what was actually happening. They had to estimate how fast various parts of the core might burn and whether different radioactive elements would be released into the air. Reactor specialist Konstantin Fedulenko was worried that the boron drops were having limited effect and that each drop was hurling radioactive particles into the atmosphere. Legasov disagreed and told him that it was too late to change course. Fedulenko believed it was best to let the graphite fire burn itself out, but Legasov retorted, “People won’t understand if we do nothing…We have to be seen to be doing something.” This suggests that the scientists were not simply offering technical advice but were making judgments based on additional value and policy considerations. 

Again, according to Douglas, given the possibility for error and the potential moral consequences at play, scientists should consider these consequences to determine how much evidence is enough to say that a hypothesis is true or to advise a particular course of action. 

In the mini-series, the government relies on monitors showing a low level of radiation to conclude initially that the situation is not bad enough to warrant an evacuation. However, it is pointed out that the radiation monitors being used likely had only a limited maximum range, and so the radiation could be much higher than the monitors would indicate. Given that they may be wrong about the actual amount of radiation and the threat to public health, a morally responsible scientist might conclude that evacuation should be suggested to policy makers.
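
As a minimal sketch, in Python, of the reasoning about the limited-range monitor: a reading pinned at the device’s ceiling is only a lower bound on the true level, not an estimate of it. The ceiling value and units here are hypothetical illustrations, not figures from the series or the historical record.

```python
# Hypothetical illustration: an instrument that cannot display values above
# its ceiling. A maxed-out reading is a lower bound, not a measurement.

METER_CEILING = 3.6  # assumed maximum the device can display

def interpret(reading: float) -> str:
    if reading < METER_CEILING:
        return f"Radiation is roughly {reading} units."
    # At the ceiling the instrument is saturated: the true level is unknown,
    # except that it is at least this high.
    return f"Radiation is AT LEAST {METER_CEILING} units; the true level is unknown."

print(interpret(1.2))
print(interpret(METER_CEILING))
```

On Douglas’s line of argument, how to report such a saturated reading, and how much caution to attach to it given the cost of underestimating the danger, is itself a value-laden judgment that the scientist is best placed to flag for policy makers.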

While some claim that scientists shouldn’t include these considerations, others argue that they should. Certainly, the issue isn’t limited to nuclear disasters either. Cases ranging from climate change to food safety, chemical and drug trials, economic policies, and even the development of weapons, all present a wide array of potential moral consequences that might be considered when offering scientific advice. 

It’s difficult to say a scientist shouldn’t make morally relevant consequences plain to policy makers. It often appears beneficial, and it sometimes seems unavoidable. But this liberty requires scientists to practice judgment in determining what a morally relevant consequence is and is not. Further, if scientists rely on value judgments when advising government policy, how are scientists to be held accountable by the public? Given these benefits and concerns, whether we want scientists to make such judgments and to what extent their advice should reflect those judgments presents an important ethical dilemma for the public at large. Resolving this dilemma will at least require that we be more aware of how experts provide policy advice.